HIGH-LEVERAGE PRACTICES IN GENERAL EDUCATION AND RESOURCE CLASSROOMS

Beyond the Obvious: Looking at students’ strengths and needs from all angles

Episode Description

In this episode, Heather and school psychologist Chris Zielinski dig into what it really means to build a complete picture of a student’s strengths and needs. Chris explains why relying on a single assessment—or even a single perspective—can lead to decisions that miss the mark. Instead, he makes the case for using multiple data sources and a team-based approach to create more accurate, fair, and useful profiles of students.

They talk through common traps like plug-and-chug evaluation practices, the risks of over-testing, and what it looks like when professionals truly collaborate across domains to answer real questions about students, not just fill out forms. Chris shares practical ways to keep evaluations efficient, student-centered, and culturally responsive—so teams can focus on what matters most: making smart, informed decisions that actually help kids.

Key Points and Takeaways

  • Special educators should compile comprehensive learner profiles using diverse assessment measures and stakeholder inputs to accurately describe student strengths and needs.
  • Relying on multiple sources of data ensures stronger efficacy in educational decision-making, reducing the risk of misidentification based on single assessments.
  • Collaborative evaluation involving teachers, psychologists, parents, and other stakeholders is crucial for comprehensive understanding and effective intervention planning.
  • Selecting efficient and effective assessment tools that offer high predictive validity can streamline the evaluation process and enhance overall outcomes.
  • Ongoing, collaborative data collection and analysis are essential for adapting instructional strategies to meet evolving student needs and ensure their academic progress.
Podcast Guest

Christopher Zielinski, SSP, BCBA

Chris Zielinski is a school psychologist, behavior analyst, and school administrator specializing in public policy, special education, and program assessment and development. Throughout his career in public education, he has been a long-term substitute teacher, school psychologist, lead psychologist, behavior analyst, autism/behavior consultant, and assistant superintendent. Before transitioning to the field of education, Chris provided clinical behavioral health services and worked in corrections with state and federal inmates. Outside of his professional life, Chris enjoys spending time with his three amazing daughters and his motivated, intelligent, and supportive wife. Chris is a Board Certified Behavior Analyst with his Bachelor of Arts in Public Law and Criminal Justice, Bachelor of Science in Psychology, Specialist degree in School Psychology, and a Director of Special Education endorsement.
Looking for CEUs? Click "01 | Listen" below!

Project Thrive

Build an inclusive, proactive classroom that supports students with behavioral and mental health needs.
Join the next cohort to develop effective environments, behavior strategies, targeted instruction, essential collaboration skills, and more!
High-Leverage Practice #4:
Use multiple sources of information to develop a comprehensive understanding of a student’s strengths and needs.
To develop a deep understanding of a student’s learning needs, special educators compile a comprehensive learner profile through the use of a variety of assessment measures and other sources (e.g., information from parents, general educators, other stakeholders) that are sensitive to language and culture, to (a) analyze and describe students’ strengths and needs and (b) analyze the school-based learning environments to determine potential supports and barriers to students’ academic progress. Teachers should collect, aggregate, and interpret data from multiple sources (e.g., informal and formal observations, work samples, curriculum-based measures, functional behavior assessment [FBA], school files, analysis of curriculum, information from families, and other data sources). This information is used to create an individualized profile of the student’s strengths and needs.
Knowledge is only useful if you can have utility and use it effectively for change. Otherwise, it's just knowing.

CHRIS ZIELINSKI



Host: Heather Volchko

Guest: Chris Zielinski

For this high-leverage practice, we are talking about using multiple sources of information to develop a comprehensive understanding of a student's strengths and needs. This week and next week, I have got one of our school psychologists, Chris Zielinski, with me, and I'm really excited to see how he breaks these two practices down. So, Chris, for you in your practice, how does using multiple sources show up?

Yeah, that's a great question. Thanks for having me, by the way. So multiple sources come into play most of the time when you're trying to make any kind of outcome decision for a student, whether it's determining whether a student is suitable for intervention services, looking at evaluation outcomes, or even determining eligibility for special education. All of these pieces should, and I will say "should," in best practice be done using a variety of sources in your data-based decision-making process.

So, when you're saying multiple sources, what would count as a good use of multiple sources? How do I know that I actually have enough or the right ones, or what do I need to do to make sure I check all those boxes?

Yeah. So one of the things you always want to consider, there's a phrase that you'll hear me say this week, and you'll also hear me say again next week: this preponderance of evidence, right? There is no magic number out there that says, oh, seven is better than four. It really comes down to the measures. It comes down to the tools and what you're looking to accomplish with your assessment or evaluation.

So there is no magic number. I can tell you that one is not the magic number, for sure, but multiple things help you determine stronger efficacy in your outcome decisions that you're making. And so, do you mind if I kind of dive a little bit into… 

Yeah, please. Let's go.

…Breaking this up a little bit? Okay. Many of the people listening to this, I'm assuming, are in education or are a parent who's involved in their child's education, right?

And I'm sure many of them have heard of a lot of the high-stakes things we give in schools. So, like MAP testing, right? Measures of Academic Progress. MAP is a good tool. It's a benchmarking tool, and it does have some diagnostic utility, but it's an example of one piece of data: it looks at math, it looks at reading, and it gives you one point in time where that child was able to display their skill set as the material was presented to them, right?

And as we go through that process, we'll use the area of reading, right? We know that the construct of reading is very broad, and it would be very hard to make diagnostic decisions, really important decisions, based on one single measure taken on one single day, which could be a good day or a bad day, in the morning or the afternoon. It could vary for a variety of reasons. But what we want to do is develop our preponderance of evidence. We might have a tool that says, hey, this might take some additional consideration.

Something is flagged. Maybe on that day, or on that measure, the student isn't presenting exactly what we'd anticipate given their age or grade. The thing is, you really can't use that one tool to make a very strong determination. Strength comes in numbers, right? And what I mean by that is there's safety in those numbers. It helps you take out those confounding variables. So you might have a challenging case where you have one isolated score that right away drops a flag. There's a concern.

This tool that we use three times a year says that this student is underperforming relative to where we would anticipate them to be. Now, if you made your decision based on that one tool, it could be right, it could be wrong. And it might be a misallocation of time and resources for that child. However, if you have other tools, other measures where you're breaking things up across the strata of reading, all the individual areas of reading, you might come to find out, well, I have this one tool that says this student was struggling, but all of these other indicators, which are also valid, maybe three or four of them, show a higher preponderance of evidence that this really isn't a concern.

It could have been, simply put, just a bad day for that kid. They could have been hungry, they could have been sick, but they had to show up because we know benchmarking is important. But if we hadn't gone through our due diligence of grabbing multiple individual snippets for that student at that time, we probably would have made a decision that could have had poor ramifications for the child: how they feel about themselves and their skills compared to their peers, and how teachers may present instructional material to that student. So having those multiple measures helps take some of the guesswork out of it, and it also allows us to build that preponderance of evidence, not to conclude definitively, but to have a very strong indication.

I hope that helps.

Yeah, it's making sense. And you and I were having a conversation before we hit record as well. One of the things I appreciated you saying is that each individual practitioner isn't necessarily on the hook for running all the measures in their entire field to prove all their things; it truly takes a team approach. So can you take a little bit of time to dive into that evaluation side, where we're making sure we have all the right measures and we're collecting all the right information, basically utilizing anything at our disposal? How is that maybe bigger than me as a practitioner, but needing to fit together comprehensively, or at least cohesively?

Yeah, that's a really good question. Let's take a case as we talk through this: say a child is being evaluated to determine whether or not they would qualify for special education services. Keep that as the lens as we have this conversation. Through that evaluation process, there are all these domain areas we might look to evaluate, right? You've got your academic achievement, your functional performance, your cognitive abilities, and so on. You've got motoric skills. You look at a variety of different areas to see, holistically, where is this child right now?

Right? And as you make these conclusions across these areas, think about it this way: each of those isolated areas we spoke about, academic achievement, cognitive functioning, all of those should, in best practice, themselves draw on multiple measures to come to a conclusion in that specific stratum or domain area. Then, even further (and we'll get into this next week, too, as we talk about interpreting, communicating, and collaborating), each of those is going to border on the others somehow.

They're going to. And so the example that I might use is processing speed, and it might be fine motor skills. Right? We know that processing speed is a cognitive evaluation, but we also know that there's a very, very, very strong correlation between that and dexterous hands. If you have very good fine motor skills, it's easier for you to cross things out and cancel them and circle them and do those things.

Now, on one hand, we might have a source that says, hey, this is what our cognitive faculties are. On the other hand, you might have an OT who has looked at a lot of areas specifically around fine motor skills. And now you have one measure that doesn't, on its face, seem like it relates to another, and you've been able to bring it over, and the psychologist can say, hey, during my evaluation I can also point to another piece of data that goes along with exactly what you're saying, something I witnessed on another measure.

Right? Another indicator that processing speed was a challenge, that motorically driven tasks were very challenging. And so that allows you to integrate the evaluation: multiple measures within each of those domains to come to a decision, and then integration across the domains. We'll talk deeper into that next week. But that's the most important thing. You have to know your evaluations. You have to know what they measure, how you administer them, how long they take, but also, what is it going to tell us? What does it support, and what conclusions can we make?

I've always said, just like us, if we have something we're really concerned about, we might go to a physician and get a second opinion. Well, having multiple measures allows you a second, a third, a fourth opinion. And they don't have to be broad evaluation tools. You might look at something that gets flagged in reading and check out your reading fluency, your reading comprehension, your phonics, your phonemic awareness, and you can do that in each of those areas. Multiple measures give you a better outcome.

I love that you're talking through how your special education determination typically comes through multiple domains. And I do see a lot of times school teams where this department owns this domain and another department owns another domain, when in reality there are a lot of overlapping aspects among the different fields present at the table.

So how would you recommend saying, "as a practitioner, I want to bring multiple sources to the table, but I am tasked to kind of stay in this lane. So how do I make sure that I'm doing what I need to do, but I'm also doing right by the evaluation?" What are your tips and tricks for working collaboratively in designing the evaluation, not just in where it results, which is our conversation next week?

Great question. I've always said the planning you do on the front end of your evaluation will directly influence the value of your IEP and supports on the back end of it. So you said integration, right? As a psychologist, as a consultant, in a lot of roles I've played in schools, I've seen it where everybody will come to the table, just like you said, siloed. "Hey, I'm this person and I'm going to evaluate this one." "I'm a psychologist and I will hit the cognitive and the academic achievement, and speech will hit this." And the reality of it is, I've always said there are two different mindsets. When you're looking to evaluate, you're looking to complete what you need to do on the domain, right? That's the de minimis model. "Well, it says academic achievement. We don't have a lot, so I should probably do something in this." And you grab something that you think is going to be the biggest bushel of fruit to give you the most information in that area.

You fill the domain up and you do it. Plug and chug, right? And anybody in here who's evaluated a kid knows what I'm talking about. You've probably been there, done that. Or even as a teacher, you've heard the same person talk about the same evaluation tool for a thousand kids over and over. Right? The plug and chug. That's one way to do it. Is it legally defensible? Yes. Does it hit the outcome objective of assessing a child at a de minimis level? Yes. At a de minimis level, you could make a case for that.

Is it the best? No, it is not best practice, and I would never support that. What I always look at is that there's a difference between filling the domain and answering questions. And I think the process starts with asking questions. What do we know about this child? What do we need to know? And that comes from the teachers, the interventionists, the parents, the clinicians, the assessors, the evaluators, the team members, all of them.

I'll use Chris as an example. Obviously, it's my name, so I'll use it; it keeps things clear and coherent. As a teacher, I have a concern: Chris struggles with reading. Okay, what do we have in reading for this student? Let's list out some of the sources we have. We might sit back at that point and say, wow, we actually have a lot more than we thought we did. The teacher had some, the psychologist had some, the interventionist had some. We had some benchmark data.

We've got all these other things. Then the second step is asking, did it answer the question we really had? So we have a concern in reading. Well, tell me more about the concern. And hearing that as a clinician or an evaluator, I would listen to the parents, I would listen to the teacher, and say, okay, really, they're struggling. It sounds like their concerns fall in the area of reading comprehension. So now, what do we have in that area?

And if we don't have much, then we might look to the team at the table and say, who can help answer this question for, fill in the blank: for mom, for the teacher, for the evaluator, for all of us? Because we don't have anything on it. And at that point, it changes from "my sole job is to answer that question as the role player of X in domain Y" to "how can we collectively do this evaluation to answer the questions that are most important for this kid?"

And I think that is one of the best ways to go at it, because if you just fill out the domain, sure, you can defend it, absolutely. But you're going to lose the richness, the integrated value, of multiple measures that you may or may not even have control over. It's a good way to go at it, I've always said, and it's one of the best approaches I've seen. I didn't invent it, for sure. I was exposed to it 15 years ago, where they said, we don't call it a domains meeting. We call it going to questions.

And I was like, oh, this is going to be saucy. Let's see what happens. The first time I watched it, I was a little confused, like, wow, this isn't what we did before. And then after I saw it play out, I heard the parents saying, thank you for answering that question, I always worried about this. The teacher said, you know, I always thought X, Y, and Z, but I'm glad that you said this and were able to help answer it.

And everybody looked to their own level of expertise to answer those questions and came to the table with multiple sources combined to make a better decision. So think of that framework from tip to tail, from soup to nuts: how can we integrate effective, efficient, multiple sources of data? We've talked a little bit about that process, what it means to take multiple sources and how we utilize them.

Do you mind if I take a couple minutes to talk a little bit about the efficacy and efficiency of that?

Yeah, go for it. Because especially once you start getting lots of hands in the mix, sometimes the efficiency goes. It's just not efficient for anybody.

Absolutely right. You asked before, well, what's a magic number? How many do we need to have? What's better? What's worse? Right? I can tell you this, having one can be just as bad as having 50.

Because you start losing efficacy and efficiency, and over time you will find that some measures do a better job with predictive validity than others. That comes with experience, and it's not the experience that's in the books. The books will tell you how g-loaded a task is, or correlations to other assessment tools and measures, and they'll give you all these fancy data and statistics, but you're going to figure out what works with the population you're working with.

And you have to really think about whether this evaluation tool is culturally and linguistically effective, efficient, and ethical for the population, for the individual student I am evaluating. Right? Student-centered: going back to the student, going back to the individual. And I will tell you this. I'm not a betting man, but if I were, I would bet on the team that had three or four really strong, efficient, effective tools with a high level of predictive validity for outcomes with kids over the person who can narrow it down to the microcosm through 50 evaluations, because at the end of the day, you are running into diminishing returns.

How much can you polish that stone? You've gotten what you need. The most important thing that comes out of it is not admiring it, but planning around it and supporting that child. So you can have not enough, and you can have too much. You've got to find that balance, and you have to be critical. Really take a conscious, critical look at the tools; break them down. I'll tell you a brief story. It's really fun.

I worked for a while consulting with MTSS, back in the day when it was RTI, post flexible service delivery model in Illinois. We had these intervention tools, and everybody wanted this easy, quick, simple thing. And what we found in one of these diagnostic tools... So we'd have a benchmark that would flag a kid, and then there would be a diagnostic follow-up we would do. It took about five to ten minutes, and it had very strong predictive validity, especially in the area of reading. In that five to ten minutes, it would give us just about as strong an outcome perspective on that child as the MAP would at that moment. But the MAP took significantly longer. And I remember making this case of, why are we not utilizing this tool? It's so efficient, so effective, and with this population, it works.

And we used, I think, four years of continuously rolling data. We had four years; we'd add the fifth year, drop off the first, and then have years two through five. We did it in these rotations, this sweeping array, to show that this was really strong. And what ended up happening is that district ended up using that as their diagnostic tool. When a benchmark said, hey, this kid was flagged, they had this big pool, and they were like, now we need to determine whether or not this child really needs intervention, and what areas of intervention we should try to align for the kid.
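(As a purely illustrative sketch, the rolling-window idea Chris describes, add the newest year of data, drop the oldest, and re-check the relationship, might look like the following. Every score, year, threshold, and function name here is invented for illustration; nothing below is data from the episode.)

```python
# Illustrative sketch only: re-check how well a quick diagnostic tracks a
# longer benchmark over each consecutive four-year window of historical data.
# All numbers are made up.

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# year -> (quick-diagnostic scores, benchmark scores); hypothetical values
history = {
    2019: ([52, 61, 70, 74], [180, 192, 205, 210]),
    2020: ([50, 63, 69, 77], [178, 195, 204, 214]),
    2021: ([55, 60, 72, 75], [183, 191, 207, 211]),
    2022: ([53, 64, 68, 76], [181, 196, 203, 213]),
    2023: ([51, 62, 71, 78], [179, 193, 206, 215]),
}

def rolling_correlations(history, window=4):
    """Slide a window across the years: add the newest, drop the oldest."""
    years = sorted(history)
    results = {}
    for i in range(len(years) - window + 1):
        span = years[i:i + window]
        diag = [s for y in span for s in history[y][0]]
        bench = [s for y in span for s in history[y][1]]
        results[(span[0], span[-1])] = pearson(diag, bench)
    return results

results = rolling_correlations(history)
```

If the quick diagnostic keeps correlating strongly with the benchmark in every window, that's the kind of evidence a team could use to justify the faster tool for follow-up.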

You didn't get it from MAP. You got it from something that was way faster. And I remember the teachers saying, wow, MAP takes so much time when we teach to the test; why aren't we just doing this? I was like, look, there's a process. I don't know what to tell you, but you don't want to put all your eggs in one basket, right? That was a perfect example of how sometimes efficiency counts. Sometimes the effective measures aren't the most pricey, the most polished, the best-looking ones.

But when you have the clearer picture of all of those measures laid out in front of you, only then can you make that determination for the child, for the system, for the grade, for the team, for instruction. And you can make all of these great things, efficiency and effectiveness, count. So more isn't better; really take stock over time and see which ones hit the nail on the head more, and start taking out the things that really aren't effective or working. They don't align to the population; they don't have a lot of predictive validity. And the only way you figure that out is experience, time, and utilizing them, being familiar.

Well, thank you so much for sharing your expertise on selecting the tools, making sure the evaluation truly is representative and answers the question, as opposed to just filling domains, and doing that collaboratively in a really meaningful way, not just for the team or those at the table, but also for the student, who then has to walk through all of these tools as well. So, thank you so much for laying the foundation, and I'll talk to you again next week, when we discuss what it looks like to put all of those pieces together and truly make it make sense for the kid.

Thank you.

That student probably wouldn't be at that table if there wasn't already some kind of concern about a deficit. You're just answering the question you already asked, and kind of had an answer to, when you first started the entire process... We can all benefit a parent or a student or a teacher or a team or another clinician or another person by sharing the richer information.

CHRIS ZIELINSKI


Imagine stepping into the world of a student with disabilities, where each individual carries a unique blend of strengths and challenges. Their needs span across academics, social skills, emotional regulation, and communication, shaped by underlying issues such as attention deficits, memory challenges, and emotional struggles. These factors can significantly impact their ability to succeed, and understanding them is crucial for crafting instruction that truly meets their needs.

Research highlights that students struggling with specific language and cognitive aspects—like phonological awareness and rapid letter naming—often require intensive, ongoing support. The effectiveness of instruction in reading and math is a strong predictor of future success, and it's essential to consider how environmental factors like culture, language, and family poverty influence learning and behavior. A well-organized instructional environment that supports students' needs can greatly enhance their learning and behavior.


Given this complexity, special education teachers must develop comprehensive learner profiles. These profiles are not static; they evolve with ongoing instructional and behavioral data, offering a nuanced view of each student's strengths, needs, cultural influences, and responses to instruction. Creating these profiles involves gathering information from a variety of sources: detailed assessments, family discussions, curriculum-based measurements, student surveys, and direct classroom observations.


By analyzing patterns in this data, teachers can build a robust understanding of each student's academic and non-academic needs. This comprehensive view helps in designing, implementing, and refining instruction, and fosters a collaborative approach with professionals and parents. Ultimately, the goal is to use this information continuously to support and guide students towards their full potential.


When we talk about assessment, it’s essential to view it as an ongoing process rather than a one-time event. This approach is vital for professionals working together and engaging with families of students with disabilities. They need access to a wide array of information to truly grasp each student's unique needs and develop a tailored support plan.


No single assessment—regardless of its type—can give us the complete picture of a student's strengths and needs. That’s why collaborative efforts among professionals and families are so crucial. By integrating insights from multiple sources, teams can work together more effectively to understand students fully and create individualized plans that genuinely address their needs. This ongoing, collaborative process ensures that the support provided is both comprehensive and responsive to each student's evolving needs.
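As a loose illustration of why no single measure should drive a decision, a "convergence check" across several data sources can be sketched in a few lines of Python. The measure names, scores, and thresholds below are entirely hypothetical, and real eligibility and support decisions involve professional judgment, not a score rule:

```python
# Toy "preponderance of evidence" sketch; all names and values are invented.

def flag_for_follow_up(scores, cutoff=85.0, min_agreement=0.6):
    """Flag a concern only when most available measures agree.

    scores: dict mapping measure name -> standard score (mean 100, SD 15).
    cutoff: score below which a single measure suggests a concern.
    min_agreement: fraction of measures that must agree before flagging.
    """
    if not scores:
        raise ValueError("at least one measure is required")
    low = [name for name, s in scores.items() if s < cutoff]
    agreement = len(low) / len(scores)
    return {
        "flag": agreement >= min_agreement,
        "low_measures": low,
        "agreement": round(agreement, 2),
    }

# One low benchmark alone does not trigger a flag...
one_bad_day = flag_for_follow_up({
    "benchmark": 78, "fluency_probe": 96,
    "comprehension_probe": 101, "teacher_rating": 99,
})

# ...but convergence across most measures does.
converging = flag_for_follow_up({
    "benchmark": 78, "fluency_probe": 80,
    "comprehension_probe": 83, "teacher_rating": 92,
})
```

The point mirrors the practice: one low score starts a conversation, but only converging evidence across measures, and across the people at the table, should move a decision.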


Effective teachers excel in building a deep understanding of their students by integrating various sources of information. They use both formal and informal assessments to uncover academic strengths and areas for improvement. Beyond standardized tests, they actively seek students' own perspectives—asking about their preferences, strengths, needs, and long-term goals. They also engage with family members to gain valuable insights into the students' interests, motivations, health, language, and cultural backgrounds, both within the school environment and at home.


By weaving together data from these diverse sources—whether it's assessments, student feedback, or family input—teachers create a rich, comprehensive learner profile. This holistic understanding not only shapes instructional strategies but also guides critical decisions related to Individualized Education Programs (IEPs), ensuring that the support provided is both personalized and effective.

For school leaders aiming to support teachers effectively, it's crucial to focus on several key areas. First, ensure that every educator is well-informed about the assessments available to them, making sure these tools are appropriate for students' ages and cognitive levels and minimally biased. Equally important is providing clear instruction on how to interpret data from various sources. Teachers need to understand how these different types of data come together to create a complete picture of each student's strengths, needs, and current performance.


In addition, offering constructive feedback on how teachers administer and interpret these assessments can greatly enhance their skills. This ongoing support not only helps teachers refine their assessment practices but also ensures they are equipped to make informed decisions that benefit their students.


Both general and special education teachers play crucial roles in understanding and assessing students' strengths, needs, and interests. However, special education teachers are uniquely positioned to craft detailed, comprehensive learner profiles for each student. This is because they typically have the most direct and frequent interactions with students who have disabilities, their families, and the various professionals involved in their assessment.


These teachers often work closely with students in small-group settings, providing intensive support that allows them to gain a deeper understanding of each student's unique needs. This close interaction not only helps them gather richer insights but also allows them to tailor instruction more precisely.


To create these comprehensive profiles, effective special education teachers must be well-versed in various assessment tools and understand how to leverage the data they provide. This knowledge enables them to collaborate with the educational team to design, implement, evaluate, and continually refine instructional programs. The ultimate goal is to ensure that students with disabilities receive the support they need to fully engage with and benefit from the general education curriculum.
