Free Webinar on Moving Your Classes to a Remote Format

Brandman University and Dr. Kimberly Greene are offering a free webinar for educators transitioning to remote instruction during the coronavirus crisis.

The webinar will be held on Thursday, March 19, at 4:00 PM Pacific Time.

More information can be found here: https://conta.cc/2Q8dzhB

Resources for Remote Teaching

“Child and Computer” by r. nial bradshaw is licensed under CC BY 2.0

First Posted 3/13/2020 3:30pm
(Last Updated 3/18/2020 11:57pm)

Many of us are taking the extraordinary step of shifting our classes to a remote instruction format to allow students to learn from home and lower the risk of transmitting the novel coronavirus.  Please feel free to submit any resources you are aware of using the comment link below.

Instructional Resources

Teaching Effectively During Times of Disruption (Jenae Cohn & Beth Seltzer, Stanford University)

Teaching Online During COVID-19 (Vanessa Dennen)

Remote Teaching Resources for Business Continuity (Daniel Stanford, DePaul University) – this list has become very comprehensive, so please forgive any duplication.

Keep Teaching (Indiana University)

Teach Remotely – COVID-19 Response (American River College)

Tips for Teaching Online (Mike Brudzinski, Miami University)

K-12 Instructional Resources

Resources for Teaching Online due to School Closures (Kathleen Morris)

Great list of EdTech companies offering free resources for educators to support remote learning (The Amazing Educational Resources Facebook Group)

Another great list from KidsActivities.com

Student/Family Online Resources (Boston Renaissance Charter)

Remote Instruction Products & Providers

TechSmith is offering organizations free access to Snagit and VideoReview for remote teaching. (NOTE: I highly recommend Snagit as an indispensable tool for online and remote teaching)

Zoom is offering expanded free access to K-12 Educators. A Forbes article covering this can be found here.

Adobe Offers Free Creative Cloud Packages for Students Stuck at Home Due to Coronavirus

Tips on Remote Teaching from the UC Davis University Writing Program
(uwp.ucdavis.edu)

Calling it what it is: Remote vs. online learning in the time of Coronavirus

Kallie Hagel / CC BY-SA (https://creativecommons.org/licenses/by-sa/4.0)

On March 10, Jonathan Zimmerman, an education and history professor at the University of Pennsylvania, argued in the Chronicle Review that the coronavirus crisis provides an opportunity to generate evidence about the efficacy of online instruction.  Dr. Zimmerman asserts that we owe it to our students to find out whether online truly provides a viable educational environment–and this is true. However, we should not treat this crisis as an opportunity to evaluate online education because we are not turning the face-to-face courses offered across the country into true online courses–we are doing something very different.  Indeed, we should be hesitant to embrace any argument that suggests a traditional course that has been shoehorned into remote delivery within a matter of weeks (even days) is equivalent to an online course that has been carefully and intentionally designed to take advantage of the technology and unique opportunities for student engagement and collaboration that online learning systems provide.

Broadly speaking, the term online course is generally used to describe any course that is delivered through the internet.  Functionally, the delivery can take on many forms and engage students in a wide variety of ways.  However, as online education has grown and evolved over the last decade, the pedagogical and technological practices that go into online courses have developed into something more than just internet-based distance education.  Seasoned online instructors have long known that creating an online course entails more than just putting a few PowerPoint slides, readings, or videos into a learning management system such as Canvas or Blackboard.  A decade ago in the Handbook of Online Learning, Rudestam and Schoenholtz-Read (2010) pointed out that “although the transfer of classroom-based learning into cyberspace at first appeared to be deceptively simple, we have discovered that doing so without an appreciation for the nuances and implications of learning online ignores not only its potential but also the inevitable realities of entering it.”  Research and conventional wisdom developed over years of online teaching and learning tell us a lot about these nuances and how they can contribute to – and detract from – the learning environment of a good online course.  A good online course uses the communication and collaboration tools made possible through technology to approach learning in a specific, intentional way.  Content is organized, interactive, and engaging, and student mastery is appropriately and effectively assessed.  Equitable design is essential in a good online course, ensuring that students of all learning abilities, including those with disabilities or those using assistive technologies, can access and engage with course content.  A good online course builds a learning community through carefully selected content, a carefully maintained social climate, and supportive discourse that empowers learners.

This is a high bar that we simply cannot expect all faculty to reach when responding to an urgent, crisis-driven need to find a rapid alternative to on-campus classes.  Online courses are intentionally planned for online delivery and are designed with the nuances and realities of that delivery method firmly in mind.  Universities and colleges across the country are asking faculty to do something extraordinary, but it is something different.  Faculty are being asked to find a way to teach remotely that works for them, in their specific situation, and with little time to prepare.  Likely this will result in courses that replace in-person lectures and discussions with remote alternatives like video conferencing without changing other aspects of the course, such as activities and assessments – and that is a practical approach.  These courses will be remote versions of face-to-face courses rather than courses designed intentionally for online delivery, with all that entails.  As a temporary solution to an immediate crisis, that should be fine.  The conversation would be different if the concern were that we would never return to our classrooms.

Internally, some institutions have been careful about throwing the term online around indiscriminately and are instead opting for more descriptive terms like remote delivery or distance education to distinguish intentionally designed online classes from those forced into remote delivery by the coronavirus crisis.  DePaul University’s curated list of resources for universities is careful to use the term Remote Teaching – as are quite a few of the universities on the list.  While online is still the term that many are more familiar with, and one that has long been popular with the media, we in the educational community need to be more careful in our communication.  Let’s make sure faculty know what is expected and students know what to expect, but let’s not give the false impression that we are giving students purposefully designed online courses when we are not.

Those faculty who, like me, teach a large portion of their classes online are well positioned to make the most of the online tools and instructional approaches that can support remote learning until we return to our physical classrooms.  Many of us already teach our on-campus classes with a heavy reliance on Canvas, Blackboard, or some other learning system for student engagement and course organization, but that reflects our vision of education.  For those faculty colleagues who are anxious to return to lectures, discussions, and the type of in-person interactions that take place there, the temporary shift to remote delivery must be a reflection of their pedagogical approach and their vision of education.  We should not be asking a faculty member who is not comfortable with asynchronous discussions to try to incorporate one – we should be making use of the technology resources that we have to more closely mimic the type of pedagogy they are comfortable with.  Current conferencing platforms are perfectly capable of delivering a 200-person lecture, just as they are of creating small group discussions.  True, this doesn’t match an in-person environment exactly, but it gets close.  We should be helping faculty see this as an extension of their current approach to teaching, not as a forced march to online.

We should be careful to recognize the response to the coronavirus crisis for what it is–an attempt to maintain instructional continuity.  Remote instruction may not be the right strategy for many instructors in the long term, but it will help get the immediate job done. Right now that’s all we can ask.   This widespread cancellation of on-campus classes and subsequent shift to remote instruction does create the type of natural experiment that Prof. Zimmerman describes, but it is a bad natural experiment at best.  A good natural experiment imposes a standard treatment that allows a researcher to evaluate the impact of that treatment before and after a specific cutpoint. The instructional changes resulting from this crisis will be inconsistent at best, with no standard online learning treatment. So, let’s not assume the crisis response represents the best of what online education has to offer, or even typical online education, just because we have an unprecedented natural experiment for rigorous evaluation–we may not be measuring what we think we are.  With that said, there are numerous qualitative and phenomenological approaches that can provide us with rich insight into how students engage and learn in a variety of unique settings that don’t need an experimental or quasi-experimental design. We will learn a lot over the coming months–but we will not definitively answer the efficacy question.

March 13, 2020

Helpful Stata Links

Style Guides & Coding

In Stata coding, Style is the Essential: A brief commentary on do-file style
Embrace Your Fallibility: Thoughts on Code Integrity
Intro to Data Analysis & Visualization: Stata Cheat Sheets

Official Stata Corp Links

Stata: Data Analysis and Statistical Software
Stata Press
Stata Training
Stata Support
The Stata Journal

Listserv and Resources

Stata List (Forum)
UCLA Institute for Digital Research and Education
Esttab: Making Regression Tables in Stata

In Stata coding, Style is the Essential: A brief commentary on do-file style

“In all unimportant matters, style, not sincerity, is the essential.
In all important matters, style, not sincerity, is the essential.”
– Oscar Wilde, Phrases and Philosophies for the Use of the Young

Wilde, well known in his time for decadent style and flamboyant behavior, could never have imagined that his rather abruptly put witticism, extolling the virtue of style, could also aptly be applied to writing code in Stata.  Stata code, like computer code of any kind, has two purposes.  The first purpose, which Wilde may have seen as “sincerity,” is to communicate a series of commands to the computer.  We are explaining to Stata, step by step, what we wish it to do.  Stata doesn’t care much about style; its only concern is syntax.  As long as the syntax is correct, the program will run.  Style be damned.  However, the second purpose of computer code is to provide a record of what instructions were given.  The code itself stands as documentation of the programmer’s mind – how the problem was understood, the approach taken, and the end result.  This is where style becomes essential.  As with any written document, code exists to be read, and the readability of Stata code – by the code’s creator, a colleague, or a mere spectator – depends upon the style in which it is written.

Lester McCann (1997), a senior lecturer at the University of Arizona, outlines the importance of programming style in Toward Developing Good Programming Style (written for C and for Pascal for those of us who are that old).  He explains:

“A program that is perfectly clear today is clear only because you just wrote it. Put it away for a few months, and it will most likely take you a while to figure out what it does and how it does it. If it takes you a while to figure it out, how long would it take someone else to figure it out?”

In the wake of the LaCour scandal, the importance of documenting our data, our processes, and our analyses so that others can evaluate what we have done can hardly be overstated.  Our code should be well thought out, well organized, and well documented.  As McCann (1997) says, it is important to strive to “structure and document your program the way you wish other programmers would” (the emphasis on document is mine).  McCann focuses on themes specifically relevant to general programming, but some of these themes are also woven throughout “Suggestions on Stata programming style” (Cox, 2005), which appeared in the Stata Journal.  Cox’s piece, though primarily focused on Stata programs (.ado files) rather than procedural do-files, goes to great lengths to support the idea that, above all else, programs must be clear.

At its most basic level, a Stata do-file is simply a text file containing a script of commands for Stata to execute.  I suspect a true “programmer” would scoff at the relative simplicity of a Stata do-file, but those of us in the various disciplines that rely heavily on Stata for data analysis know that a do-file can become quite the complex beast, reflecting hours (if not days) of work.  Indeed, there are do-files that make me more proud than any paper I’ve written.  Generally, do-files should be robust and legible (Long, 2009):

Robust do-files produce exactly the same result when run at a later time or on another computer.
Legible do-files are documented and formatted so that it is easy to understand what is being done.
(Long, 2009)

The stylistic approach that I take to writing do-files can be broken down into two key categories: (1) process and (2) format and style.

1. Process

Processes in Stata have been documented extensively in manuals and books published by Stata Press (see Long, 2009 and Kohler & Kreuter, 2012 for two very useful examples).  I tend to keep suggestions from both, as well as several other practices, in mind as I work through my do-files:

a. Always create a log

The importance of a log file cannot be overemphasized.  Log files can help you trace errors, preserve output, and provide documentation.  Every do-file should open its own log file (preferred) or append to an existing log.  I also prefer to log as text: even though Stata’s default is SMCL, a plain-text log is generally quicker and easier to open (particularly on a machine without Stata or a SMCL viewer – like a mobile device).  A log file can easily be created using the log using command:

log using logname.txt, replace text
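If a do-file should add its output to a log created by an earlier script instead of starting fresh, the same command takes an append option (a minimal sketch, assuming logname.txt already exists):

log using logname.txt, append text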

b. Make do-files self-contained

Stata do-files should be self-contained.  In other words, each do-file should be able to run as a stand-alone program without relying on the user to first load a dataset or take other action.  The do-file should load its own data sets, execute its own commands, and initiate its own logging.  It shouldn’t rely on stored estimates or macros that it has not created itself (see my comments on global vs. local macros below).  I think it is worth pointing out that there are always some exceptions to these guidelines, particularly this one.  For example, it can be a good idea to use separate do-files to prepare and analyze data.  Clearly the preparation would need to be completed before the analysis; however, once a final data file has been prepared, the analysis can be run multiple times without re-running the preparation script.
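As a minimal sketch of a self-contained do-file (the file and variable names are hypothetical, reusing names from examples later in this post), everything the analysis needs happens inside the file itself:

version 13                // Set version for backward compatibility
capture log close         // Close any log left open by a previous run
log using selfcontained.txt, text replace

// Load the prepared data -- no reliance on whatever is already in memory
use analysis\mycooldata.dta, clear

// Run the analysis
summarize
regress grade3 grade1 grade2

log close
exit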

c. Identify the Stata version the do-file was written for

Stata changes over time.  New versions introduce new commands, retire old commands, and handle syntax differently.  Identifying the version should be done at the beginning of the do-file using a simple command:

version 13

Not identifying the version can create troubleshooting headaches later on.  Take, for example, the merge command in Stata 10 and earlier:

isid id
sort id
merge id using source.dta

Earlier versions of Stata required that the data be sorted on the merge variable (in this example, “id”) and assumed a one-to-one merge: both the master and the using dataset needed to be uniquely identified on the merge variable.  Thus, before merging, it was necessary to check for uniqueness (isid), sort on the variable, and then perform the merge.  In Stata 11 and higher, the merge command became more powerful, allowing for one-to-many, many-to-one, and even many-to-many merges based on the merge variable(s).  The new merge command also handles sorting internally, eliminating the need to pre-sort the master and using data sets.  The same process as above can now be accomplished using:

isid id
merge 1:1 id using source.dta

The newer merge command is perfectly capable of understanding the old syntax, but a do-file written with the new syntax will not run successfully on an earlier version of Stata.  If the version command at the beginning of the do-file specifies a release newer than the installation running it, execution will halt immediately rather than failing partway through.
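The explicit match types also read well in the new syntax.  As a hedged sketch of a many-to-one merge – assuming a hypothetical student-level file in memory and a school-level file schools.dta keyed on school_id:

merge m:1 school_id using schools.dta     // many students match one school record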

d. Use relative paths

Commands that read and write files (e.g., data files, log files, output files) should use folder (directory, if you must) paths that are relative to the location of the do-file or the project home.  As I’ll discuss in the next section, your do-file should identify a home location in the header; after that, everything else should be relative.  If the location of the data changes, the do-file will still execute without your having to find and replace every path in the file.  For example, a do-file might open a dataset:

use "c:\Users\JohnDoe\Documents\Stata Data\My Project\work\analysis\mycooldata.dta", clear

That path is all well and good until you move “My Project” and all of its contents onto a flash drive or external hard drive, or you migrate the project from a PC to a Mac or Linux platform.  To avoid future issues, use a relative path:

use analysis\mycooldata.dta, clear
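One way to set this up (a sketch, with a hypothetical home path) is to establish the project home once in the header and keep every path after that relative:

* Project home -- the only absolute path in the file
cd "C:\Users\JohnDoe\Documents\Stata Data\My Project\work"

// Everything below is relative to the project home
use analysis\mycooldata.dta, clear
save intermediate\mycooldata_v2.dta, replace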

This approach does, however, require that you use a uniform directory structure for your projects.  I create a separate folder for every project I work on (e.g., “My Project”) and that project folder contains consistent subfolders:

documentation/
drafts/
readings/
work/
work/raw
work/intermediate
work/analysis
work/output

Some of the subfolders listed above are self-explanatory.  The “documentation” folder contains project documentation, codebooks, memos, etc.; the “drafts” folder contains drafts of papers or reports.  I have to admit that the “readings” folder has become a bit obsolete since I now use Mendeley Desktop to organize PDFs and references, but in a group project the folder can be useful for sharing articles.  The key folder for Stata work is the “work” folder.  In this folder and its subfolders, I keep the “raw” data, “intermediate” working data files (i.e., temporary files), and the final “analysis” data sets.  The “output” folder contains tables, graphs, or other output generated by my do-files.

I also store all non-sensitive projects in a cloud-based folder.  I prefer Dropbox, but Box and Google Drive work equally well.  These services create a local folder on your computer that is then synchronized with the cloud.  This serves two really important purposes: first, it makes sure that my critical files are always backed up.  Second, since I have the cloud services installed on both my desktop and my laptop, files are automatically kept in sync between the two and the paths remain the same.  A do-file that I write on my desktop will run on my laptop with no modifications at all.

e. The pen is mightier than the sword: Use the right tools

There is little doubt (for me, anyway) that Stata is a pretty awesome analytical tool.  It is very difficult, however, for any program to be all things to all people.  The do-file editor built in to Stata definitely has its benefits, particularly with the built-in project management in Stata 13 and higher, but there are also some advantages to using a third-party editor.  It might be worth noting, and probably goes without saying, that you shouldn’t use Word or Pages to edit your do-files — word processors are most definitely not the right tool for the job because they are designed to handle formatting and layout issues that aren’t relevant to Stata code.  On my Mac, I use Textastic, a great editor that supports Stata syntax highlighting (e.g., commands are highlighted and the code is visually very easy to follow), automatic balancing of brackets and quotes, and automatic indentation.  Most importantly for me, it saves automatically, whereas the Stata do-file editor does not.  Textastic also has an iOS version and iCloud/Dropbox support, which makes it convenient to edit and view do-files on the go.  For PC users, Cox (2005) provides a web link with text editor recommendations: http://fmwww.bc.edu/repec/bocode/t/textEditors.html.  He cites Heiberger and Holland’s (2004) requirements for any text editor (pp. 633-4):

1. Permit easy modification of computing instructions and facilitate their resubmission for processing
2. Be able to operate on output as well as input
3. Be able to cut, paste, and rearrange text; to search documents for strings of text; to work with rectangular regions of text
4. Be aware of the language, for example, to illustrate the syntax with appropriate indentation and with color and font highlighting, to detect syntactic errors in input code, and to check the spelling of statistical keywords
5. Handle multiple files simultaneously
6. Interact cleanly with the presentation documents (reports and correspondence) based on the statistical results
7. Check spelling of words in natural languages
8. Permit placement of graphics within text
9. Permit placement of mathematical expressions in the text.

Points 6, 8, and 9 are less relevant for Stata do-files, but the remaining requirements hold.

2. Format and Style

a. Be consistent

Consistency is key to making sure your do-files are readable and easy for others (and your future self) to understand.  Develop consistent patterns and habits for all aspects of the format and style of your coding, including code structures, foreach and forvalues statements, and local and global macros.
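For example (a sketch; the loop style is simply my habit, and the variable names echo the rename examples below), I use one shape for every loop: a lowercase local name, the opening brace on the command line, and the body indented once:

foreach grade of varlist grade1 grade2 grade3 {
     summarize `grade'
}

forvalues i = 1/3 {
     summarize grade`i'
}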

b. Comment and document

Comments are critical to making your do-file easy to understand and refer back to.  Stata allows three basic methods for typing comments:

* Comment
// Comment
/* Comment */

You can’t have too many comments – and you will certainly not regret the time spent on comments and internal documentation if you have to return to the file later.  However, comments are only useful if you can understand them later, so they must be both clear and accurate.

For the purposes of consistency, I use the different comment notations in different ways.  I tend to use * comments for headers and dividers, to “comment out” specific lines, and to flag notes to self that require follow-up.  I use // comments to document do-file operations – to explain what the file is doing and why.  For example:

*--------------------------------------------------
* My Project
* samplefile.do
* 7/30/2014, version 1
* Michael S. Hill, University of California, Davis
*--------------------------------------------------

*--------------------------------------------------
* Program Setup
*--------------------------------------------------
version 13              // Set Version number for backward compatibility
set more off            // Disable partitioned output
clear all               // Start with a clean slate
set linesize 80         // Line size limit to make output more readable
macro drop _all         // clear all macros
capture log close       // Close existing log files
log using samplefile.txt, text replace           // Open log file
*--------------------------------------------------

// Open data file created by createmydata.do
use analysis\mycooldata.dta, clear

// Summarize data
summarize

// Close the log, end the file
log close
exit

In the example above, I also show the kind of header and initial commands that I run in every do-file.  Not only does this identify the do-file and its purpose, but it also identifies me, the author, and the date and version number.  If I come back to this do-file in the future, I’ll know what it was supposed to do.  As a side note, I would also introduce any global macros in the header.  After that, I would use only local macros.
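As a sketch of that convention (the macro names and path are hypothetical), the header defines the one global the project needs, and everything downstream uses locals, which vanish when the do-file ends:

*--------------------------------------------------
* Globals: defined once, in the header only
*--------------------------------------------------
global projdir "C:\Users\JohnDoe\Documents\Stata Data\My Project\work"

// Locals everywhere else
local controls "grade1 grade2"
use "$projdir\analysis\mycooldata.dta", clear
regress grade3 `controls'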

c. Use spacing and indentation well

Stata ignores extra spaces and tabs between the elements of a command.  So go wild with spacing – consistently, of course – and line up your commands in neat columns, tab to offset different parts of code (especially foreach and forvalues commands), and make your code pretty.  Yes, that’s right: pretty.  To borrow an example from J. Scott Long (2009), which is easier to read:

rename k12_unique_id sid
rename class_unique_id class_id
rename teacher_name teacher
rename semester_1_grade grade1
rename semester_2_grade grade2
rename final_course_grade grade3
rename pass_nopass pass

or this:

rename k12_unique_id         sid
rename class_unique_id       class_id
rename teacher_name          teacher
rename semester_1_grade      grade1
rename semester_2_grade      grade2
rename final_course_grade    grade3
rename pass_nopass           pass

Perhaps more importantly, which would be easier to troubleshoot if names started showing up wrong?

Using /// is a visually appealing and helpful way to organize a complex command across multiple lines.  Stata will execute the following two commands the same way:

use analysis\mycooldata.dta, clear

use ///
analysis\mycooldata.dta, ///
clear

There is likely no practical reason to split a use command across three lines, but as an example, Stata will treat those three lines of code as if they were one rather than ending the command at each line break.  It is also possible to include comments after each ///, just as comments can be included after //.  The difference is that // ends the command at the line break, while /// continues it onto the next line.
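For example (a sketch extending the use command from above), the text after each /// documents the piece of the command on that line:

use                          ///  open the analysis file
    analysis\mycooldata.dta, ///  created by createmydata.do
    clear                    //   replacing whatever is in memory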

It can also be helpful to use indentation to visually indicate that a command breaks across multiple lines.  This is easier to follow visually:

keep   sid class_id teacher grade1 ///
       grade2 grade3 pass

than this:

keep sid class_id teacher grade1 ///
grade2 grade3 pass

I also recommend not putting any code on a line after a brace { or }, and making sure that braces line up for easier visual tracking.

foreach var of varlist * {
     sum `var'
     rename `var' data_`var'
}

I also tend to group similar functions together without a space between lines, whereas a new set of functions gets a space between lines.

rename k12_unique_id     sid
rename class_unique_id   class_id

label variable sid          "Student ID"
label variable class_id     "Course ID"

d. Don’t substitute brevity for readability

Stata usually allows you to abbreviate commands, options, and variable names – variable names down to the fewest characters that uniquely identify them, and commands down to a documented minimum abbreviation.  This is particularly handy when entering commands directly into the console.  However, in do-files, too much brevity can make it difficult to decipher a command later, particularly where variables are concerned.  Make sure that if you use abbreviations for commands, you can still identify the command from the abbreviation.  Take, for example:

summarize grade1
sum grade1
su grade1

Each of the three commands will cause Stata to summarize the values of the variable “grade1,” but “su” may be unclear down the road.  The abbreviation “sum,” on the other hand, is clear in its meaning and considerably shorter than the full “summarize.”  As with all things, the key here is consistency.  If the do-file is well documented and the abbreviations are consistently applied, then there should be no problems.

I also reject any argument that spacing, tabs, multiple lines, or fully spelled-out commands slow down the execution of my do-file.  If an extra line of spacing makes my file easier to read, a few extra nanoseconds are irrelevant.

e. Stata is not C

To quote from Cox (2005), “Stata is not C” (or Pascal, for that matter).  It is not necessary to end commands with a semi-colon.  For those not familiar with the command, using:

#delimit ;

will cause Stata to treat all characters up to the next semi-colon as a single command.  From the earlier example, Stata will treat:

#delimit ;
keep   sid class_id teacher grade1
       grade2 grade3 pass;

the same as:

keep   sid class_id teacher grade1 ///
       grade2 grade3 pass

Cox (2005) describes the use of /// as “tidier.”  I agree with this, and would extend the rationale for avoiding the semi-colon by pointing out that most of the time my commands remain on a single line.  It is less likely I will need to span multiple lines with a ///, and when I do it is generally because I’m trying to improve readability.  The use of /// at the end of lines also allows me to place comments after the /// to explain why I am organizing my command the way I am.  The /// is more elegant and in keeping with Stata’s design — switching to a semi-colon requires actually telling Stata to behave counter to its default mode.

With that being said, the semi-colon delimiter has its time and place.  However, it should be the exception, not the rule.
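One place it can earn its keep (a sketch; the graph options are illustrative, reusing the grade variables from above) is a long graph command with many options:

#delimit ;
graph twoway (scatter grade3 grade1)
             (lfit grade3 grade1),
    title("Final grade vs. first-semester grade")
    ytitle("Final grade") xtitle("Semester 1 grade")
    legend(off);
#delimit cr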

Some final comments

Research in a vacuum is useless.  We, as researchers, have an obligation to communicate our work — both what we have done/discovered and how we discovered it.  Documenting how and why we do what we do is an important part of this communication process, and if we are using Stata (or any other statistical package for that matter), our code is part of our documentation.  My goal for every do-file is that anybody else with a passing familiarity with Stata should be able to review my file and understand what I did as well as the how and why behind it.

None of what has been written here is intended to be “The Law of Stata” or any hard-and-fast rule of coding.  It certainly isn’t “wrong” to do things in another way.  These guidelines are intended to be helpful — they will be for some; they may not be for others.  I encourage comments and suggestions, and I intend to update this post as Stata changes or as conventions change.

Improving Literacy Beyond School Walls

Save the Children is an international, non-governmental (nonprofit) organization focused on providing services and support to children in 120 countries around the world.  Literacy Boost is a key education program for Save the Children, providing student materials, teacher professional development, assessment programs, and community-based programs to target communities.

Literacy Boost is a particularly important program in the Philippines, where students often have difficulty accessing education, especially in rural areas.  In fall 2014, I conducted a program evaluation for Save the Children Philippines to help the organization better understand how programs and resources were being implemented in the classroom by teachers following intensive professional development.  I spent five weeks in the Philippines, including one week of classroom observations and interviews in Metro Manila and two weeks of observations in South Central Mindanao.  From the executive summary:

Literacy Boost, a comprehensive community-focused literacy program, was implemented by the Philippine Country Office of Save the Children in two target areas beginning in 2012. Key among the strategies included in Literacy Boost is a proprietary teacher professional development program designed to improve literacy instruction in targeted primary grade classrooms. The professional development complements a national curricular shift to “mother tongue-based multilingual instruction” mandated by the Philippine Department of Education (DepEd). This evaluation was conducted in the two Save the Children program implementation areas to assess the level to which trained teachers are implementing Literacy Boost strategies in their classrooms. Research teams observed 29 classrooms, 10 in Metro Manila and 19 in South Central Mindanao, and conducted interviews with school administrators, teachers, and students to determine the extent to which strategies were being implemented and to answer six specific research questions related to program implementation.

In addition to the importance of the field research, the experience provided an incredible opportunity to see the Philippines from a non-tourist perspective, immersing myself in the communities and the culture.  I made connections with remarkable educators and community activists who are working against generational poverty and historically low literacy rates in deeply resource-scarce communities to make a difference in the lives of thousands of students.  My final report will be released by Save the Children, but here are some of the pictures from my experience.

Resources for Brandman University DNP Students

Dear Biostatisticians –

This page contains resources to help with the statistical analysis you will be conducting for your Clinical Scholarly Project.  Please use the menus above (Resources > DNPU-701) to find links and resources, instructions for getting help with your statistical analysis, videos from the Brandman BioStats class, SPSS help, and example articles.