Tasks Versus Skills Part 6 & Close: An Instructional Quality AI Agent (IQA)
A 7-part series on AI, the state of tasks versus skills, and some provocative what-if concepts for Learning and Development, HR and Talent teams.
This is Part 6 of a series called Tasks Versus Skills: Part 1: Introduction, Part 2: Playbook, Part 3: Let Learning Breathe, Part 4: Task Intelligence Control Room, Part 5: Tasks as EX Product, Part 6: IQA Prototype, Part 7: Talent Is Not a Commodity; Google Books “Tasks vs Skills” free ebook (Parts 1-6 only).
What-If #4: An Instructional Quality AI Agent (IQA) and Close
Measuring content quantity and consumption is a given; measuring instructional quality is next
(Photos by Hermes Rivera & Maite Paternain on Unsplash)
Whether we like it or not, a key measure of learning adoption has been volume-based. More specifically, a course may be considered successful if it shows high consumption, a high completion rate, or a large number of hours taken per year, per learner. There is merit in this: a volume-based measure is fairly accurate when a course is tracked in a Learning Management System (LMS) or other platform. And in many cases, compliance and regulatory standards, particularly in pharma, energy, manufacturing, financial services, and other industries, require a similar quantity measure.
For those of us working in Learning and Development (L&D), this topic of “learning hours” has been a bit contentious, with plenty of pros and cons. On one side, tracking learning hours or the quantity of content built and deployed provides a reasonably accurate picture of uptake, visibility, and a uniform understanding of significance. On the other side, tracking engagement, effectiveness, or the quality of assessment and design can provide more accurate insights into the breadth of work and workers’ interests, as well as relevance to the business.
Our fourth What-If, An Instructional Quality AI Agent, raises the question… What if we could more accurately measure the overall quality of a course beyond consumption or compliance? Was it designed completely and correctly? What if we had a measure or tool to evaluate the Instructional Quality of an existing course--regardless of whether it was built internally or purchased externally--as well as gauge the quality of design for a new course under construction? What if this tool leveraged industry-acknowledged frameworks and a set of Instructional Quality ‘dimensions’ that allowed us to score various formats, from in-person courses to online programs?
And the most important what-if… What if we can task an AI Agent to do this for us, at scale, and with consistent objectivity?
Welcome to our fourth and final concept.
Introducing our Instructional Quality AI Agent or IQA.
This is a custom-built AI app or prototype I’ve developed in OpenAI’s API engine.
In the spirit of ‘if you want to learn a tool, use the tool’, this final Tasks vs Skills section is less of a read and more of a hands-on experience, showing how organized task-entries influence the output quality of an LLM. A parallel goal is to test this hypothesis:
An AI agent trained on sound instructional design principles can reliably evaluate and score the quality and efficacy of training content, providing accurate and comprehensive insights beyond traditional consumption or completion metrics.
Here are the general steps or process you’ll follow with IQA, your new Instructional Quality Agent:
Let’s break this down a bit. Six general principles guide how IQA is structured, how it processes reliable inputs such as industry frameworks and the proposed Quality Dimensions, and how to best leverage an LLM’s use of reason and logic (albeit we’re still in the early days).
My Six General IQA Principles
1. As the critique of “Quality” can be subjective and guided by its own biases, it’s critical that we apply as much objective judgment as possible with this approach.
2. Related, we will leverage accepted industry methods, models, or frameworks for this objectivity.
3. As these frameworks have their own focus, philosophies on approach, and varied uses or methods, we will consolidate or harmonize all IQA findings with Quality Dimensions or attributes, adding another layer of objectivity and detail.
4. It’s recognized that an LLM’s interpretation of critique or scrutiny can often be inconsistent, vague, or ambiguous. So, let’s add some user controls to determine the level of critique.
5. Repeat… This is an exploration or prototype. The choice of frameworks and dimensions is not based on science or data. That said, users are encouraged to modify, add, or remove built-in frameworks and methods to make this tool your own.
6. Finally, as IQA is an experiment, the intention is to highlight the importance of this series’ core focus on task-centricity and task-accuracy. More so, this is a proof-of-concept that we L&D colleagues can debate, improve upon, and use to continue leveraging AI’s capabilities in areas we have contemplated and struggled with in our industry.
Background: Frameworks and Dimensions
IQA will leverage 10 industry approaches or frameworks as part of its “objective” analysis, resulting in summarization and recommendations. As this is a proof-of-concept, these models are certainly up for debate. That said, I am categorizing these frameworks into three areas showcasing requirements primarily from a corporate training or workforce development perspective.
These three categories are: a) Course Design: was the course crafted to meet its instructional goals with a strong level of efficacy?; b) Transfer and Application: will the learner be able to use and apply what they gathered from the course in their job or intention?; and c) Performance Management: does the course directly or indirectly affect the broader fulfillment of one’s job, tasks, or career?
The 10 models or frameworks (adding detail from Perplexity):
Frameworks
COURSE DESIGN
Dick and Carey Instructional Design Model: The Dick and Carey Instructional Design Model is a systematic approach to designing and developing instructional materials. Developed by Walter Dick and Lou Carey in 1978, this model views instruction as a system where all components (instructor, learners, materials, instructional activities, delivery system, and learning environment) work together to achieve desired learning outcomes. Additional detail.
SAM (Successive Approximation Model): The Successive Approximation Model (SAM) is an agile, iterative approach to instructional design developed by Dr. Michael Allen in 2012. It emphasizes rapid prototyping, collaboration, and flexibility throughout the development process. Additional detail.
Shackleton 5Di Model (Nick Shackleton-Jones): Developed by Nick Shackleton-Jones, the 5Di Model is a user-centered learning design process that focuses on developing resources and experiences to support performance and development. It serves as an alternative to the traditional ADDIE model, addressing the need for a more iterative approach in the digital age and emphasizing learner needs over business requirements. Additional detail.
Learning Arches and Learning Spaces (Kaospilot): Learning Arches is a methodology for designing and visualizing transformative, collaborative, and experiential learning experiences. Developed by Simon Kavanagh at Kaospilot, an entrepreneurial design school, this approach draws inspiration from story arcs to create a learning journey in three purposeful acts: set, hold, and land. Additional detail.
TRANSFER & APPLICATION
Learning Transfer Evaluation & The Decisive Dozen (Dr. Will Thalheimer, PhD): The Learning Transfer Evaluation Model (LTEM) is an eight-tier evaluation framework designed to help organizations assess and improve the effectiveness of their learning interventions. Additional detail.
Action Mapping (Cathy Moore): Action Mapping is a visual approach to training design that aims to create action-packed materials dedicated to improving business performance. It emphasizes changing what people do, not just what they know, by focusing on measurable business goals and designing realistic practice activities. Additional detail.
Wiggins and McTighe Backwards Design Model (UbD): The Wiggins and McTighe Backwards Design Model, also known as Understanding by Design (UbD), is an instructional design framework that emphasizes starting with the desired learning outcomes and working backwards to develop curriculum, assessments, and instruction. This approach prioritizes learning transfer and work application. Additional detail.
PERFORMANCE MANAGEMENT
Human Performance Technology Model (Mager and Pipe): Robert Mager and Peter Pipe’s HPT framework emphasizes a step-by-step approach to identify the root causes of performance issues and determine the most appropriate solutions, with a focus on cost-effectiveness and efficiency. Additional detail.
Behavior Engineering Model (BEM) (Thomas F. Gilbert): The Behavior Engineering Model is a systematic framework for analyzing and improving human performance in organizations. It focuses on identifying and addressing factors and learning opportunities that influence behavior and performance at both the environmental and individual levels. Additional detail.
ADDIE Model (Analysis, Design, Develop, Implement, Evaluate): The ADDIE model is a systematic instructional design framework used to create effective training and educational programs that impact a worker’s performance. Developed in the 1970s, it remains one of the most widely used instructional design models due to its simplicity and effectiveness. Additional detail.
While the 10 frameworks or models provide industry guidance, their intended use cases differ. Some fall into the actual needs-analysis and design phase; others are more evaluative, gauging how relevant the course content will be to the learners’ work.
To merge or unify findings into a consistent Quality summary that also provides more actionable detail for an instructional designer or external provider, here are 17 attributes our IQA prototype will use for the summary and set of recommendations. These attributes were developed with Marc Zao-Sanders’ Filtered team and highlighted in my prior article “The Nutrition of Training Content & Six Regenerative Learning Considerations”. For the final scoring, I’ve provided a numerical weight as a rubric. Again, as this is an exploratory prototype, these dimensions and the prior frameworks are not based on any scientific approach, and all are up for healthy debate.
Quality Dimensions [and numerical weight]
Engagement: Drives active participation and sustained learner motivation [12]
Interactivity: Real-time practice, simulation, and collaborative learning [10]
Accessibility: Ensures inclusive learning for all abilities and situations [8]
Visual Design: Clean, professional aesthetics supporting learning [7]
Reliability: Consistent quality across all content elements [7]
Innovation: Modern, tech-enabled approaches to skill development [6]
Actionability: Directly applicable to job tasks and performance improvement [6]
Feedback & Assessment: Clear metrics and timely feedback for learning validation [6]
Learner Support: Resources and tools enabling successful completion [6]
Structure: Logical flow with clear learning pathways [6]
Topicality: Current, relevant, and aligned with industry needs [5]
Cultural Inclusivity: Respects and reflects diverse perspectives [5]
Suitability: Appropriate language and tone for target audience [4]
Format Variety: Multiple learning modalities and delivery options [4]
Authoritativeness: Evidence-based content from verified expert sources [4]
Objectivity: Balanced presentation without commercial bias [1]
Findability: Easy content location and navigation [1]
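To make the rubric concrete, here is a minimal sketch of how the 17 weighted dimensions could roll up into a single quality score. This is illustrative only, not the prototype’s actual code; the function name and the roll-up formula are my assumptions, using the weights listed above and IQA’s 1-10 per-dimension scale.

```python
# Hypothetical sketch of the IQA dimension rubric -- not the production code.
WEIGHTS = {
    "Engagement": 12, "Interactivity": 10, "Accessibility": 8,
    "Visual Design": 7, "Reliability": 7, "Innovation": 6,
    "Actionability": 6, "Feedback & Assessment": 6, "Learner Support": 6,
    "Structure": 6, "Topicality": 5, "Cultural Inclusivity": 5,
    "Suitability": 4, "Format Variety": 4, "Authoritativeness": 4,
    "Objectivity": 1, "Findability": 1,
}

def weighted_quality_score(scores: dict[str, float]) -> float:
    """Combine per-dimension scores (1-10) into a single 0-100 weighted score."""
    total_weight = sum(WEIGHTS.values())
    weighted = sum(WEIGHTS[d] * scores[d] for d in WEIGHTS)
    # Normalize against the maximum possible per-dimension score of 10.
    return round(100 * weighted / (10 * total_weight), 1)

# Example: a course scoring 8/10 on every dimension lands at 80.0 overall.
print(weighted_quality_score({d: 8 for d in WEIGHTS}))
```

Note the weighting means a strong Engagement score moves the overall result twelve times as much as Findability, which is the point of a rubric: it encodes what we claim matters most, and it is exactly the part worth debating.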
________
Ready to start?
Your steps are fairly straightforward. Note these important pointers:
This is a prototype or proof-of-concept to see if a customized AI Agent can be trained using proven L&D frameworks to determine the instructional quality of a course or training asset. That is, IT IS NOT INTENDED TO BE COMPREHENSIVE NOR EXPECTED TO BE SUPER ACCURATE AS A V1.
IQA v1 is limited in features. Not a lot of bells and whistles at this point, and certainly not industry-grade. This also means IQA MAY BE A BIT SLOW IN THIS FIRST DEPLOYMENT.
Following OpenAI API Terms, your uploaded courses, files, and links are secure and follow OpenAI’s core Data and Privacy guidelines (Data Collection and Security, Use of APIs, No Data Used for Training, Transparency).
As noted throughout IQA’s instructions:
You can tweak or customize each evaluation step by removing an existing framework or model (e.g., “I prefer not using the ADDIE in this round”).
Related, if there’s a model or framework not in IQA’s existing list, you can add your own, or one from an additional source.
With each evaluation result, you can ask IQA to re-focus on certain findings, double down on one result, etc. Again, results may not be detailed or accurate in this v1.
Finally, for the two rubrics IQA uses (a 1-100 scale for the 10 Frameworks evaluation, and a 1-10 scale for the 17 Quality Dimensions evaluation), the results are not scientific, not truly evidence-based, and not exact. Again, as this is a proof-of-concept, we want to see if this concept is directionally correct and worth advancing.
IQA Instructions
Access the Instructional Quality Agent prototype here:
https://learncontent-iqe.streamlit.app/
After the Introduction, select “Let’s Start”.
On the next screen, review the Instructions and Upload Your Course Materials section.
Upload:
If you’re uploading course files, IQA can accept docs, PDFs, MP3s, etc. However, for this v1, there is a 200MB LIMIT PER FILE and IQA DOES NOT CURRENTLY WORK WITH ZIP FILES.
If you’re pasting a URL, for this v1, WE CAN ONLY SUPPORT YOUTUBE URLs. There are 100s you can use as a test here.
If you are attaching a YouTube course URL, IQA ONLY WORKS IF A TRANSCRIPT IS AVAILABLE.
After your upload, IQA will provide a summary to reconfirm your submission, and you will be asked to validate the uploaded course.
Submit your response, or simply reply “continue” or similar.
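The v1 upload constraints above can be summarized in a small validation sketch. This is hypothetical code, not the app’s actual implementation; the function names and the accepted-extension list are my assumptions based on the pointers in this section.

```python
# Hypothetical sketch of IQA's v1 upload checks -- illustrative only.
MAX_FILE_BYTES = 200 * 1024 * 1024  # the stated 200MB per-file limit
ALLOWED_EXTS = {".doc", ".docx", ".pdf", ".mp3"}  # "docs, PDFs, MP3s etc."

def validate_upload(filename: str, size_bytes: int) -> tuple[bool, str]:
    """Check one uploaded file against the v1 constraints."""
    ext = filename[filename.rfind("."):].lower() if "." in filename else ""
    if ext == ".zip":
        return False, "ZIP files are not supported in v1."
    if ext not in ALLOWED_EXTS:
        return False, f"Unsupported file type: {ext or '(none)'}"
    if size_bytes > MAX_FILE_BYTES:
        return False, "File exceeds the 200MB per-file limit."
    return True, "OK"

def validate_url(url: str) -> tuple[bool, str]:
    """v1 only supports YouTube links, and only when a transcript exists."""
    if "youtube.com" not in url and "youtu.be" not in url:
        return False, "Only YouTube URLs are supported in v1."
    return True, "OK (transcript availability is checked after submission)"
```

For example, a 300MB PDF or a .zip bundle would be rejected up front, while a YouTube link passes this first gate and is then checked for an available transcript.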
The remaining IQA steps are straightforward:
Provide a 1-10 level of depth or detail you prefer for the quality evaluation
Provide evaluation results for the 10 Frameworks
Provide summary evaluation results using the 17 Quality Dimensions that aggregate or combine all prior Frameworks results.
Provide an overall summary
Provide a downloadable summary of all results
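Taken together, the steps above can be sketched as a simple pipeline. This is an illustrative outline only; `call_llm`, `run_iqa`, and the canned responses are hypothetical stand-ins, not the prototype’s actual OpenAI API code.

```python
# Illustrative sketch of the IQA evaluation flow; `call_llm` is a hypothetical
# stand-in for the OpenAI API calls the real prototype makes.
FRAMEWORKS = [
    "Dick and Carey", "SAM", "Shackleton 5Di", "Learning Arches",
    "LTEM", "Action Mapping", "Backwards Design (UbD)",
    "HPT (Mager and Pipe)", "BEM (Gilbert)", "ADDIE",
]

def call_llm(prompt: str) -> str:
    """Placeholder for an LLM call; returns a canned evaluation here."""
    return "score + rationale"

def run_iqa(course_text: str, depth: int) -> dict:
    """Run the evaluation steps and return a downloadable-style summary."""
    assert 1 <= depth <= 10, "depth of critique is the 1-10 user control"
    # Step 1: evaluate the course against each of the 10 frameworks (1-100 scale).
    framework_results = {
        fw: call_llm(f"Evaluate against {fw} at depth {depth}:\n{course_text}")
        for fw in FRAMEWORKS
    }
    # Step 2: aggregate all framework findings into the 17 Quality Dimensions
    # (1-10 scale per dimension).
    dimension_summary = call_llm(
        "Consolidate these findings into the 17 Quality Dimensions: "
        + str(framework_results)
    )
    # Step 3: overall summary; Step 4: bundle everything for download.
    overall = call_llm("Write an overall summary: " + dimension_summary)
    return {
        "frameworks": framework_results,
        "dimensions": dimension_summary,
        "overall": overall,
    }
```

The design point is the two-pass structure: framework-by-framework evaluation first, then a consolidation pass into the Quality Dimensions, which is what gives IQA a consistent summary regardless of which frameworks a user keeps or removes.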
I’m hopeful the IQA prototype and What-If concept provide a practical example of how we can leverage a task-specific model to benefit our L&D industry, via the evaluation of Quality versus the norm of Quantity. Further, this tool is a summation of many themes and practices listed throughout this publication.
On a related note, if you are eager to continue this experiment, or perhaps a v2, please reach out here.
Final Note and Close: An Ambitious and Optimistic Future
“The future is already here. It’s just not evenly distributed” – William Gibson
A Boy Admires Green, Red, and Blue, by Mark Rothko, 1955, photo by Michael Newman. Source: Flickr; Lewis, Shane. "How Did Rothko Seek to Transcend the Visible World?" TheCollector.com, October 4, 2024.
One last metaphor. For artist Mark Rothko, the “primacy of light” was his Purpose and guided his choice of colors in his paintings: “It’s not color, it’s light.” If a skill has primacy, a task has recency. A skill is the color seen. A task provides the light for that color. You can’t have one without the other. Here's a less abstract explanation:
When we learn or remember things, two key patterns emerge. First, we tend to remember what we learned at the beginning (primacy effect) -- think of it like building a foundation. Second, we easily recall what we learned most recently (recency effect) -- like remembering the end of a movie better than the middle.
Skills are like that foundation - they benefit from primacy because they're fundamental building blocks that need to be solid and reliable. On the other hand, specific tasks we need to do are more influenced by recency - we focus on and act on what's fresh in our minds.
The magic happens when these two elements work together: our well-established skills and SBO motivations become most powerful and meaningful when we actively apply them with current, pressing tasks. It's like having a strong musical foundation on piano (skill) and using it to learn a new song (task)--the established skill makes the recent task possible, while the task gives the skill purpose and keeps it sharp.
Squaring the circle
While mathematicians have proven this geometric challenge impossible (at least with compass and straightedge), the integration of tasks and skills presents a more promising equation. Just as recent tasks benefit from deeply established skills (the recency and primacy effects), we can see that the apparent tension between tasks and skills is not an unsolvable puzzle; it's a dynamic partnership waiting to be fully realized, especially with the vitality of AI.
The future of L&D lies not in choosing between tasks and skills, but in understanding their natural symbiosis. Conjoined twins. Skills provide the foundational strength that makes task execution possible, while tasks give skills their practical purpose and keep them relevant. This relationship becomes even more crucial as AI tools and approaches enter the workplace, where AI excels at task automation but relies on human skills for direction, refinement, and meaning. Augmented, mutual productivity.
For L&D teams, this represents not an impossibility but an opportunity – our “blue moon” moment. By embracing both the quantifiable nature of tasks and the qualitative depth of skills, we can create learning ecosystems, experiences and new types of learning engagement that are practical and sustainable.
The future workplace won't be solely skills-based or task-driven, it will harmoniously be both. This isn't an actual squaring of the circle; it's recognizing that the circle and square were always meant to complement each other. That is, Tasks are not a footnote.
As L&D professionals, we stand at the threshold of a new era. And like building an effective learning experience, Learning takes its best shape when we are challenged to grow.
Every day is an opportunity to be Uncomfortably Excited.
Special thanks to colleagues who guided me as contributors and reviewers, and to so many who have inspired my thinking and curiosity on this subject.
Contributors: Cathy Moore, Clark Quinn, Felipe Hessel, Gianni Giacomelli, Giri Coneti, Jon Fletcher, Julie Dirksen, Megan Torrance, Nick Shackleton-Jones, Nina Bressler, Will Thalheimer, Ross Dawson
Inspirations: Allie K. Miller, Amanda Nolen, Andrew Kable (MAHRI), Bhaskar Deka, Brandon Carson, Brian Murphy, Chara Balasubrmanian, Dani Johnson, Darren Galvin, Dave Buglass Chartered FCIPD, MBA, Dave Ulrich, David Green 🇺🇦, David Wilson, Deborah Quazzo, Dennis Yang, Detlef Hold, Donald Clark, Donald H Taylor, Dr Markus Bernhardt, Dushyant Pandey, Egle Vinauskaite, Emma Mercer (Assoc CIPD, MLPI), Ethan Mollick, Gordon Trujillo, Guy Dickinson, Harish Pillay, Hitesh Dholakia, Isabelle Bichler-Eliasaf, Isabelle Hau, Joel Hellermark, Joel Podolny, Johann Laville, Jon Lexa, Josh Bersin, Josh Cavalier, Joshua Wöhle, Julian Stodd, Karen Clay, Karie Willyerd, Kate Graham, Kathi Enderes, Marga Biller, Marc Zao-Sanders, Meredith Wellard, Mikaël Wornoo🐺, Nico Orie, Noah G. Rabinowitz, Nuno Gonçalves, Oliver Hauser, Orsolya Hein, Patrick Hull, Peter Meerman, Peter Sheppard, Dr Philippa Hardman, Raffaella Sadun, Ravin Jesuthasan, CFA, FRSA, René Gessenich, Ross Dawson, Ross Garner, Sandra Loughlin, PhD, Simon Brown, Stacia Sherman Garr, Stefaan van Hooydonk, Stella Collins, Trish Uhl, PMP 👋🏻, Tony Seale, Zara Zaman
Acknowledging leveraging Perplexity, Gemini, ChatGPT, and Claude for research, formatting, testing links, challenging assumptions and aiding the creative process.
LICENSING
Unless otherwise noted, the contents of this series are licensed under the Creative Commons Attribution 4.0 International license.
Should you choose to exercise any of the 5R permissions granted you under the Creative Commons Attribution 4.0 license, please attribute me in accordance with CC's best practices for attribution.
If you would like to attribute me differently or use my work under different terms, contact me at https://www.linkedin.com/in/marcsramos/.
ADDITIONAL RESOURCES (ALL 7 PARTS OF THIS SERIES)
Wen, J., Zhong, R., Ke, P., Shao, Z., Wang, H., & Huang, M., (2024), “Learning task decomposition to assist humans in competitive programming”, Proceedings of the 2024 Conference of the Association for Computational Linguistics (ACL 2024), https://arxiv.org/pdf/2406.04604
Wang, Z., Zhao, S., Wang, Y., Huang, H., Shi, J., Xie, S., Wang, Z., Zhang, Y., Li, H., & Yan, J., (2024), “Re-TASK: Revisiting LLM tasks from capability, skill, and knowledge perspectives”, arXiv, https://arxiv.org/abs/2408.06904
Hubauer, T., Lamparter, S., Haase, P., & Herzig, D., (2018), “Use cases of the industrial knowledge graph at Siemens”, Proceedings of the International Workshop on the Semantic Web, CEUR Workshop Proceedings, https://ceur-ws.org/Vol-2180/paper-86.pdf
Erik Brynjolfsson, Danielle Li, Lindsey R. Raymond, “Generative AI At Work”, http://www.nber.org/papers/w31161 NATIONAL BUREAU OF ECONOMIC RESEARCH 1050 Massachusetts Avenue Cambridge, MA 02138 April 2023, revised November 2023, https://www.nber.org/system/files/working_papers/w31161/w31161.pdf
https://www.royermaddoxherronadvisors.com/blog/2016/april/the-knowledge-worker-in-healthcare/
Connie J. G. Gersick, J. Richard Hackman, (1990), “Habitual routines in task-performing groups”, Organizational Behavior and Human Decision Processes, Volume 47, https://doi.org/10.1016/0749-5978(90)90047-D, https://www.sciencedirect.com/science/article/pii/074959789090047D
World Economic Forum, (2023), “Future of Jobs Report 2023”, World Economic Forum, https://www3.weforum.org/docs/WEF_Future_of_Jobs_2023.pdf
Boston Consulting Group, (2024), “How people can create—and destroy—value with generative AI”, BCG, https://www.bcg.com/publications/2023/how-people-create-and-destroy-value-with-gen-ai
Kaisa Savola, (2024), “Challenges Preventing Organizations from Adopting a Skills-Based Model”, LinkedIn, https://www.linkedin.com/pulse/challenges-preventing-organizations-from-adopting-savola-she-her--24c6f/
Patel, A., Hofmarcher, M., Leoveanu-Condrei, C., Dinu, M.-C., Callison-Burch, C., & Hochreiter, S., (2024), “Large language models can self-improve at web agent tasks”, arXiv, https://arxiv.org/abs/2405.20309
Sagar Goel and Orsolya Kovács-Ondrejkovic, (2023), “Your Strategy Is Only as Good as Your Skills”, BCG Publications, https://www.bcg.com/publications/2023/your-strategy-is-only-as-good-as-your-skills
“Workplace 2.0: The Promise of the Skills-Based Organization”, (2024), Udemy Business, https://info.udemy.com/rs/273-CKQ-053/images/udemy-business-workplace-2.0-promise-skills-based-organization.pdf?version=0
Shakked Noy and Whitney Zhang (2023): “Experimental Evidence on the Productivity Effects of Generative Artificial Intelligence”, MIT Economics Department working paper. Retrieved March 13, 2023 from https://economics.mit.edu/sites/default/files/inline-files/Noy_Zhang_1_0.pdf
“AI at Work Is Here. Now Comes the Hard Part.”, (2024), 2024 Work Trend Index Annual Report from Microsoft and LinkedIn, https://www.microsoft.com/en-us/worklab/work-trend-index/ai-at-work-is-here-now-comes-the-hard-part
“LinkedIn Workplace Learning Report”, (2024), LinkedIn Learning, https://learning.linkedin.com/content/dam/me/business/en-us/amp/learning-solutions/images/wlr-2024/LinkedIn-Workplace-Learning-Report-2024.pdf
Macnamara BN, Berber I, Çavuşoğlu MC, Krupinski EA, Nallapareddy N, Nelson NE, Smith PJ, Wilson-Delfosse AL, Ray S. “Does using artificial intelligence assistance accelerate skill decay and hinder skill development without performers' awareness?”, Cogn Res Princ Implic. 2024 Jul 12;9(1):46. doi: 10.1186/s41235-024-00572-8. PMID: 38992285; PMCID: PMC11239631, https://pmc.ncbi.nlm.nih.gov/articles/PMC11239631/
Matt Beane, (2024), “The Skill Code: How to Save Human Ability in an Age of Intelligent Machines”, Harper, https://mitsloan.mit.edu/ideas-made-to-matter/how-can-we-preserve-human-ability-age-machines
Lazaric, N., (2012), “Routinization of Learning”, Encyclopedia of the Sciences of Learning. Springer, Boston, MA. https://doi.org/10.1007/978-1-4419-1428-6_246
Hering, A., & Rojas, A., (2024), “AI at Work: Why GenAI Is More Likely To Support Workers Than Replace Them.”, Indeed Hiring Lab, https://www.hiringlab.org/2024/09/25/artificial-intelligence-skills-at-work/
Morgan, T. P., (2024), “How GenAI Will Impact Jobs In the Real World”, HPCwire, https://www.hpcwire.com/2024/09/30/how-genai-will-impact-jobs-in-the-real-world/
https://trainingindustry.com/magazine/fall-2022/improving-the-productivity-of-knowledge-workers/
Marcin Kapuściński, (2024), “Boosting productivity: Using AI to automate routine business tasks”, TTMS, https://ttms.com/boosting-productivity-using-ai-to-automate-routine-business-tasks/
https://scheibehenne.com/ScheibehenneGreifenederTodd2010.pdf
Paulo Carvalho, (2024), “How Generative AI Will Transform Strategic Foresight”, IF Insight & Foresight, https://www.ifforesight.com/post/how-generative-ai-will-transform-strategic-foresight
Deok-Hwa Kim, Gyeong-Moon Park, Yong-Ho Yoo, Si-Jung Ryu, In-Bae Jeong, Jong-Hwan Kim, (2017), “Realization of task intelligence for service robots in an unstructured environment”, Annual Reviews in Control, Volume 44, ISSN 1367-5788, https://doi.org/10.1016/j.arcontrol.2017.09.013.
Kweilin Ellingrud, Saurabh Sanghvi, Gurneet Singh Dandona, Anu Madgavkar, Michael Chui, Olivia White, Paige Hasebe, (2023), “Generative AI and the future of work in America”, McKinsey Global Institute, https://www.mckinsey.com/mgi/our-research/generative-ai-and-the-future-of-work-in-america#/
MIT Technology Review Insights, (2024), “Multimodal: AI’s new frontier”, https://wp.technologyreview.com/wp-content/uploads/2024/05/Jina-AI-e-Brief-v4.pdf?utm_source=report_page&utm_medium=all_platforms&utm_campaign=insights_ebrief&utm_term=05.02.24&utm_content=insights.report
Sandra Ohly, Anja S. Göritz, Antje Schmitt, (2017), “The power of routinized task behavior for energy at work”, Journal of Vocational Behavior, Volume 103, Part B, https://doi.org/10.1016/j.jvb.2017.08.008
Baddeley, A.D. (2003). Working memory: looking back and looking forward. Nature Reviews Neuroscience, 4, p.829-839, https://pubmed.ncbi.nlm.nih.gov/14523382/
Baddeley, A.D. and Hitch, G. (1974). Working Memory. Psychology of Learning and Motivation, 8, p.47-89, https://www.sciencedirect.com/science/article/pii/S0960982209021332
Chandler, P. and Sweller, J. (1991). Cognitive Load Theory and the Format of Instruction. Cognition and Instruction, 8 (4), p. 293-332, https://www.tandfonline.com/doi/abs/10.1207/s1532690xci0804_2
Chandler, P. and Sweller, J. (1992). The split-attention effect as a factor in the design of instruction. British Journal of Educational Psychology, 62 (2), p.233–246, https://bpspsychub.onlinelibrary.wiley.com/doi/abs/10.1111/j.2044-8279.1992.tb01017.x
Clark, R.C., Nguyen, F. and Sweller, J. (2006). Efficiency in learning: evidence-based guidelines to manage cognitive load. San Francisco: Pfeiffer
Marzano, R.J., Gaddy, B.B. and Dean, C. (2000). What works in classroom instruction. Aurora, CO: Mid-continent Research for Education and Learning
Sweller, J. (1988). Cognitive Load during Problem Solving: Effects on Learning. Cognitive Science, 12, p.257-285, https://mrbartonmaths.com/resourcesnew/8.%20Research/Explicit%20Instruction/Cognitive%20Load%20during%20problem%20solving.pdf
Wenger, S.K., Thompson, P. and Bartling, C.A. (1980). Recall facilitates subsequent recognition. Journal of Experimental Psychology: Human Learning and Memory, 6 (2), p.135-144
Choi, S., Yuen, H. M., Annan, R., Monroy-Valle, M., Pickup, T., Aduku, N. E. L. & Ashworth, A. (2020). Improved care and survival in severe malnutrition through eLearning. Archives of disease in childhood, 105(1), 32-39. Doi:10.1136, https://pubmed.ncbi.nlm.nih.gov/31362946/
Chyung, Y. C. (2007). Learning object-based e-learning: content design, methods, and tools. Learning Solutions e-Magazine, August 27, 2007, 1-9. https://www.elearningguild.com/pdf/2/082707des-temp.pdf.
https://dobusyright.com/druckers-six-ideas-about-knowledge-work-environments-dbr-031/
Doolani, S., Owens, L., Wessels, C. & Makedon, F. (2020). vis: An immersive virtual storytelling system for vocational training. Applied Sciences, 10(22), 8143, https://ouci.dntb.gov.ua/en/works/l1mwRvyl/
Gavarkovs, A. G., Blunt, W. & Petrella, R. J. (2019). A protocol for designing online training to support the implementation of community-based interventions. Evaluation and Program Planning, 72(2019), 77-87, https://psycnet.apa.org/record/2018-61336-010
Kraiger, K. & Ford, J. K. (2021). The science of workplace instruction: Learning and development applied to work. Annual Review of Organizational Psychology and Organizational Behavior, 8, 45-72.
Longo, L. & Rajendran, M. (2021, November). A Novel Parabolic Model of Instructional Efficiency Grounded on Ideal Mental Workload and Performance. International Symposium on Human Mental Workload: Models and Applications (pp. 11-36). Springer, Cham, https://www.researchgate.net/publication/354893819_A_novel_parabolic_model_of_instructional_efficiency_grounded_on_ideal_mental_workload_and_performance
Merrill, M. D. (2018). Using the first principles of instruction to make instruction effective, efficient, and engaging. Foundations of learning and instructional design technology. BYU Open Textbook Network. https://open.byu.edu/lidtfoundations/using_the_first_principles_of_instruction
Rea, E. A. (2021). "Changing the Face of Technology": Storytelling as Intersectional Feminist Practice in Coding Organizations. Technical Communication, 68(4), 26-39, https://g.co/kgs/XAhhL77
Schalk, L., Schumacher, R., Barth, A. & Stern, E. (2018). When problem-solving followed by instruction is superior to the traditional tell-and-practice sequence. Journal of educational psychology, 110(4), 596, https://psycnet.apa.org/record/2017-57179-001
Shipley, S. L., Stephen, J. S. & Tawfik, A. A. (2018). Revisiting the Historical Roots of Task Analysis in Instructional Design. TechTrends, 62(4), 319-320. https://doi.org/10.1007/s11528-018-0303-8
Smith, P. L. & Ragan, T. J. (2004). Instructional design 3rd Ed. John Wiley & Sons, https://www.amazon.com/Instructional-Design-Patricia-L-Smith/dp/0471393533
Willert, N. (2021). A Systematic Literature Review of Gameful Feedback in Computer Science Education. International Journal of Information and Education Technology, 11(10), 464- 470, https://www.researchgate.net/publication/354317969_A_Systematic_Literature_Review_of_Gameful_Feedback_in_Computer_Science_Education