Ksenia Se (Kseniase)
Recent Activity
Replied to their post · 3 days ago
Posted an update · 3 days ago
From Prompt Engineering to Context Engineering: Main Design Patterns
Earlier on, we relied on clever prompt wording; now, structured and complete context matters more than magic phrasing. The next year is going to be a year of context engineering, which expands beyond prompt engineering. The two complement each other: prompt engineering shapes how we ask, while context engineering shapes what the model knows, sees, and can do.
To keep things clear, here are the main techniques and design patterns in both areas, with some useful resources for further exploration:
▪️ 9 Prompt Engineering Techniques (configuring input text)
1. Zero-shot prompting – giving a single instruction without examples. Relies entirely on pretrained knowledge.
2. Few-shot prompting – adding input–output examples to encourage the model to show the desired behavior (a combined sketch appears after this list). ⟶ https://arxiv.org/abs/2005.14165
3. Role prompting – assigning a persona or role (e.g. "You are a senior researcher," "Say it as a specialist in healthcare") to shape style and reasoning. ⟶ https://arxiv.org/abs/2403.02756
4. Instruction-based prompting – explicit constraints or guidance, like "think step by step," "use bullet points," "answer in 10 words"
5. Chain-of-Thought (CoT) – encouraging intermediate reasoning traces to improve multi-step reasoning. It can be explicit ("let’s think step by step"), or implicit (demonstrated via examples). ⟶ https://arxiv.org/abs/2201.11903
6. Tree-of-Thought (ToT) – the model explores multiple reasoning paths in parallel, like branches of a tree, instead of following a single chain of thought (see the branching sketch after this list). ⟶ https://arxiv.org/abs/2305.10601
7. Reasoning–action prompting (ReAct-style) – prompting the model to interleave reasoning steps with explicit actions and observations. It defines action slots and lets the model generate a sequence of "Thought → Action → Observation" steps (see the loop sketch after this list). ⟶ https://arxiv.org/abs/2210.03629
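To make techniques 1–5 concrete, here is a minimal sketch that combines role, few-shot, instruction-based, and Chain-of-Thought prompting in one chat-style request. It assumes an OpenAI-style Python SDK (openai>=1.0); the model name, persona, and example pairs are placeholders, not recommendations.

```python
# Minimal sketch: role + few-shot + instruction-based + CoT prompting in one request.
# Assumes an OpenAI-style SDK (openai>=1.0); model name and examples are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Role prompting: a persona, plus explicit instructions (format and length constraints).
system = (
    "You are a senior healthcare researcher. "
    "Answer in at most three bullet points."
)

# Few-shot prompting: input-output pairs demonstrating the desired behavior.
examples = [
    ("Summarize: Aspirin reduces fever and mild pain.",
     "- Antipyretic and analgesic\n- Used for mild pain and fever"),
    ("Summarize: Metformin lowers blood glucose in type 2 diabetes.",
     "- First-line oral antidiabetic\n- Lowers hepatic glucose production"),
]

messages = [{"role": "system", "content": system}]
for user_msg, assistant_msg in examples:
    messages.append({"role": "user", "content": user_msg})
    messages.append({"role": "assistant", "content": assistant_msg})

# The actual query, with an explicit Chain-of-Thought trigger appended.
messages.append({
    "role": "user",
    "content": "Summarize: Statins lower LDL cholesterol. Let's think step by step.",
})

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)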
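For Tree-of-Thought, a minimal sketch of the branching idea under simple assumptions: keep a small beam of partial reasoning paths, expand each with several candidate thoughts, and let the model itself score which branches survive. The llm function is a placeholder for any completion call; the depth, beam size, and scoring prompt are illustrative, not from the paper.

```python
# Minimal Tree-of-Thought-style sketch: instead of one chain, keep a beam of partial
# reasoning paths, expand each with candidate "thoughts", score them, keep the best.
from typing import List

def llm(prompt: str) -> str:
    """Placeholder for a real model call (API or local model)."""
    raise NotImplementedError

def propose_thoughts(question: str, path: List[str], k: int = 3) -> List[str]:
    """Ask the model for k alternative next reasoning steps for one branch."""
    prompt = (f"Question: {question}\nReasoning so far:\n" + "\n".join(path) +
              f"\nPropose {k} different possible next steps, one per line.")
    return llm(prompt).strip().splitlines()[:k]

def score_path(question: str, path: List[str]) -> float:
    """Ask the model to rate how promising a partial reasoning path is (0-10)."""
    prompt = (f"Question: {question}\nReasoning:\n" + "\n".join(path) +
              "\nRate 0-10 how likely this reasoning leads to a correct answer. "
              "Reply with a number only.")
    try:
        return float(llm(prompt).strip())
    except ValueError:
        return 0.0

def tree_of_thought(question: str, depth: int = 3, beam: int = 2) -> List[str]:
    frontier = [[]]  # each element is one partial reasoning path
    for _ in range(depth):
        candidates = [path + [t] for path in frontier
                      for t in propose_thoughts(question, path)]
        candidates.sort(key=lambda p: score_path(question, p), reverse=True)
        frontier = candidates[:beam]  # keep only the most promising branches
    return frontier[0]  # best reasoning path found
```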
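And for ReAct-style prompting, a minimal sketch of the Thought → Action → Observation loop. Again, llm is a placeholder; the single calculate tool, the Action: tool[input] format, and the "Final Answer:" stop marker are illustrative conventions rather than anything prescribed by the paper.

```python
# Minimal ReAct-style sketch: the model alternates Thought -> Action -> Observation
# until it emits a final answer. Tool set and output format are illustrative.
import re

def llm(prompt: str) -> str:
    """Placeholder for a real model call."""
    raise NotImplementedError

def calculate(expression: str) -> str:
    """Toy tool: evaluate a simple arithmetic expression."""
    if not re.fullmatch(r"[0-9+\-*/(). ]+", expression):
        return "error: unsupported expression"
    return str(eval(expression))  # fine for a sketch; never eval untrusted input

TOOLS = {"calculate": calculate}

def react(question: str, max_steps: int = 5) -> str:
    transcript = (
        "Answer the question by interleaving lines of the form\n"
        "Thought: ...\nAction: tool_name[input]\nObservation: ...\n"
        "and finish with 'Final Answer: ...'. Available tool: calculate.\n"
        f"Question: {question}\n"
    )
    for _ in range(max_steps):
        step = llm(transcript)            # model produces Thought + Action (or answer)
        transcript += step + "\n"
        if "Final Answer:" in step:
            return step.split("Final Answer:")[-1].strip()
        match = re.search(r"Action:\s*(\w+)\[(.*?)\]", step)
        if match:
            tool, arg = match.group(1), match.group(2)
            observation = TOOLS.get(tool, lambda _: "unknown tool")(arg)
            transcript += f"Observation: {observation}\n"  # fed back to the model
    return "no answer within step budget"
```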
Read further ⬇️
Also subscribe to Turing Post: https://www.turingpost.com/subscribe
Posted an update · 10 days ago
6 Comprehensive Resources on AI Coding
AI coding is moving fast, and it’s getting harder to tell what actually works. Agents, workflows, context management and many other aspects are reshaping how software gets built.
We’ve collected a set of resources to help you understand how AI coding is evolving today and what building strategies work best:
1. https://huggingface.co/papers/2508.11126
Provides a clear taxonomy, compares agent architectures, and exposes practical gaps in tools, benchmarks, and reliability that AI coding agents now struggle with
2. https://huggingface.co/papers/2511.04427
This survey from Carnegie Mellon University shows causal evidence that LLM agent assistants deliver short-term productivity gains but have lasting quality costs that can slow development over time
3. https://huggingface.co/papers/2510.12399
Turns Vibe Coding from hype into a structured field, categorizing real development workflows. It shows which models, infrastructure, tool requirements, context, and collaboration setups affect real software development outcomes
4. https://huggingface.co/papers/2511.18538 (from Chinese institutes and companies like ByteDance and Alibaba)
Compares real code LLMs, shows how training and alignment choices affect code quality and security, and connects academic benchmarks to everyday software development
5. Build Your Own Coding Agent via a Step-by-Step Workshop ⟶ https://github.com/ghuntley/how-to-build-a-coding-agent
A great guide that covers the basics of building an AI-powered coding assistant – from a chatbot to a file reader/explorer/editor and code search (a minimal sketch of this tool pattern follows the list)
6. State of AI Coding: Context, Trust, and Subagents ⟶ https://www.turingpost.com/p/aisoftwarestack
Here is our in-depth analysis of where AI coding is heading and the new directions we see today – like agent swarms and the growing importance of context management – offering an emerging playbook beyond the IDE
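As a taste of what resource 5 walks through (this is not the workshop's code, only a sketch of the general pattern): expose file-reading and code-search tools that an agent loop can call on the model's behalf. The tool names and dispatch format here are assumptions for illustration.

```python
# Sketch of a coding agent's tool layer: file reading and naive code search exposed
# as callable tools. Names, limits, and the dispatch format are illustrative only.
from pathlib import Path

def read_file(path: str) -> str:
    """Tool: return a file's contents (truncated so it fits in the context window)."""
    return Path(path).read_text(encoding="utf-8", errors="replace")[:4000]

def search_code(root: str, needle: str) -> str:
    """Tool: naive grep over a source tree, returning matching lines with locations."""
    hits = []
    for p in Path(root).rglob("*.py"):
        for lineno, line in enumerate(p.read_text(errors="replace").splitlines(), 1):
            if needle in line:
                hits.append(f"{p}:{lineno}: {line.strip()}")
    return "\n".join(hits[:50]) or "no matches"

TOOLS = {"read_file": read_file, "search_code": search_code}

def run_tool(name: str, *args: str) -> str:
    """Dispatch a tool call requested by the model, returning the observation text."""
    if name not in TOOLS:
        return f"unknown tool: {name}"
    try:
        return TOOLS[name](*args)
    except Exception as exc:  # surface errors to the model instead of crashing
        return f"tool error: {exc}"

# Example: the agent loop would pass ("search_code", "src", "TODO") parsed from the
# model's output, then append run_tool(...)'s result to the transcript as an observation.
```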
If you like it, also subscribe to the Turing Post: https://www.turingpost.com/subscribe