I regularly inherit TIA Portal projects from other engineers. Some are from agencies, some from system integrators, some from internal team members who've moved on. The pattern is almost always the same: the system works, more or less, but the code is structurally unsound in ways that will make it progressively harder to modify, debug, and support.
These aren't necessarily bad engineers. Most of them are competent programmers who know TIA Portal's features well. The problem is that knowing how to use a tool isn't the same as knowing how to structure a project that will survive contact with the real world - where requirements change, operators do unexpected things, and the maintenance team who inherits your work has never seen your code before.
Here are the patterns I see most often, and what I'd do differently.
1. Everything in OB1
The most obvious and most common. The main Organisation Block becomes a dumping ground for logic that should be in function blocks. Conditional branches that span hundreds of networks. Timers and counters scattered throughout with no clear relationship to the functions they serve. Temporary variables reused across unrelated sections of logic.
This usually starts innocently - a quick modification here, a new feature there - but the cumulative effect is code that can only be understood by reading it linearly from top to bottom, because nothing is encapsulated and everything depends on everything else.
The fix is modular architecture from day one. Each functional area of the machine gets its own function block with clearly defined inputs and outputs. OB1 becomes an orchestration layer that calls these blocks and passes data between them. If a new engineer needs to understand the infeed conveyor logic, they open the infeed FB - they don't scroll through 400 networks looking for the relevant sections.
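As a rough illustration, OB1 reduced to an orchestration layer might look like this in SCL. All block and tag names here are invented for the example:

```
// OB1 as orchestration only - each functional area lives in its own FB.
// Instance, tag, and DB names are illustrative.
"fbInfeedConveyor_Inst"(enable     := "gMachine".run,
                        boxPresent := "diBoxPresent",
                        motorRun   => "qInfeedMotor");

// Data passes between areas through defined block interfaces,
// not through shared global memory flags.
"fbFiller_Inst"(enable       := "gMachine".run,
                boxAtFiller  := "fbInfeedConveyor_Inst".boxDelivered,
                targetWeight := "dbRecipe".targetWeight);
```

The point is not the specific calls but the shape: OB1 reads as a table of contents for the machine, and the detail lives behind each FB interface.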
2. No naming convention - or worse, a bad one
I've worked on projects where every DB is named DB1, DB2, DB3 and every FB is FB1, FB10, FB20. The original engineer presumably knew that DB7 was the recipe data block and FB30 was the reject handling, but nobody else does, and six months later neither will they.
Equally problematic are conventions that encode too much information into the name. I've seen tag names like M_CYL_03_FWD_SOL_Y_PB_CMD which technically contain every piece of metadata about the tag but require a decoder ring to read. If you need to consult a legend to understand what a variable does, the naming convention has failed.
Good naming is descriptive at the right level of abstraction. fbInfeedConveyor tells you what the block does. diBoxPresent tells you what the input signal means. rRecipeTargetWeight tells you the type and the purpose. If someone who's never seen the project can read the code and follow the logic, the naming is working.
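A block interface declared along these lines shows the idea. The prefixes and names below are one possible convention, not a standard:

```
// FB interface sketch - prefix tells you the kind, name tells you the meaning
VAR_INPUT
    diBoxPresent        : Bool;   // Infeed photocell - TRUE when a box is detected
    rRecipeTargetWeight : Real;   // Target fill weight in grams, from the recipe DB
END_VAR
VAR_OUTPUT
    qConveyorRun        : Bool;   // Run command to the infeed conveyor motor
END_VAR
```

Whatever convention you choose matters less than applying it consistently across the whole project.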
3. Hardcoded values everywhere
Magic numbers in comparison blocks. Timer presets buried in network logic rather than parameterised. Speed setpoints written directly into move commands. Every one of these is a maintenance burden waiting to happen, because when something needs to change, someone has to find every place the value appears - and they'll miss one.
All configurable values should live in a parameter data block, preferably one per functional area or machine module. Timer presets, speed limits, threshold values, tolerance bands - anything that might conceivably need adjustment during the life of the machine. This also makes commissioning faster, because you can tune the system by adjusting parameters in one place rather than hunting through logic networks.
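A parameter DB for one functional area might be declared like this. The names, values, and comments are illustrative:

```
// Parameter DB sketch - one per functional area, all tunable values in one place
DATA_BLOCK "dbInfeedParams"
VAR
    tMotorRampDelay   : Time := T#2s;   // Allow VFD to reach speed at minimum load
    rSpeedSetpointMax : Real := 1.5;    // m/s - mechanical limit of the conveyor
    rRejectTolerance  : Real := 0.5;    // g - tolerance band for the checkweigher
END_VAR
BEGIN
END_DATA_BLOCK
```

During commissioning, tuning happens in this one block; the logic networks never need to be opened just to change a preset.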
4. No state machine - or a fake one
Sequential processes need state machines. This isn't optional. Without explicit state management, sequential logic becomes a web of interlocks, set/reset coils, and edge triggers that's impossible to follow and extremely fragile when you need to add a new step or modify the sequence.
The "fake state machine" is almost worse: a state variable that gets set in different places throughout the code, with transitions scattered across multiple networks. It looks like there's a state machine, but the state can change from anywhere, which means you can't reason about what state the system is in just by reading the state machine logic - you have to read everything.
A proper state machine has a single place where transitions are evaluated and the state variable is updated. Every state is an explicitly defined ENUM value, not a magic number. Transitions have clear conditions. Entry and exit actions are handled consistently. When someone needs to debug a sequence problem, they open the state machine FB and they can see every possible state and every possible transition in one place.
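In SCL this typically takes the shape of a single CASE statement. Classic TIA Portal SCL has no native enumeration type, so named constants in a VAR CONSTANT section are the usual substitute for explicit ENUM values. The states and conditions below are invented for the sketch:

```
VAR CONSTANT
    STATE_IDLE     : Int := 0;
    STATE_FILLING  : Int := 10;
    STATE_SETTLING : Int := 20;
    STATE_FAULT    : Int := 90;
END_VAR

// The ONLY place in the block where #state is written.
CASE #state OF
    #STATE_IDLE:
        IF #startCmd AND #safetyOk THEN
            #state := #STATE_FILLING;
        END_IF;
    #STATE_FILLING:
        IF #fault THEN
            #state := #STATE_FAULT;
        ELSIF #weightReached THEN
            #state := #STATE_SETTLING;
        END_IF;
    #STATE_SETTLING:
        IF #settleTimer.Q THEN
            #state := #STATE_IDLE;
        END_IF;
    #STATE_FAULT:
        IF #resetCmd THEN
            #state := #STATE_IDLE;
        END_IF;
END_CASE;
```

Because every transition is evaluated here and nowhere else, reading this one CASE statement tells you everything about how the sequence can behave.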
If you can't describe your state machine's states and transitions clearly, your code doesn't have one - it has a collection of conditional logic that happens to produce sequential behaviour until something unexpected happens.
5. Comments that describe what the code does instead of why
A comment like // Start motor above a line that starts a motor tells you nothing you couldn't learn from reading the code. Compare // Motor must reach speed before reject gate activates - 2s delay accounts for VFD ramp time at minimum load. That comment tells you something useful: why the timer is there, what assumption it's based on, and what might need to change if the VFD parameters are adjusted.
The best comments explain design decisions, constraints, and assumptions. They answer the question a maintenance engineer will ask three years from now: "why is this done this way?" If the answer is obvious from the code, you don't need a comment. If it isn't, you do.
6. No separation between machine logic and HMI logic
I regularly see PLC code that's been structured around the HMI screens rather than around the machine functions. The logic for a particular screen's controls and indicators is mixed in with the process logic that those controls operate. This creates tight coupling between the PLC programme and the HMI design, which means you can't modify one without breaking the other.
The PLC should expose a clean interface to the HMI - commands in, statuses out - through a dedicated HMI data block or set of data blocks. The process logic operates independently and doesn't know or care what the HMI looks like. This means the HMI can be redesigned without touching the PLC code, and the PLC logic can be modified without breaking every screen.
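A dedicated HMI interface DB along these lines keeps the boundary explicit. The structure and names are one possible layout, not a prescribed one:

```
// HMI interface sketch - commands in, statuses out, nothing else crosses the boundary
DATA_BLOCK "dbHmiInterface"
VAR
    // Commands: written by the HMI, read (and acknowledged) by the PLC
    cmdStart       : Bool;
    cmdStop        : Bool;
    cmdResetFault  : Bool;
    // Statuses: written by the PLC, read-only from the HMI side
    stRunning      : Bool;
    stFaultActive  : Bool;
    stActualWeight : Real;   // Live value for display, in grams
END_VAR
BEGIN
END_DATA_BLOCK
```

The process FBs read commands from and write statuses to this DB; no HMI tag points anywhere else in the programme. Redesign the screens and the PLC code doesn't change; restructure the logic and the screens keep working.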
What all of these have in common
Every one of these problems is a consequence of optimising for getting the system working rather than keeping the system working. The code that's fastest to write during commissioning is almost never the code that's cheapest to maintain over the next ten years. And since control systems typically run for ten to twenty years, the maintenance cost dwarfs the development cost - but it's invisible at the point where the decisions are made.
This is the fundamental tension in controls engineering, and it's why experience matters. An engineer who's only ever built new systems doesn't feel the pain of maintaining bad ones. An engineer who's spent years inheriting other people's code writes differently, because they know exactly what it feels like to open a project at 2am during a production stoppage and not be able to find anything.
If you're building a new TIA Portal project, the single most valuable thing you can do is assume that someone who's never seen your code will need to modify it under pressure. Structure it for them, not for you.