Industrial Automation Projects

Showcasing projects in industrial automation using Siemens TIA Portal and Factory IO to simulate real-world processes. Future expansions will include more technologies to tackle diverse automation challenges.

Project 1: Tank Level Control with PID

This project demonstrates controlling a tank’s filling level using a PID controller. The system uses three key components:

  1. Proportional (P): Adjusts the input flow based on the difference between the current tank level and the setpoint. A larger error results in a larger correction.

  2. Integral (I): Accounts for accumulated errors over time, helping to eliminate any steady-state error and ensuring the tank level reaches and stays at the desired value.

  3. Derivative (D): Responds to the rate of change of the error, predicting future errors and adjusting the flow to prevent overshooting or oscillations.

Together, these components follow a formula where the control output is the sum of these three terms:
Output = Kp * Error + Ki * Integral(Error) + Kd * Derivative(Error)
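A discrete-time version of this control law can be sketched in Python. This is a minimal illustration of the three terms, not the internals of Siemens' PID_Compact block; the gains and time step are arbitrary example values:

```python
class PID:
    """Minimal discrete PID: u = Kp*e + Ki*sum(e)*dt + Kd*(de/dt)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0      # accumulated error (I term)
        self.prev_error = 0.0    # last error, for the derivative (D term)

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Example: one control step toward a tank level setpoint of 5.0
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
flow_correction = pid.update(setpoint=5.0, measurement=3.2)
```

In a PLC this `update` would run once per cycle of the cyclic interrupt OB, with `dt` equal to the interrupt period.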

In Factory IO and TIA Portal, the system requires conversions between int and real data types because the analog world must be discretized for digital processing. The PID gains must be tuned to achieve the desired response. For this project, I used the PID_Compact function block on the S7-1500, which takes the setpoint and process value and returns the adjusted output, allowing precise control of the tank's level under varying conditions.
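The int/real conversion mentioned above can be sketched as a simple scaling step. On S7 PLCs, analog inputs typically arrive as integers with a full-scale raw value of 27648 (an assumption about the standard Siemens normalization; check the channel configuration for the actual range):

```python
RAW_MAX = 27648  # assumed full-scale raw value on S7 analog channels

def raw_to_real(raw, lo=0.0, hi=100.0):
    """Scale a raw integer reading to a real engineering value (e.g. % level)."""
    return lo + (hi - lo) * raw / RAW_MAX

def real_to_raw(value, lo=0.0, hi=100.0):
    """Inverse scaling: convert a real control output back to a raw integer."""
    return round((value - lo) / (hi - lo) * RAW_MAX)
```

In the PLC program the same arithmetic is done with NORM_X/SCALE_X-style instructions before and after the PID block.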

Additionally, to ensure fast and reliable execution, the program uses a Cyclic Interrupt Organization Block. This block provides a rapid execution cycle, ensuring the PID control loop and critical operations run quickly and consistently for real-time control of the tank level.

Project 2: Separating Station

This project replicates the Separating Station (https://docs.factoryio.com/manual/scenes/separating-station/) scene from Factory IO, where boxes are sorted based on their color. The system controls:

  • Entry Conveyors (1 and 2): Move boxes to the sorting area.

  • Exit Conveyors (1 and 2): Transport sorted boxes to their assigned lanes.

  • Pushers: Shift boxes between conveyors according to their color.

Each exit lane is assigned a specific color, and the pushers activate only when conditions are met (e.g., box color and conveyor status). Additionally, the system includes:

  • Counters: Track the number of boxes sorted per lane.

  • Safety System: Includes an emergency stop and fault indicators.

  • HMI Interface: Allows control and monitoring of the process, with options for automatic, manual, and emergency modes.

  • Beacon Lights: Display system status using colored signals.

Challenges in Programming

The logic for this system is complex due to multiple conditions for activating conveyors and pushers. It involves:

  • Edge Detection: Using rising and falling edges to trigger precise actions.

  • Memory Bits: To manage permissions for equipment.

  • State Machines: Implementing states for start, stop, auto, manual, and emergency modes to ensure smooth and safe operation.
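The edge detection and mode handling above can be sketched in Python. This is an illustrative model of the logic, not the actual ladder/SCL program; the state names and transition rules are assumptions based on the modes listed:

```python
class RisingEdge:
    """Detect a 0 -> 1 transition, like an R_TRIG instruction in the PLC."""

    def __init__(self):
        self.prev = False

    def __call__(self, signal):
        edge = signal and not self.prev
        self.prev = signal
        return edge

def next_state(state, start, stop, manual, e_stop):
    """One step of the mode state machine (assumed transition rules)."""
    if e_stop:
        return "EMERGENCY"
    if state == "EMERGENCY":
        # Require an explicit stop/reset before leaving emergency mode
        return "STOPPED" if stop else "EMERGENCY"
    if stop:
        return "STOPPED"
    if start:
        return "MANUAL" if manual else "AUTO"
    return state
```

A pusher would then fire only on the rising edge of its sensor while the machine is in `AUTO`, which prevents a held sensor from triggering repeated strokes.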

This project successfully reproduces the final effect shown in the official Factory IO video while adding enhancements for safety and user control via the HMI.


Exploring Computer Vision

A few months ago, I decided to dive into the world of computer vision using Python, exploring its potential for both personal projects and real-world applications. This journey has been an exciting learning experience, filled with challenges, discoveries, and growth.

Tools and Libraries Used

Throughout my journey, I’ve utilized several powerful Python libraries and frameworks to bring my ideas to life:

  • OpenCV: A versatile library for image and video processing.

  • MediaPipe: A framework for building perception pipelines, ideal for hand tracking and gesture recognition.

  • YOLO (You Only Look Once): A state-of-the-art object detection system that processes images in real-time.

These Python-based tools, combined with various online resources, enabled me to build innovative solutions while deepening my understanding of computer vision.

What I’ve Built

Here are some of the projects I’ve developed using these libraries:

  1. Facial Recognition and Tracking

    • Developed systems that can detect and track faces in real time, leveraging OpenCV and YOLO.

  2. Gesture-Based Microcontroller Control

    • Implemented a system where hand gestures control a microcontroller, enabling interactive hardware manipulation.

  3. Object Detection

    • Created applications that identify and classify objects in video streams using YOLO's pre-trained models, fine-tuning them for custom datasets.

  4. Gesture-Based Drawing

    • Designed a program that lets users draw in a virtual environment by tracking hand movements with MediaPipe.

The Process

Most of these projects were built through self-learning, tackling one challenge at a time. I focused on not just running pre-trained models but also understanding their inner workings. This helped me modify their behavior, optimize performance, and create custom solutions tailored to each project.

I also organized my code into reusable classes to keep it clean and maintainable, ensuring that future projects build upon a solid foundation.


Smart Robot car series

This series of projects aims to expand my knowledge in programming and mechatronic control systems using the Smart Robot Car V4.0 Kit (with camera). The robot is equipped with sensors and a camera, enabling autonomous navigation and basic tasks. I’ll also explore computer vision to enhance its capabilities, such as object detection and line tracking. These projects will help me gain practical experience in robotics, control systems, and computer vision.

Smart Robot Car Project 1: Basic Navigation

  • The goal is to make the robot move 1 meter, perform a 180-degree turn, and return to the starting position.

    1. Motor Control Setup:

      • The robot uses two pins for controlling the left and right wheels, and two additional pins for controlling PWM on each side.

      • Set the PWM for forward movement to 50 and for turning to 100.

    2. Speed Calibration (Forward Movement):

      • Calculate the time needed to travel 1 meter at a specific PWM using the formula Time = Distance / Speed, where Distance = 1 meter and Speed is determined experimentally for the chosen PWM value.

    3. Turn Calibration:

      • Calibrate the 180-degree turn by adjusting the angular speed and turn time.

      • Use the formula for angular velocity: ω = θ / t, where ω is the angular velocity, θ is the turn angle (180 degrees), and t is the time required for the turn.

    4. Code Implementation:

      • Insert the calculated time for both forward movement and turning (based on PWM values) into the code to make the robot navigate the path and return to the initial position.
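The calibration arithmetic in steps 2 and 3 can be sketched as follows. The speed values are placeholders, not measurements from the actual kit; they must come from the experimental calibration described above:

```python
def forward_time(distance_m, speed_m_s):
    """Time = Distance / Speed, with speed calibrated per PWM value."""
    return distance_m / speed_m_s

def turn_time(angle_deg, angular_speed_deg_s):
    """t = theta / omega for an in-place turn."""
    return angle_deg / angular_speed_deg_s

# Example with assumed calibration values (not measured on the real robot):
t_fwd = forward_time(1.0, 0.4)   # 1 m at 0.4 m/s -> 2.5 s
t_turn = turn_time(180, 90)      # 180 deg at 90 deg/s -> 2.0 s
```

These times are then hard-coded as delays between the motor commands: drive forward for `t_fwd`, turn for `t_turn`, drive forward for `t_fwd` again.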

Smart Robot Car Project 2: Collision Avoidance

  • This project demonstrates basic obstacle avoidance using an ultrasonic sensor. The robot measures distances in three directions—front, left, and right—and decides the best path to avoid obstacles.

    • Logic and Process

      1. Distance Measurement:
        The ultrasonic sensor calculates distances by emitting sound waves and measuring the echo time.

      2. Decision-Making:

        • If the front is clear (>20 cm), the robot moves forward.

        • If not, it checks left and right:

          • Turns toward the side with more space.

          • If both are blocked, it reverses and turns slightly.

      3. Limitations:

        • Single sensor introduces delays as it rotates to measure.

        • Ultrasonic sensors can be inaccurate due to surface types or noise.

        • The logic assumes static obstacles, so it struggles with moving ones.

    • Improvements

      • Use multiple sensors or replace with LiDAR for faster, more accurate readings.

      • Average multiple measurements for better reliability.

      • Add mapping to avoid backtracking.

    • Conclusion

      While simple, this project showcases key concepts in robotics and offers a solid foundation for enhancements like better sensors or smarter algorithms. It's a great example of how robots can navigate the world!
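The decision-making logic described above can be summarized in a small Python sketch. The 20 cm threshold comes from the project; the function names and the tie-breaking rule are illustrative assumptions:

```python
CLEAR_CM = 20  # minimum clear distance before moving forward

def choose_action(front, left, right):
    """Pick the next move from three ultrasonic distance readings (cm)."""
    if front > CLEAR_CM:
        return "forward"
    if left > CLEAR_CM or right > CLEAR_CM:
        # Turn toward whichever side has more room
        return "turn_left" if left >= right else "turn_right"
    # Both sides blocked: back up and turn slightly
    return "reverse_and_turn"
```

On the robot, each returned action would map to the corresponding motor commands, and the loop repeats after the servo-mounted sensor takes fresh readings.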


Hardware and PCB design
