Search results for: discrete automation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1094

254 Design and Implementation of Smart Watch Textile Antenna for Wi-Fi Bio-Medical Applications in Millimetric Wave Band

Authors: M. G. Ghanem, A. M. M. A. Allam, Diaa E. Fawzy, Mehmet Faruk Cengiz

Abstract:

This paper is devoted to the design and implementation of a smartwatch textile antenna for Wi-Fi bio-medical applications in the millimetric wave band. The antenna is implemented on a leather textile-based substrate to be embedded in a smartwatch. It enables the watch to pick up Wi-Fi signals without needing to be connected to a mobile phone through Bluetooth. It operates at 60 GHz, the WiGig (Wireless Gigabit Alliance) band, with a wide bandwidth for higher-rate applications. It could also be placed over the stratified layers of body tissue to be used in the diagnosis of diseases such as diabetes and cancer. The structure is designed and simulated using the CST Studio Suite program. The wearable patch antenna has an octagonal shape and is implemented on leather material that acts as a flexible substrate with a size of 5.632 x 6.4 x 2 mm3, a relative permittivity of 2.95, and a loss tangent of 0.006. The feeding is carried out using a differential feed (discrete port in CST). The work provides five antenna implementations: the antenna without a ground plane; with a ground plane added at the back of the antenna to increase the gain; with the substrate dimensions increased to 15 x 30 mm2 to resemble a real wristwatch size; with layers of skin and fat added under the ground plane to study the effect of human body tissues on the antenna performance; and, finally, with the whole structure bent. It is found that the antenna achieves a simulated peak realized gain of 5.68, 7.28, 6.15, 3.03, and 4.37 dB for the antenna without ground, with ground, with larger substrate dimensions, with skin and fat, and for the bent structure, respectively. The antenna with a ground plane exhibits the highest gain; adding the human tissue layers degrades the gain because of tissue absorption, while bending the structure contributes to a higher gain again.

Keywords: biomedical engineering, millimetric wave, smart watch, textile antennas, Wi-Fi

Procedia PDF Downloads 87
253 The 'Plain Style' in the Theory and Practice of Project Design: Contributions to the Shaping of an Urban Image on the Waterfront Prior to the 1755 Earthquake

Authors: Armenio Lopes, Carlos Ferreira

Abstract:

In the specific context of the Iberian Union between 1580 and 1640, characteristics emerged in Portuguese architecture that stood out from the main architectural production of the period. Recognised and identified aspects that had begun making their appearance decades before (1521) became significantly more marked during the Hapsburg-Spanish occupation. Distinctive even from the imperialist language of Spain, this trend would endure even after the restoration of independence, continuing through to the start of the age of absolutism (1706). Or perhaps not. This trend, recognised as the Plain Style (Kubler), associated with a certain scarcity of resources, involved a certain formal and decorative simplification, as well as a particular set of conventions that would subsequently mark the landscape. This expression could also be seen as a means of asserting a certain spirit of independence as the Iberian Union breathed its last. The image of a simple, bare-bones architecture with purer design lines is associated by various authors – most notably Kubler – with the narratives of modernism, to whose principles it is similar, in a context specific to the period. There is a contrast with some of the exuberance of the Baroque or its expression in the Manueline period, in a similar fashion to modernism's responses to nineteenth-century eclecticism. This assertion and practice of simple architecture, drafted from the interpretation of the treatises and highlighting a certain classical inspiration, was to become a benchmark in the theory of architecture, spanning the Baroque and Mannerism, until achieving contemporary recognition for a certain originality and modernity. At a time when the Baroque and its scenography became very widespread, it is important also to recognise the role played by Plain Style architecture in the construction of a rather complex and contradictory waterfront landscape, featuring promises of exuberance alongside more discreet practices.

Keywords: Carlos Mardel, Lisbon's waterfront, plain style, urban image on the waterfront

Procedia PDF Downloads 110
252 Volcanoscape Space Configuration Zoning Based on Disaster Mitigation by Utilizing GIS Platform in Mt. Krakatau Indonesia

Authors: Vega Erdiana Dwi Fransiska, Abyan Rai Fauzan Machmudin

Abstract:

Space configuration zoning is the very first stage of complete space configuration and regional planning. Here, zoning aims to capture discrete knowledge based on local wisdom: ancestral communities studied the signs of natural disasters, and this knowledge is accessed through an ethnographic approach. Space zoning has three main functions: a control function, a guidance function, and an additional function. The control function serves as an instrument for development control and is one of the essentials in controlling land use. The guidance function provides direction for operational planning and for technical development or land use. The additional function supplements regional or provincial planning details. This phase also defines boundaries in open space based on geographical features. The informants, categorized as elders, live in an earthquake-prone area, specifically the surroundings of Mount Krakatau. The collected data are analyzed with a thematic model and subsequently verified. In space zoning, a long-range distance sensor (remote sensing) is applied to visualize the area to be zoned before a field survey is conducted to validate the data. The data obtained from the long-range sensor and the site survey are overlaid using a GIS platform. When compared with the local wisdom well known to the elders in the area, some of this knowledge is relevant to the research, while some is not. The site survey, the interpretation of the long-range sensor data, and space zoning that considers various aspects together result in a space-zoning pattern map. This map can be integrated into disaster mitigation for areas affected by volcanic eruption.

Keywords: elderly, GIS platform, local wisdom, space zoning

Procedia PDF Downloads 230
251 Feasibility Study of Friction Stir Welding Application for Kevlar Material

Authors: Ahmet Taşan, Süha Tirkeş, Yavuz Öztürk, Zafer Bingül

Abstract:

Friction stir welding (FSW) is a solid-state joining process that eliminates problems associated with material melting and solidification, such as cracks, residual stresses, and distortions generated during conventional welding. Among the most important advantages of FSW are easy automation, less distortion, lower residual stress, and good mechanical properties in the joining region. FSW is a recent approach to metal joining and, although originally intended for aluminum alloys, it has been investigated for a variety of metallic materials. The basic concept of FSW is a rotating tool, made of non-consumable material, specially designed with a geometry consisting of a pin and a recess (shoulder). This tool, spinning on its axis, is inserted at the adjoining edges of the two sheets or plates to be joined and then travels along the joint line. The tool rotation axis defines an angle of inclination with respect to the components to be welded. This angle is used to receive the material to be processed at the tool base and to promote the gradual forging effect imposed by the shoulder during the passage of the tool, which prevents plastic flow of material at the sides of the tool and ensures weld closure behind the pin. In this study, two 4 mm plates produced from Kevlar® fabrics are analyzed with COMSOL Multiphysics in order to investigate their weldability via FSW. Thereafter, experimental investigations are carried out on an appropriate workbench in order to compare the results with the analysis.
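
As an illustration of the heat-generation term mentioned in the keywords, the sketch below evaluates a commonly cited analytical estimate for a flat-shoulder, cylindrical-pin FSW tool under full sticking conditions (shoulder, probe-side, and probe-tip contributions). The contact shear stress and tool dimensions in the example call are assumed placeholder values, not parameters taken from this study.

```python
import math

def fsw_heat_generation(tau_contact, rpm, r_shoulder, r_probe, h_probe):
    """Analytical estimate of total frictional heat generation (W) for a
    flat-shoulder, cylindrical-probe FSW tool under full sticking conditions.
    tau_contact : contact shear stress at the tool/workpiece interface [Pa]
    rpm         : tool rotational speed [rev/min]
    r_shoulder, r_probe, h_probe : tool dimensions [m]
    """
    omega = 2.0 * math.pi * rpm / 60.0                 # angular velocity [rad/s]
    q_shoulder = (2.0 / 3.0) * math.pi * tau_contact * omega * (r_shoulder**3 - r_probe**3)
    q_probe_side = 2.0 * math.pi * tau_contact * omega * r_probe**2 * h_probe
    q_probe_tip = (2.0 / 3.0) * math.pi * tau_contact * omega * r_probe**3
    return q_shoulder + q_probe_side + q_probe_tip

# Illustrative (assumed) numbers only -- not the study's welding parameters.
print(fsw_heat_generation(tau_contact=40e6, rpm=1000,
                          r_shoulder=9e-3, r_probe=3e-3, h_probe=3.8e-3))
```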

Keywords: analytical modeling, composite materials welding, friction stir welding, heat generation

Procedia PDF Downloads 134
250 Brain-Computer Interface Based Real-Time Control of Fixed Wing and Multi-Rotor Unmanned Aerial Vehicles

Authors: Ravi Vishwanath, Saumya Kumaar, S. N. Omkar

Abstract:

Brain-computer interfacing (BCI) is a technology that is almost four decades old and was initially developed to advance neuroprosthetics. However, in recent times, with the commercialization of non-invasive electroencephalogram (EEG) headsets, the technology has seen a wide variety of applications such as home automation, wheelchair control, and vehicle steering. One of the latest applications is the mind-controlled quadrotor unmanned aerial vehicle. These applications, however, do not require a very high-speed response and give satisfactory results when standard classification methods such as the Support Vector Machine (SVM) and the Multi-Layer Perceptron (MLP) are used. Issues arise when high-speed control is required, as in the case of fixed-wing unmanned aerial vehicles, where such methods are rendered unreliable due to the low speed of classification. Such an application requires the system to classify data at high speed in order to retain the controllability of the vehicle. This paper proposes a novel classification method that uses a combination of Common Spatial Patterns and Linear Discriminant Analysis and provides improved classification accuracy in real time. A non-linear SVM-based classification technique has also been discussed. Further, this paper discusses the implementation of the proposed method on fixed-wing and VTOL unmanned aerial vehicles.
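
A minimal sketch of the kind of pipeline described here, combining spatial filtering (implemented below with MNE's Common Spatial Patterns transformer) and Linear Discriminant Analysis from scikit-learn. The synthetic EEG epochs, channel count, and class labels are placeholders; this is not the authors' implementation.

```python
import numpy as np
from mne.decoding import CSP                        # Common Spatial Patterns
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

# X: band-pass filtered EEG epochs, shape (n_trials, n_channels, n_samples)
# y: class labels (e.g. imagined left vs. right command), shape (n_trials,)
rng = np.random.default_rng(0)                      # placeholder data for illustration
X = rng.standard_normal((120, 16, 250))
y = rng.integers(0, 2, 120)

clf = Pipeline([
    ("csp", CSP(n_components=4, log=True)),         # spatial filters -> log-variance features
    ("lda", LinearDiscriminantAnalysis()),          # fast linear classifier for real-time use
])
print(cross_val_score(clf, X, y, cv=5).mean())      # offline accuracy estimate
```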

Keywords: brain-computer interface, classification, machine learning, unmanned aerial vehicles

Procedia PDF Downloads 256
249 Evaluation of SCS-Curve Numbers and Runoff across Varied Tillage Methods

Authors: Umar Javed, Kristen Blann, Philip Adalikwu, Maryam Sahraei, John McMaine

Abstract:

The Soil Conservation Service curve number (SCS-CN) method is widely used to estimate direct runoff depth from specific rainfall events. 'Actual' runoff depth was estimated by subtracting the change in soil moisture from the depth of precipitation for each discrete rain event during the growing seasons from 2021 to 2023. The fields under investigation were situated in a HUC-12 watershed in southeastern South Dakota and were selected for common soil series (Nora-Crofton complex and Moody-Nora complex) to minimize the influence of soil texture on soil moisture. Two soil moisture probes were installed from May 2021 to October 2023, with exceptions during planting and harvest periods. For each field, 'textbook' CN estimates were derived from the TR-55 table based on the corresponding mapped land use/land cover (LULC) class and the hydrologic soil groups (HSG) from Web Soil Survey maps. The TR-55 method incorporated HSG and crop rotation within the study area fields. These textbook values were then compared to actual CN values to determine the impact of tillage practices on CN and runoff. Most fields were mapped as having a textbook C or D HSG, but the actual CNs corresponded to a B or C hydrologic group. Actual CNs were consistently lower than textbook CNs for all management practices; actual CNs in conventionally tilled fields were the highest (and closest to textbook CNs), while actual CNs in no-till fields were the lowest. Preliminary results suggest that no-till practice reduces runoff compared to conventional tillage. This research highlights the need to use CNs that incorporate agricultural management to estimate runoff more accurately at the field and watershed scale.
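
For reference, the standard SCS-CN runoff relation used to compare 'textbook' and field-derived curve numbers can be sketched as follows; the precipitation depth and the two CN values in the example are illustrative assumptions, not the study's data.

```python
def scs_runoff_depth(p_inches, cn, ia_ratio=0.2):
    """SCS curve-number direct runoff depth (inches) for one rain event.
    p_inches : event precipitation depth [in]
    cn       : curve number (dimensionless, roughly 30-100)
    """
    s = 1000.0 / cn - 10.0          # potential maximum retention [in]
    ia = ia_ratio * s               # initial abstraction [in]
    if p_inches <= ia:
        return 0.0
    return (p_inches - ia) ** 2 / (p_inches - ia + s)

# Illustrative comparison of a "textbook" vs. a lower field-derived CN
for cn in (85, 72):                 # assumed example values, not the study's results
    print(cn, round(scs_runoff_depth(2.5, cn), 2))
```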

Keywords: curve number hydrology, hydrologic soil groups, runoff, tillage practices

Procedia PDF Downloads 21
248 Automated User Story Driven Approach for Web-Based Functional Testing

Authors: Mahawish Masud, Muhammad Iqbal, M. U. Khan, Farooque Azam

Abstract:

Manual writing of test cases from functional requirements is a time-consuming task. Such test cases are not only difficult to write but are also challenging to maintain. Test cases can be drawn from functional requirements that are expressed in natural language; however, manual test case generation is inefficient and subject to errors. In this paper, we present a systematic procedure that automatically derives test cases from user stories. The user stories are specified in a restricted natural language using a well-defined template. We also present a detailed methodology for writing our test-ready user stories. Our tool "Test-o-Matic" automatically generates the test cases by processing the restricted user stories. The generated test cases are executed using the open-source Selenium IDE. We evaluate our approach on a case study, an open-source web-based application. The effectiveness of the approach is evaluated by seeding faults into the case study using known mutation operators. Results show that test case generation from restricted user stories is a viable approach for the automated testing of web applications.
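
To make the idea concrete, the sketch below pairs a restricted user story, written against a simple template, with the kind of Selenium WebDriver test it could be translated into. The URL, element identifiers, and expected page title are hypothetical, and Test-o-Matic's own output format is not reproduced here.

```python
# Restricted user story (template): As a <role>, I want to <action> on <page>,
# so that <outcome>.  Example: "As a registered user, I want to log in on the
# login page, so that I can reach my dashboard."
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()                       # any Selenium-supported browser
try:
    driver.get("http://localhost:8080/login")      # hypothetical URL of the app under test
    driver.find_element(By.ID, "username").send_keys("alice")   # hypothetical element ids
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "login-button").click()
    assert "Dashboard" in driver.title             # expected outcome from the user story
finally:
    driver.quit()
```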

Keywords: automated testing, natural language, restricted user story modeling, software engineering, software testing, test case specification, transformation and automation, user story, web application testing

Procedia PDF Downloads 363
247 dynr.mi: An R Program for Multiple Imputation in Dynamic Modeling

Authors: Yanling Li, Linying Ji, Zita Oravecz, Timothy R. Brick, Michael D. Hunter, Sy-Miin Chow

Abstract:

Assessing several individuals intensively over time yields intensive longitudinal data (ILD). Even though ILD provide rich information, they also bring data analytic challenges. One of these is the increased occurrence of missingness as study length increases, possibly under non-ignorable missingness scenarios. Multiple imputation (MI) handles missing data by creating several imputed data sets and pooling the estimation results across the imputed data sets to yield final estimates for inferential purposes. In this article, we introduce dynr.mi(), a function in the R package Dynamic Modeling in R (dynr). The package dynr provides a suite of fast and accessible functions for estimating and visualizing the results of fitting linear and nonlinear dynamic systems models in discrete as well as continuous time. By integrating the estimation functions in dynr with the MI procedures available from the R package Multivariate Imputation by Chained Equations (MICE), the dynr.mi() routine is designed to handle possibly non-ignorable missingness in the dependent variables and/or covariates in a user-specified dynamic systems model via MI, with convergence diagnostic checks. We utilized dynr.mi() to examine, in the context of a vector autoregressive model, the relationships among individuals' ambulatory physiological measures and self-reported affect valence and arousal. The results from MI were compared to those from listwise deletion of entries with missingness in the covariates. When the number of iterations was determined based on the convergence diagnostics available from dynr.mi(), differences in the statistical significance of the covariate parameters were observed between the listwise deletion and MI approaches. These results underscore the importance of considering diagnostic information in the implementation of MI procedures.
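
The pooling step that any MI workflow, including dynr.mi(), relies on follows Rubin's rules; a minimal sketch of that arithmetic is given below, with illustrative per-imputation estimates and variances (dynr.mi() itself is an R routine, so this is only a generic illustration of the pooling step, not its code).

```python
import numpy as np

def pool_rubin(estimates, variances):
    """Pool one parameter across m imputed data sets using Rubin's rules.
    estimates : per-imputation point estimates of the parameter
    variances : per-imputation squared standard errors
    Returns (pooled_estimate, pooled_standard_error).
    """
    q = np.asarray(estimates, dtype=float)
    u = np.asarray(variances, dtype=float)
    m = q.size
    q_bar = q.mean()                         # pooled point estimate
    w_bar = u.mean()                         # within-imputation variance
    b = q.var(ddof=1)                        # between-imputation variance
    t = w_bar + (1.0 + 1.0 / m) * b          # total variance
    return q_bar, np.sqrt(t)

# Illustrative values for one covariate effect across m = 5 imputations
print(pool_rubin([0.42, 0.47, 0.39, 0.45, 0.44],
                 [0.010, 0.012, 0.011, 0.009, 0.010]))
```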

Keywords: dynamic modeling, missing data, mobility, multiple imputation

Procedia PDF Downloads 149
246 Design Considerations for the Construction of an Open Decontamination Facility for Managing Civil Emergencies

Authors: Sarmin, S., Ologuin, R.S.

Abstract:

Background: Rapid population growth and land constraints in Singapore result in a possible situation in which we face a higher number of casualties and a lack of operational space in healthcare facilities during disasters and HAZMAT events, collectively known as Civil Emergencies. This creates a need for available working space within hospital grounds to be amphibious or multi-functional, to ensure the institution's capability to respond efficiently to Civil Emergencies. The Emergency Department (ED) mitigates this issue by converting the Ambulance Assembly Area used during peacetime into an Open Decontamination Facility (ODF) during Civil Emergency response, for the decontamination of casualties before they proceed to treatment areas. Aims: To effectively operationalize the Open Decontamination Facility (ODF) through the reduction of manual handling. Methods: Past Civil Emergency exercises showed that setting up the Open Decontamination Facility (ODF) was labor-intensive for staff: the decontamination lanes had to be set up manually by bringing down the curtains, and the water supply had to be turned on by hand. Conclusion: The effectiveness of the design and construction of an Open Decontamination Facility (ODF) rests on automating the lowering of the curtains in the various lanes and on the use of control panels for the water supply used to decontaminate patients. Safety within the ODF was addressed with the installation of panic buttons and an intercom for staff communication, and the perimeter curtains were fitted with stability arms to manage conditions of high wind velocity.

Keywords: civil emergencies, disaster, emergency department, Hazmat

Procedia PDF Downloads 76
245 Four-Way Coupled CFD-DEM Simulation of Concrete Pipe Flow Using a Non-Newtonian Rheological Model: Investigating the Simulation of Lubrication Layer Formation and Plug Flow Zones

Authors: Tooran Tavangar, Masoud Hosseinpoor, Jeffrey S. Marshall, Ammar Yahia, Kamal Henri Khayat

Abstract:

In this study, a four-way coupled CFD-DEM methodology was used to simulate concrete pipe flow. Fresh concrete, characterized as a biphasic suspension, features aggregates, comprising the suspended solid phase with diverse particle-size distributions (PSD), within a non-Newtonian cement paste/mortar matrix that forms the liquid phase. The fluid phase was simulated using CFD, while the aggregates were modeled using DEM. Interaction forces between the fluid and solid particles were considered through coupled CFD-DEM computations. To capture the viscoplastic characteristics of the suspending fluid, a bi-viscous approach was adopted, incorporating a critical shear rate proportional to the yield stress of the mortar. In total, three biphasic suspensions were simulated, each featuring a distinct particle size distribution and a concentration of 10% for five subclasses of spherical particles ranging from 1 to 17 mm in a suspending fluid. The adopted bi-viscous approach successfully simulated both un-sheared (plug flow) and sheared zones. Furthermore, shear-induced particle migration (SIPM) was assessed by examining coefficients of variation in particle concentration across the pipe. These SIPM values were then compared with results obtained using CFD-DEM under the Newtonian assumption. The study highlighted the crucial role of yield stress in the mortar phase, revealing that lower yield stress values can lead to increased flow rates and higher SIPM across the pipe.
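
One common way to realize the bi-viscous idea is as a regularized Bingham model: below a critical shear rate the material is given a single high "plug" viscosity, and above it the yield-stress contribution decays with shear rate. The sketch below uses illustrative mortar-like parameters, not the paper's calibration.

```python
def biviscous_apparent_viscosity(gamma_dot, tau0, mu_plastic, gamma_dot_crit):
    """Apparent viscosity [Pa.s] of a bi-viscous (regularized Bingham) fluid.
    Below the critical shear rate the material is treated as a very stiff
    Newtonian fluid (the un-sheared 'plug'); above it, Bingham behaviour applies.
    """
    mu_plug = mu_plastic + tau0 / gamma_dot_crit      # high viscosity of the plug zone
    if gamma_dot < gamma_dot_crit:
        return mu_plug
    return mu_plastic + tau0 / gamma_dot

# Illustrative mortar-like parameters (assumed, not the paper's calibration)
for g in (0.01, 0.5, 5.0, 50.0):                      # shear rates [1/s]
    print(g, round(biviscous_apparent_viscosity(g, tau0=50.0, mu_plastic=20.0,
                                                gamma_dot_crit=0.1), 1))
```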

Keywords: computational fluid dynamics, concrete pumping, coupled CFD-DEM, discrete element method, plug flow, shear-induced particle migration

Procedia PDF Downloads 39
244 Implementation of a Paraconsistent-Fuzzy Digital PID Controller in a Level Control Process

Authors: H. M. Côrtes, J. I. Da Silva Filho, M. F. Blos, B. S. Zanon

Abstract:

In modern society, the drive for higher quality in industrial production demands new techniques of control and machinery automation. In this context, this work presents the implementation of a Paraconsistent-Fuzzy Digital PID controller. The controller is based on the treatment of inconsistencies in both Paraconsistent Logic and Fuzzy Logic. Paraconsistent analysis is performed on the signals applied to the system inputs using concepts from the Paraconsistent Annotated Logic with annotation of two values (PAL2v). The signals resulting from the paraconsistent analysis are two values, defined as Dc (Degree of Certainty) and Dct (Degree of Contradiction), which are treated according to Fuzzy Logic theory; the resulting output of the logic actions is a single crisp value, which is used to control the dynamic system. The application of the proposed model is demonstrated through an example. Initially, the Paraconsistent-Fuzzy Digital PID controller was built and tested in an isolated MATLAB environment and then compared to the equivalent Digital PID function of this software for a standard step excitation. After this step, a level control plant was modeled to execute the controller function on a physical model, making the tests closer to actual operating conditions. For this, the control parameters (proportional, integral, and derivative) were determined for the configuration of the conventional Digital PID controller and of the Paraconsistent-Fuzzy Digital PID, and the control loops were assembled in MATLAB with the respective transfer function of the plant. Finally, the results of the comparison of the level control process between the Paraconsistent-Fuzzy Digital PID controller and the conventional Digital PID controller are presented.
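
For readers unfamiliar with the discrete baseline being compared against, a minimal positional digital PID controller (the conventional counterpart of the proposed Paraconsistent-Fuzzy variant) can be sketched as follows; the gains and the first-order level plant are assumed for illustration only.

```python
class DigitalPID:
    """Minimal discrete (positional) PID controller with sampling period dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                     # rectangular integration
        derivative = (error - self.prev_error) / self.dt     # backward difference
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Illustrative closed loop on a first-order tank-level model (assumed parameters)
pid, level, dt = DigitalPID(kp=2.0, ki=0.5, kd=0.1, dt=0.1), 0.0, 0.1
for _ in range(200):
    u = pid.update(setpoint=1.0, measurement=level)
    level += dt * (-0.2 * level + 0.1 * u)                   # simple plant: dh/dt = -a*h + b*u
print(round(level, 3))
```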

Keywords: fuzzy logic, paraconsistent annotated logic, level control, digital PID

Procedia PDF Downloads 260
243 Effect of Nanoparticle Addition in the Urea-Formaldehyde Resin on the Formaldehyde Emission from MDF

Authors: Sezen Gurdag, Ayse Ebru Akin

Abstract:

There is growing concern worldwide about the health effects of formaldehyde emissions from the adhesive used in MDF production. In this research, we investigated the effect of adding nanoparticles such as nanoclay and halloysite to urea-formaldehyde resin on the total formaldehyde emitted from MDF panels produced with the modified resin. First, the curing behavior of the resin was studied by monitoring the pH, curing time, solid content, density, and viscosity of the modified resin in comparison to the reference resin with no added nanoparticles. The dosing of the nanoparticles in the dry resin was kept at 1 wt%, 3 wt%, or 5 wt%. Subsequently, the resin was used to produce 50 x 50 cm MDF samples on a fully automated laboratory-scale press line. Modulus of elasticity, bending strength, internal bond strength, and water absorption were measured in addition to the main parameter of interest, the formaldehyde emission level, which was determined by a spectrometric technique following an extraction procedure. Threshold values for the nanoparticle dosing level were determined to be 5 wt% for both nanoparticles. However, the two nanoparticles reinforced the boards to different degrees relative to the reference panels: the level of reinforcement with nanoclay was more favorable than with halloysite, owing to the higher surface area available with the former. Formaldehyde emission levels followed a similar trend: adding 5 wt% nanoclay to the urea-formaldehyde adhesive decreased the formaldehyde emission by up to 40%, whereas adding halloysite at the same threshold level of 5 wt% produced an improvement of only 18%.

Keywords: halloysite, nanoclay, fiberboard, urea-formaldehyde adhesive

Procedia PDF Downloads 130
242 A Large Ion Collider Experiment (ALICE) Diffractive Detector Control System for RUN-II at the Large Hadron Collider

Authors: J. C. Cabanillas-Noris, M. I. Martínez-Hernández, I. León-Monzón

Abstract:

The selection of diffractive events in the ALICE experiment during the first data-taking period (RUN-I) of the Large Hadron Collider (LHC) was limited by the range over which rapidity gaps occur. Better measurements could be achieved by expanding the range in which the production of particles can be detected. For this purpose, the ALICE Diffractive (AD0) detector has been installed and commissioned for the second phase (RUN-II). Any new detector should be able to take data synchronously with all other detectors and be operated through the ALICE central systems. One of the key elements that must be developed for the AD0 detector is the Detector Control System (DCS). The DCS must be designed to operate this detector safely and correctly. Furthermore, the DCS must also provide optimum operating conditions for the acquisition and storage of physics data and ensure these are of the highest quality. The operation of AD0 implies the configuration of about 200 parameters, from electronics settings and power supply levels to the archiving of operating-condition data and the generation of safety alerts. It also includes the automation of procedures to get the AD0 detector ready for taking data under the appropriate conditions for the different run types in ALICE. The performance of the AD0 detector depends on a certain number of parameters, such as the nominal voltages for each photomultiplier tube (PMT), their threshold levels to accept or reject the incoming pulses, the definition of triggers, etc. All these parameters define the efficiency of AD0, and they have to be monitored and controlled through the AD0 DCS. Finally, the AD0 DCS provides the operator with multiple interfaces to execute these tasks, realized as operating panels and scripts running in the background. These features are implemented on a SCADA software platform as a distributed control system that integrates with the global control system of the ALICE experiment.

Keywords: AD0, ALICE, DCS, LHC

Procedia PDF Downloads 278
241 Post Covid-19 Landscape of Global Pharmaceutical Industry

Authors: Abu Zafor Sadek

Abstract:

Pharmaceuticals were one of the least affected business sectors during the coronavirus pandemic, as they were at the center of the Covid-19 fight. Emergency use authorizations, unproven indications of some commonly used drugs, self-medication, the research and production capacity of individual countries, the vaccine production capacity of many countries, uncertainty related to Active Pharmaceutical Ingredients (APIs), information gaps among manufacturers, practitioners, and users, export restrictions, the duration of lockdowns, lack of harmony in transportation, disruption of the regulatory approval process, suddenly increased demand for hospital items and protective equipment, panic buying, difficulties in in-person product promotion, e-prescription, and geopolitics and associated issues added a new dimension to this industry. Although the industry maintained reasonable growth throughout the Covid-19 period, it has been characterized by both long- and short-term effects. Short-term effects have already been visible in many countries, especially those that are import-dependent and have limited research capacity. On the other hand, it will take more time to see the long-term effects. The major short-term effects are supply chain disruption, changes in strategic planning, new communication models, shrinking job opportunities, and rapid digitalization, whereas long-term effects include a shift towards self-sufficiency, changes in the growth patterns of certain products, special attention to clinical studies, automation of operations, and a widened arena of ethical issues. This qualitative and exploratory study therefore identifies the post-Covid-19 landscape of the global pharmaceutical industry.

Keywords: covid-19, pharmaceutical, business, landscape

Procedia PDF Downloads 75
240 Normalized P-Laplacian: From Stochastic Game to Image Processing

Authors: Abderrahim Elmoataz

Abstract:

More and more contemporary applications involve data in the form of functions defined on irregular and topologically complicated domains (images, meshes, point clouds, networks, etc.). Such data are not organized like familiar digital signals and images sampled on regular lattices. However, they can be conveniently represented as graphs where each vertex represents measured data and each edge represents a relationship (connectivity, affinity, or interaction) between two vertices. Processing and analyzing these types of data is a major challenge for both the image processing and machine learning communities. Hence, it is very important to transfer to graphs and networks many of the mathematical tools which were initially developed on usual Euclidean spaces and proven to be efficient for many inverse problems and applications dealing with usual image and signal domains. Historically, the main tools for the study of graphs or networks come from combinatorics and graph theory. In recent years there has been increasing interest in the investigation of one of the major mathematical tools for signal and image analysis: Partial Differential Equations (PDEs) and variational methods on graphs. The normalized p-Laplacian operator has recently been introduced to model a stochastic game called tug-of-war with noise. Part of the interest in this class of operators arises from the fact that it includes, as particular cases, the infinity Laplacian, the mean curvature operator, and the traditional Laplacian operator, which have been extensively used to model and solve problems in image processing. The purpose of this paper is to introduce and study a new class of normalized p-Laplacians on graphs. The introduction is based on the extension of p-harmonious functions, introduced as discrete approximations of both the infinity-Laplacian and p-Laplacian equations. Finally, we propose to use these operators as a framework for solving many inverse problems in image processing.
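
A minimal sketch of the p-harmonious averaging scheme on a graph is given below: each interior vertex is repeatedly replaced by a convex combination of the max/min of its neighbours (the tug-of-war part) and their mean (the noise part). The graph, the boundary data, and the weight alpha are illustrative assumptions; alpha = 0 recovers the graph-harmonic case and alpha = 1 the infinity-Laplacian case.

```python
import numpy as np

def p_harmonious(adjacency, boundary_values, alpha, n_iter=500):
    """Iterative solution of the graph Dirichlet problem for a normalized
    p-Laplacian via the p-harmonious averaging scheme:
        u(x) <- (alpha/2) * (max_N u + min_N u) + (1 - alpha) * mean_N u
    adjacency       : dict node -> list of neighbour nodes
    boundary_values : dict node -> fixed value on boundary nodes
    alpha           : weight of the tug-of-war (infinity-Laplacian) part, in [0, 1]
    """
    u = {v: boundary_values.get(v, 0.0) for v in adjacency}
    interior = [v for v in adjacency if v not in boundary_values]
    for _ in range(n_iter):
        for v in interior:
            nb = np.array([u[w] for w in adjacency[v]])
            u[v] = 0.5 * alpha * (nb.max() + nb.min()) + (1.0 - alpha) * nb.mean()
    return u

# Tiny path graph 0-1-2-3 with Dirichlet data at the endpoints (illustrative)
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(p_harmonious(adj, {0: 0.0, 3: 1.0}, alpha=0.5))
```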

Keywords: normalized p-laplacian, image processing, stochastic game, inverse problems

Procedia PDF Downloads 487
239 Reliability Analysis of Variable Stiffness Composite Laminate Structures

Authors: A. Sohouli, A. Suleman

Abstract:

This study focuses on the reliability analysis of variable stiffness composite laminate structures to investigate their potential structural improvement compared to conventional (straight-fiber) composite laminate structures. A computational framework was developed that consists of a deterministic design step and a reliability analysis. The optimization part is Discrete Material Optimization (DMO), and the reliability of the structure is computed by Monte Carlo Simulation (MCS) after applying the Stochastic Response Surface Method (SRSM). The design driver in the deterministic optimization is maximum stiffness, while the optimization method observes certain manufacturing constraints to attain industrial relevance. These manufacturing constraints are that the change of orientation between adjacent patches cannot be too large and that the maximum number of successive plies of a particular fiber orientation should not be too high. Variable stiffness composites may be manufactured by Automated Fiber Placement (AFP) machines, which provide consistent quality with good production rates. However, laps and gaps are the most important challenges in steering fibers, and they affect the performance of the structures. In this study, the optimal curved fiber paths in each layer of the composite are first designed by DMO, and then the reliability analysis is applied to investigate the sensitivity of the structure, under different standard deviations, compared to the straight-fiber-angle composites. The random variables are the material properties and the loads on the structures. The results show that variable stiffness composite laminate structures are much more reliable, even for high standard deviations of the material properties, than conventional composite laminate structures. The reason is that variable stiffness composite laminates allow stiffness tailoring and provide the possibility of adjusting the stress and strain distributions favorably within the structures.
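
The reliability step can be sketched as a plain Monte Carlo simulation driven by a fitted response surface acting as the limit-state function; all distributions and surrogate coefficients below are invented for illustration and are not the paper's calibrated SRSM.

```python
import numpy as np

def failure_probability(n_samples=100_000, seed=0):
    """Monte Carlo estimate of the failure probability, using a fitted response
    surface g(x) as the limit-state function (failure when g < 0).  The random
    variable statistics and surrogate coefficients are invented for illustration."""
    rng = np.random.default_rng(seed)
    e1 = rng.normal(140e9, 7e9, n_samples)       # fibre-direction modulus E1 [Pa]
    e2 = rng.normal(10e9, 0.8e9, n_samples)      # transverse modulus E2 [Pa]
    load = rng.normal(250e3, 25e3, n_samples)    # applied load [N]
    # Quadratic response-surface surrogate of the safety margin (capacity - demand)
    g = 1.2e-6 * e1 + 4.0e-6 * e2 - 1.5 * load - 1.0e-11 * load**2 + 2.5e5
    return np.mean(g < 0.0)

print(failure_probability())    # fraction of samples falling in the failure domain
```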

Keywords: material optimization, Monte Carlo simulation, reliability analysis, response surface method, variable stiffness composite structures

Procedia PDF Downloads 489
238 Effect of Hybrid Fibers on Mechanical Properties in Autoclaved Aerated Concrete

Authors: B. Vijay Antony Raj, Umarani Gunasekaran, R. Thiru Kumara Raja Vallaban

Abstract:

Fibrous autoclaved aerated concrete (FAAC) is concrete containing fibrous material, which helps increase its structural integrity compared to conventional autoclaved aerated concrete (CAAC). These short, discrete fibers are uniformly distributed and randomly oriented, which enhances the bond strength within the aerated concrete matrix. Conventional red-clay bricks have a larger environmental impact due to red soil depletion and also require a large amount of construction time, whereas AAC blocks are larger, lighter, and environmentally friendly, and are therefore a viable replacement for red-clay bricks. Internal micro-cracks and corner cracks are the only disadvantages of conventional autoclaved aerated concrete; to resolve this particular issue, it is preferable to make use of fibers. These fibers bond together within the matrix and enable the aerated concrete to withstand considerable stresses, especially during the post-cracking stage. Hence, FAAC has the capability of enhancing the mechanical properties and energy absorption capacity of CAAC. In this research work, individual fibers such as glass, nylon, polyester, and polypropylene are used; they generally reduce the brittle fracture of AAC. SEM analysis is performed to study the fibres' surface topography and composition, and EDAX mapping is carried out to determine the composition of the specimen as a whole as well as of its individual components. An experimental programme is then performed to determine the effect of hybrid (multiple) fibres at various dosages (0.5%, 1%, 1.5%), with a curing temperature of 180-200 °C maintained, on the mechanical properties of the autoclaved aerated concrete. As an analytical step, the experimental results are compared with fuzzy-logic predictions using MATLAB.

Keywords: fibrous AAC, crack control, energy absorption, mechanical properties, SEM, EDAX, MATLAB

Procedia PDF Downloads 245
237 Opportunities for Precision Feed in Apiculture

Authors: John Michael Russo

Abstract:

Honeybees are important to our food system and continue to suffer from high rates of colony loss. Precision feed has brought many benefits to livestock cultivation, and these should transfer to apiculture. However, apiculture has unique challenges. The objective of this research is to understand how the principles of precision agriculture, applied to apiculture and feed specifically, might effectively improve state-of-the-art cultivation. The methodology surveys apicultural practice to build a model for assessment. First, a review of apicultural motivators is made. Feed methods are then evaluated. Finally, precision feed methods are examined as accelerants with the potential to advance the effectiveness of feed practice. Six important motivators emerge: colony loss, disease, climate change, site variance, operational costs, and competition. Feed practice itself is used to compensate for environmental variables. The research finds that the current state of the art in apiculture feed focuses on critical challenges in the management of feed schedules which satisfy the requirements of the bees, preserve potency, optimize environmental variables, and manage costs. Many of these challenges are most acute when feed is used to dispense medication. Technologies such as RNA treatments have even more rigorous demands. Precision feed solutions focus on strategies which accommodate the specific needs of individual livestock. A major component is data: such systems integrate precise data with methods that respond to individual needs. There is enormous opportunity for precision feed to improve apiculture through the integration of precision data with policies to translate those data into optimized action in the apiary, particularly through automation.

Keywords: precision agriculture, precision feed, apiculture, honeybees

Procedia PDF Downloads 54
236 A New Technology for Metformin Hydrochloride Mucoadhesive Microparticles Preparation Utilizing BÜCHI Nano-Spray Dryer B-90

Authors: Tamer M. Shehata

Abstract:

Objective: Mucoadhesive microparticles have recently attracted great interest in both research and pharmaceutical technology. BÜCHI recently launched its latest, fourth-generation nano spray dryer, the B-90, used for nanoparticle production. The B-90 offers an elegant technology that combines particle engineering and drying in one step. In our laboratory, we successfully developed a new formulation of metformin hydrochloride mucoadhesive microparticles utilizing B-90 technology for the treatment of type 2 diabetes. Method: Gelatin and sodium alginate, naturally occurring polymers with mucoadhesive properties, were used alone or in combination in our formulation trials. Preformulation studies (atomization head mesh size, flow rate, head temperature, polymer solution viscosity, and surface tension) and postformulation characteristics (particle size, flowability, surface scan, and dissolution profile) were evaluated. Finally, the hypoglycemic effect of the selected formula was evaluated in streptozotocin-induced diabetic rats. A spray head with a 7 µm mesh, a flow rate of 3.5 mL/min, and a head temperature of 120 ºC were selected. Polymer viscosity was less than 11.5 cP, with surface tension less than 70.1 dyne/cm. Results: Discrete, non-aggregated particles and free-flowing powders with particle sizes of less than 2000 nm were obtained. A gelatin and sodium alginate combination in a 1:3 ratio successfully sustained the in vitro release profile of the drug. Hypoglycemic evaluation of this formula showed a significant reduction of blood glucose level over 24 h. Conclusion: B-90 technology can open a new era of mucoadhesive microparticle preparation, offering a convenient dosage form that can enhance the compliance of type 2 diabetic patients.

Keywords: mucoadhesive, microparticles, technology, diabetes

Procedia PDF Downloads 269
235 Designing Electronic Kanban in the Tailboom Assembly Line at XYZ Corp to Reduce Lead Time

Authors: Nadhifah A. Nugraha, Dida D. Damayanti, Widia Juliani

Abstract:

Aircraft manufacturing is growing along with increasing consumer demand. The helicopter tail section, called the Tailboom, is a product of the helicopter division at XYZ Corp, where the Tailboom assembly line operates as a pull system. Observations of existing conditions at XYZ Corp show that production is still unable to meet consumer demand: the lead time is greater than the plan agreed upon with the consumers. In the assembly process, each work station experiences a lack of the parts and components needed for assembly. This happens because the required part information arrives late and there is no warning about the availability of the parts needed, which leaves some parts unavailable in the assembly warehouse. The lack of parts and components from the previous work station causes the assembly process to stop, and the assembly line also stops at the next station. As a result, production finishes late and off schedule. Resolving these problems requires a controlling process, that is, controlling the assembly line so that all components and subassemblies arrive in the right amount and at the right time. This study applies one of the Just-in-Time tools, Kanban, and adds automation so that the communication line becomes an efficient and effective electronic Kanban. The problem can be solved by reducing non-value-added time, such as waiting time and idle time. The proposed control of the Tailboom assembly line results in a smooth assembly line without waiting, reduced lead time, and production that meets the schedule agreed with the consumers.
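
A minimal sketch of how an (e-)Kanban loop is typically sized is the classic kanban-count rule below; the daily demand, replenishment lead time, container size, and safety factor are assumed example values, not XYZ Corp data.

```python
import math

def kanban_count(demand_per_day, replenishment_days, container_qty, safety_factor=0.1):
    """Classic kanban sizing rule:
        N = demand * replenishment lead time * (1 + safety factor) / container size
    Rounded up so the loop never starves the downstream assembly station."""
    return math.ceil(demand_per_day * replenishment_days * (1.0 + safety_factor)
                     / container_qty)

# Illustrative (assumed) numbers for one Tailboom sub-assembly part
print(kanban_count(demand_per_day=6, replenishment_days=2, container_qty=4))
```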

Keywords: kanban, e-Kanban, lead time, pull system

Procedia PDF Downloads 83
234 A Computationally Intelligent Framework to Support Youth Mental Health in Australia

Authors: Nathaniel Carpenter

Abstract:

Web-enabled systems for supporting youth mental health management in Australia are pioneering in their field; however, with their success, these systems are experiencing exponential growth in demand, which is straining an already stretched service. Supporting youth mental health is critical, as a lack of support is associated with significant and lasting negative consequences. To meet this growing demand and provide critical support, investigations are needed into evaluating and improving existing online support services. Improvements should focus on developing frameworks capable of augmenting and scaling service provision. There are few investigations informing best-practice frameworks for implementing e-mental health support systems for youth mental health, and fewer still implement machine learning or artificially intelligent systems to facilitate the delivery of services. This investigation will use a case study methodology to highlight the design features which are important for systems that enable young people to self-manage their mental health. The investigation will also highlight current information system challenges, including challenges associated with service quality, provisioning, and scaling. This work will propose methods of meeting these challenges through improved design, service augmentation and automation, improved service quality, and artificial-intelligence-inspired solutions. The results of this study will inform a framework for supporting youth mental health with intelligent and scalable web-enabled technologies to support an ever-growing user base.

Keywords: artificial intelligence, information systems, machine learning, youth mental health

Procedia PDF Downloads 89
233 Numerical Method for Productivity Prediction of Water-Producing Gas Well with Complex 3D Fractures: Case Study of Xujiahe Gas Well in Sichuan Basin

Authors: Hong Li, Haiyang Yu, Shiqing Cheng, Nai Cao, Zhiliang Shi

Abstract:

Unconventional resources have gradually become the main direction of oil and gas exploration and development. However, the productivity of gas wells, the level of water production, and the seepage behavior in tight fractured gas reservoirs vary widely, which is why production prediction is so difficult. First, a three-dimensional, multi-scale fracture, multiphase mathematical model based on an embedded discrete fracture model (EDFM) is established. The material balance method is then used to calculate the water body multiple according to the production performance characteristics of the water-producing gas well, which helps construct a 'virtual water body'. Based on these, this paper presents a numerical simulation process that can adapt to different production modes of gas wells. The research results show that fractures have a double-sided effect. The positive side is that they can increase the initial production capacity, but the negative side is that they can connect to the water body, which causes gas production to drop and water production to rise rapidly, showing a 'scissor-like' characteristic. It is worth noting that fractures with different angles have different abilities to connect with the water body: the higher the angle at which the gas well is developed, the earlier water may break through. When the reservoir is a single layer, there may be a stable production period without water before the fractures connect with the water body; once connected, the 'scissors shape' appears. If the reservoir has multiple layers, gas and water are produced at the same time. The above gas-water relationship can be matched with the gas well production data of the Xujiahe gas reservoir in the Sichuan Basin. This method is used to predict the productivity of a well with hydraulic fractures in this gas reservoir, and the prediction results agree with on-site production data by more than 90%. This shows that this research idea has great potential for the productivity prediction of water-producing gas wells. Early prediction results are of great significance for guiding the design of development plans.

Keywords: EDFM, multiphase, multilayer, water body

Procedia PDF Downloads 168
232 Environmental Impact Assessment in Mining Regions with Remote Sensing

Authors: Carla Palencia-Aguilar

Abstract:

Calculations of the net carbon balance can be obtained by means of Net Biome Productivity (NBP), Net Ecosystem Productivity (NEP), and Net Primary Production (NPP). The latter is an important component of the biosphere carbon cycle and is easily obtained from the MODIS MOD17A3HGF product; however, the results are only available yearly. To overcome this limitation in data availability, bands 33 to 36 from MODIS MYD021KM (acquired daily) were analyzed and compared with NPP data from the years 2000 to 2021 at 7 sites where surface mining takes place in Colombian territory. Coal, gold, iron, and limestone were the minerals of interest. Scales and units, as well as thermal anomalies, were considered for the net carbon balance of each location. The NPP time series from the satellite images were filtered using two MATLAB filters: first order and discrete transfer. After filtering the NPP time series, comparing the graphed results with the satellite image values, and running a linear regression, the results showed R2 values from 0.72 to 0.85. To establish comparable units between NPP and bands 33 to 36, the Greenhouse Gas Equivalencies Calculator by the EPA was used. The comparison was established in two ways: one by summing all the data per point per year, and the other by averaging the 46 weekly values and finding the percentage that this value represented with respect to NPP. The former underestimated the total CO2 emissions. The results also showed that coal and gold mining in the last 22 years had lower CO2 emissions than limestone, with yearly averages of 143 kton CO2 eq for gold, 152 kton CO2 eq for coal, and 287 kton CO2 eq for iron. Limestone emissions varied from 206 to 441 kton CO2 eq. The maximum emission values from unfiltered data correspond to 165 kton CO2 eq for gold, 188 kton CO2 eq for coal, 310 kton CO2 eq for iron, and, for limestone, values varying from 231 to 490 kton CO2 eq. If the most polluting limestone site improves its production technology, limestone could reach a maximum of 318 kton CO2 eq emissions per year, a value very similar to that of iron. The importance of gathering these data is to establish benchmarks in order to attain the 2050 zero-emissions goal.

Keywords: carbon dioxide, NPP, MODIS, mining

Procedia PDF Downloads 69
231 Robotic Arm-Automated Spray Painting with One-Shot Object Detection and Region-Based Path Optimization

Authors: Iqraq Kamal, Akmal Razif, Sivadas Chandra Sekaran, Ahmad Syazwan Hisaburi

Abstract:

Painting plays a crucial role in the aerospace manufacturing industry, serving both protective and cosmetic purposes for components. However, the traditional manual painting method is time-consuming and labor-intensive, posing challenges for the sector in achieving higher efficiency. Additionally, current automated robot path planning has been a bottleneck for spray painting processes, as typical manual teaching methods are time-consuming, error-prone, and skill-dependent. Therefore, it is essential to develop automated tool path planning methods to replace manual ones, reducing costs and improving product quality. Focusing on flat panel painting in aerospace manufacturing, this study aims to address issues related to unreliable part identification techniques caused by the high-mix, low-volume nature of the industry. The proposed solution involves using a spray gun and a UR10 robotic arm with a vision system that utilizes one-shot object detection (OS2D) to identify parts accurately. Additionally, the research optimizes path planning by concentrating on the region of interest, specifically the identified part, rather than uniformly covering the entire painting tray.
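
A simple way to restrict path planning to the detected part is to generate a boustrophedon (zig-zag) coverage path over its bounding box only; the sketch below illustrates this idea with an assumed bounding box and pass spacing, and is not the authors' planner.

```python
def zigzag_spray_path(bbox, step, margin=0.0):
    """Generate a boustrophedon (zig-zag) tool path covering only the detected
    part's bounding box rather than the whole painting tray.
    bbox  : (x_min, y_min, x_max, y_max) of the detected part [m]
    step  : spacing between passes, e.g. the spray fan width with overlap [m]
    Returns a list of (x, y) waypoints for the robot's end effector."""
    x_min, y_min, x_max, y_max = bbox
    x_min += margin; y_min += margin; x_max -= margin; y_max -= margin
    waypoints, y, left_to_right = [], y_min, True
    while y <= y_max:
        xs = (x_min, x_max) if left_to_right else (x_max, x_min)
        waypoints.append((xs[0], y))
        waypoints.append((xs[1], y))
        y += step
        left_to_right = not left_to_right
    return waypoints

# Bounding box as it might come from the one-shot detector (illustrative values)
print(zigzag_spray_path((0.10, 0.05, 0.60, 0.45), step=0.08))
```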

Keywords: aerospace manufacturing, one-shot object detection, automated spray painting, vision-based path optimization, deep learning, automation, robotic arm

Procedia PDF Downloads 50
230 Angiogenesis and Blood Flow: The Role of Blood Flow in Proliferation and Migration of Endothelial Cells

Authors: Hossein Bazmara, Kaamran Raahemifar, Mostafa Sefidgar, Madjid Soltani

Abstract:

Angiogenesis is the formation of new blood vessels from existing vessels. Because blood flows through the vessels during angiogenesis, blood flow plays an important role in regulating the angiogenesis process. Multiple mathematical models of angiogenesis have been proposed to simulate the formation of the complicated network of capillaries around a tumor. In this work, a multi-scale model of angiogenesis is developed to show the effect of blood flow on capillaries and network formation. This model spans multiple temporal and spatial scales, i.e., intracellular (molecular), cellular, and extracellular (tissue) scales. At the intracellular (molecular) scale, the signaling cascade of endothelial cells is obtained. Two main stages in the development of a vessel are considered. In the first stage, single sprouts are extended toward the tumor; here, the main regulator of endothelial cell behavior is the signals from the extracellular matrix. After anastomosis and the formation of closed loops, blood flow starts in the capillaries; in this stage, flow-induced signals regulate endothelial cell behavior. At the cellular scale, the growth and migration of endothelial cells are modeled with a discrete lattice Monte Carlo method called the cellular Potts model (CPM). At the extracellular (tissue) scale, the diffusion of tumor angiogenic factors in the extracellular matrix, the formation of closed loops (anastomosis), and the shear stress induced by blood flow are considered. The model is able to simulate the formation of a closed loop and its extension. The results are validated against experimental data and show that, without blood flow, the capillaries are not able to maintain their integrity.
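
A minimal single-cell-type sketch of the cellular Potts model update is given below: a Metropolis copy attempt whose energy has only an adhesion term and a volume constraint. The lattice size, coupling J, volume stiffness, target volume, and temperature are illustrative assumptions; the full angiogenesis model would add chemotaxis and flow-induced terms.

```python
import numpy as np

def cpm_step(lattice, J, lam, v_target, temperature, rng):
    """One Metropolis copy attempt of a minimal 2D cellular Potts model.
    lattice : 2D integer array of cell ids (0 = medium).
    Energy  = adhesion between unlike neighbours + a volume constraint."""
    ny, nx = lattice.shape

    def energy():                                          # naive full-lattice energy
        adhesion = 0.0
        for axis in (0, 1):                                # right and down neighbours
            adhesion += J * np.count_nonzero(lattice != np.roll(lattice, -1, axis=axis))
        volume = sum(lam * (np.count_nonzero(lattice == cid) - v_target) ** 2
                     for cid in np.unique(lattice) if cid != 0)
        return adhesion + volume

    y, x = rng.integers(ny), rng.integers(nx)
    dy, dx = ((-1, 0), (1, 0), (0, -1), (0, 1))[rng.integers(4)]
    src = lattice[(y + dy) % ny, (x + dx) % nx]            # neighbour's cell id
    if src == lattice[y, x]:
        return
    e_old, old = energy(), lattice[y, x]
    lattice[y, x] = src                                    # trial copy
    d_e = energy() - e_old
    if d_e > 0 and rng.random() >= np.exp(-d_e / temperature):
        lattice[y, x] = old                                # reject: restore old id

# One square cell seeded on a 20 x 20 lattice (all parameters are illustrative)
rng = np.random.default_rng(0)
lat = np.zeros((20, 20), dtype=int)
lat[8:12, 8:12] = 1
for _ in range(5000):
    cpm_step(lat, J=2.0, lam=1.0, v_target=16, temperature=4.0, rng=rng)
print(np.count_nonzero(lat == 1))                          # cell volume stays near target
```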

Keywords: angiogenesis, endothelial cells, multi-scale model, cellular Potts model, signaling cascade

Procedia PDF Downloads 398
229 Genetic Algorithm for In-Theatre Military Logistics Search-and-Delivery Path Planning

Authors: Jean Berger, Mohamed Barkaoui

Abstract:

Discrete search path planning in a time-constrained, uncertain environment relying upon imperfect sensors is known to be hard, and the problem-solving techniques proposed so far to compute efficient path plans in near real time are mainly limited to providing few-move solutions. A new information-theoretic, open-loop decision model that explicitly incorporates false-alarm sensor readings is presented to solve a single-agent military logistics search-and-delivery path planning problem with anticipated feedback. The decision model consists of minimizing expected entropy over a given time horizon, considering anticipated possible observation outcomes. The model captures the uncertainty associated with observation events for all possible scenarios. Entropy represents a measure of uncertainty about the searched target location. Feedback information resulting from possible sensor observation outcomes along the projected path plan is exploited to update anticipated unit target occupancy beliefs. For the first time, a compact belief update formulation is generalized to explicitly include false positive observation events that may occur during plan execution. A novel genetic algorithm is then proposed to efficiently solve search path planning, providing near-optimal solutions for practical, realistic problem instances. Given the run-time performance of the algorithm, a natural extension to a closed-loop environment that progressively integrates real visit outcomes over a rolling time horizon can easily be envisioned. Computational results show the value of the approach in comparison to alternate heuristics.
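
To illustrate the evolutionary machinery (not the paper's entropy objective or belief update), the sketch below evolves fixed-length move sequences on a small grid with truncation selection, one-point crossover, and per-gene mutation, using a simple collected-prior-probability surrogate as the fitness. All parameters and the prior belief are assumed for illustration.

```python
import random

MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]       # grid moves available to the searcher
GRID = 10                                        # 10 x 10 search grid

def fitness(plan, prior, start=(0, 0)):
    """Surrogate objective: total prior target probability collected along the
    path (a stand-in for the paper's expected-entropy criterion)."""
    x, y = start
    seen, score = {start}, prior.get(start, 0.0)
    for dx, dy in plan:
        x = max(0, min(GRID - 1, x + dx))
        y = max(0, min(GRID - 1, y + dy))
        if (x, y) not in seen:
            seen.add((x, y))
            score += prior.get((x, y), 0.0)
    return score

def genetic_search(prior, plan_len=20, pop=60, gens=200, p_mut=0.1):
    population = [[random.choice(MOVES) for _ in range(plan_len)] for _ in range(pop)]
    for _ in range(gens):
        ranked = sorted(population, key=lambda p: fitness(p, prior), reverse=True)
        parents = ranked[: pop // 2]                         # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, plan_len)              # one-point crossover
            child = [random.choice(MOVES) if random.random() < p_mut else m
                     for m in a[:cut] + b[cut:]]             # per-gene mutation
            children.append(child)
        population = parents + children
    return max(population, key=lambda p: fitness(p, prior))

# Illustrative prior belief with one likely target region (assumed values)
prior = {(7, 7): 0.30, (7, 8): 0.25, (8, 7): 0.20, (2, 2): 0.10}
best = genetic_search(prior)
print(fitness(best, prior))
```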

Keywords: search path planning, false alarm, search-and-delivery, entropy, genetic algorithm

Procedia PDF Downloads 336
228 The Influence of Travel Experience within Perceived Public Transport Quality

Authors: Armando Cartenì, Ilaria Henke

Abstract:

The perceived quality of public transport is an important driver that influences both customer satisfaction and mobility choices. Competition among transport operators makes it necessary to improve the quality of services and to identify which attributes passengers perceive as relevant. Among the 'traditional' public transport quality attributes are, for example, travel and waiting time, regularity of the services, and ticket price. By contrast, there are some 'non-conventional' attributes that could significantly influence customer satisfaction jointly with the 'traditional' ones. Among these, the beauty/aesthetics of the transport terminals (e.g., rail stations and bus terminals) is probably one of the most impactful on user perception. Starting from these considerations, the question stressed in this paper is whether (and how much) the experience of the overall trip (e.g., how long the trip is and how many transport modes must be used) influences the perception of public transport quality. The aim of this paper was to investigate the weight of terminal quality (e.g., aesthetics, comfort, and services offered) within the overall travel experience. The case study was the extra-urban Italian bus network. Passengers at the major Italian bus terminal were interviewed, and the analysis of the results shows that about 75% of the travellers are willing to pay up to 30% more for the ticket in exchange for a high-quality terminal. A travel experience effect was observed: the average perceived transport quality varies with the characteristics of the overall trip. Passengers on a 'long trip' (travel time greater than 2 hours) perceive the overall quality of the trip as 'low' even if they pass through a high-quality terminal, while the opposite occurs for 'short trip' passengers. This means that if a traveler passes through a high-quality station, the overall perception of that terminal could be significantly reduced if the traveler is tired from a long trip. This result is important and, if confirmed through other case studies, will allow the conclusion that the 'travel experience impact' must be considered an explicit design variable for public transport services and planning.

Keywords: transportation planning, sustainable mobility, decision support system, discrete choice model, design problem

Procedia PDF Downloads 271
227 Formulation and Evaluation of Metformin Hydrochloride Microparticles via BÜCHI Nano-Spray Dryer B-90

Authors: Tamer Shehata

Abstract:

Recently, nanotechnology has acquired great interest in the field of pharmaceutical production. Several pieces of pharmaceutical equipment have been introduced into the research field for the production of nanoparticles, among them BÜCHI's fourth-generation nano spray dryer B-90. The B-90 is specialized in single-step production and drying of nano- and microparticles. Currently, our research group is investigating several pharmaceutical formulations utilizing BÜCHI Nano-Spray Dryer B-90 technology. One of our projects is the formulation and evaluation of metformin hydrochloride mucoadhesive microparticles for the treatment of type 2 diabetes. Several polymers were investigated, among them gelatin and sodium alginate, both natural polymers with mucoadhesive properties. Preformulation studies such as atomization head mesh size, flow rate, head temperature, polymer solution viscosity, and surface tension were performed. Postformulation characteristics such as particle size, flowability, surface scan, and dissolution profile were evaluated. Finally, the pharmacological activity of the selected formula was evaluated in streptozotocin-induced diabetic rats. The B-90 spray head had a 7 µm mesh, heated to 120 ºC, with a flow rate of 3.5 mL/min. The viscosity of the solution was less than 11.5 cP, with surface tension less than 70.1 dyne/cm. Discrete, non-aggregated particles and free-flowing powders with particle sizes of less than 2000 nm were successfully obtained. A gelatin and sodium alginate combination in a 1:3 ratio successfully sustained the in vitro release profile of the drug. Hypoglycemic evaluation of this formula showed a significant reduction of blood glucose level over 24 h. In conclusion, mucoadhesive metformin hydrochloride microparticles obtained from the B-90 could offer a convenient dosage form with enhanced hypoglycemic activity.

Keywords: mucoadhesive, microparticles, metformin hydrochloride, nano-spray dryer

Procedia PDF Downloads 286
226 Designing of Tooling Solution for Material Handling in Highly Automated Manufacturing System

Authors: Muhammad Umair, Yuri Nikolaev, Denis Artemov, Ighor Uzhinsky

Abstract:

A flexible manufacturing system is an integral part of a smart factory in Industry 4.0, in which every machine is interconnected and works autonomously. Robots are in the process of replacing humans in every industrial sector. As cyber-physical systems (CPS) and artificial intelligence (AI) advance, the manufacturing industry is becoming more dependent on computers than on human brains. This modernization has boosted production with high quality and accuracy and shifted the industry from classic production to smart manufacturing systems. However, material handling for such automated production is a challenge and needs to be addressed with the best possible solution. Conventional clamping systems are designed for manual work and are not suitable for highly automated production systems. Researchers and engineers are trying to find the most economical solution for loading/unloading and transporting workpieces from a warehouse to a machine shop for machining operations and back to the warehouse without human involvement. This work aims to propose an advanced multi-shape tooling solution for highly automated manufacturing systems. The results obtained so far show that it could function well with automated guided vehicles (AGVs) and modern conveyor belts. The proposed solution follows the requirements of being automation-friendly and universal for different part geometries and production operations. We used a bottom-up approach in this work, starting with studying different case scenarios and their limitations and finishing with the general solution.

Keywords: artificial intelligence, cyber-physical system, Industry 4.0, material handling, smart factory, flexible manufacturing system

Procedia PDF Downloads 112
225 A Computational Framework for Decoding Hierarchical Interlocking Structures with SL Blocks

Authors: Yuxi Liu, Boris Belousov, Mehrzad Esmaeili Charkhab, Oliver Tessmann

Abstract:

This paper presents a computational solution for designing reconfigurable interlocking structures that are fully assembled from SL blocks. Formed by S-shaped and L-shaped tetracubes, the SL block is a specific type of interlocking puzzle piece. Analogous to molecular self-assembly, the aggregation of SL blocks builds a reversible, hierarchical, and discrete system in which a single module can be replicated many times to compose semi-interlocking components that further align, wrap, and braid around each other to form complex high-order aggregations. These aggregations can be disassembled and reassembled, responding dynamically to design inputs and changes with a unique capacity for reconfiguration. To use these aggregations as architectural structures, we developed computational tools that automate the configuration of SL blocks based on architectural design objectives. There are three critical phases in our work. First, we revisit the hierarchy of the SL block system and devise a top-down design strategy. From this, we pose two key questions: 1) How can 3D polyominoes be translated into an SL block assembly? 2) How can the desired voxelized shapes be decomposed into a set of 3D polyominoes with interlocking joints? These two questions can be considered as the Hamiltonian path problem and the 3D polyomino tiling problem, respectively. We then derive our solution to each of them based on two methods. The first method is to construct the optimal closed path in an undirected graph built from the voxelized shape and to translate the node sequence of the resulting path into the assembly sequence of SL blocks. The second approach describes the interlocking relationships of 3D polyominoes as a joint connection graph. Lastly, we formulate the desired shapes and leverage our methods to achieve their reconfiguration at different levels. We show that our computational strategy facilitates the efficient design of hierarchical interlocking structures with a self-replicating geometric module.
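
The first method reduces to finding a Hamiltonian path (or closed path) in the face-adjacency graph of the voxelised shape; a simple backtracking sketch of the open-path variant is shown below on a toy 2 x 2 x 1 voxel block. For the closed-path case one would additionally require the last voxel to be adjacent to the start.

```python
def hamiltonian_path(adjacency, start):
    """Backtracking search for a Hamiltonian path in the face-adjacency graph of
    a voxelised shape (vertices = voxels).  The visit order would then be
    translated into an SL-block assembly sequence; returns None if no path exists."""
    n = len(adjacency)
    path = [start]

    def extend():
        if len(path) == n:
            return True
        for nxt in adjacency[path[-1]]:
            if nxt not in path:                  # O(n) membership test, fine for small shapes
                path.append(nxt)
                if extend():
                    return True
                path.pop()
        return False

    return path if extend() else None

# Toy 2 x 2 x 1 voxel block and its face-adjacency graph (illustrative input)
adj = {
    (0, 0, 0): [(1, 0, 0), (0, 1, 0)],
    (1, 0, 0): [(0, 0, 0), (1, 1, 0)],
    (0, 1, 0): [(0, 0, 0), (1, 1, 0)],
    (1, 1, 0): [(1, 0, 0), (0, 1, 0)],
}
print(hamiltonian_path(adj, (0, 0, 0)))
```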

Keywords: computational design, SL-blocks, 3D polyomino puzzle, combinatorial problem

Procedia PDF Downloads 103