Search results for: covering machine
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3486

366 An Efficient Motion Recognition System Based on LMA Technique and a Discrete Hidden Markov Model

Authors: Insaf Ajili, Malik Mallem, Jean-Yves Didier

Abstract:

Interest in human motion recognition has grown considerably in recent years owing to its importance in a wide range of applications, such as human-computer interaction, intelligent surveillance, augmented reality, content-based video compression and retrieval, etc. However, it is still regarded as a challenging task, especially in realistic scenarios. It can be seen as a general machine learning problem that requires an effective human motion representation and an efficient learning method. In this work, we introduce a descriptor based on the Laban Movement Analysis (LMA) technique, a formal and universal language for human movement, to capture both quantitative and qualitative aspects of movement. We use a Discrete Hidden Markov Model (DHMM) to train on and classify motions. We improve the classification algorithm by proposing two DHMMs for each motion class to process the motion sequence in two different directions, forward and backward. This modification helps avoid the misclassifications that can happen when recognizing similar motions. Two experiments are conducted. In the first, we evaluate our method on a public dataset, the Microsoft Research Cambridge-12 Kinect gesture dataset (MSRC-12), which is widely used for evaluating action/gesture recognition methods. In the second experiment, we build a dataset composed of 10 gestures (introduce yourself, wave, dance, move, turn left, turn right, stop, sit down, increase velocity, decrease velocity) performed by 20 persons. The evaluation of the system includes testing the efficiency of our LMA-based descriptor vector with the basic DHMM method and comparing the recognition results of the modified DHMM with the original one. Experimental results demonstrate that our method outperforms most existing methods evaluated on the MSRC-12 dataset and achieves a near-perfect classification rate on our own dataset.
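
As an illustration of the two-direction idea, the sketch below trains one forward and one backward DHMM per gesture class and sums their log-likelihoods at test time. It is a minimal sketch, not the authors' implementation: it assumes the LMA descriptors have already been discretized into integer symbol streams, and it uses hmmlearn's CategoricalHMM (the class name for discrete-emission HMMs varies across hmmlearn versions).

```python
# Two-direction DHMM classification sketch (assumed, not the paper's code).
import numpy as np
from hmmlearn import hmm

def train_bidirectional_dhmms(sequences, n_states=6):
    """Train one forward and one backward DHMM on one class's sequences."""
    fwd = np.concatenate(sequences).reshape(-1, 1)
    bwd = np.concatenate([s[::-1] for s in sequences]).reshape(-1, 1)
    lengths = [len(s) for s in sequences]
    m_fwd = hmm.CategoricalHMM(n_components=n_states, n_iter=100).fit(fwd, lengths)
    m_bwd = hmm.CategoricalHMM(n_components=n_states, n_iter=100).fit(bwd, lengths)
    return m_fwd, m_bwd

def classify(seq, models):
    """Score a test sequence in both directions; pick the best-scoring class."""
    scores = {}
    for label, (m_fwd, m_bwd) in models.items():
        scores[label] = (m_fwd.score(seq.reshape(-1, 1))
                         + m_bwd.score(seq[::-1].reshape(-1, 1)))
    return max(scores, key=scores.get)
```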

Keywords: human motion recognition, motion representation, Laban Movement Analysis, Discrete Hidden Markov Model

Procedia PDF Downloads 208
365 Hand Gesture Detection via EmguCV Canny Pruning

Authors: N. N. Mosola, S. J. Molete, L. S. Masoebe, M. Letsae

Abstract:

Hand gesture recognition is a technique used to locate, detect, and recognize a hand gesture. Detection and recognition are concepts of Artificial Intelligence (AI). AI concepts are applicable in Human-Computer Interaction (HCI), Expert Systems (ES), etc. Hand gesture recognition can be used in sign language interpretation. Sign language is a visual communication tool used mostly by deaf communities and people with speech disorders. Communication barriers exist when people with speech disorders interact with others. This research aims to build a hand recognition system for interpretation between Lesotho's Sesotho and English languages. The system will help bridge the communication problems encountered by the communities mentioned. The system has various processing modules, consisting of a hand detection engine, an image processing engine, feature extraction, and sign recognition. Detection is the process of identifying an object. The proposed system uses Haar cascade detection with Canny pruning, which applies Canny edge detection, an optimal image processing algorithm, to detect the edges of an object. The system also employs a skin detection algorithm, which performs background subtraction and computes the convex hull and the centroid to assist in the detection process. Recognition is the process of gesture classification; template matching classifies each hand gesture in real time. The system was tested through various experiments. The results obtained show that time, distance, and light are factors that affect the rate of detection and, ultimately, recognition. The detection rate is directly proportional to the distance of the hand from the camera. Different lighting conditions were considered: the higher the light intensity, the faster the detection rate. Based on the results obtained from this research, the applied methodologies are efficient and provide a plausible solution towards a lightweight, inexpensive system that can be used for sign language interpretation.
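
For readers who want to reproduce the detection stage, the sketch below shows a similar pipeline in Python with OpenCV (EmguCV wraps the same calls in C#): a Haar cascade run with the Canny-pruning flag, followed by skin segmentation with convex hull and centroid computation. The cascade file name and the YCrCb skin thresholds are illustrative assumptions, not values from the paper.

```python
import cv2
import numpy as np

cascade = cv2.CascadeClassifier("hand.xml")  # hypothetical hand cascade file
frame = cv2.imread("frame.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Canny pruning skips low-edge regions before the cascade is evaluated
hands = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4,
                                 flags=cv2.CASCADE_DO_CANNY_PRUNING)

# Skin segmentation in YCrCb, then convex hull and centroid of largest blob
ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))  # assumed thresholds
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    c = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(c)
    m = cv2.moments(c)
    if m["m00"]:
        cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])  # centroid
```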

Keywords: canny pruning, hand recognition, machine learning, skin tracking

Procedia PDF Downloads 185
364 Deep Cryogenic Treatment With Subsequent Aging Applied to Martensitic Stainless Steel: Evaluation of Hardness, Toughness and Microstructure

Authors: Victor Manuel Alcántara Alza

Abstract:

The effect of deep cryogenic treatment (DCT, -196°C) applied with subsequent aging on the hardness, toughness, and microstructure of martensitic stainless steels was investigated, with the aim of establishing a methodology different from the traditional DCT followed by tempering. For this experimental study, a muffle furnace was used. The specimens were first austenitized at 1020, 1030, 1040, or 1050°C for 1 hour, quenched in oil, and then subjected to deep cryogenization in a liquid nitrogen bath for 4 hours. A first group of cryogenic samples was subjected to subsequent aging at 150°C, with immersion times of 2.5, 5, 10, 20, 50, and 100 hours. The next group was subjected to subsequent tempering at 480, 500, 510, 520, 530, and 540°C for 2 hours. The hardness tests were carried out under standards using a universal durometer, and the readings were made on the HRC scale. The impact resistance tests were carried out in a Charpy machine following the ASTM E23-93a standard, with measurements taken in joules. Microscopy was performed at the optical level using a 1000X microscope. It was found that, over the entire aging interval, the samples austenitized at 1050°C present greater hardness than those austenitized at 1040°C, with the maximum peak at 30 hours of aging. In all cases, the aged samples exceed the hardness of the tempered samples, even at their minimum values. In post-tempered samples, the tempering temperature has hardly any effect on the impact strength of the material. In the cryogenic treatment DCT + subsequent aging, the maximum hardness value (58.7 HRC) is linked to an impact toughness value (54 J) obtained with an aging time of 39 hours, which is considered an optimal condition. The higher hardness of the steel after DCT is attributed to the transformation of retained austenite into martensite. The microstructure is composed mainly of lath martensite, and the original austenite grain size can be appreciated. The choice of the hardness-toughness combination is subject to the required service conditions of the steel.

Keywords: deep cryogenic treatment; aged precipitation; martensitic steels;, mechanical properties; martensitic steels, hardness, carbides precipitaion

Procedia PDF Downloads 74
363 Performance Evaluation of Production Schedules Based on Process Mining

Authors: Kwan Hee Han

Abstract:

The external environment of enterprises is changing rapidly, driven mainly by global competition, cost-reduction pressures, and new technology. In this situation, the production scheduling function plays a critical role in meeting customer requirements and attaining the goal of operational efficiency. It deals with short-term decision making in the production process of the whole supply chain. The major task of production scheduling is to seek a balance between customer orders and limited resources. In manufacturing companies, this task is difficult because it must efficiently utilize resource capacity under careful consideration of many interacting constraints. At present, many computerized software solutions are used in enterprises to generate realistic production schedules and overcome the complexity of schedule generation. However, most production scheduling systems do not provide sufficient information about the validity of the generated schedule beyond limited statistics. Process mining has only recently emerged as a sub-discipline of both data mining and business process management. Process mining techniques enable the useful analysis of a wide variety of processes, covering process discovery, conformance checking, and bottleneck analysis. In this study, the performance of a generated production schedule is evaluated by mining the event log data of the production scheduling software system using process mining techniques, since every software system generates event logs for later use in areas such as security investigation, auditing, and debugging. An application of the process mining approach is proposed for validating the goodness of production schedules generated by scheduling software systems. Using process mining techniques, major evaluation criteria such as workstation utilization, the existence of bottleneck workstations, critical process route patterns, and the workload balance of each machine over time are measured, and finally, the goodness of the production schedule is evaluated. By using the proposed process mining approach to evaluate the performance of generated production schedules, the quality of production scheduling in manufacturing enterprises can be improved.
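
As a minimal illustration of one evaluation criterion, the pandas sketch below computes workstation utilization and surfaces bottleneck candidates from a scheduler's event log. The column names and CSV layout are assumptions for the example; the study itself uses full process mining techniques rather than this simplified aggregation.

```python
import pandas as pd

# Assumed event log layout: one row per operation with columns
# job, workstation, start, end (timestamps).
log = pd.read_csv("schedule_event_log.csv", parse_dates=["start", "end"])

horizon = (log["end"].max() - log["start"].min()).total_seconds()
busy = (log["end"] - log["start"]).dt.total_seconds()

# Utilization: fraction of the schedule horizon each workstation is busy
utilization = busy.groupby(log["workstation"]).sum() / horizon

# Bottleneck candidates: workstations with the highest utilization
print(utilization.sort_values(ascending=False).head())
```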

Keywords: data mining, event log, process mining, production scheduling

Procedia PDF Downloads 280
362 Interfacial Adhesion and Properties Improvement of Polyethylene/Thermoplastic Starch Blend Compatibilized by Stearic Acid-Grafted-Starch

Authors: Nattaporn Khanoonkon, Rangrong Yoksan, Amod A. Ogale

Abstract:

Polyethylene (PE) is one of the most widely used petroleum-based thermoplastic materials in many applications, including packaging, because it is cheap, lightweight, chemically inert, and can be converted into products of various shapes and sizes. Although PE is a commercially important material, its non-biodegradability causes environmental problems. At present, bio-based polymers are becoming more attractive owing to their biodegradability, non-toxicity, and renewability, as well as being eco-friendly. Thermoplastic starch (TPS) is a bio-based and biodegradable plastic produced by plasticizing starch under heat and shear force. In much research, TPS has been blended with petroleum-based polymers, including PE, in order to reduce the cost and the use of those polymers. However, the phase separation between hydrophobic PE and hydrophilic TPS limits the amount of TPS that can be incorporated. The immiscibility of two polymers of different polarity can be diminished by adding a compatibilizer. PE-based compatibilizers, e.g., polyethylene-grafted-maleic anhydride, polyethylene-co-vinyl alcohol, etc., have been applied to the PE/TPS blend system in order to improve miscibility. Until now, there has been no report on the utilization of a starch-based compatibilizer for the PE/TPS blend system. The aims of the present research were therefore to synthesize a new starch-based compatibilizer, i.e., stearic acid-grafted starch (SA-g-starch), and to study the effect of SA-g-starch on the chemical interaction, morphological properties, tensile properties, and water vapor as well as oxygen barrier properties of PE/TPS blend films. PE/TPS blends without and with SA-g-starch at contents of 1, 3, and 5 part(s) per hundred parts of starch (phr) were prepared using a twin screw extruder and then blown into films using a film blowing machine. Incorporating 1 phr and 3 phr of SA-g-starch improved the miscibility of the two polymers, as confirmed by the reduction of the TPS phase size and the good dispersion of the TPS phase in the PE matrix. In addition, the blends containing SA-g-starch at 1 phr and 3 phr exhibited higher tensile strength and extensibility, as well as lower water vapor and oxygen permeabilities, than the neat blend. The above results suggest that SA-g-starch could potentially be applied as a compatibilizer for the PE/TPS blend system.

Keywords: blend, compatibilizer, polyethylene, thermoplastic starch

Procedia PDF Downloads 440
361 Austempered Compacted Graphite Irons: Influence of Austempering Temperature on Microstructure and Microscratch Behavior

Authors: Rohollah Ghasemi, Arvin Ghorbani

Abstract:

This study investigates the effect of austempering temperature on the microstructure and scratch behavior of austempered heat-treated compacted graphite irons. The as-cast material was used as the base material for the heat treatment practices. The samples were extracted from as-cast ferritic CGI pieces and heat treated at an austenitizing temperature of 900°C for 60 minutes, followed by quenching in a salt bath at austempering temperatures of 275°C, 325°C, and 375°C. For all heat treatments, an austempering holding time of 30 minutes was selected. Light optical microscopy (LOM), scanning electron microscopy (SEM), and electron backscatter diffraction (EBSD) analysis confirmed that an ausferritic matrix formed in all heat-treated samples. Microscratches were performed under loads of 200, 600, and 1000 mN using a sphero-conical diamond indenter with a tip radius of 50 μm and an included cone angle of 90°, at a speed of 10 μm/s and room temperature (~25°C). An instrumented nanoindentation machine was used for the nanoindentation hardness measurements and microscratch testing. The hardness measurements and scratch resistance showed a significant increase in Brinell, Vickers, and nanoindentation hardness values as well as in the microscratch resistance of the heat-treated samples compared to the as-cast ferritic sample. The increase in hardness and the improvement in microscratch resistance are associated with the formation of an ausferrite matrix consisting of carbon-saturated retained austenite and acicular ferrite. The maximum hardness was observed for samples austempered at 275°C, which resulted in the formation of very fine acicular ferrite. In addition, the nanohardness values varied quite significantly within the matrix due to the presence of acicular ferrite and carbon-saturated retained austenite. It was also observed that increasing the austempering temperature increased the volume of carbon-saturated retained austenite and decreased the hardness values.

Keywords: austempered CGI, austempering, scratch testing, scratch plastic deformation, scratch hardness

Procedia PDF Downloads 136
360 On Cloud Computing: A Review of the Features

Authors: Assem Abdel Hamed Mousa

Abstract:

The Internet of Things probably already influences your life, and if it doesn't, it soon will, say computer scientists. Ubiquitous computing names the third wave in computing, just now beginning. First were mainframes, each shared by many people. Now we are in the personal computing era, person and machine staring uneasily at each other across the desktop. Next comes ubiquitous computing, or the age of calm technology, when technology recedes into the background of our lives. Alan Kay of Apple calls this "Third Paradigm" computing. Ubiquitous computing is essentially the term for human interaction with computers in virtually everything, and it is roughly the opposite of virtual reality. Where virtual reality puts people inside a computer-generated world, ubiquitous computing forces the computer to live out here in the world with people. Virtual reality is primarily a horsepower problem; ubiquitous computing is a very difficult integration of human factors, computer science, engineering, and social sciences. The approach: activate the world. Provide hundreds of wireless computing devices per person per office, at all scales (from 1" displays to wall-sized). This has required new work in operating systems, user interfaces, networks, wireless, displays, and many other areas. We call our work "ubiquitous computing". This is different from PDAs, dynabooks, or information at your fingertips; it is invisible, everywhere computing that does not live on a personal device of any sort but is in the woodwork everywhere. The initial incarnation of ubiquitous computing was in the form of "tabs", "pads", and "boards" built at Xerox PARC in 1988-1994. Several papers describe this work, and there are web pages for the tabs and for the boards (which are now a commercial product). Ubiquitous computing will drastically reduce the cost of digital devices and tasks for the average consumer. With labor-intensive components such as processors and hard drives housed in the remote data centers powering the cloud, and with pooled resources giving individual consumers the benefits of economies of scale, consumers may pay monthly fees, similar to a cable bill, for services that feed into their phones.

Keywords: internet, cloud computing, ubiquitous computing, big data

Procedia PDF Downloads 384
359 Discerning Divergent Nodes in Social Networks

Authors: Mehran Asadi, Afrand Agah

Abstract:

In data mining, partitioning is used as a fundamental tool for classification. With the help of partitioning, we study the structure of data, which allows us to envision decision rules that can be applied to classification trees. In this research, we used an online social network dataset and all of its attributes (e.g., node features, labels, etc.) to determine what constitutes an above-average chance of being a divergent node. We used the R statistical computing language to conduct the analyses in this report. The data were found in the UC Irvine Machine Learning Repository. This research introduces the basic concepts of classification in online social networks. In this work, we address overfitting and describe different approaches for the evaluation and performance comparison of different classification methods. In classification, the main objective is to categorize different items and assign them to different groups based on their properties and similarities. In data mining, recursive partitioning is utilized to probe the structure of a data set, which allows us to envision decision rules and apply them to classify data into several groups. Estimating densities is hard, especially in high dimensions with limited data. Of course, we do not know the densities, but we can estimate them using classical techniques. First, we calculated the correlation matrix of the dataset to see if any predictors are highly correlated with one another. By calculating the correlation coefficients for the predictor variables, we see that density is strongly correlated with transitivity. We initialized a data frame to easily compare the quality of the resulting classification methods and utilized decision trees (with k-fold cross-validation to prune the tree). The method performed on this dataset is the decision tree, a non-parametric classification method that uses a set of rules to predict that each observation belongs to the most commonly occurring class label of the training data. Our method aggregates many decision trees to create an optimized model that is not susceptible to overfitting. When using a decision tree, however, it is important to use cross-validation to prune the tree in order to narrow it down to the most important variables.
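
The pruning step can be sketched as follows. The example uses scikit-learn's cost-complexity pruning with k-fold cross-validation as a Python stand-in for the R workflow described above; X and y stand for the node-feature matrix and the divergent/non-divergent labels.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

def fit_pruned_tree(X, y, k=5):
    """Grow a tree, then pick the pruning strength by k-fold CV."""
    path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X, y)
    best_alpha, best_score = 0.0, -np.inf
    for alpha in path.ccp_alphas:
        tree = DecisionTreeClassifier(random_state=0, ccp_alpha=alpha)
        score = cross_val_score(tree, X, y, cv=k).mean()  # k-fold CV accuracy
        if score > best_score:
            best_alpha, best_score = alpha, score
    return DecisionTreeClassifier(random_state=0, ccp_alpha=best_alpha).fit(X, y)
```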

Keywords: online social networks, data mining, social cloud computing, interaction and collaboration

Procedia PDF Downloads 160
358 System DietAdhoc® - A Fusion of Human-Centred Design and Agile Development for the Explainability of AI Techniques Based on Nutritional and Clinical Data

Authors: Michelangelo Sofo, Giuseppe Labianca

Abstract:

In recent years, the scientific community's interest in the exploratory analysis of biomedical data has increased exponentially. In the research field of nutritional biologists, the curative process based on the analysis of clinical data is a very delicate operation, because there are multiple solutions for the management of pathologies in the food sector (for example, intolerances and allergies, management of cholesterol metabolism, diabetic pathologies, arterial hypertension, up to obesity and breathing and sleep problems). In this regard, this research work created a system capable of evaluating various dietary regimes for specific patient pathologies. The system is founded on a mathematical-numerical model and has been tailored to the real working needs of an expert in human nutrition using human-centered design (ISO 9241-210); it is therefore in step with continuous scientific progress in the field and evolves through the experience of managed clinical cases (machine learning process). DietAdhoc® is a decision support system for nutrition specialists treating patients of both sexes (from 18 years of age), developed with an agile methodology. Its task is to draw up the biomedical and clinical profile of the specific patient by applying two algorithmic optimization approaches to nutritional data, plus a symbolic solution obtained by transforming the relational database underlying the system into a deductive database. For all three solution approaches, particular emphasis has been given to the explainability of the suggested clinical decisions through flexible and customizable user interfaces. Furthermore, the system has multiple software modules based on time series and visual analytics techniques that allow evaluation of the complete picture of the situation and the evolution of the diet assigned for specific pathologies.

Keywords: medical decision support, physiological data extraction, data driven diagnosis, human centered AI, symbiotic AI paradigm

Procedia PDF Downloads 26
357 “CheckPrivate”: Artificial Intelligence Powered Mobile Application to Enhance the Well-Being of Sexually Transmitted Diseases Patients in Sri Lanka under Cultural Barriers

Authors: Warnakulasuriya Arachichige Malisha Ann Rosary Fernando, Udalamatta Gamage Omila Chalanka Jinadasa, Bihini Pabasara Amandi Amarasinghe, Manul Thisuraka Mandalawatta, Uthpala Samarakoon, Manori Gamage

Abstract:

The surge in sexually transmitted diseases (STDs) has become a critical public health crisis demanding urgent attention and action. Like many other nations, Sri Lanka is grappling with a significant increase in STDs due to a lack of education and awareness regarding their dangers. Presently, the available applications for tracking and managing STDs cover only a limited number of easily detectable infections, resulting in a significant gap in effectively controlling their spread. To address this gap and combat the rising STD rates, it is essential to leverage technology and data. Employing technology to enhance the tracking and management of STDs is vital to prevent their further propagation and to enable early intervention and treatment. This requires adopting a comprehensive approach that involves raising public awareness about the perils of STDs, improving access to affordable healthcare services for early detection and treatment, and utilizing advanced technology and data analysis. The proposed mobile application aims to cater to a broad range of users, including STD patients, recovered individuals, and those unaware of their STD status. By harnessing cutting-edge technologies like image detection, symptom-based identification, prevention methods, doctor and clinic recommendations, and virtual counselor chat, the application offers a holistic approach to STD management. In conclusion, the escalating STD rates in Sri Lanka and across the globe require immediate action. The integration of technology-driven solutions, along with comprehensive education and healthcare accessibility, is the key to curbing the spread of STDs and promoting better overall public health.

Keywords: STD, machine learning, NLP, artificial intelligence

Procedia PDF Downloads 84
356 Determination of Community Based Reference Interval of Aspartate Aminotransferase to Platelet Ratio Index (APRI) among Healthy Populations in Mekelle City, Tigray, Northern Ethiopia

Authors: Getachew Belay Kassahun

Abstract:

Background: The aspartate aminotransferase to platelet ratio index (APRI) has recently become a biomarker for screening liver fibrosis, since the liver biopsy procedure is invasive and subject to variation in pathological interpretation. The Clinical Laboratory Standards Institute recommends establishing age-, sex-, and environment-specific reference intervals for biomarkers in a homogeneous population. The current study aimed to derive a community-based reference interval of APRI for people aged between 12 and 60 years old in Mekelle city, Tigray, Northern Ethiopia. Method: Study participants were recruited from three districts in Mekelle city. The 3 districts were selected through a random sampling technique, and the sample size was distributed among kebelles (small administrative units) in proportion to the number of households in each district. The lottery method was used at the household level when more than 2 study participants were found for an age partition. Of the 688 initially enrolled participants, around 154 were excluded through the exclusion criteria, leaving a total of 534 participants (264 males and 270 females) in the final laboratory and data analysis of this community-based cross-sectional study. Aspartate aminotransferase was analyzed on a BioSystems chemistry analyzer, and a Sysmex machine was used to analyze platelets. The Mann-Whitney U test, a non-parametric statistical tool, was used to assess differences between genders after excluding outliers through box-and-whisker plots. Result: The study found a statistically significant difference between genders for the APRI reference interval. The combined, male, and female reference intervals in the current study were 0.098-0.390, 0.133-0.428, and 0.090-0.319, respectively. The upper and lower reference limits of males were higher than those of females in all age partitions, and no statistically significant difference was found between age partitions. Conclusion: The current study showed that using sex-specific reference intervals is important for the APRI biomarker in clinical practice when interpreting results.
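
For context, APRI is conventionally computed as (AST / upper limit of normal AST) divided by the platelet count (10^9/L), multiplied by 100, and nonparametric reference intervals are usually taken as the central 95% (2.5th to 97.5th percentiles). The sketch below illustrates both conventions; the exact settings used by the author are not stated in the abstract.

```python
import numpy as np

def apri(ast_u_l, ast_uln_u_l, platelets_10e9_l):
    """APRI = (AST / upper limit of normal AST) / platelets(10^9/L) * 100."""
    return (ast_u_l / ast_uln_u_l) / platelets_10e9_l * 100.0

def reference_interval(values):
    """Nonparametric central 95% reference interval (2.5th-97.5th percentiles)."""
    return np.percentile(values, [2.5, 97.5])

# e.g. AST 30 U/L with ULN 40 U/L and platelets 250 x 10^9/L -> APRI 0.3
print(apri(30, 40, 250))
```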

Keywords: reference interval, aspartate aminotransferase to platelet ratio Index, Ethiopia, tigray

Procedia PDF Downloads 117
355 Machine That Applies Mineral Fertilizer Uniformly under the Soil on Slopes

Authors: Huseyn Nuraddin Qurbanov

Abstract:

A reliable food supply for the population of the republic is one of the main directions of the state's economic policy. Grain growing, the basis of agriculture, is important in this area. In the cultivation of cereals on slopes, the application of equal amounts of mineral fertilizer under the soil before sowing is a very important technological process. The low level of technical equipment in this area prevents producers from providing the country with cereals of the necessary quality. Experience in the operation of modern technical means has shown that, at present, there is a need to apply an equal amount of fertilizer under the soil on slopes while fully meeting the agro-technical requirements. No fundamental changes have been made to the industrial machines that place fertilizer under the soil, and fertilizer continues to be applied unevenly under the soil on slopes. This technological process leads to the destruction of new seedlings and reduced productivity due to frost intolerance during the winter for plants planted in the fall. In particular climatic conditions, there is an optimal fertilization rate for each agricultural product, and the proper application of fertilizer to the soil is one of the conditions that increases its efficiency in the field. It is therefore very important to develop a new technical proposal for fertilizing and ploughing slopes with equal application rates, improving the technological and design parameters while taking into account the physical and mechanical properties of the fertilizers. Taking the above issues into account, a combined plough was developed in our laboratory. The combined plough carries out the pre-sowing technological operation in cereal cultivation, providing a smooth, equal distribution of mineral fertilizer under the soil on slopes. Mathematical models of a smooth spreader that evenly distributes fertilizer in the field have been developed. Diagrams and graphs of the distribution over the 8 sections of the smooth spreader were constructed for various slope inclination angles. The percentage of uniform distribution in the field and the productivity were determined through practical and theoretical analysis.

Keywords: combined plough, mineral fertilizer, equal sowing, fertilizer norm, grain-crops, sowing fertilizer

Procedia PDF Downloads 138
354 AI/ML Atmospheric Parameters Retrieval Using the “Atmospheric Retrievals conditional Generative Adversarial Network (ARcGAN)”

Authors: Thomas Monahan, Nicolas Gorius, Thanh Nguyen

Abstract:

Exoplanet atmospheric parameter retrieval is a complex, computationally intensive, inverse modeling problem in which an exoplanet's atmospheric composition is extracted from an observed spectrum. Traditional Bayesian sampling methods require extensive time and computation, involving algorithms that compare large numbers of known atmospheric models to the input spectral data. Runtimes are directly proportional to the number of parameters under consideration. These increased power and runtime requirements are difficult to accommodate in space missions, where model size, speed, and power consumption are of particular importance. The use of traditional Bayesian sampling methods therefore compromises model complexity or sampling accuracy. The Atmospheric Retrievals conditional Generative Adversarial Network (ARcGAN) is a deep convolutional generative adversarial network that improves on previous models' speed and accuracy. We demonstrate the efficacy of artificial intelligence in quickly and reliably predicting atmospheric parameters and present it as a viable alternative to slow and computationally heavy Bayesian methods. In addition to its broad applicability across instruments and planetary types, ARcGAN has been designed to function on low-power application-specific integrated circuits. The application of edge computing to atmospheric retrievals allows for real or near-real-time quantification of atmospheric constituents at the instrument level. Additionally, edge computing provides both high-performance and power-efficient computing for AI applications, both of which are critical for space missions. With the edge computing chip implementation, ARcGAN serves as a strong basis for the development of a similar machine-learning algorithm to reduce the downlinked data volume from the Compact Ultraviolet to Visible Imaging Spectrometer (CUVIS) onboard the DAVINCI mission to Venus.
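
A conditional GAN of this kind can be pictured with the minimal PyTorch sketch below: the generator receives an observed spectrum plus a noise vector and emits a vector of atmospheric parameters. The layer sizes and parameter count are illustrative assumptions; this is not the ARcGAN architecture itself, and the discriminator and training loop are omitted.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Conditional generator: (spectrum, noise) -> atmospheric parameters."""
    def __init__(self, spectrum_dim=512, noise_dim=64, n_params=7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(spectrum_dim + noise_dim, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, n_params),  # predicted atmospheric parameters
        )

    def forward(self, spectrum, z):
        return self.net(torch.cat([spectrum, z], dim=-1))

g = Generator()
params = g(torch.randn(8, 512), torch.randn(8, 64))  # a batch of 8 retrievals
```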

Keywords: deep learning, generative adversarial network, edge computing, atmospheric parameters retrieval

Procedia PDF Downloads 171
353 Analysis of Residents’ Travel Characteristics and Policy Improvement Strategies

Authors: Zhenzhen Xu, Chunfu Shao, Shengyou Wang, Chunjiao Dong

Abstract:

To improve the satisfaction of residents' travel, this paper analyzes the characteristics and influencing factors of urban residents' travel behavior. First, a Multinomial Logit (MNL) model is built to analyze the characteristics of residents' travel behavior, reveal the influence of individual attributes, family attributes, and travel characteristics on the choice of travel mode, and identify the significant factors; suggestions for policy improvement are then put forward. Finally, Support Vector Machine (SVM) and Multi-Layer Perceptron (MLP) models are introduced to evaluate the policy effect. This paper selects Futian Street in Futian District, Shenzhen City, for investigation and research. The results show that gender, age, education, income, number of cars owned, travel purpose, departure time, journey time, travel distance, and trip frequency all have a significant influence on residents' choice of travel mode. Based on the above results, two policy improvement suggestions are put forward, centered on reducing public transportation and non-motorized vehicle travel times, and the policy effect is evaluated. Before the evaluation, the prediction performance of the MNL, SVM, and MLP models was assessed. After parameter optimization, the prediction accuracies of the three models were found to be 72.80%, 71.42%, and 76.42%, respectively. The MLP model, having the highest prediction accuracy, was selected to evaluate the effect of policy improvement. The results showed that after implementation of the policy, the proportion of public transportation in plan 1 and plan 2 increased by 14.04% and 9.86%, respectively, while the proportion of private cars decreased by 3.47% and 2.54%, respectively. The proportion of car trips decreased noticeably, while the proportion of public transport trips increased. The measures can therefore be considered to have a positive effect on promoting green trips and improving the satisfaction of urban residents, and they can provide a reference for relevant departments when formulating transportation policies.
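
The model-comparison step can be sketched with scikit-learn as below: a multinomial logit, an SVM, and an MLP are trained on the survey features and scored on held-out data. The split ratio and hyperparameters are illustrative assumptions, not the parameters found by the study's optimization.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def compare_mode_choice_models(X, y):
    """Return held-out accuracy for MNL, SVM, and MLP travel-mode classifiers."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    models = {
        "MNL": LogisticRegression(max_iter=1000),  # multinomial logit
        "SVM": SVC(),
        "MLP": MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000),
    }
    return {name: m.fit(X_tr, y_tr).score(X_te, y_te) for name, m in models.items()}
```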

Keywords: neural network, travel characteristics analysis, transportation choice, travel sharing rate, traffic resource allocation

Procedia PDF Downloads 139
352 Study on the Prediction of Serviceability of Garments Based on the Seam Efficiency and Selection of the Right Seam to Ensure Better Serviceability of Garments

Authors: Md Azizul Islam

Abstract:

A seam is the line along which two separate fabric layers are joined for functional or aesthetic purposes. Different kinds of seams are used for assembling the different areas or parts of a garment to increase serviceability. To empirically support the importance of seam efficiency for garment serviceability, this study focuses on choosing the right type of seam for particular sewn parts of a garment, based on seam efficiency, to ensure better serviceability. Seam efficiency is the ratio of seam strength to fabric strength. Single jersey knitted finished fabrics of four different GSMs (grams per square meter) were used to make the test garment, a T-shirt. Three distinct seam types (superimposed, lapped, and flat) were applied to the side seams of the T-shirts and sewn with a lockstitch (stitch class 301) on a flat-bed plain sewing machine (maximum sewing speed: 5000 rpm) to make 12 (3 × 4) T-shirts. For experimental consistency, the needle thread count (50/3 Ne), bobbin thread count (50/2 Ne), stitch density (8-9 stitches per inch), needle size (16 in the Singer system), stitch length (31 cm), and seam allowance (2.5 cm) were kept the same for all specimens. The grab test (ASTM D5034-08) was performed on a universal tensile tester to measure the seam strength and fabric strength. The produced T-shirts were given to 12 soccer players, who wore them for 20 soccer matches (each of 90 minutes' duration). The serviceability of each shirt was measured by visual inspection on a 5-point scale based on the seam condition. The study found that T-shirts produced with the lapped seam show the best serviceability, while T-shirts made with flat seams scored lowest. From the calculated seam efficiency (seam strength/fabric strength), it was evident that the performance (in terms of strength) of the lapped seam is higher than that of the superimposed seam, and the performance of the superimposed seam is far better than that of the flat seam. It can therefore be predicted that, to obtain a garment of high serviceability, lapped seams should be used instead of superimposed or other seam types. In addition, less stressed garments can be assembled with other seam types, such as superimposed or flat seams.
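
The seam-efficiency ratio at the heart of the study is simple enough to state as a helper function; the sketch below assumes both strengths are grab-test (ASTM D5034) breaking forces expressed in the same units.

```python
def seam_efficiency(seam_strength_n, fabric_strength_n):
    """Seam efficiency (%) = seam strength / fabric strength * 100."""
    return seam_strength_n / fabric_strength_n * 100.0

# e.g. a seam breaking at 180 N on a fabric breaking at 240 N -> 75%
print(seam_efficiency(180, 240))
```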

Keywords: seam, seam efficiency, serviceability, T-shirt

Procedia PDF Downloads 203
351 Fault-Tolerant Control Study and Classification: Case Study of a Hydraulic-Press Model Simulated in Real-Time

Authors: Jorge Rodriguez-Guerra, Carlos Calleja, Aron Pujana, Iker Elorza, Ana Maria Macarulla

Abstract:

Society demands more reliable manufacturing processes capable of producing high-quality products in shorter production cycles. New control algorithms have been studied to satisfy this paradigm, in which Fault-Tolerant Control (FTC) plays a significant role. It is suited to detecting, isolating, and adapting a system when a harmful or faulty situation appears. In this paper, a general overview of FTC characteristics is given, highlighting the properties a system must ensure to be considered fault-tolerant. In addition, the main FTC techniques are identified and classified, based on their characteristics, into two main groups: Active Fault-Tolerant Controllers (AFTCs) and Passive Fault-Tolerant Controllers (PFTCs). AFTC encompasses the techniques capable of re-configuring the process control algorithm after the fault has been detected, while PFTC comprises algorithms robust enough to bypass the fault without further modifications. The mentioned re-configuration requires two stages: one focused on detection, isolation, and identification of the fault source, and the other in charge of re-designing the control algorithm through two approaches, fault accommodation and control re-design. From the algorithms studied, one has been selected and applied to a case study based on an industrial hydraulic press. The developed model has been embedded in a real-time validation platform, which allows testing the FTC algorithms and analysing how the system will respond when a fault arises under conditions similar to those a machine experiences on the factory floor. One AFTC approach has been chosen as the methodology the system will follow in the fault recovery process. In a first stage, the fault is detected, isolated, and identified by means of a neural network; in a second stage, the control algorithm is re-configured to overcome the fault and continue working without human interaction.
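
Schematically, the selected AFTC loop can be sketched as below: an FDI stage flags and identifies the fault, after which the control law is re-configured. The threshold-based detector and the proportional laws are toy stand-ins for the paper's neural network and hydraulic-press controller.

```python
import numpy as np

class FDI:
    """Toy fault detection/isolation: flag when the model residual is large."""
    def predict(self, residual):
        return "actuator_fault" if np.abs(residual) > 0.5 else None

def nominal_law(error):
    return 2.0 * error    # nominal proportional control

def degraded_law(error):
    return 0.8 * error    # accommodated gain after the fault

def aftc_step(error, residual, law, fdi):
    fault = fdi.predict(residual)        # detection + isolation + identification
    if fault is not None:
        law = degraded_law               # control re-design / accommodation stage
    return law(error), law

u, law = aftc_step(0.1, 0.7, nominal_law, FDI())  # residual 0.7 triggers re-design
```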

Keywords: fault-tolerant control, electro-hydraulic actuator, fault detection and isolation, control re-design, real-time

Procedia PDF Downloads 179
350 Flood Risk Management in the Semi-Arid Regions of Lebanon - Case Study “Semi Arid Catchments, Ras Baalbeck and Fekha”

Authors: Essam Gooda, Chadi Abdallah, Hamdi Seif, Safaa Baydoun, Rouya Hdeib, Hilal Obeid

Abstract:

Floods are a common natural disaster in the semi-arid regions of Lebanon, resulting in damage to human life and deterioration of the environment. Despite their destructive nature and their immense impact on the socio-economy of the region, flash floods have not received adequate attention from policy and decision makers. This is mainly because of poor understanding of the processes involved and the measures needed to manage the problem. The current understanding of flash floods remains at the level of general concepts; most policy makers have yet to recognize that flash floods are distinctly different from normal riverine floods in terms of causes, propagation, intensity, impacts, predictability, and management. Flash floods are generally not investigated as a separate class of event but are rather reported as part of the overall seasonal flood situation. As a result, Lebanon generally lacks policies, strategies, and plans relating specifically to flash floods. The main objective of this research is to improve flash flood prediction by providing new knowledge and a better understanding of the hydrological processes governing flash floods in the east catchments of El Assi River. This includes developing rainstorm time distribution curves that are unique to this type of study region, and analyzing, investigating, and developing a relationship between arid watershed characteristics (including urbanization) and the flood frequency of the nearby villages in Ras Baalbeck and Fekha. This paper discusses different levels of integration approaches between GIS and hydrological models (HEC-HMS and HEC-RAS) and presents a case study in which all the tasks of creating model input, editing data, running the model, and displaying output results are performed. The study area corresponds to the east basin (Ras Baalbeck and Fekha), comprising nearly 350 km2 and situated in the Bekaa Valley of Lebanon. The case study presented in this paper has a database derived from Lebanese Army topographic maps of this region; ArcMap was used to digitize the contour lines, streams, and other features from the topographic maps, and the digital elevation model (DEM) grid was derived for the study area. The next steps in this research are to incorporate rainfall time series data from the Arseal, Fekha, and Deir El Ahmar stations to build a hydrologic data model within a GIS environment, and to combine ArcGIS/ArcMap, HEC-HMS, and HEC-RAS models in order to produce a spatio-temporal model for floodplain analysis at a regional scale. In this study, the HEC-HMS and SCS methods were chosen to build the hydrologic model of the watershed. The model was then calibrated using the flood event that occurred between the 7th and 9th of May 2014, which is considered exceptionally extreme because of the length of time the flows lasted (15 hours) and the fact that it covered both the Aarsal and Ras Baalbeck watersheds; the strongest previously reported flood in recent times lasted for only 7 hours and covered only one watershed. The calibrated hydrologic model is then used to build the hydraulic model and to produce flood hazard maps for the region. The HEC-RAS model is used for this purpose, and field trips were made to the catchments in order to calibrate both the hydrologic and hydraulic models. The presented models constitute a flexible procedure for an ungauged watershed: for some storm events they deliver good results, while for others no parameter vectors can be found. In order to obtain a general methodology based on these ideas, further calibration and reconciliation of the results with respect to the flood event parameters and catchment properties is required.

Keywords: flood risk management, flash flood, semi-arid region, El Assi River, hazard maps

Procedia PDF Downloads 478
349 Development of Bioplastic Disposable Food Packaging from Starch and Cellulose

Authors: Lidya Hailu, Ramesh Duraisamy, Masood Akhtar Khan, Belete Yilma

Abstract:

Disposable food packaging comprises single-use plastics, including any disposable plastic item designed to be used only once. In this context, this study aimed to prepare and evaluate a bioplastic food packaging material from avocado seed starch and sugarcane bagasse cellulose, and to characterize the avocado seed starch. The physico-mechanical, structural, and thermal properties and the biodegradability of the raw materials and the prepared bioplastics were determined using a universal tensile testing machine, FTIR, UV-Vis spectroscopy, TGA, XRD, and SEM. The results showed that an increasing amount of glycerol (3-5 mL) increases the water absorption, density, water vapor permeability, and elongation at break of the prepared bioplastic, while decreasing its transmittance, thermal degradation temperature, and tensile strength. Likewise, the addition of cellulose fiber (0-15%) increases the density (0.93±0.04 to 1.27±0.02 g/cm3), thermal degradation temperature (310.01 to 321.61°C), and tensile strength (2.91±6.18 to 4.21±6.713 MPa) of the prepared bioplastic. On the other hand, it decreases the transmittance (91.34±0.12 to 63.03±0.05%), water absorption (14.4±0.25 to 9.40±0.007%), water vapor permeability (9.306×10-12±0.3 to 3.57×10-12±0.15 g·s−1·m−1·Pa−1), and elongation at break (34.46±3.37 to 27.63±5.67%). All the prepared bioplastic films degraded rapidly in soil over the first 6 days, decomposed within 12 days with little residue, and degraded completely within 15 days under an open soil atmosphere. The results showed that starch-derived bioplastic reinforced with 15% cellulose fiber and plasticized with 3 mL of glycerol performed better than the other combinations of glycerol and bagasse cellulose with avocado seed starch. Thus, a biodegradable disposable food packaging cup was successfully produced at laboratory scale using the studied approach, and biodegradable disposable food packaging materials were successfully produced from avocado seed starch and sugarcane bagasse cellulose. Future work should address nanoscale production, since this study was conducted at the micro level.

Keywords: avocado seed, food packaging, glycerol, sugarcane bagasse

Procedia PDF Downloads 339
348 Computational Linguistic Implications of Gender Bias: Machines Reflect Misogyny in Society

Authors: Irene Yi

Abstract:

Machine learning, natural language processing, and neural network models of language are becoming more and more prevalent in the fields of technology and linguistics today. Training data for machines are, at best, large corpora of human literature and, at worst, a reflection of the ugliness in society. Computational linguistics is a growing field dealing with such issues of data collection for technological development. Machines have been trained on millions of human books, only to find that throughout human history, derogatory and sexist adjectives are used significantly more frequently when describing females than when describing males. This is extremely problematic, both as training data and as an outcome of natural language processing. As machines take on more responsibilities, it is crucial to ensure that they do not carry forward historical sexist and misogynistic notions. This paper gathers data and algorithms from neural network models of language dealing with syntax, semantics, sociolinguistics, and text classification. Computational analysis of such linguistic data is used to find patterns of misogyny. The results are significant in showing the existing intentional and unintentional misogynistic notions used to train machines, as well as in developing better technologies that take into account the semantics and syntax of text so as to be more mindful and reflect gender equality. Further, this paper deals with the idea of non-binary gender pronouns and how machines can process these pronouns correctly, given their semantic and syntactic context. This paper also delves into the implications of gendered grammar and its effect, cross-linguistically, on natural language processing. Languages such as French or Spanish not only have rigid gendered grammar rules but also historically patriarchal societies. The progression of society goes hand in hand not only with its language but with how machines process those natural languages. These ideas are all vital to the development of natural language models in technology, and they must be taken into account immediately.
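
One common way such patterns are quantified is by comparing an adjective's embedding similarity to female versus male pronoun vectors; the numpy sketch below illustrates the idea. The random vectors are placeholders for trained word embeddings, and the word choice is illustrative only.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def gender_bias(adjective_vec, she_vec, he_vec):
    """Positive: adjective sits closer to 'she'; negative: closer to 'he'."""
    return cosine(adjective_vec, she_vec) - cosine(adjective_vec, he_vec)

rng = np.random.default_rng(0)  # stand-in for embeddings trained on a corpus
emb = {w: rng.normal(size=50) for w in ["she", "he", "hysterical"]}
print(gender_bias(emb["hysterical"], emb["she"], emb["he"]))
```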

Keywords: computational analysis, gendered grammar, misogynistic language, neural networks

Procedia PDF Downloads 122
347 The Capacity of Bolted and Screw Connections in Cold-Formed Steel Truss Structure through Analytical and Experimental Method

Authors: Slamet Setioboro, Rahutami Kusumaningsih, Prabowo Setiyawan, Danna Darmayadi

Abstract:

The design of cold-formed steel connection capacity is often based on formulas developed for hot-rolled steel, so the calculated capacity of the actual connection is no longer accurate: under axial tensile load, cold-formed steel exhibits characteristics different from hot-rolled steel. As a result, failures can occur when truss structures are designed with these formulas. This research aims to determine the actual capacity of cold-formed steel connection sections loaded by axial tensile force, by testing the tensile behavior of connections fastened with bolts and with screws. The test variations cover the type of connection (single and double lap), the number of fasteners, and the connection configuration. The bolted and screw connection failure modes observed in this research differ from each other: the failure modes of bolted connections include slip of the plate, tearing of the plate, and shearing of the bolt head, while the failure modes of screw connections include tilting, hole bearing, pull-over, and shearing of the screw body. The research was conducted using laboratory tests on an HW2-600S universal testing machine in accordance with ASTM E8, carried out in the materials testing laboratory of the Mechanical Engineering Department, Faculty of Engineering, UNNES. The laboratory results were compared with theoretical calculations using the provisions specified in ISO 7971-2013 Cold-Rolled Steel Structures. Based on the research, it can be concluded that the connection most effective in resisting tensile force is the bolted connection, whether with a single or double plate, using 4 bolts in a 2-parallel-line configuration. This connection sustains the highest maximum load (Pmax) with the lowest failure risk and the mildest failure mode.

Keywords: axial load, cold-formed steel, capacity connections, bolted connections, screw connections

Procedia PDF Downloads 276
346 Design Optimisation of a Novel Cross Vane Expander-Compressor Unit for Refrigeration System

Authors: Y. D. Lim, K. S. Yap, K. T. Ooi

Abstract:

In recent years, environmental issues have been a hot topic around the world; in particular, concern about the global warming effect caused by conventional, non-environmentally friendly refrigerants has increased. Several studies on more energy-efficient and environmentally friendly refrigeration systems have been conducted in order to tackle the issue. In the search for a better refrigeration system, the CO2 refrigeration system has been proposed as a better option. However, the high throttling loss involved during the expansion process of the refrigeration cycle leads to relatively low efficiency and thus makes the system impractical. In order to improve the efficiency of the refrigeration system, it has been suggested to replace the conventional expansion valve in the refrigeration system with an expander. Based on this idea, a new type of combined expander-compressor unit, named the Cross Vane Expander-Compressor (CVEC), was introduced to replace the compressor and the expansion valve of a conventional refrigeration system. A mathematical model was developed to calculate the performance of CVEC, and it was found that the machine is capable of reducing the energy consumption of a refrigeration system by as much as 18%. Apart from energy saving, CVEC is also geometrically simpler and more compact. To further improve its efficiency, an optimization study of the device was carried out. In this report, several design parameters of CVEC were chosen as the variables of the optimization study. This optimization study was done in a simulation program using the complex optimization method, a direct-search, multi-variable, constrained optimization method. It was found that the shaft radius, one of the main design parameters, was reduced by around 8%, while the inner cylinder radius remained unchanged at its lower limit after optimization. Furthermore, the port sizes were increased to their upper limits after optimization. The changes in these design parameters resulted in a reduction of around 12% in the total frictional loss and a reduction of 4% in power consumption. Ultimately, the optimization study resulted in an improvement of 4% in the mechanical efficiency of CVEC and an improvement of 6% in COP.
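
For readers unfamiliar with direct-search constrained optimization, the sketch below shows the flavor of such a study using SciPy's COBYLA, a related but different direct-search method (the complex method itself is not in SciPy). The toy objective and bounds stand in for the CVEC friction-loss model, which is not given in the abstract.

```python
import numpy as np
from scipy.optimize import minimize

def frictional_loss(x):
    """Toy surrogate loss over two design variables: shaft radius, port size."""
    shaft_r, port = x
    return shaft_r**2 + 1.0 / port

cons = [
    {"type": "ineq", "fun": lambda x: x[0] - 0.01},  # shaft radius >= lower limit
    {"type": "ineq", "fun": lambda x: 0.05 - x[1]},  # port size <= upper limit
]

res = minimize(frictional_loss, x0=np.array([0.03, 0.02]),
               method="COBYLA", constraints=cons)
print(res.x)  # pushed toward the lower shaft-radius and upper port-size limits
```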

Keywords: complex optimization method, COP, cross vane expander-compressor, CVEC, design optimization, direct search, energy saving, improvement, mechanical efficiency, multi variables

Procedia PDF Downloads 373
345 Physics-Informed Neural Network for Predicting Strain Demand in Inelastic Pipes under Ground Movement with Geometric and Soil Resistance Nonlinearities

Authors: Pouya Taraghi, Yong Li, Nader Yoosef-Ghodsi, Muntaseer Kainat, Samer Adeeb

Abstract:

Buried pipelines play a crucial role in the transportation of energy products such as oil, gas, and various chemical fluids, ensuring their efficient and safe distribution. However, these pipelines are often susceptible to ground movements caused by geohazards like landslides, fault movements, lateral spreading, and more. Such ground movements can lead to strain-induced failures in pipes, resulting in leaks or explosions, leading to fires, financial losses, environmental contamination, and even loss of human life. Therefore, it is essential to study how buried pipelines respond when traversing geohazard-prone areas to assess the potential impact of ground movement on pipeline design. As such, this study introduces an approach called the Physics-Informed Neural Network (PINN) to predict the strain demand in inelastic pipes subjected to permanent ground displacement (PGD). This method uses a deep learning framework that does not require training data and makes it feasible to consider more realistic assumptions regarding existing nonlinearities. It leverages the underlying physics described by differential equations to approximate the solution. The study analyzes various scenarios involving different geohazard types, PGD values, and crossing angles, comparing the predictions with results obtained from finite element methods. The findings demonstrate a good agreement between the results of the proposed method and the finite element method, highlighting its potential as a simulation-free, data-free, and meshless alternative. This study paves the way for further advancements, such as the simulation-free reliability assessment of pipes subjected to PGD, as part of ongoing research that leverages the proposed method.
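
The core PINN mechanic, training a network so that a differential-equation residual vanishes on collocation points without any labeled training data, can be sketched in PyTorch as below. The toy equation u''(x) = sin(u) with two boundary conditions stands in for the pipe-soil equations with geometric and soil-resistance nonlinearities, which are not given in the abstract.

```python
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
x = torch.linspace(0, 1, 100).reshape(-1, 1).requires_grad_(True)  # collocation pts

for step in range(2000):
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    residual = d2u - torch.sin(u)              # physics loss: ODE residual
    bc = net(torch.zeros(1, 1)) ** 2 + (net(torch.ones(1, 1)) - 0.1) ** 2
    loss = residual.pow(2).mean() + bc.mean()  # no labeled training data needed
    opt.zero_grad()
    loss.backward()
    opt.step()
```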

Keywords: strain demand, inelastic pipe, permanent ground displacement, machine learning, physics-informed neural network

Procedia PDF Downloads 61
343 Detection of Phoneme [S] Mispronunciation for Sigmatism Diagnosis in Adults

Authors: Michal Krecichwost, Zauzanna Miodonska, Pawel Badura

Abstract:

The diagnosis of sigmatism is based mostly on the observation of the articulatory organs. It is, however, not always possible to precisely observe the vocal apparatus, particularly in the oral cavity of the patient. Speech processing can help objectify the therapy and simplify the verification of its progress. In the described study, a methodology for the classification of the incorrectly pronounced phoneme [s] is proposed. The recordings come from adults and were registered with a speech recorder at a sampling rate of 44.1 kHz and a resolution of 16 bits. A database of pathological and normative speech was collected for the study, including reference assessments provided by speech therapy experts. Ten adult subjects were asked to simulate a certain type of sigmatism under the supervision of a speech therapy expert. In the recordings, the analyzed phone [s] was surrounded by vowels, viz: ASA, ESE, ISI, SPA, USU, YSY. Thirteen MFCCs (mel-frequency cepstral coefficients) and the RMS (root mean square) value are calculated within each frame belonging to the analyzed phoneme. Additionally, 3 fricative formants, along with their corresponding amplitudes, are determined for the entire segment. In order to aggregate the information within the segment, the average value of each MFCC coefficient is calculated, and all features of other types are aggregated by means of their 75th percentile. The proposed method of feature aggregation reduces the size of the feature vector used in classification. A binary SVM (support vector machine) classifier is employed at the phoneme recognition stage: the first group consists of pathological phones, the other of normative ones. The proposed feature vector yields classification sensitivity and specificity measures above the 90% level for individual logatomes. Employing the fricative formant-based information improves the MFCC-only classification results by an average of 5 percentage points. The study shows that employing specific parameters for the selected phones improves the efficiency of pathology detection compared to traditional methods of speech signal parameterization.
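
The feature-aggregation and classification stages might look like the sketch below, assuming librosa for the 13 MFCCs and RMS and scikit-learn's SVM; the fricative-formant extraction is omitted for brevity, and the aggregation follows the abstract (MFCC means, 75th percentile for the rest).

```python
import librosa
import numpy as np
from sklearn.svm import SVC

def phoneme_features(path):
    """Aggregate frame-level features over one segmented [s] recording."""
    y, sr = librosa.load(path, sr=44100)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # 13 MFCCs per frame
    rms = librosa.feature.rms(y=y)[0]
    # per the abstract: mean of each MFCC, 75th percentile for other features
    return np.concatenate([mfcc.mean(axis=1), [np.percentile(rms, 75)]])

# X = np.array([phoneme_features(p) for p in wav_paths]); y = labels
# clf = SVC().fit(X, y)   # binary: pathological vs normative [s]
```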

Keywords: computer-aided pronunciation evaluation, sibilants, sigmatism diagnosis, speech processing

Procedia PDF Downloads 284
343 A Normalized Non-Stationary Wavelet Based Analysis Approach for a Computer Assisted Classification of Laryngoscopic High-Speed Video Recordings

Authors: Mona K. Fehling, Jakob Unger, Dietmar J. Hecker, Bernhard Schick, Joerg Lohscheller

Abstract:

Voice disorders originate from disturbances of the vibration patterns of the two vocal folds located within the human larynx. Consequently, the visual examination of vocal fold vibrations is an integral part of the clinical diagnostic process. For an objective analysis of the vocal fold vibration patterns, the two-dimensional vocal fold dynamics are captured during sustained phonation using an endoscopic high-speed camera. In this work, we present an approach allowing a fully automatic analysis of the high-speed video data, including a computerized classification of healthy and pathological voices. The approach is based on a wavelet analysis of so-called phonovibrograms (PVG), which are extracted from the high-speed videos and comprise the entire two-dimensional vibration pattern of each vocal fold individually. Using a principal component analysis (PCA) strategy, a low-dimensional feature set is computed from each phonovibrogram. From the PCA space, clinically relevant measures can be derived that objectively quantify vibration abnormalities. In the first part of the work, it is shown that, using a machine learning approach, the derived measures are suitable for automatically distinguishing between healthy and pathological voices. Within the approach, the formation of the PCA space, and consequently the extracted quantitative measures, depend on the clinical data used to compute the principal components. Therefore, in the second part of the work, we propose a strategy to achieve a normalization of the PCA space by registering it to a coordinate system using a set of synthetically generated vibration patterns. The results show that, owing to the normalization step, potential ambiguity of the parameter space can be eliminated. The normalization further allows a direct comparison of research results based on PCA spaces obtained from different clinical subjects.

Keywords: wavelet-based analysis, multiscale product, normalization, computer-assisted classification, high-speed laryngoscopy, vocal fold analysis, phonovibrogram

Procedia PDF Downloads 266
342 Effects of Safety Intervention Program towards Behaviors among Rubber Wood Processing Workers Using Theory of Planned Behavior

Authors: Junjira Mahaboon, Anongnard Boonpak, Nattakarn Worrasan, Busma Kama, Mujalin Saikliang, Siripor Dankachatarn

Abstract:

Rubber wood processing is one of the most important industries in southern Thailand. The process involves several safety hazards, for example, inadequately guarded wood-cutting machines, wood dust, noise, and heavy lifting. However, occupational health and safety measures to promote workers' safe behaviors are still limited. This quasi-experimental study determined the factors affecting workers' safety behaviors, using the theory of planned behavior, after implementing a job safety intervention program. The purposes were (1) to determine the factors affecting workers' behaviors and (2) to evaluate the effectiveness of the intervention program. The study sample comprised 66 workers from a rubber wood processing factory. The factors in the Theory of Planned Behavior (TPB) model were measured before and after the intervention; they included attitude towards the behavior, subjective norm, perceived behavioral control, intention, and behavior. Firstly, a Job Safety Analysis (JSA) was conducted and Safety Standard Operation Procedures (SSOP) were established. A questionnaire was used to collect workers' characteristics and the TPB factors. Then, a job safety intervention program promoting workers' behavior according to the SSOP was implemented over a four-month period. The program included SSOP training, personal protective equipment use, and a safety promotional campaign. After that, the TPB factors were collected again. Paired-sample t-tests and independent t-tests were used to analyze the data, as sketched in the example below. The results revealed that attitude towards the behavior and intention increased significantly after the intervention at p<0.05. These factors also significantly determined the workers' safety behavior according to the SSOP at p<0.05. However, subjective norm and perceived behavioral control neither changed significantly nor were significantly related to safety behaviors. In conclusion, attitude towards the behavior and workers' intention should be promoted to encourage workers' safety behaviors. The SSOP intervention program, e.g., short meetings, safety training, and promotional campaigns, should be implemented continuously on a routine basis to improve workers' behavior.
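
The pre/post comparison can be reproduced in outline with a paired t-test. The snippet below uses synthetic scores purely for illustration; scipy is an assumed tool, and the effect size is invented, not the study's data.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    attitude_pre = rng.normal(3.5, 0.5, 66)                   # 66 workers, Likert-style scores
    attitude_post = attitude_pre + rng.normal(0.3, 0.4, 66)   # hypothetical post-intervention shift

    t, p = stats.ttest_rel(attitude_post, attitude_pre)
    print(f"paired t = {t:.2f}, p = {p:.4f}")                 # significant if p < 0.05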

Keywords: job safety analysis, rubber wood processing workers, safety standard operation procedure, theory of planned behavior

Procedia PDF Downloads 195
341 Agricultural Cooperative Model: A Panacea for Economic Development of Small Scale Business Farmers in Ilesha, Osun State, Nigeria

Authors: Folasade Adegbaju, Olusola Arowolo, Olufisayo Onawumi

Abstract:

The Owolowo ile-ege garri processing industry, a small-scale cassava processing business located in Ilesha, Osun State, was purposively selected as a case study because it is a cooperative business. The industry was established in 1991 by eight men, most of them retirees. A researcher-made questionnaire was used to collect information from thirty (30) respondents: the manager, four official staff, and 25 randomly selected processors in the industry. The study found that, within twelve years of utilizing their self-raised initial capital of N240,000 (two hundred and forty thousand naira), this cassava-based industry had made an impact and attracted the involvement of many more people: within the period of the study (i.e., 2007-2011), the processors had almost quadrupled in number (from 8 to 30), and the facilities (equipment) in use had increased from one machine and a frying pot to many. This translated into the ability to produce large quantities of fried garri, fufu, and starch for marketing to the people of Ilesha and neighbouring cities such as Ibadan and Lagos. This is indicative of economic growth. The industry also became a source of employment for community members; at the time of the study, four staff members were employed to work in and coordinate the industry. It was observed that, despite the odds facing small-scale industries and the problem of rural-urban migration, this agro-based industry still operated successfully in the community, and many such industries could be replicated by agricultural cooperative groups nationwide so as to further boost the productivity as well as the economy of the area and the nation at large. However, government and individuals still have major roles to play in ensuring the growth and development of the nation in this respect. Local agricultural cooperative groups should form regional cooperative consortia with more networking among the farmers, in order to create more jobs for young people and to increase agricultural productivity in the country, thus resulting in a better and more sustainable economy.

Keywords: agricultural cooperative, cassava processing industry, model, small scale enterprise

Procedia PDF Downloads 292
340 Knowledge of the Doctors Regarding International Patient Safety Goal

Authors: Fatima Saeed, Abdullah Mudassar

Abstract:

Introduction: Patient safety remains a global priority in the ever-evolving healthcare landscape. At the forefront of this endeavor are the International Patient Safety Goals (IPSGs), a standardized framework designed to mitigate risks and elevate the quality of care. Doctors, positioned as primary caregivers, play a pivotal role in upholding and adhering to the IPSGs, underscoring the critical significance of their knowledge and understanding of these goals. This research comprehensively explores the depth of doctors' comprehension of the IPSGs, aiming to uncover potential gaps and provide insights for targeted educational interventions. Established by influential healthcare bodies, including the World Health Organization (WHO), the IPSGs represent a universally applicable set of objectives spanning crucial domains such as medication safety, infection control, surgical site safety, and patient identification. Adherence to these goals has been shown to substantially reduce adverse events and to enhance the overall quality of care. This study operates on the fundamental premise that an informed medical workforce is indispensable for effectively implementing the IPSGs. A nuanced understanding of these goals empowers doctors to identify potential risks, advocate for necessary changes, and actively contribute to a safety-centric culture within healthcare institutions. Despite the acknowledged importance of the IPSGs, there is a growing concern that doctors may lack the knowledge needed to integrate these goals seamlessly into their practice. Methodology: The methodology covers the study design, setting, duration, sample size determination, sampling technique, and data analysis; it introduces the philosophical framework guiding the research and details the materials, methods, and analysis framework. This descriptive, quantitative, cross-sectional study in teaching hospitals used convenience sampling over six months. Data collection involved written informed consent and questionnaires, analyzed with SPSS version 23, with results presented graphically and descriptively. Results: The survey results reveal a substantial distribution across hospitals, with 34.52% of respondents in MTIKTH and 65.48% in HMC MTI. There is a notable prevalence of patient safety incidents, emphasizing the significance of adherence to the IPSGs. Positive trends are observed, including 77.0% affirming the "time-out" procedure, 81.6% acknowledging effective healthcare provider communication, and high recognition (82.7%) of the purpose of the IPSGs to improve patient safety. While the survey reflects a good understanding of the IPSGs, areas for improvement are identified, suggesting opportunities for targeted interventions. Discussion: The study underscores the need for tailored care approaches and highlights the bio-socio-cultural context of 'contagion', suggesting areas for further research amid antimicrobial resistance. Turning to patient safety practices, the survey findings emphasize workplace distribution, patient safety incidents, and positive reflections on the IPSGs. The findings indicate a positive trend in patient safety practices with areas for improvement, emphasizing the ongoing need to reinforce safety protocols and cultivate a safety-centric culture in healthcare.
Conclusion: In summary, the survey indicates a positive trend in patient safety practices and a good understanding of the IPSGs among participants. However, the areas identified for potential improvement suggest opportunities for targeted interventions to further enhance patient safety. Ongoing efforts to reinforce adherence to safety protocols, address the identified gaps, and foster a culture of safety will contribute to continuous improvements in patient care and outcomes.

Keywords: infection control, international patient safety, patient safety practices, proper medication

Procedia PDF Downloads 55
339 Identification of Candidate Congenital Heart Defects Biomarkers by Applying a Random Forest Approach on DNA Methylation Data

Authors: Kan Yu, Khui Hung Lee, Eben Afrifa-Yamoah, Jing Guo, Katrina Harrison, Jack Goldblatt, Nicholas Pachter, Jitian Xiao, Guicheng Brad Zhang

Abstract:

Background and Significance of the Study: Congenital heart defects (CHDs) are the most common malformation at birth and one of the leading causes of infant death. Although the exact etiology remains a significant challenge, epigenetic modifications, such as DNA methylation, are thought to contribute to the pathogenesis of congenital heart defects. At present, no DNA methylation biomarkers are in use for the early detection of CHDs, and the existing CHD diagnostic techniques are time-consuming, costly, and can only be applied after an infant is born. The present study employed a machine learning technique to analyse genome-wide methylation data in children with and without CHDs, with the aim of finding methylation biomarkers for CHDs. Methods: The Illumina Human Methylation EPIC BeadChip was used to screen the genome-wide DNA methylation profiles of 24 infants diagnosed with congenital heart defects and 24 healthy infants without congenital heart defects. Primary pre-processing was conducted using the RnBeads and limma packages. The methylation levels of the top 600 genes with the lowest p-values were selected and further investigated using a random forest approach. ROC curves were used to analyse the sensitivity and specificity of each biomarker in both the training and test sample sets. The functions of the selected genes with high sensitivity and specificity were then assessed in terms of molecular processes. Major Findings of the Study: Three genes (MIR663, FGF3, and FAM64A) were identified from both the training and validation data by random forests, with an average sensitivity and specificity of 85% and 95%, respectively. GO analyses of the top 600 genes showed that these putative differentially methylated genes were primarily associated with the regulation of lipid metabolic processes, protein-containing complex localization, and the Notch signalling pathway. The present findings highlight that aberrant DNA methylation may play a significant role in the pathogenesis of congenital heart defects.
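
A minimal sketch of the described workflow, assuming a scikit-learn toolchain and synthetic placeholder data (the real study used EPIC BeadChip methylation levels for 48 infants), might look as follows.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)
    X = rng.random((48, 600))        # 48 infants x top-600 pre-filtered methylation features
    y = np.repeat([0, 1], 24)        # 24 healthy controls, 24 CHD cases

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)
    rf = RandomForestClassifier(n_estimators=500, random_state=1).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])   # ROC analysis on the held-out set
    # rf.feature_importances_ ranks candidate biomarkers, analogous to the
    # three genes (MIR663, FGF3, FAM64A) reported in the study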

Keywords: biomarker, congenital heart defects, DNA methylation, random forest

Procedia PDF Downloads 159
338 The Development of a Digitally Connected Factory Architecture to Enable Product Lifecycle Management for the Assembly of Aerostructures

Authors: Nicky Wilson, Graeme Ralph

Abstract:

Legacy aerostructure assembly is characterized by large components, low build rates, and manual assembly methods. With increasing demand for commercial aircraft and emerging markets such as the eVTOL (electric vertical take-off and landing) market, current manufacturing methods cannot efficiently meet these higher-rate demands. This project looks at how legacy manufacturing processes can be rate-enabled by taking a holistic view of data usage, focusing on how data can be collected to enable fully integrated digital factories and supply chains. The study focuses on how data flows both up and down the supply chain to create a digital thread specific to each part and assembly, while enabling machine learning through real-time, closed-loop feedback systems. The study also develops a bespoke architecture to enable connectivity both within the factory and with the wider PLM (product lifecycle management) system, moving away from the traditional point-to-point systems used to connect IO devices towards a hub-and-spoke architecture that exploits report-by-exception principles, illustrated in the sketch below. This paper outlines the key issues facing legacy aircraft manufacturers, focusing on what future manufacturing will look like when Industry 4 principles are adopted. The research also defines the data architecture of a PLM system to enable the transfer and control of a digital thread within the supply chain and proposes a standardised communications protocol as a scalable solution for connecting IO devices within a production environment. This research comes at a critical time for aerospace manufacturers, who are seeing a shift towards the integration of digital technologies within legacy production environments while build rates continue to grow. It is vital that manufacturing processes become more efficient in order to meet these demands while also securing future work for many manufacturers.
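
The report-by-exception principle can be illustrated with a toy publisher that forwards a sensor reading to the hub only when it leaves a deadband. All names here are hypothetical, and a production system would more likely sit on an established protocol such as MQTT or OPC UA.

    class ExceptionReporter:
        """Publishes an IO reading to the hub only on significant change."""

        def __init__(self, publish, deadband=0.5):
            self.publish = publish      # callback into the hub/PLM gateway
            self.deadband = deadband
            self.last = None

        def sample(self, value):
            # Forward only readings that moved beyond the deadband
            if self.last is None or abs(value - self.last) >= self.deadband:
                self.publish(value)
                self.last = value

    hub_log = []
    reporter = ExceptionReporter(hub_log.append, deadband=0.5)
    for reading in [10.0, 10.1, 10.2, 11.0, 11.1, 12.5]:
        reporter.sample(reading)
    print(hub_log)   # [10.0, 11.0, 12.5] -- far fewer messages than samples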

Keywords: Industry 4, digital transformation, IoT, PLM, automated assembly, connected factories

Procedia PDF Downloads 81
337 Management Practices and Economic Performance of Smallholder Dairy Cattle Farms in Southern Vietnam

Authors: Ngoc-Hieu Vu

Abstract:

Although dairy production in Vietnam is a relatively new agricultural activity, milk production has increased remarkably in recent years. Smallholders are still the main drivers of this development, especially in the southern part of the country. However, information on their farming practices is very limited. Therefore, this study aimed to characterize the husbandry practices, educational experiences, decision-making practices, constraints, income, and expenses of smallholder dairy farms in Southern Vietnam. A total of 200 farms, located in the regions of Ho Chi Minh (HCM, N=80 farms), Lam Dong (N=40 farms), Binh Duong (N=40 farms), and Long An (N=40 farms), were included. Between October 2013 and December 2014, the farmers were interviewed twice. On average, farms owned 3,200 m², 2,000 m², and 193 m² of pasture, cropping, and housing area, respectively. The average numbers of total animals, milking cows, dry cows, heifers, and calves were 20.4, 11.6, 4.7, 3.3, and 2.9 head, respectively. The number of lactating dairy cows was higher (p<0.001) in HCM (15.5) and Lam Dong (14.7) than in Binh Duong (6.7) and Long An (10.7). Animals were mainly crossbred Holstein-Friesian (HF) cows with at least 75% HF origin (84%); a higher (p<0.001) percentage of purebred HF was found in HCM and Lam Dong, and of crossbreds in Binh Duong and Long An. Animals were mainly raised in tie-stalls (94%) and machine-milked (80%). Farmers used their own replacement animals (76%) and both genetic and phenotypic information (67%) for selecting sires. Farmers were predominantly educated at the primary school level (53%). The major constraints for dairy farming were lack of capital (43%), diseases (17%), marketing (22%), lack of knowledge (8%), and feed (7%). Monthly profit per lactating cow was higher in Lam Dong (2,817 thousand VND) and HCM (2,798 thousand VND) than in Long An (2,597 thousand VND) and Binh Duong (1,775 thousand VND). Regional differences may be attributed mainly to environmental factors, urbanization, and particularly governmental support and the availability of extension and financial institutions. The results of this study provide important information on the farming practices of smallholders in Southern Vietnam that is useful for identifying regions that need to be addressed by the authorities in order to improve dairy production.

Keywords: dairy farms, milk yield, Southern Vietnam, socio-economics

Procedia PDF Downloads 465