Search results for: machine translation
536 Enhancing Vehicle Efficiency Through Vapor Absorption Refrigeration Systems
Authors: Yoftahe Nigussie Worku
Abstract:
This paper explores the utilization of vapor absorption refrigeration systems (VARS) as an alternative to conventional vapor compression refrigeration systems (VCRS) in vehicle air conditioning (AC) systems. Currently, most vehicles employ VCRS, which relies on engine power to drive the compressor, leading to additional fuel consumption. In contrast, VARS harnesses low-grade heat, specifically from the exhaust of high-power internal combustion engines, reducing the burden on the vehicle's engine. The historical development of vapor absorption technology is outlined, dating back to Michael Faraday's discovery in 1824 and the subsequent creation of the first vapor absorption refrigeration machine by Ferdinand Carré in 1860. The paper delves into the fundamental principles of VARS, emphasizing the replacement of mechanical processes with physicochemical interactions, utilizing heat rather than mechanical work. The study compares the basic concepts of current vapor compression systems with the proposed vapor absorption systems, highlighting the efficiency gains achieved by eliminating the need for engine-driven compressors. The vapor absorption refrigeration cycle (VARC) is detailed, focusing on the generator's role in separating and vaporizing ammonia, chosen for its low-temperature evaporation characteristics. The problem statement underscores the need for increased efficiency in vehicle AC systems beyond the limitations of VCRS. By introducing VARS, driven by low-grade heat, the paper advocates for a reduction in engine power consumption and, consequently, a decrease in fuel usage. This research contributes to the ongoing efforts to enhance sustainability and efficiency in automotive climate control systems.
Keywords: VCRS, VARS, efficiency, sustainability
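To make the efficiency comparison concrete, the textbook coefficient-of-performance definitions for the two cycles are sketched below; these are standard relations, not figures taken from the paper:

```latex
\mathrm{COP}_{\mathrm{VCRS}} = \frac{Q_{\mathrm{evap}}}{W_{\mathrm{comp}}},
\qquad
\mathrm{COP}_{\mathrm{VARS}} = \frac{Q_{\mathrm{evap}}}{Q_{\mathrm{gen}} + W_{\mathrm{pump}}}
```

Here Q_evap is the refrigeration effect, W_comp the compressor work drawn from the engine in a VCRS, Q_gen the generator heat input (recovered from exhaust gas in the proposed VARS), and W_pump the comparatively small solution-pump work; the engine-load reduction comes from replacing W_comp with heat that would otherwise be wasted.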
Procedia PDF Downloads 75
535 Modelling of Pipe Jacked Twin Tunnels in a Very Soft Clay
Authors: Hojjat Mohammadi, Randall Divito, Gary J. E. Kramer
Abstract:
Tunnelling and pipe jacking in very soft soils (fat clays), even with an Earth Pressure Balance tunnel boring machine (EPBM), can cause large ground displacements. In this study, the short-term and long-term ground and tunnel response is predicted for twin, pipe-jacked EPBM 3 meter diameter tunnels with a narrow pillar width. Initial modelling indicated complete closure of the annulus gap at the tail shield onto the centrifugally cast, glass-fiber-reinforced, polymer mortar jacking pipe (FRP). Numerical modelling was employed to simulate the excavation and support installation sequence, examine the ground response during excavation, confirm the adequacy of the pillar width and check the structural adequacy of the installed pipe. In the numerical models, the Mohr-Coulomb constitutive model with the effect of unloading was adopted for the fat clays, while for the bedrock layer, the generalized Hoek-Brown criterion was employed. The numerical models considered explicit excavation sequences and different levels of ground convergence prior to support installation. The carefully staged excavation sequences made the analysis of this very soft clay possible; otherwise, obtaining convergence in the numerical analysis would have been impossible. The predicted results indicate that the ground displacements around the tunnel and their effect on the pipe would be acceptable despite predictions of large zones of plastic behaviour around the tunnels and within the entire pillar between them due to excavation-induced ground movements.
Keywords: finite element modeling (FEM), pipe-jacked tunneling, very soft clay, EPBM
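For reference, the generalized Hoek-Brown criterion adopted for the bedrock layer has the standard form below; the site-specific parameter values are not given in the abstract:

```latex
\sigma_{1}' = \sigma_{3}' + \sigma_{ci}\left( m_{b}\,\frac{\sigma_{3}'}{\sigma_{ci}} + s \right)^{a}
```

where σ_ci is the uniaxial compressive strength of the intact rock and m_b, s, and a are rock-mass constants typically derived from the Geological Strength Index; the Mohr-Coulomb model used for the fat clays is instead defined by cohesion and friction angle with unloading-dependent stiffness.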
Procedia PDF Downloads 82
534 Relevance of Brain Stem Evoked Potential in Diagnosis of Central Demyelination in Guillain Barre’ Syndrome
Authors: Geetanjali Sharma
Abstract:
Guillain Barre’ syndrome (GBS) is an autoimmune-mediated demyelinating polyradiculoneuropathy. Clinical features include progressive symmetrical ascending muscle weakness of more than two limbs and areflexia, with or without sensory, autonomic and brainstem abnormalities. The purpose of this study was to determine subclinical neurological changes of the CNS in GBS and to establish the presence of central demyelination in GBS. The study was prospective and conducted in the Department of Physiology, Pt. B. D. Sharma Post-graduate Institute of Medical Sciences, University of Health Sciences, Rohtak, Haryana, India, to find early central demyelination in clinically diagnosed patients of GBS. These patients were referred from the Department of Medicine of our Institute to our department for electro-diagnostic evaluation. The study group comprised 40 subjects (20 clinically diagnosed GBS patients and 20 healthy individuals as controls) aged between 6-65 years. Brainstem auditory evoked potentials (BAEP) were recorded in both groups using an RMS EMG EP Mark II machine. BAEP parameters included the latencies of waves I to IV and the interpeak latencies I-III, III-IV and I-V. A statistically significant increase in absolute peak and interpeak latencies in the GBS group as compared with the control group was noted. The evoked potential results reflect impairment of auditory pathways, probably due to focal demyelination in the Schwann cell derived myelin sheaths that cover the extramedullary portion of the auditory nerves. Early detection of these sub-clinical abnormalities is important, as timely intervention reduces morbidity.
Keywords: brainstem, demyelination, evoked potential, Guillain Barre’
Procedia PDF Downloads 303
533 Surface-Enhanced Raman Spectroscopy on Gold Nanoparticles in Kidney Disease
Authors: Leonardo C. Pacheco-Londoño, Nataly J Galan-Freyle, Lisandro Pacheco-Lugo, Antonio Acosta-Hoyos, Elkin Navarro, Gustavo Aroca-Martinez, Karin Rondón-Payares, Alberto C. Espinosa-Garavito, Samuel P. Hernández-Rivera
Abstract:
At the Life Science Research Center at Simon Bolivar University, a primary focus is the diagnosis of various diseases, and the use of gold nanoparticles (Au-NPs) in diverse biomedical applications is continually expanding. In the present study, Au-NPs were employed as substrates for Surface-Enhanced Raman Spectroscopy (SERS) aimed at diagnosing kidney diseases arising from Lupus Nephritis (LN), preeclampsia (PC), and Hypertension (H). Discrimination models for distinguishing patients with and without kidney disease were developed from the SERS signals of urine samples by partial least squares-discriminant analysis (PLS-DA). A comparative study of the Raman signals across the three conditions was conducted, leading to the identification of potential metabolite signals. Additionally, a secondary analysis was performed using machine learning (ML) models, wherein different ML algorithms were evaluated for their efficiency. Model validation was carried out through cross-validation and external validation, and parameters such as sensitivity and specificity were determined; the models showed average values of 0.9 for both parameters. Finally, it is important to highlight that this collaborative effort involved two university research centers and two healthcare institutions, ensuring ethical treatment and informed consent for patient samples.
Keywords: SERS, Raman, PLS-DA, kidney diseases
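As an illustration of the PLS-DA-with-validation workflow described above, a minimal sketch on synthetic spectra is given below; the feature count, number of latent variables, and decision threshold are assumptions for illustration, not the authors' settings.

```python
# A minimal PLS-DA sketch on synthetic "spectra" (not the authors' data or code).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 600))          # 120 synthetic SERS spectra, 600 wavenumber bins
y = rng.integers(0, 2, size=120)         # 1 = kidney disease, 0 = control (synthetic labels)

sens, spec = [], []
for train, test in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    pls = PLSRegression(n_components=8)  # number of latent variables chosen by assumption
    pls.fit(X[train], y[train])
    pred = (pls.predict(X[test]).ravel() >= 0.5).astype(int)  # threshold PLS scores at 0.5
    sens.append(recall_score(y[test], pred, pos_label=1))     # sensitivity
    spec.append(recall_score(y[test], pred, pos_label=0))     # specificity
print(f"sensitivity={np.mean(sens):.2f}, specificity={np.mean(spec):.2f}")
```

External validation would follow the same pattern, applying the fitted model to a held-out cohort rather than to cross-validation folds.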
Procedia PDF Downloads 45
532 Improving Efficiency and Effectiveness of FMEA Studies
Authors: Joshua Loiselle
Abstract:
This paper discusses the challenges engineering teams face in conducting Failure Modes and Effects Analysis (FMEA) studies and focuses on the specific topic of improving the efficiency and effectiveness of FMEA studies. Modern economic needs and increased business competition require engineers to constantly develop newer and better solutions within shorter timeframes and tighter margins. In addition, documentation requirements for meeting standards/regulatory compliance and customer needs are becoming increasingly complex and verbose. Managing open actions and continuous improvement activities across all projects, product variations, and processes in addition to daily engineering tasks is cumbersome, time-consuming, and susceptible to errors, omissions, and non-conformances. FMEA studies are proven methods for improving products and processes while subsequently reducing engineering workload and improving machine and resource availability through a pre-emptive, systematic approach to identifying, analyzing, and improving high-risk components. If implemented correctly, FMEA studies significantly reduce costs and improve productivity. However, the value of an effective FMEA is often shrouded by a lack of clarity and structure, misconceptions, and previous experiences, and, as such, FMEA studies are frequently grouped with the other required information and documented retrospectively in preparation for customer requirements or audits. Performing studies in this way only adds cost to a project and perpetuates the misconception that FMEA studies are not value-added activities. This paper discusses the benefits of effective FMEA studies, the challenges related to conducting FMEA studies, best practices for efficiently overcoming challenges via structure and automation, and the benefits of implementing those practices.
Keywords: FMEA, quality, APQP, PPAP
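As context for how FMEA studies rank high-risk items, the risk priority number convention commonly used in APQP/PPAP practice (not spelled out in this abstract) is:

```latex
\mathrm{RPN} = S \times O \times D
```

where severity (S), occurrence (O), and detection (D) are each scored, typically on a 1-10 scale, and failure modes with the highest RPN are addressed first.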
Procedia PDF Downloads 304
531 Integration of an Evidence-Based Medicine Curriculum into Physician Assistant Education: Teaching for Today and the Future
Authors: Martina I. Reinhold, Theresa Bacon-Baguley
Abstract:
Background: Medical knowledge continuously evolves, and to help health care providers stay up-to-date, evidence-based medicine (EBM) has emerged as a model. The practice of EBM requires new skills of the health care provider, including directed literature searches, the critical evaluation of research studies, and the direct application of the findings to patient care. This paper describes the integration and evaluation of an evidence-based medicine course sequence in a Physician Assistant curriculum. This course sequence teaches students to manage and use the best clinical research evidence to competently practice medicine. A survey was developed to assess the outcomes of the EBM course sequence. Methodology: The cornerstone of the three-semester EBM sequence is interactive small group discussions designed to introduce students to the most clinically applicable skills to identify, manage and use the best clinical research evidence to improve the health of their patients. During the three-semester sequence, the students are assigned each semester to small group discussions facilitated by faculty with varying backgrounds and expertise. Prior to the start of the first EBM course in the winter semester, PA students complete a knowledge-based survey developed by the authors to assess the effectiveness of the course series. The survey consists of 53 Likert-scale questions that address the nine objectives of the course series. At the end of the three-semester course series, the same survey is given to all students in the program, and the results from before and after the EBM course sequence are compared. Specific attention is paid to the overall performance of students on the nine course objectives. Results: We find that students from the Classes of 2016 and 2017 consistently improved (as measured by percent correct responses on the survey tool) after the EBM course series (Class of 2016: Pre- 62% Post- 75%; Class of 2017: Pre- 61% Post- 70%). The biggest increase in knowledge was observed in the areas of finding and evaluating the evidence, namely asking concise clinical questions (Class of 2016: Pre- 61% Post- 81%; Class of 2017: Pre- 61% Post- 75%) and searching the medical database (Class of 2016: Pre- 24% Post- 65%; Class of 2017: Pre- 35% Post- 66%). Questions requiring students to analyze, evaluate and report on the available clinical evidence regarding diagnosis showed improvement, but to a lesser extent (Class of 2016: Pre- 56% Post- 77%; Class of 2017: Pre- 56% Post- 61%). Conclusions: The outcomes identified that students did gain skills which will allow them to apply EBM principles. In addition, the outcomes of the knowledge-based survey allowed the faculty to focus on areas needing improvement, specifically the translation of best evidence into patient care. To address this area, the clinical faculty developed case scenarios that were incorporated into the lecture and discussion sessions, allowing students to better connect the research studies with patient care. Students commented that ‘class discussion and case examples’ contributed most to their learning and that ‘it was helpful to learn how to develop research questions and how to analyze studies and their significance to a potential client’. As evidenced by the outcomes, the EBM courses achieved their goals and were well received by the students.
Keywords: evidence-based medicine, clinical education, assessment tool, physician assistant
Procedia PDF Downloads 125
530 Single Pass Design of Genetic Circuits Using Absolute Binding Free Energy Measurements and Dimensionless Analysis
Authors: Iman Farasat, Howard M. Salis
Abstract:
Engineered genetic circuits reprogram cellular behavior to act as living computers, with applications in detecting cancer, creating self-controlling artificial tissues, and dynamically regulating metabolic pathways. Phenomenological models are often used to simulate and design genetic circuit behavior toward a desired outcome. While such models assume that each circuit component’s function is modular and independent, even small changes in a circuit (e.g. a new promoter, a change in transcription factor expression level, or even a new medium) can have significant effects on the circuit’s function. Here, we use statistical thermodynamics to account for the several factors that control transcriptional regulation in bacteria, and experimentally demonstrate the model’s accuracy across 825 measurements in several genetic contexts and hosts. We then employ our first-principles model to design, experimentally construct, and characterize a family of signal-amplifying genetic circuits (genetic OpAmps) that expand the dynamic range of cell sensors. To develop these models, we needed a new approach to measuring the in vivo binding free energies of transcription factors (TFs), a key ingredient of statistical thermodynamic models of gene regulation. We developed a new high-throughput assay to measure RNA polymerase and TF binding free energies, requiring the construction and characterization of only a few constructs and data analysis (Figure 1A). We experimentally verified the assay on 6 TetR-homolog repressors and a CRISPR/dCas9 guide RNA. We found that our binding free energy measurements quantitatively explain why changing TF expression levels alters circuit function. Altogether, by combining these measurements with our biophysical model of translation (the RBS Calculator) as well as other measurements (Figure 1B), our model can account for changes in TF binding sites, TF expression levels, circuit copy number, host genome size, and host growth rate (Figure 1C). Model predictions correctly accounted for how these 8 factors control a promoter’s transcription rate (Figure 1D). Using the model, we developed a design framework for engineering multi-promoter genetic circuits that greatly reduces the number of degrees of freedom (8 factors per promoter) to a single dimensionless unit. We propose the Ptashne (Pt) number to encapsulate the 8 co-dependent factors that control transcriptional regulation into a single number. Therefore, a single number controls a promoter’s output rather than these 8 co-dependent factors, and designing a genetic circuit with N promoters requires specification of only N Pt numbers. We demonstrate how to design genetic circuits in Pt number space by constructing and characterizing 15 2-repressor OpAmp circuits that act as signal amplifiers when within an optimal Pt region. We experimentally show that OpAmp circuits using different TFs and TF expression levels will only amplify the dynamic range of input signals when their corresponding Pt numbers are within the optimal region. Thus, the use of the Pt number greatly simplifies genetic circuit design, which is particularly important as circuits employ more TFs to perform increasingly complex functions.
Keywords: transcription factor, synthetic biology, genetic circuit, biophysical model, binding energy measurement
Procedia PDF Downloads 473
529 An Explanatory Study Approach Using Artificial Intelligence to Forecast Solar Energy Outcome
Authors: Agada N. Ihuoma, Nagata Yasunori
Abstract:
Artificial intelligence (AI) techniques play a crucial role in predicting the expected energy outcome and in the performance analysis, modeling, and control of renewable energy. Renewable energy is becoming more popular for economic and environmental reasons. In the face of global energy consumption and the increased depletion of most fossil fuels, the world faces the challenge of meeting ever-increasing energy demands. Therefore, incorporating artificial intelligence to predict solar radiation outcomes from intermittent sunlight is crucial to enable a balance between supply and demand of energy on loads, predict the performance and outcome of solar energy, enhance production planning and energy management, and ensure proper sizing of parameters when generating clean energy. However, one of the major problems in forecasting is the algorithms used to control, model, and predict the performance of the energy systems, which are complicated and involve large computing power, differential equations, and time series. Also, having unreliable (poor quality) solar radiation data for a geographical location, as well as insufficiently long series, can be a bottleneck to actualization. To overcome these problems, this study employs Anaconda Navigator (Jupyter Notebook) for machine learning, which can combine larger amounts of data with fast, iterative processing and intelligent algorithms, allowing the software to learn automatically from patterns or features to predict the performance and outcome of solar energy. This, in turn, enables the balance of supply and demand on loads as well as enhanced production planning and energy management.
Keywords: artificial intelligence, backward elimination, linear regression, solar energy
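A minimal sketch of the backward-elimination linear regression idea named in the keywords is shown below; the feature names, data, and the 0.05 significance cut-off are illustrative assumptions rather than the study's actual dataset or code.

```python
# Backward elimination over a linear solar-output model (synthetic, illustrative data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "irradiance": rng.uniform(100, 1000, n),   # W/m^2 (synthetic)
    "temperature": rng.uniform(10, 40, n),     # deg C (synthetic)
    "humidity": rng.uniform(20, 90, n),        # % (synthetic)
    "cloud_cover": rng.uniform(0, 1, n),       # fraction (synthetic)
})
df["energy_output"] = 0.8 * df["irradiance"] - 5 * df["cloud_cover"] + rng.normal(0, 20, n)

features = list(df.columns[:-1])
while features:                                # drop the least significant predictor each pass
    X = sm.add_constant(df[features])
    model = sm.OLS(df["energy_output"], X).fit()
    pvals = model.pvalues.drop("const")
    if pvals.max() <= 0.05:                    # stop once all remaining predictors are significant
        break
    features.remove(pvals.idxmax())
print("retained features:", features)
```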
Procedia PDF Downloads 157
528 Determination of Selected Engineering Properties of Giant Palm Seeds (Borassus Aethiopum) in Relation to Its Oil Potential
Authors: Rasheed Amao Busari, Ahmed Ibrahim
Abstract:
The engineering properties of giant palm seeds are crucial for the rational design of processing and handling systems. This research was conducted to investigate some engineering properties of giant palm seeds in relation to their oil potential. Ripe giant palm fruits were sourced from parts of Zaria in Kaduna State and Ado Ekiti in Ekiti State, Nigeria. The mesocarps of the collected fruits were removed to obtain the nuts, and the nuts were dried under ambient conditions for several days. The actual moisture content of the nuts at the time of the experiment was determined using a KT100S moisture meter and ranged from 17.9% to 19.15%. The physical properties determined were the axial dimensions, geometric mean diameter, arithmetic mean diameter, sphericity, true and bulk densities, porosity, angle of repose, and coefficients of friction. The nuts were measured with a vernier caliper for physical assessment of their sizes. The axial dimensions of 100 nuts were taken, and the results show that the sizes range from 7.30 to 9.32 cm for the major diameter, 7.2 to 8.9 cm for the intermediate diameter, and 4.2 to 6.33 cm for the minor diameter. The mechanical properties determined were compressive force, compressive stress, and deformation, both at peak and at break, using an Instron hydraulic universal testing machine. The work also revealed that giant palm seed can be classified as an oil-bearing seed, giving an oil yield of 18% with the solvent extraction method. The results obtained from the study will help in solving problems of equipment design, handling, and further processing of the seeds.
Keywords: giant palm seeds, engineering properties, oil potential, moisture content, giant palm fruit
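The size and shape descriptors listed above are conventionally computed from the major (L), intermediate (W), and minor (T) axial dimensions; the abstract does not state the exact expressions used, so the standard definitions are given here for reference:

```latex
D_{g} = (LWT)^{1/3}, \qquad
D_{a} = \frac{L + W + T}{3}, \qquad
\phi = \frac{(LWT)^{1/3}}{L}
```

where D_g is the geometric mean diameter, D_a the arithmetic mean diameter, and φ the sphericity.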
Procedia PDF Downloads 79
527 Effect of Shot Peening on the Mechanical Properties for Welded Joints of Aluminium Alloy 6061-T6
Authors: Muna Khethier Abbass, Khairia Salman Hussan, Huda Mohummed AbdudAlaziz
Abstract:
This work aims to study the effect of shot peening on the mechanical properties of welded joints produced by two different welding processes, tungsten inert gas (TIG) welding and friction stir welding (FSW), in aluminum alloy 6061-T6. The TIG arc welding process was carried out on sheets with dimensions of 100 x 50 x 6 mm to obtain welded joints, using an ER4043 (AlSi5) electrode as the filler metal and argon as the shielding gas. The friction stir welding process was carried out on a CNC milling machine with a tool rotational speed of 1000 rpm and a welding speed of 20 mm/min to obtain the same butt welded joints. The welded pieces were tested by X-ray radiography to detect internal defects, and faulty welded pieces were excluded. Tensile test specimens were prepared from the welded joints and the base alloy with dimensions according to ASTM17500 and then subjected to shot peening using steel balls of 0.9 mm diameter for 15 min. All specimens were subjected to Vickers hardness testing and microstructure examination to study the effect of the welding process (TIG and FSW) on the microstructure of the weld zones. Results showed a general decline in the mechanical properties of the TIG and FSW welded joints compared with the base alloy, while the FSW welded joint gave better mechanical properties than the TIG welded joint. This is due to the microstructural changes during the welding process. It was found that surface hardening by shot peening improved the mechanical properties of both welded joints; this is due to the generation of compressive residual stress in the weld zones, which was measured using X-ray diffraction (XRD) inspection.
Keywords: friction stir welding, TIG welding, mechanical properties, shot peening
Procedia PDF Downloads 340
526 Optimizing Data Integration and Management Strategies for Upstream Oil and Gas Operations
Authors: Deepak Singh, Rail Kuliev
Abstract:
The abstract highlights the critical importance of optimizing data integration and management strategies in the upstream oil and gas industry. With its complex and dynamic nature generating vast volumes of data, efficient data integration and management are essential for informed decision-making, cost reduction, and maximizing operational performance. Challenges such as data silos, heterogeneity, real-time data management, and data quality issues are addressed, prompting the proposal of several strategies. These strategies include implementing a centralized data repository, adopting industry-wide data standards, employing master data management (MDM), utilizing real-time data integration technologies, and ensuring data quality assurance. Training and developing the workforce by reskilling and upskilling employees, together with establishing robust data management training programs, plays an essential and integral part in this strategy. The article also emphasizes the significance of data governance and best practices, as well as the role of technological advancements such as big data analytics, cloud computing, the Internet of Things (IoT), and artificial intelligence (AI) and machine learning (ML). To illustrate the practicality of these strategies, real-world case studies are presented, showcasing successful implementations that improve operational efficiency and decision-making. In the present study, by embracing the proposed optimization strategies, leveraging technological advancements, and adhering to best practices, upstream oil and gas companies can harness the full potential of data-driven decision-making, ultimately achieving increased profitability and a competitive edge in the ever-evolving industry.
Keywords: master data management, IoT, AI&ML, cloud computing, data optimization
Procedia PDF Downloads 70
525 Effect of High Intensity Ultrasonic Treatment on the Microstructure, Corrosion and Mechanical Behavior of AC4C Aluminium Alloy
Authors: A.Farrag Farrag, A. M. El-Aziz Abdel Aziz, W. Khlifa Khlifa
Abstract:
Ultrasonic treatment is a promising process in the engineering field owing to its high efficiency and low cost. It enhances mechanical properties, corrosion resistance, and the homogeneity of the microstructure. In this study, the effect of ultrasonic treatment and several casting conditions on the microstructure, hardness, and corrosion behavior of AC4C aluminum alloy was examined. Various ultrasonic treatments of the AC4C alloy were carried out to prepare billets for the thixocasting process. Treatment started at about 630 °C, with the melt cooled under the ultrasonic field; treatment time was about 90 s. A 600-watt ultrasonic system with a frequency of 19.5 kHz and an intensity of 170 W/cm2 was used. Billets were reheated to the semisolid state, held (soaked) for 5 minutes at 582 °C using a high-frequency induction system, and then thixocast using a die casting machine. Microstructures of the thixocast parts were studied using optical and SEM microscopes. On the other hand, two samples were conventionally cast and poured at 634 °C and 750 °C. The microstructure showed globular, non-dendritic grains for AC4C with the application of UST at 630-582 °C; less dendritic grains when the sample was conventionally cast without UST and poured at 624 °C; and a fully dendritic microstructure when the sample was cast and poured at 750 °C without UST. The ultrasonic treatment during solidification thus proved to have a positive influence on the microstructure, as it produced the finest and most globular grains, and it is therefore expected to increase the mechanical properties of the alloy. Higher values of corrosion resistance and hardness were recorded for the ultrasound-treated sample in comparison to the cast one.
Keywords: ultrasonic treatment, aluminum alloys, corrosion behaviour, mechanical behaviour, microstructure
Procedia PDF Downloads 353
524 Human Vibrotactile Discrimination Thresholds for Simultaneous and Sequential Stimuli
Authors: Joanna Maj
Abstract:
Body machine interfaces (BMIs) afford users a non-invasive way to coordinate movement. Vibrotactile stimulation has been incorporated into BMIs to provide real-time feedback and guide movement control, benefiting patients with cognitive deficits such as stroke survivors. To advance research in this area, we examined vibrational discrimination thresholds at four body locations to determine suitable application sites for future multi-channel BMIs using vibration cues to guide movement planning and control. Twelve healthy adults had a pair of small vibrators (tactors) affixed to the skin at each location: forearm, shoulders, torso, and knee. A "standard" stimulus (186 Hz; 750 ms) and "probe" stimuli (11 levels ranging from 100 Hz to 235 Hz; 750 ms) were delivered. Probe and standard stimulus pairs could occur sequentially or simultaneously (timing). Participants verbally indicated which stimulus felt more intense. Stimulus order was counterbalanced across tactors and body locations. The probabilities that probe stimuli felt more intense than the standard stimulus were computed and fit with a cumulative Gaussian function; the discrimination threshold was defined as one standard deviation of the underlying distribution. Threshold magnitudes depended on stimulus timing and location. Discrimination thresholds were better for stimuli applied sequentially vs. simultaneously at the torso as well as the knee. Thresholds were small (better) and relatively insensitive to timing differences for vibrations applied at the shoulder. BMI applications requiring multiple channels of simultaneous vibrotactile stimulation should therefore consider the shoulder as a deployment site for a vibrotactile BMI interface.
Keywords: electromyography, electromyogram, neuromuscular disorders, biomedical instrumentation, controls engineering
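A minimal sketch of the threshold-estimation step described above (fitting a cumulative Gaussian and taking one standard deviation as the discrimination threshold) is shown below with synthetic response proportions; it is not the study's data or analysis code.

```python
# Fit a cumulative Gaussian psychometric function to synthetic judgment proportions.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

probe_hz = np.array([100, 115, 130, 145, 160, 175, 190, 205, 220, 235], dtype=float)
p_probe_more_intense = np.array([0.05, 0.08, 0.15, 0.30, 0.42, 0.55, 0.70, 0.85, 0.92, 0.97])

def cum_gauss(x, mu, sigma):
    # probability of judging the probe more intense than the 186 Hz standard
    return norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(cum_gauss, probe_hz, p_probe_more_intense, p0=[186.0, 30.0])
print(f"point of subjective equality ~ {mu:.1f} Hz")
print(f"discrimination threshold (1 SD) ~ {sigma:.1f} Hz")
```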
Procedia PDF Downloads 64
523 Modeling of a Pilot Installation for the Recovery of Residual Sludge from Olive Oil Extraction
Authors: Riad Benelmir, Muhammad Shoaib Ahmed Khan
Abstract:
The socio-economic importance of olive oil production is significant in the Mediterranean region, both in terms of wealth and tradition. However, the extraction of olive oil generates huge quantities of wastes that may have a great impact on land and water environments because of their high phytotoxicity. In particular, olive mill wastewater (OMWW) is one of the major environmental pollutants in the olive oil industry. This work aims to design smart and sustainable integrated thermochemical catalytic processing of olive mill residues through hydrothermal carbonization (HTC) of olive mill wastewater (OMWW) and fast pyrolysis of olive mill wastewater sludge (OMWS). The byproducts resulting from OMWW-HTC treatment are a solid phase enriched in carbon, called biochar, and a liquid phase (residual water with less dissolved organic and phenolic compounds). HTC biochar can be tested as a fuel in combustion systems and will also be utilized in high-value applications, such as a soil bio-fertilizer and as a catalyst or catalyst support. The HTC residual water is characterized, treated, and used in soil irrigation, since the organic and toxic compounds will be reduced below the permitted limits. The project concept also includes the conversion of OMWS to a green diesel through a catalytic pyrolysis process. The green diesel is then used as a biofuel in an internal combustion engine (IC engine) for clean automotive transportation. In this work, a theoretical study considers the use of heat from the non-condensable pyrolysis gases in a sorption refrigeration machine for cooling the pyrolysis gases and condensing the bio-oil vapors.
Keywords: biomass, olive oil extraction, adsorption cooling, pyrolysis
Procedia PDF Downloads 90
522 Diabetes Mellitus and Blood Glucose Variability Increases the 30-day Readmission Rate after Kidney Transplantation
Authors: Harini Chakkera
Abstract:
Background: Inpatient hyperglycemia is an established independent risk factor for hospital readmission in several patient cohorts, but it has not been studied after kidney transplantation. Nearly one-third of patients who have undergone a kidney transplant reportedly experience 30-day readmission. Methods: Data on first-time solitary kidney transplantations were retrieved for September 2015 to December 2018. Information was linked to the electronic health record to determine a diagnosis of diabetes mellitus and to extract glucometric and insulin therapy data. Univariate logistic regression analysis and the XGBoost algorithm were used to predict 30-day readmission. We report the average performance of the models on the testing set over five bootstrapped partitions of the data to ensure statistical significance. Results: The cohort included 1036 patients who received kidney transplantation, of whom 224 (22%) experienced 30-day readmission. The machine learning algorithm was able to predict 30-day readmission with an average AUC of 77.3% (95% CI 75.3-79.3%). We observed statistically significant differences in the presence of pretransplant diabetes, inpatient hyperglycemia, inpatient hypoglycemia, and minimum and maximum glucose values among those with higher 30-day readmission rates. The XGBoost model identified the index admission length of stay, the presence of hyper- and hypoglycemia, and recipient and donor BMI values as the most predictive risk factors for 30-day readmission. Additionally, significant variations in the therapeutic management of blood glucose by providers were observed. Conclusions: Suboptimal glucose metrics during hospitalization after kidney transplantation are associated with an increased risk of 30-day hospital readmission. Optimizing hospital blood glucose management, a modifiable factor, after kidney transplantation may reduce the risk of 30-day readmission.
Keywords: kidney, transplant, diabetes, insulin
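A hedged sketch of the XGBoost-based 30-day readmission classifier is given below; the feature set and synthetic data are stand-ins chosen to mirror the predictors named in the abstract, not the actual cohort or the authors' pipeline.

```python
# Illustrative 30-day readmission classifier (synthetic data mimicking the named predictors).
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 1036
X = np.column_stack([
    rng.normal(180, 40, n),    # max inpatient glucose, mg/dL (synthetic)
    rng.normal(90, 20, n),     # min inpatient glucose, mg/dL (synthetic)
    rng.normal(7, 3, n),       # index length of stay, days (synthetic)
    rng.normal(28, 5, n),      # recipient BMI (synthetic)
    rng.normal(27, 5, n),      # donor BMI (synthetic)
])
y = rng.binomial(1, 0.22, n)   # ~22% readmission rate, as in the cohort

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
clf = XGBClassifier(n_estimators=300, max_depth=3, learning_rate=0.05)
clf.fit(X_tr, y_tr)
print("test AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
print("feature importances:", clf.feature_importances_)
```

Repeating the split over several bootstrapped partitions and averaging the AUC would mirror the evaluation strategy described in the abstract.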
Procedia PDF Downloads 90
521 Remote Sensing through Deep Neural Networks for Satellite Image Classification
Authors: Teja Sai Puligadda
Abstract:
Detailed satellite images can serve an important role in geographic study. The quantitative and qualitative information provided by satellite and remote sensing images minimizes the complexity of work and time. Data and images are captured at regular intervals by satellite remote sensing systems, and the amount of data collected is often enormous and expands rapidly as technology develops. Interpreting remote sensing images, geographic data mining, and researching distinct vegetation types such as agricultural land and forests are all part of satellite image categorization. One of the biggest challenges data scientists face while classifying satellite images is finding, among the available classification algorithms, the one best suited to classify images with the utmost accuracy. In order to categorize satellite images, which is difficult due to the sheer volume of data, many academics are turning to deep learning algorithms. As the CNN algorithm gives high accuracy in image recognition problems and automatically detects important features without any human supervision, and the ANN algorithm stores information across the entire network (Abhishek Gupta, 2020), these two deep learning algorithms have been used for satellite image classification. This project focuses on remote sensing through deep neural networks, i.e., ANN and CNN, with the DeepSat (SAT-4) airborne dataset for classifying images. Thus, in this project on classifying satellite images, the ANN and CNN algorithms are implemented, evaluated, and compared, and their performance is analyzed through evaluation metrics such as accuracy and loss. Additionally, the neural network algorithm that gives the lowest bias and lowest variance in solving multi-class satellite image classification is analyzed.
Keywords: artificial neural network, convolutional neural network, remote sensing, accuracy, loss
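A minimal CNN sketch for SAT-4-style 4-band, 28x28 patches with four land-cover classes is shown below; the architecture and placeholder arrays are illustrative assumptions, not the network evaluated in the project.

```python
# Small CNN for 4-class classification of 28x28x4 patches (placeholder data, not SAT-4 itself).
import numpy as np
from tensorflow.keras import layers, models

x_train = np.random.rand(512, 28, 28, 4).astype("float32")   # placeholder 4-band patches
y_train = np.random.randint(0, 4, size=(512,))                # placeholder class labels

model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 4)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(4, activation="softmax"),        # 4 land-cover classes in SAT-4
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=2, batch_size=64, validation_split=0.2)
```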
Procedia PDF Downloads 159
520 Scheduling Jobs with Stochastic Processing Times or Due Dates on a Server to Minimize the Number of Tardy Jobs
Authors: H. M. Soroush
Abstract:
The problem of scheduling products and services for on-time delivery is of paramount importance in today’s competitive environments. It arises in many manufacturing and service organizations where it is desirable to complete jobs (products or services) with different weights (penalties) on or before their due dates. In such environments, schedulers must frequently decide whether to schedule a job based on its processing time, due date, and the penalty for tardy delivery in order to improve system performance. For example, it is common to measure the weighted number of late jobs or the percentage of on-time shipments to evaluate the performance of a semiconductor production facility or an automobile assembly line. In this paper, we address the problem of scheduling a set of jobs on a server where the processing times or due dates of jobs are random variables and fixed weights (penalties) are imposed on late deliveries. The goal is to find the schedule that minimizes the expected weighted number of tardy jobs. The problem is NP-hard; however, we explore three scenarios of the problem wherein: (i) both processing times and due dates are stochastic; (ii) processing times are stochastic and due dates are deterministic; and (iii) processing times are deterministic and due dates are stochastic. We prove that special cases of these scenarios are solvable optimally in polynomial time, and introduce efficient heuristic methods for the general cases. Our computational results show that the heuristics perform well in yielding either optimal or near-optimal sequences. The results also demonstrate that the stochasticity of processing times or due dates can affect scheduling decisions. Moreover, the proposed problem is general in the sense that its special cases reduce to some new and some classical stochastic single machine models.
Keywords: number of late jobs, scheduling, single server, stochastic
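For context, the classical Moore-Hodgson rule solves the deterministic, unweighted special case (minimizing the number of tardy jobs on a single machine) in polynomial time; the sketch below shows that baseline only and is not one of the stochastic heuristics proposed in the paper.

```python
# Classical Moore-Hodgson rule: minimize the number of tardy jobs on one machine
# when processing times and due dates are deterministic and all weights are equal.
def moore_hodgson(jobs):
    """jobs: list of (processing_time, due_date). Returns (on-time sequence, tardy set)."""
    jobs = sorted(jobs, key=lambda j: j[1])     # earliest-due-date order
    on_time, tardy, t = [], [], 0
    for p, d in jobs:
        on_time.append((p, d))
        t += p
        if t > d:                               # current job would finish late:
            longest = max(on_time, key=lambda j: j[0])
            on_time.remove(longest)             # discard the longest scheduled job
            tardy.append(longest)
            t -= longest[0]
    return on_time, tardy

seq, late = moore_hodgson([(3, 6), (2, 4), (4, 7), (5, 8)])
print("on time:", seq, "| tardy:", late)
```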
Procedia PDF Downloads 498
519 A High Content Screening Platform for the Accurate Prediction of Nephrotoxicity
Authors: Sijing Xiong, Ran Su, Lit-Hsin Loo, Daniele Zink
Abstract:
The kidney is a major target for the toxic effects of drugs, industrial and environmental chemicals, and other compounds. Typically, nephrotoxicity is detected late during drug development, and regulatory animal models have not solved this problem. Validated or accepted in silico or in vitro methods for the prediction of nephrotoxicity are not available. We have established the first and currently only pre-validated in vitro models for the accurate prediction of nephrotoxicity in humans and the first predictive platforms based on renal cells derived from human pluripotent stem cells. In order to further improve the efficiency of our predictive models, we recently developed a high content screening (HCS) platform. This platform employs automated imaging in combination with automated quantitative phenotypic profiling and machine learning methods. 129 image-based phenotypic features were analyzed with respect to their predictive performance in combination with 44 compounds with different chemical structures, including drugs, environmental and industrial chemicals, and herbal and fungal compounds. The nephrotoxicity of these compounds in humans is well characterized. A combination of chromatin and cytoskeletal features resulted in high predictivity with respect to nephrotoxicity in humans. Test balanced accuracies of 82% or 89% were obtained with human primary or immortalized renal proximal tubular cells, respectively. Furthermore, our results revealed that a DNA damage response is commonly induced by different PTC-toxicants with diverse chemical structures and injury mechanisms. Together, the results show that the automated HCS platform allows efficient and accurate nephrotoxicity prediction for compounds with diverse chemical structures.
Keywords: high content screening, in vitro models, nephrotoxicity, toxicity prediction
Procedia PDF Downloads 313
518 Stuck Spaces as Moments of Learning: Uncovering Threshold Concepts in Teacher Candidate Experiences of Teaching in Inclusive Classrooms
Authors: Joy Chadwick
Abstract:
There is no doubt that classrooms of today are more complex and diverse than ever before. Preparing teacher candidates to meet these challenges is essential to ensure the retention of teachers within the profession and to ensure that graduates begin their teaching careers with the knowledge and understanding of how to effectively meet the diversity of students they will encounter. Creating inclusive classrooms requires teachers to have a repertoire of effective instructional skills and strategies. Teachers must also have the mindset to embrace diversity and value the uniqueness of individual students in their care. This qualitative study analyzed teacher candidates' experiences as they completed a fourteen-week teaching practicum while simultaneously completing a university course focused on inclusive pedagogy. The research investigated the challenges and successes teacher candidates had in navigating the translation of theory related to inclusive pedagogy into their teaching practice. Applying threshold concept theory as a framework, the research explored the troublesome concepts, liminal spaces, and transformative experiences connected to inclusive practices. Threshold concept theory suggests that within all disciplinary fields, there exist particular threshold concepts that serve as gateways or portals into previously inaccessible ways of thinking and practicing. It is in these liminal spaces that conceptual shifts in thinking and understanding and deep learning can occur. The threshold concept framework provided a lens to examine teacher candidate struggles and successes with the inclusive education course content and the application of this content to their practicum experiences. A qualitative research approach was used, which included analyzing twenty-nine course reflective journals and six follow-up one-to-one semi-structured interviews. The journals and interview transcripts were coded and themed using NVivo software. Threshold concept theory was then applied to the data to uncover the liminal or stuck spaces of learning and the ways in which the teacher candidates navigated those challenging places of teaching. The research also sought to uncover potential transformative shifts in teacher candidate understanding connected to teaching in an inclusive classroom. The findings suggested that teacher candidates experienced difficulties when they did not feel they had the knowledge, skill, or time to meet the needs of the students in the way they envisioned they should. To navigate the frustration of this thwarted vision, they relied on present and previous course content and experiences, collaborative work with other teacher candidates and their mentor teachers, and a proactive approach to planning for students. Transformational shifts were most evident in their ability to reframe their perceptions of children from a deficit or disability lens to a strength-based belief in the potential of students. It was evident that through their course work and practicum experiences, their beliefs regarding struggling students shifted as they saw the value of embracing neurodiversity, the importance of relationships, and planning for and teaching through a strength-based approach. Research findings have implications for teacher education programs and for understanding threshold concept theory as connected to practice-based learning experiences.
Keywords: inclusion, inclusive education, liminal space, teacher education, threshold concepts, troublesome knowledge
Procedia PDF Downloads 79
517 Rigorous Photogrammetric Push-Broom Sensor Modeling for Lunar and Planetary Image Processing
Authors: Ahmed Elaksher, Islam Omar
Abstract:
Accurate geometric modeling algorithms are imperative in Earth and planetary satellite and aerial image processing, particularly for high-resolution images that are used for topographic mapping. Most of these satellites carry push-broom sensors. These sensors are optical scanners equipped with linear arrays of CCDs and have been deployed on most Earth observation satellites (EOSs). In addition, the LROC is equipped with two push-broom NACs that provide 0.5 meter-scale panchromatic images over a 5 km swath of the Moon. The HiRISE carried by the MRO and the HRSC carried by MEX are examples of push-broom sensors that produce images of the surface of Mars. Sensor models developed in photogrammetry relate image space coordinates in two or more images with the 3D coordinates of ground features. Rigorous sensor models use the actual interior and exterior orientation parameters of the camera, unlike approximate models. In this research, we generate a generic push-broom sensor model to process imagery acquired through linear array cameras and investigate its performance, advantages, and disadvantages in generating topographic models for the Earth, Mars, and the Moon. We also compare and contrast the utilization, effectiveness, and applicability of available photogrammetric techniques and softcopy packages with the developed model. We start by defining an image reference coordinate system to unify image coordinates from all three arrays. The transformation from an image coordinate system to a reference coordinate system involves a translation and three rotations. For any image point within the linear array, its image reference coordinates, the coordinates of the exposure center of the array in the ground coordinate system at the imaging epoch (t), and the corresponding ground point coordinates are related through the collinearity condition, which states that all three points must lie on the same line. The rotation angles for each CCD array at the epoch t are defined and included in the transformation model. The exterior orientation parameters of an image line, i.e., the coordinates of the exposure station and the rotation angles, are computed by a polynomial interpolation function in time (t). The parameter (t) is the time at a certain epoch from a certain orbit position. Depending on the types of observations, coordinates and parameters may be treated as knowns or unknowns in various situations. The unknown coefficients are determined in a bundle adjustment. The orientation process starts by extracting the sensor position, orientation, and raw images from the PDS. The parameters of each image line are then estimated and imported into the push-broom sensor model. We also define tie points between image pairs to aid the bundle adjustment model, determine the refined camera parameters, and generate highly accurate topographic maps. The model was tested on different satellite images such as IKONOS, QuickBird, WorldView-2, and HiRISE. It was found that the accuracy of our model is comparable to that of commercial and open-source software, the computational efficiency of the developed model is high, the model can be used in different environments with various sensors, and the implementation process is much less cost- and effort-consuming.
Keywords: photogrammetry, push-broom sensors, IKONOS, HiRISE, collinearity condition
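As context for the collinearity condition referenced above, one common form for a line sensor, with all exterior orientation elements written as functions of the line acquisition time t, is sketched below; axis and sign conventions vary between implementations, and this is not reproduced from the paper itself:

```latex
\begin{aligned}
x &= -f\,\frac{r_{11}(t)\,(X - X_S(t)) + r_{12}(t)\,(Y - Y_S(t)) + r_{13}(t)\,(Z - Z_S(t))}
             {r_{31}(t)\,(X - X_S(t)) + r_{32}(t)\,(Y - Y_S(t)) + r_{33}(t)\,(Z - Z_S(t))},\\[4pt]
y &= -f\,\frac{r_{21}(t)\,(X - X_S(t)) + r_{22}(t)\,(Y - Y_S(t)) + r_{23}(t)\,(Z - Z_S(t))}
             {r_{31}(t)\,(X - X_S(t)) + r_{32}(t)\,(Y - Y_S(t)) + r_{33}(t)\,(Z - Z_S(t))}
\end{aligned}
```

Here f is the focal length, (X_S(t), Y_S(t), Z_S(t)) is the perspective center at epoch t, and r_ij(t) are elements of the rotation matrix built from the time-interpolated attitude angles; for a push-broom line the along-track image coordinate is nominally constant (often zero), which is what allows t to be recovered line by line before the bundle adjustment refines the polynomial coefficients.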
Procedia PDF Downloads 63
516 Characteristics of the Particle Size Distribution and Exposure Concentrations of Nanoparticles Generated from the Laser Metal Deposition Process
Authors: Yu-Hsuan Liu, Ying-Fang Wang
Abstract:
The objectives of the present study are to characterize the nanoparticles generated from the laser metal deposition (LMD) process and to estimate the particle concentrations deposited in the head (H), tracheobronchial (TB), and alveolar (A) regions, respectively. The studied LMD chamber (3.6 m × 3.8 m × 2.9 m) is equipped with a robotic laser metal deposition machine. A direct-reading scanning mobility particle sizer (SMPS, Model 3082, TSI Inc., St. Paul, MN, USA) was used for static sampling inside the chamber to measure nanoparticle number concentrations and particle size distributions. The SMPS recorded particle number concentrations every 3 minutes; the measured diameter range was 11-372 nm with the aerosol and sheath flow rates set at 0.6 and 6 L/min, respectively. The resultant size distributions were used to predict nanoparticle deposition in the H, TB, and A regions of the respiratory tract using the UK National Radiological Protection Board's (NRPB's) LUDEP software. Results showed that the nanoparticle number concentrations in the indoor background and the LMD chamber were 4.8×10³ and 4.3×10⁵ # / cm³, respectively. The nanoparticles emitted from the LMD process followed a unimodal distribution with a number median diameter (NMD) of 142 nm and a geometric standard deviation (GSD) of 1.86. The fraction of nanoparticles deposited in the alveolar region (A: 69.8%) was higher than in the other two regions, the head region (H: 10.9%) and the tracheobronchial region (TB: 19.3%). This study conducted static sampling to measure the nanoparticles generated in the LMD process, and the results show that the fraction of particles deposited in the A region was higher than in the other two regions. Therefore, the characteristics of nanoparticles emitted from the LMD process can provide valuable science-based evidence for exposure assessments in the future.
Keywords: exposure assessment, laser metal deposition process, nanoparticle, respiratory region
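A hedged sketch of how the reported unimodal size distribution can be reconstructed as a lognormal from the NMD and GSD values above is given below; the size cuts examined are illustrative, and the lognormal form is an assumption consistent with reporting NMD and GSD.

```python
# Treat the reported distribution (NMD = 142 nm, GSD = 1.86) as lognormal and
# ask what fraction of particles falls within illustrative size bands.
import numpy as np
from scipy.stats import lognorm

nmd_nm, gsd = 142.0, 1.86
dist = lognorm(s=np.log(gsd), scale=nmd_nm)   # median = NMD, sigma_g = GSD

print(f"fraction within the SMPS range (11-372 nm): {dist.cdf(372) - dist.cdf(11):.2f}")
print(f"fraction below 100 nm: {dist.cdf(100):.2f}")
```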
Procedia PDF Downloads 284
515 Medical Diagnosis of Retinal Diseases Using Artificial Intelligence Deep Learning Models
Authors: Ethan James
Abstract:
Over one billion people worldwide suffer from some level of vision loss or blindness as a result of progressive retinal diseases. Many patients, particularly in developing areas, are incorrectly diagnosed or not diagnosed at all due to inadequate diagnostic tools and screening methods. Artificial intelligence (AI) based on deep learning (DL) convolutional neural networks (CNNs) has recently gained high interest in ophthalmology for computer-based imaging diagnosis, disease prognosis, and risk assessment. Optical coherence tomography (OCT) is a popular imaging technique used to capture high-resolution cross-sections of retinas. In ophthalmology, DL has been applied to fundus photographs, optical coherence tomography, and visual fields, achieving robust classification performance in the detection of various retinal diseases, including macular degeneration, diabetic retinopathy, and retinitis pigmentosa. However, there is no complete diagnostic model to analyze these retinal images that provides a diagnostic accuracy above 90%. Thus, the purpose of this project was to develop an AI model that utilizes machine learning techniques to automatically diagnose specific retinal diseases from OCT scans. The algorithm consists of a neural network architecture based on residual neural networks with cyclic pooling, trained on a dataset of over 20,000 real-world OCT images to obtain a robust model. This DL model can ultimately aid ophthalmologists in diagnosing patients with these retinal diseases more quickly and more accurately, thereby facilitating earlier treatment, which results in improved post-treatment outcomes.
Keywords: artificial intelligence, deep learning, imaging, medical devices, ophthalmic devices, ophthalmology, retina
Procedia PDF Downloads 181
514 Performance Analysis of Pumps-as-Turbine Under Cavitating Conditions
Authors: Calvin Stephen, Biswajit Basu, Aonghus McNabola
Abstract:
Market liberalization in the power sector has led to the emergence of micro-hydropower schemes that depend on the use of pumps-as-turbines in applications that were not suitable as potential hydropower sites in earlier years. These applications include energy recovery in water supply networks, sewage systems, irrigation systems, alcohol breweries, underground mining, and desalination plants. As a result, there has been accelerated adoption of pumps-as-turbine technology due to the economic advantages it presents in comparison to conventional turbines in the micro-hydropower space. The performance of these machines under cavitating conditions, however, is not well understood, as there is a deficiency of knowledge in the literature on their turbine mode of operation. In hydraulic machines, cavitation is a common occurrence that needs to be understood to safeguard them and prolong their operating life. The overall purpose of this study is to investigate the effects of cavitation on the performance of a pumps-as-turbine system over its entire operating range. At various operating speeds, the cavitating region is identified experimentally while monitoring the effects this has on the power produced by the machine. Initial results indicate the occurrence of cavitation at higher flow rates for lower operating speeds and at lower flow rates for higher operating speeds. This implies that for cavitation-free operation, low-speed pumps-as-turbines must be used under low flow rate conditions, whereas for sites with higher flow rates, high-speed turbines should be adopted. Such a complete understanding of pumps-as-turbine suction performance can help avoid cavitation-induced failures and hence improve the reliability of the micro-hydropower plant.
Keywords: cavitation, micro-hydropower, pumps-as-turbine, system design
Procedia PDF Downloads 119
513 Applications of Evolutionary Optimization Methods in Reinforcement Learning
Authors: Rahul Paul, Kedar Nath Das
Abstract:
The paradigm of Reinforcement Learning (RL) has become prominent in training intelligent agents to make decisions in environments that are both dynamic and uncertain. The primary objective of RL is to optimize the policy of an agent in order to maximize the cumulative reward it receives throughout a given period. Nevertheless, the process of optimization presents notable difficulties as a result of the inherent trade-off between exploration and exploitation, the presence of extensive state-action spaces, and the intricate nature of the dynamics involved. Evolutionary Optimization Methods (EOMs) have garnered considerable attention as a supplementary approach to tackle these challenges, providing distinct capabilities for optimizing RL policies and value functions. The ongoing advancement of research in both RL and EOMs presents an opportunity for significant advancements in autonomous decision-making systems. The convergence of these two fields has the potential to have a transformative impact on various domains of artificial intelligence (AI) applications. This article highlights the considerable influence of EOMs in enhancing the capabilities of RL. Taking advantage of evolutionary principles enables RL algorithms to effectively traverse extensive action spaces and discover optimal solutions within intricate environments. Moreover, this paper emphasizes the practical implementations of EOMs in the field of RL, specifically in areas such as robotic control, autonomous systems, inventory problems, and multi-agent scenarios. The article highlights the utilization of EOMs in facilitating RL agents to effectively adapt, evolve, and uncover proficient strategies for complex tasks that may pose challenges for conventional RL approaches.
Keywords: machine learning, reinforcement learning, loss function, optimization techniques, evolutionary optimization methods
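A minimal evolution-strategies sketch of the EOM-for-policy-search idea is given below on a toy reward function; the task, population size, and learning rate are illustrative assumptions, not an experiment from the paper.

```python
# Evolution-strategies policy search: perturb the policy parameters, score each
# perturbation by episode reward, and move the mean toward better performers.
import numpy as np

rng = np.random.default_rng(0)
DIM = 4                                    # toy "policy": 4 weights, reward peaks at w_star

def episode_reward(w):
    w_star = np.array([0.5, -1.0, 2.0, 0.0])
    return -np.sum((w - w_star) ** 2)      # stand-in for a cumulative RL return

w = np.zeros(DIM)                          # mean policy parameters
sigma, lr, pop = 0.1, 0.05, 50
for gen in range(200):
    noise = rng.normal(size=(pop, DIM))
    rewards = np.array([episode_reward(w + sigma * n) for n in noise])
    advantage = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    w += lr / (pop * sigma) * noise.T @ advantage   # gradient estimate from ranked perturbations
print("learned weights:", np.round(w, 2))
```

The same loop applies unchanged when episode_reward wraps a simulator or multi-agent environment, which is what makes this family of methods attractive for the RL settings discussed above.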
Procedia PDF Downloads 81
512 Energy Production with Closed Methods
Authors: Bujar Ismaili, Bahti Ismajli, Venhar Ismaili, Skender Ramadani
Abstract:
In Kosovo, the electricity supply problem is severe, and supply does not meet consumer demand. Older thermal power plants, which are regarded as big environmental polluters, produce most of the energy. Our experiment is based on producing electricity using a closed method that does not contribute to environmental pollution, while using waste, which is itself considered an environmental pollutant, as fuel. The experiment was carried out in the village of Godanc, municipality of Shtime, Kosovo. In the experiment, a production line providing both electricity and central heating was designed. The results are the benefit of electricity as well as heat release for heating at minimal expense, with no gases released into the atmosphere. During this experiment, coal, plastic, waste from wood processing, and agricultural wastes were used as raw materials. The method utilized in the experiment allows for the release of gas through pipes and filters during the top-to-bottom combustion of the raw material in the boiler, followed by gas filtration through wood-processing waste (sawdust). During this process, the final product, gas, is obtained; it passes through a carburetor that enables the gas combustion process, drives the internal combustion engine and the generator, and produces electricity without releasing gases into the atmosphere. The obtained results show that the system provides energy stability without environmental pollution from toxic substances and waste, as well as low production costs. The final results show that coal fuel yielded the most electricity and the highest heat release, followed by plastic waste, which also gave good results. The results obtained during these experiments demonstrate that the current shortages of electricity and heating can be addressed at lower cost while maintaining a clean environment and sound waste management.
Keywords: energy, heating, atmosphere, waste, gasification
Procedia PDF Downloads 235
511 Analyzing the Influence of Hydrometeorlogical Extremes, Geological Setting, and Social Demographic on Public Health
Authors: Irfan Ahmad Afip
Abstract:
The main research objective is to accurately identify the possibility and severity of a leptospirosis outbreak in a given area based on its input features to a multivariate regression model. The research question is whether the possibility of an outbreak in a specific area is influenced by these features, such as social demographics and hydrometeorological extremes. If the occurrence of an outbreak is subject to these features, then the epidemic severity of an area will differ depending on its environmental setting, because the features influence both the possibility and the severity of an outbreak. Specifically, the research objective was three-fold, namely: (a) to identify the relevant multivariate features and visualize the data patterns, (b) to develop a multivariate regression model based on the selected features and determine the possibility of a leptospirosis outbreak in an area, and (c) to compare the predictive ability of the multivariate regression model and machine learning algorithms. Several secondary data features were collected for locations in the state of Negeri Sembilan, Malaysia, based on their possible relevance to determining outbreak severity in the area. The relevant features then become inputs to a multivariate regression model; a linear regression model is a simple and quick solution for creating prognostic capabilities. A multivariate regression model has proven to provide more precise prognostic capabilities than univariate models. The expected outcome of this research is to establish a correlation between social demographic and hydrometeorological features and the bacteria that cause leptospirosis; it will also contribute to understanding the underlying relationship between the pathogen and the ecosystem. The relationship established can be beneficial for health departments or urban planners to inspect and prepare for future outcomes in event detection and system health monitoring.
Keywords: geographical information system, hydrometeorological, leptospirosis, multivariate regression
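A hedged sketch of the multivariate regression idea is shown below; the feature names and data are synthetic stand-ins for the demographic and hydrometeorological inputs described, not the study's dataset.

```python
# Multivariate linear regression of outbreak severity on synthetic area-level features.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(7)
n = 300
X = np.column_stack([
    rng.uniform(50, 3000, n),    # population density (synthetic)
    rng.uniform(0, 400, n),      # monthly rainfall, mm (synthetic)
    rng.uniform(24, 34, n),      # mean temperature, deg C (synthetic)
    rng.uniform(0, 10, n),       # flood days per month (synthetic)
])
severity = 0.004 * X[:, 0] + 0.02 * X[:, 1] + 1.5 * X[:, 3] + rng.normal(0, 3, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, severity, test_size=0.25, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)
print("R^2 on held-out areas:", round(r2_score(y_te, model.predict(X_te)), 2))
print("coefficients:", np.round(model.coef_, 3))
```

Swapping LinearRegression for tree-based or other machine learning regressors on the same split would support the model-comparison objective (c).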
Procedia PDF Downloads 115510 An Approach to Building a Recommendation Engine for Travel Applications Using Genetic Algorithms and Neural Networks
Authors: Adrian Ionita, Ana-Maria Ghimes
Abstract:
The lack of features, weak design, and the failure to promote an integrated booking application are some of the reasons why most online travel platforms only automate old booking processes, limiting themselves to integrating a small number of services without addressing the user experience. This paper presents a practical study of how to improve travel applications by creating user profiles through data mining based on neural networks and genetic algorithms. Choices made by users and their ‘friends’ in the ‘social’ network context can be considered input data for a recommendation engine. The purpose of using these algorithms and this design is to improve the user experience and to deliver more features to users. The paper aims to highlight a broader range of improvements that could be applied to travel applications in terms of design and service integration, while the main scientific approach remains the technical implementation of the neural network solution. The choice of technologies is also motivated by the initiative of some online booking providers that have publicly disclosed their use of ‘neural network’-related designs. These companies use similar Big Data technologies to provide recommendations for hotels, restaurants, and cinemas, with a neural-network-based recommendation engine building a user ‘DNA profile’. This implementation of the ‘profile’, a collection of neural networks trained on previous user choices, can improve the usability and design of any type of application.Keywords: artificial intelligence, big data, cloud computing, DNA profile, genetic algorithms, machine learning, neural networks, optimization, recommendation system, user profiling
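As a rough illustration of the ‘DNA profile’ idea, the sketch below trains a small neural network on a user's past choices and uses it to score new travel offers. The feature layout, the synthetic data, and the use of scikit-learn's MLPClassifier are assumptions made for this example; the genetic-algorithm component (e.g., evolving the network configuration) is omitted for brevity.

```python
# Illustrative sketch of a user-'DNA profile': a small neural network trained
# on past choices that scores new travel offers. All data here is synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
n_past = 300

# Hypothetical features of past offers shown to one user:
# [price_norm, hotel_rating, distance_to_center_km, friends_liked_fraction]
past_offers = np.column_stack([
    rng.uniform(0, 1, n_past),
    rng.uniform(1, 5, n_past),
    rng.uniform(0, 20, n_past),
    rng.uniform(0, 1, n_past),
])
# 1 = the user (or their 'friends') booked/liked the offer, 0 = ignored it.
chosen = (0.6 * past_offers[:, 3] + 0.2 * past_offers[:, 1] / 5
          - 0.5 * past_offers[:, 0] + rng.normal(0, 0.1, n_past)) > 0.3

profile = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000,
                        random_state=1).fit(past_offers, chosen)

# Score new candidate offers and recommend the highest-probability ones first.
candidates = rng.uniform([0, 1, 0, 0], [1, 5, 20, 1], size=(5, 4))
scores = profile.predict_proba(candidates)[:, 1]
print("Recommendation order:", np.argsort(scores)[::-1])
```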
Procedia PDF Downloads 163509 Automation of Pneumatic Seed Planter for System of Rice Intensification
Authors: Tukur Daiyabu Abdulkadir, Wan Ishak Wan Ismail, Muhammad Saufi Mohd Kassim
Abstract:
Seed singulation and accuracy in seed spacing are the major challenges associated with the adoption of mechanical seeders for the system of rice intensification. In this research, the metering system of a pneumatic planter was modified and automated for increased precision to meet the demands of the system of rice intensification (SRI). The chain-and-sprocket mechanism of a conventional vacuum planter was replaced with an electromechanical system made up of a set of servo motors, a limit switch, a microcontroller, and a wheel divided into 10 equal angles. The circumference of the planter wheel was determined, from which the seed spacing was computed and mapped to the angles of the metering wheel. A program was then written and uploaded to an Arduino microcontroller, which automatically turns the seed plates for seeding once the required distance has been covered. The servo motor was calibrated with the aid of LabVIEW. The machine was then calibrated on a grease belt by varying the servo speed, through voltage variation, between 37 rpm and 47 rpm until an optimum value of 40 rpm was obtained at a forward speed of 5 kilometers per hour. A pressure of 1.5 kPa was found to be optimum, under which no skips or doubles were recorded. Precision in spacing (coefficient of variation), miss index, multiple index, doubles, and skips were investigated. No skips or doubles were recorded at either the laboratory or the field level. The operational parameters under consideration were evaluated both in the laboratory and in the field. Although there was some variation between the laboratory and field values of precision in spacing, multiple index, and miss index, the difference is not significant, as both laboratory and field values fall within the acceptable range.Keywords: automation, calibration, pneumatic seed planter, system of rice intensification
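The spacing-to-angle mapping described above can be illustrated with a short back-of-the-envelope calculation. The wheel diameter below is an assumed value, since the abstract does not report the planter geometry; only the 10-segment metering wheel and the 5 km/h forward speed are taken from the text.

```python
# Back-of-the-envelope sketch of the spacing-to-angle mapping described above.
# The wheel diameter is assumed purely for illustration.
import math

wheel_diameter_m = 0.5                     # assumed ground-wheel diameter
segments = 10                              # metering wheel divided into 10 equal angles
forward_speed_kmh = 5.0                    # forward speed reported in the abstract

circumference_m = math.pi * wheel_diameter_m
seed_spacing_m = circumference_m / segments          # one seed drop per 36-degree step
step_angle_deg = 360.0 / segments

forward_speed_ms = forward_speed_kmh * 1000 / 3600
time_per_drop_s = seed_spacing_m / forward_speed_ms  # time between servo steps
metering_wheel_rpm = (forward_speed_ms / circumference_m) * 60

print(f"Seed spacing: {seed_spacing_m:.3f} m, step angle: {step_angle_deg:.0f} deg")
print(f"Time between drops at {forward_speed_kmh} km/h: {time_per_drop_s:.3f} s")
print(f"Equivalent metering-wheel speed: {metering_wheel_rpm:.1f} rpm")
```

In the reported system, the Arduino would issue one 36-degree servo step each time the limit switch indicates the required travel distance, rather than computing times in advance as this sketch does.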
Procedia PDF Downloads 643508 Optimizing CNC Production Line Efficiency Using NSGA-II: Adaptive Layout and Operational Sequence for Enhanced Manufacturing Flexibility
Authors: Yi-Ling Chen, Dung-Ying Lin
Abstract:
In the manufacturing process, computer numerical control (CNC) machining plays a crucial role. CNC enables precise machinery control through computer programs, achieving automation in the production process and significantly enhancing production efficiency. However, traditional CNC production lines often require manual intervention for loading and unloading operations, which limits the production line's operational efficiency and production capacity. Additionally, existing CNC automation systems frequently lack sufficient intelligence and fail to achieve optimal configuration efficiency, resulting in the need for substantial time to reconfigure production lines when producing different products, thereby impacting overall production efficiency. Using the NSGA-II algorithm, we generate production line layout configurations that consider field constraints and select robotic arm specifications from an arm list. This allows us to calculate loading and unloading times for each job order, perform demand allocation, and assign processing sequences. The NSGA-II algorithm is further employed to determine the optimal processing sequence, with the aim of minimizing demand completion time and maximizing average machine utilization. These objectives are used to evaluate the performance of each layout, ultimately determining the optimal layout configuration. By employing this method, we enhance the configuration efficiency of CNC production lines and establish an adaptive capability that allows the production line to respond promptly to changes in demand. This minimizes production losses caused by the need to reconfigure the layout, ensuring that the CNC production line can maintain optimal efficiency even when adjustments are required due to fluctuating demands.Keywords: evolutionary algorithms, multi-objective optimization, pareto optimality, layout optimization, operations sequence
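The two objectives named above, demand completion time and average machine utilization, can be sketched as follows. This is a simplified stand-in that evaluates random job sequences and keeps the non-dominated ones; the full NSGA-II machinery (non-dominated sorting with crowding distance, crossover, mutation) is not reproduced here, and the processing times and machine count are assumptions for illustration.

```python
# Simplified stand-in for the two objectives described above: demand completion
# time and average machine utilization, evaluated over random job sequences and
# filtered to a Pareto set. Job data and machine count are assumed values.
import numpy as np

rng = np.random.default_rng(42)
n_jobs, n_machines = 12, 3
proc_time = rng.uniform(2.0, 10.0, n_jobs)        # assumed processing time per job
load_unload = rng.uniform(0.5, 1.5, n_jobs)       # assumed robot load/unload time per job

def evaluate(sequence):
    """Dispatch jobs in the given order to the earliest-free machine."""
    free_at = np.zeros(n_machines)
    busy = np.zeros(n_machines)
    for j in sequence:
        m = int(np.argmin(free_at))               # earliest available machine
        free_at[m] += proc_time[j] + load_unload[j]
        busy[m] += proc_time[j]
    makespan = free_at.max()                      # demand completion time
    utilization = (busy / makespan).mean()        # average machine utilization
    return makespan, utilization

# Evaluate random candidate sequences and keep the non-dominated ones.
scores = [evaluate(rng.permutation(n_jobs)) for _ in range(200)]

def dominated(a, b):
    """True if b is at least as good as a in both objectives and strictly better in one."""
    return (b[0] <= a[0] and b[1] >= a[1]) and (b[0] < a[0] or b[1] > a[1])

pareto = [s for s in scores if not any(dominated(s, t) for t in scores)]
for makespan, util in sorted(pareto):
    print(f"completion time {makespan:6.2f} h, avg utilization {util:5.1%}")
```

NSGA-II would replace the random sampling here with an evolving population, using the same kind of objective evaluation to rank candidate layouts and processing sequences.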
Procedia PDF Downloads 24507 A Study of the Problems and Needs of the Garment Industries in the Nonthaburi and Bangkok Areas
Authors: Thepnarintra Praphanphat
Abstract:
The purposes of this study were to investigate the garment industry's conditions, problems, and needs for assistance. The population of the study comprised 504 managers or managing directors of finished-apparel garment establishments registered with the Department of Industrial Works 28, Ministry of Industry, as of January 1, 2012. The sample size, determined using the Taro Yamane formula at a 95% confidence level with a ±5% margin of error, was 224 managers. Questionnaires were used to collect the data. Percentage, frequency, arithmetic mean, standard deviation, t-test, ANOVA, and LSD were used to analyze the data. It was found that most establishments were large, had operated as limited companies for more than 15 years, and mostly produced garments for working women. All investment was made by Thai people. The products were made to order and distributed domestically and internationally. Total sales for 2010, 2011, and 2012 were almost the same. With respect to the problems of operating the business, the study indicated that, overall, by aspect, and by item, they were at a high level. A comparison of the level of problems in operating a garment business, classified by general condition, showed that the problems occurring in businesses of different sizes were, as a whole, not different. When aspects were taken into consideration, the level of problems relating to production differed; medium-sized establishments had more production problems than small and large ones. The by-item analysis found differences in several problems, namely those concerning employees, machine maintenance, the number of designers, and price competition. Such problems in the medium establishments were at a higher level than in the small and large establishments. Regarding business age, the examination yielded no differences overall, by aspect, or by item. The statistical significance level of this study was set at .05.Keywords: garment industry, garment, fashion, competitive enhancement project
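As an illustration of the by-size comparison reported above, the sketch below runs a one-way ANOVA on problem-level scores for small, medium, and large establishments, followed by pairwise t-tests as a simple stand-in for the LSD post-hoc test. The Likert-style scores are synthetic, not the survey data.

```python
# Illustrative one-way ANOVA for the by-size comparison of problem levels.
# The Likert-style scores (1-5) are synthetic, not the survey's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical mean problem-level scores for the "production" aspect, by size.
small = np.clip(rng.normal(3.4, 0.6, 60), 1, 5)
medium = np.clip(rng.normal(3.9, 0.6, 80), 1, 5)
large = np.clip(rng.normal(3.5, 0.6, 84), 1, 5)

f_stat, p_value = stats.f_oneway(small, medium, large)
print(f"One-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Simple pairwise t-tests as a stand-in for the LSD post-hoc comparison.
for name, group in [("small", small), ("large", large)]:
    t, p = stats.ttest_ind(medium, group)
    print(f"medium vs {name}: t = {t:.2f}, p = {p:.4f}")
```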
Procedia PDF Downloads 188