Search results for: arrival time prediction
18637 Controlling the Process of a Chicken Dressing Plant through Statistical Process Control
Authors: Jasper Kevin C. Dionisio, Denise Mae M. Unsay
Abstract:
In a manufacturing firm, controlling the process ensures that optimum efficiency, productivity, and quality in an organization are achieved. An operation with no standardized procedure yields poor productivity, inefficiency, and an out-of-control process. This study focuses on controlling the small intestine processing of a chicken dressing plant through the use of Statistical Process Control (SPC). Since the operation does not employ a standard procedure and does not have an established standard time, the process, assessed from the observed times of the overall small intestine processing operation using an X-Bar R control chart, is found to be out of control. To solve this problem, the researchers conducted a motion and time study aimed at establishing a standard procedure for the operation. The normal operator was selected using the Westinghouse rating system. Instead of utilizing only the traditional motion and time study, the researchers used the X-Bar R control chart to determine the process average that is used for establishing the standard time. The observed times of the normal operator were recorded and plotted on the X-Bar R control chart. Out-of-control points due to assignable causes were removed, and the process average, that is, the average time in which the normal operator conducted the process, now in control and free from any outliers, was obtained. The process average was then used in determining the standard time of small intestine processing. As a recommendation, the researchers suggest implementing the established standard time in consonance with the standard procedure adopted from the normal operator. With that recommendation, the whole operation would gain a 45.54% increase in productivity.
Keywords: motion and time study, process controlling, statistical process control, X-Bar R Control chart
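As an illustration of the charting and timing steps described above, the sketch below is a minimal example (not the authors' code): it computes X-Bar R control limits from observed cycle times and derives a standard time from the in-control process average. The subgroup size constants, rating factor, and allowance are assumed values.

```python
# Minimal X-Bar R chart and standard time sketch (illustrative assumptions only).
import numpy as np

def xbar_r_limits(samples, A2=0.577, D3=0.0, D4=2.114):
    """samples: 2-D array, one row per subgroup (constants here are for subgroups of 5)."""
    samples = np.asarray(samples, dtype=float)
    xbar = samples.mean(axis=1)                      # subgroup means
    r = samples.max(axis=1) - samples.min(axis=1)    # subgroup ranges
    xbarbar, rbar = xbar.mean(), r.mean()
    limits = {
        "xbar": (xbarbar - A2 * rbar, xbarbar + A2 * rbar),
        "r": (D3 * rbar, D4 * rbar),
    }
    return xbar, r, limits

# Hypothetical observed times (minutes) for the normal operator, 5 readings per subgroup
obs = np.random.default_rng(1).normal(2.0, 0.1, size=(10, 5))
xbar, r, limits = xbar_r_limits(obs)
process_average = xbar.mean()              # in-control process average
normal_time = process_average * 1.00       # Westinghouse rating factor (assumed 1.00 for a normal-rated operator)
standard_time = normal_time * (1 + 0.15)   # 15% allowance (assumed)
print(limits, round(standard_time, 3))
```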
Procedia PDF Downloads 217
18636 Multi-Channel Charge-Coupled Device Sensors Real-Time Cell Growth Monitor System
Authors: Han-Wei Shih, Yao-Nan Wang, Ko-Tung Chang, Lung-Ming Fu
Abstract:
A multi-channel real-time cell growth monitoring and evaluation system using charge-coupled device (CCD) sensors with 40X lenses, integrating an NI LabVIEW image processing program, is proposed and demonstrated. The LED light source of the monitoring system is controlled by an 8051 microprocessor integrated with NI LabVIEW software. In this study, the growth rate and morphology of RAW264.7 cells at the same concentration were demonstrated under four different culture conditions (DMEM, LPS, LPS+G1, LPS+G2). The real-time cell growth image was captured and analyzed by NI Vision Assistant every 10 minutes in the incubator. The image binarization technique was applied for calculating cell doubling time and cell division index. The doubling times and division indices of the four groups (DMEM, LPS, LPS+G1, LPS+G2) are 12.3 hr, 10.8 hr, 14.0 hr, and 15.2 hr and 74.20%, 78.63%, 69.53%, and 66.49%, respectively. The image magnification of the multi-channel CCD real-time cell monitoring system is about 100X to 200X, which is comparable to a traditional microscope.
Keywords: charge-coupled device (CCD), RAW264.7, doubling time, division index
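The doubling time reported above can be estimated from successive binarized cell counts under an exponential-growth assumption. The sketch below is a minimal illustration (not the authors' LabVIEW pipeline); the counts and elapsed time are hypothetical.

```python
# Doubling time from two cell counts, assuming exponential growth (illustrative values).
import math

def doubling_time(n0, n1, elapsed_hours):
    """Exponential-growth doubling time between two binarized cell counts."""
    return elapsed_hours * math.log(2) / math.log(n1 / n0)

# e.g. 150 cells grow to 420 cells over 24 h of incubation
print(round(doubling_time(150, 420, 24.0), 1), "h")
```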
Procedia PDF Downloads 358
18635 Thermo-Mechanical Analysis of Composite Structures Utilizing a Beam Finite Element Based on Global-Local Superposition
Authors: Andre S. de Lima, Alfredo R. de Faria, Jose J. R. Faria
Abstract:
Accurate prediction of thermal stresses is particularly important for laminated composite structures, as large temperature changes may occur during fabrication and field application. The transverse normal deformation plays an important role in the prediction of such stresses, especially for problems involving thick laminated plates subjected to uniform temperature loads. Bearing this in mind, the present study aims to investigate the thermo-mechanical behavior of laminated composite structures using a new beam element based on global-local superposition, accounting for through-the-thickness effects. The element formulation is based on a global-local superposition in the thickness direction, utilizing a cubic global displacement field in combination with a linear layerwise local displacement distribution, which assures zig-zag behavior of the stresses and displacements. By enforcing interlaminar stress (normal and shear) and displacement continuity, as well as free conditions at the upper and lower surfaces, the number of degrees of freedom in the model is maintained independently of the number of layers. Moreover, the proposed formulation allows for the determination of transverse shear and normal stresses directly from the constitutive equations, without the need for post-processing. Numerical results obtained with the beam element were compared to analytical solutions, as well as results obtained with commercial finite elements, rendering satisfactory results for a range of length-to-thickness ratios. The results confirm the need for an element with through-the-thickness capabilities and indicate that the present formulation is a promising alternative for such analysis.
Keywords: composite beam element, global-local superposition, laminated composite structures, thermal stresses
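A schematic of the axial displacement field implied by such a global-local superposition, written generically (the exact interpolation and layerwise functions of the element are not given in the abstract and are assumed here for illustration only), is:

```latex
u^{(k)}(x,z) \;=\;
\underbrace{u_0(x) + z\,u_1(x) + z^{2}u_2(x) + z^{3}u_3(x)}_{\text{global field, cubic in } z}
\;+\;
\underbrace{\xi_k\,\hat{u}^{(k)}(x)}_{\text{local field, linear and layerwise}}
```

Here ξ_k is a nondimensional thickness coordinate of layer k; interlaminar continuity and the traction-free surface conditions eliminate the layerwise unknowns, which is why the total number of degrees of freedom stays independent of the number of layers.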
Procedia PDF Downloads 154
18634 Physics-Based Earthquake Source Models for Seismic Engineering: Analysis and Validation for Dip-Slip Faults
Authors: Percy Galvez, Anatoly Petukhin, Paul Somerville, Ken Miyakoshi, Kojiro Irikura, Daniel Peter
Abstract:
Physics-based dynamic rupture modelling is necessary for estimating parameters such as rupture velocity and slip rate function that are important for ground motion simulation but poorly resolved by observations, e.g. by seismic source inversion. In order to generate a large number of physically self-consistent rupture models, whose rupture process is consistent with the spatio-temporal heterogeneity of past earthquakes, we use multicycle simulations under the heterogeneous rate-and-state (RS) friction law for a 45° dip-slip fault. We performed a parametrization study by fully dynamic rupture modeling, and then a set of spontaneous source models was generated in a large magnitude range (Mw > 7.0). In order to validate the rupture models, we compare the scaling relations of the modeled rupture area S, the average slip Dave, and the slip asperity area Sa versus seismic moment Mo with similar scaling relations from source inversions. Ground motions were also computed from our models. Their peak ground velocities (PGV) agree well with the GMPE values. We obtained good agreement of the permanent surface offset values with empirical relations. From the heterogeneous rupture models, we analyzed parameters which are critical for ground motion simulations, i.e. distributions of slip, slip rate, rupture initiation points, rupture velocities, and source time functions. We studied cross-correlations between them and with the friction weakening distance Dc value, the only initial heterogeneity parameter in our modeling. The main findings are: (1) high slip-rate areas coincide with or are located on an outer edge of the large slip areas, (2) ruptures have a tendency to initiate in small Dc areas, and (3) high slip-rate areas correlate with areas of small Dc, large rupture velocity, and short rise time.
Keywords: earthquake dynamics, strong ground motion prediction, seismic engineering, source characterization
Procedia PDF Downloads 144
18633 A CMOS Capacitor Array for ESPAR with Fast Switching Time
Authors: Jin-Sup Kim, Se-Hwan Choi, Jae-Young Lee
Abstract:
An 8-bit CMOS capacitor array is designed for use in an electrically steerable passive array radiator (ESPAR). The proposed capacitor array shows fast response times in both rising and falling characteristics. Compared to other works in silicon-on-insulator (SOI) or silicon-on-sapphire (SOS) technologies, it shows a comparable tuning range and switching time with low power consumption. Implemented in a 0.18 µm CMOS process, the capacitor array features a tuning range of 1.5 to 12.9 pF at 2.4 GHz. Including the 2X4 decoder for the control interface, the chip size is 350 µm X 145 µm. Current consumption is about 80 nA at 1.8 V operation.
Keywords: CMOS capacitor array, ESPAR, SOI, SOS, switching time
Procedia PDF Downloads 590
18632 Gene Prediction in DNA Sequences Using an Ensemble Algorithm Based on Goertzel Algorithm and Anti-Notch Filter
Authors: Hamidreza Saberkari, Mousa Shamsi, Hossein Ahmadi, Saeed Vaali, MohammadHossein Sedaaghi
Abstract:
In recent years, using signal processing tools for accurate identification of protein coding regions has become a challenge in bioinformatics. Most genomic signal processing methods are based on the period-3 characteristic of nucleotides in DNA strands, and consequently, spectral analysis is applied to the numerical sequences of DNA to find the location of periodical components. In this paper, a novel ensemble algorithm for gene selection in DNA sequences is presented, which is based on the combination of the Goertzel algorithm and an anti-notch filter (ANF). The proposed algorithm has many advantages when compared to other conventional methods. Firstly, it identifies the protein coding regions more accurately because the Goertzel algorithm is tuned to the desired frequency. Secondly, a faster detection time is achieved. The proposed algorithm is applied to several genes, including genes available in the databases BG570 and HMR195, and the results are compared to other methods based on nucleotide-level evaluation criteria. Implementation results show the excellent performance of the proposed algorithm in identifying protein coding regions, specifically in the identification of small-scale gene areas.
Keywords: protein coding regions, period-3, anti-notch filter, Goertzel algorithm
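For readers unfamiliar with the Goertzel algorithm, the sketch below is a minimal illustration (not the authors' implementation) of evaluating the period-3 spectral power (normalized frequency 1/3) of the four binary indicator sequences of a DNA window; the indicator mapping and example sequence are assumptions.

```python
# Goertzel power at the period-3 frequency of a DNA window (illustrative sketch).
import math

def goertzel_power(x, freq):
    """Spectral power of sequence x at normalized frequency freq (cycles/sample)."""
    coeff = 2.0 * math.cos(2.0 * math.pi * freq)
    s_prev, s_prev2 = 0.0, 0.0
    for sample in x:
        s = sample + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev**2 + s_prev2**2 - coeff * s_prev * s_prev2

def period3_score(dna_window):
    """Sum of period-3 powers of the four base indicator sequences."""
    return sum(
        goertzel_power([1.0 if b == base else 0.0 for b in dna_window], 1.0 / 3.0)
        for base in "ACGT"
    )

print(period3_score("ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG"))  # hypothetical window
```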
Procedia PDF Downloads 387
18631 Real-Time Detection of Space Manipulator Self-Collision
Authors: Zhang Xiaodong, Tang Zixin, Liu Xin
Abstract:
In order to avoid self-collision of space manipulators during the operation process, a real-time detection method is proposed in this paper. Each manipulator link is fitted into a cylindrical enveloping surface, and the algorithm for detecting collision between cylinders is then analyzed. Self-collision between manipulator links can be detected in real time during the operation process by using this algorithm. To ensure the safety of the operation, a safety threshold is designed. The simulation and experiment results verify the effectiveness of the proposed algorithm for a 7-DOF space manipulator.
Keywords: space manipulator, collision detection, self-collision, real-time collision detection
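The core geometric test implied by a cylinder-envelope approach is the minimum distance between two cylinder axes compared with the sum of the radii plus a safety threshold. The sketch below is a minimal illustration (not the authors' code); it assumes links of nonzero length, and the radii and threshold values are hypothetical.

```python
# Cylinder-vs-cylinder self-collision test via segment-segment distance (illustrative sketch).
import numpy as np

def segment_distance(p1, q1, p2, q2):
    """Minimum distance between segments [p1,q1] and [p2,q2] (the cylinder axes)."""
    d1, d2, r = q1 - p1, q2 - p2, p1 - p2
    a, e, f = d1 @ d1, d2 @ d2, d2 @ r
    b, c = d1 @ d2, d1 @ r
    denom = a * e - b * b
    s = np.clip((b * f - c * e) / denom, 0.0, 1.0) if denom > 1e-12 else 0.0
    t = (b * s + f) / e
    if t < 0.0:
        t, s = 0.0, np.clip(-c / a, 0.0, 1.0)
    elif t > 1.0:
        t, s = 1.0, np.clip((b - c) / a, 0.0, 1.0)
    return np.linalg.norm((p1 + s * d1) - (p2 + t * d2))

def links_collide(p1, q1, r1, p2, q2, r2, safety=0.05):
    """Collision flag for two cylinder-enveloped links with radii r1, r2 (m)."""
    return segment_distance(p1, q1, p2, q2) <= r1 + r2 + safety

# Hypothetical link axes: distance between axes is 0.3 m, threshold is 0.35 m -> True
p1, q1 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
p2, q2 = np.array([0.5, 0.3, 0.0]), np.array([0.5, 0.3, 1.0])
print(links_collide(p1, q1, 0.15, p2, q2, 0.15))
```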
Procedia PDF Downloads 469
18630 The Evolution of Moral Politics: Analysis on Moral Foundations of Korean Parties
Authors: Changdong Oh
Abstract:
With the arrival of post-industrial society, social scientists have been giving attention to the question of which factors shape the cleavages of political parties. In particular, there is a heated controversy over whether and how social and cultural values influence the identities of parties and voting behavior. Drawing from Moral Foundations Theory (MFT), which approached similar issues by considering the effect of five moral foundations on people's political decision-making, this study investigates the role of moral rhetoric in the evolution of Korean political parties. The researcher collected official announcements released by the two major parties (Democratic Party of Korea, Saenuri Party) from 2007 to 2016 and analyzed the data by using the Word2Vec algorithm and the Moral Foundations Dictionary. The five moral decision modules of MFT, composed of care and fairness (individualistic morality) and loyalty, authority, and sanctity (group-based, Durkheimian morality), can be represented in vector spaces constructed from the party announcement data. By comparing the party vectors and the five morality vectors, the researcher can see how the political parties have actively used each of the five moral foundations to express themselves and the opposition. Results show that the conservative party tends to actively draw on collective morality such as loyalty, authority, and purity to differentiate itself. Notably, such a moral differentiation strategy is prevalent when it criticizes the opposition party. In contrast, the liberal party tends to be concerned with individualistic morality such as fairness. This result indicates that a moral cleavage does exist between parties in South Korea. Furthermore, the individualistic moral gaps between the two political parties have eased over time, which seems to be due to the conservative party's discussion of economic democratization that emerged after 2012, but the community-related moral gaps have widened. These results imply that past political cleavages related to economic interests are diminishing and being replaced by cultural and social values associated with communitarian morality. However, since the conservative party's differentiation strategy is largely related to negative campaigns, it is doubtful whether such moral differentiation among political parties can contribute to the long-term party identification of voters; thus, further research is needed to determine whether it is sustainable. Despite the limitations, this study makes it possible to track and identify the moral changes of the party system through automated text analysis. More generally, this study could contribute to the analysis of various texts associated with moral foundations and to finding distributed representations of moral and ethical values.
Keywords: moral foundations theory, moral politics, party system, Word2Vec
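The comparison between party vectors and morality vectors described above can be pictured as a cosine-similarity computation in a trained Word2Vec space. The sketch below is a minimal illustration (not the study's pipeline); the model file name, party terms, and foundation seed words are assumptions.

```python
# Cosine similarity between a party vector and moral-foundation vectors (illustrative sketch).
import numpy as np
from gensim.models import Word2Vec

def mean_vector(model, words):
    """Average the embeddings of the words that exist in the vocabulary."""
    vecs = [model.wv[w] for w in words if w in model.wv]
    return np.mean(vecs, axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

model = Word2Vec.load("party_announcements.w2v")                   # hypothetical trained model
party_vec = mean_vector(model, ["welfare", "security", "reform"])   # hypothetical party terms
care_vec = mean_vector(model, ["care", "protect", "compassion"])    # hypothetical care seed words
loyalty_vec = mean_vector(model, ["loyal", "patriot", "solidarity"])  # hypothetical loyalty seed words
print(cosine(party_vec, care_vec), cosine(party_vec, loyalty_vec))
```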
Procedia PDF Downloads 362
18629 Determinants of Consultation Time at a Family Medicine Center
Authors: Ali Alshahrani, Adel Almaai, Saad Garni
Abstract:
Aim of the study: To explore the duration and determinants of consultation time at a family medicine center. Methodology: This study was conducted at the Family Medicine Center in Ahad Rafidah City, in the southwestern part of Saudi Arabia. It was conducted on the working days of March 2013. Trained nurses helped in filling in the checklist. A total of 459 patients were included. A checklist was designed and used in this study. It included the patient's age, sex, diagnosis, type of visit, referral and its type, psychological problems, and additional work-up, in addition to the number of daily bookings, the physician's experience, and the consultation time. Results: More than half of the patients (58.39%) had less than 10 minutes' consultation (Mean±SD: 12.73±9.22 minutes). Patients treated by physicians with the shortest experience (i.e., ≤5 years) had the longest consultation time, while those treated by physicians with the longest experience (i.e., >10 years) had the shortest consultation time (13.94±10.99 versus 10.79±7.28, p=0.011). Regarding patients' diagnoses, those with chronic diseases had the longest consultation time (p<0.001). Patients who did not need referral had a significantly shorter consultation time compared with those who had routine or urgent referral (11.91±8.42, 14.60±9.03, and 22.42±14.81 minutes, respectively, p<0.001). Patients with associated psychological problems needed significantly longer consultation time than those without associated psychological problems (20.06±13.32 versus 12.45±8.93, p<0.001). Conclusions: The average length of consultation time at Ahad Rafidah Family Medicine Center is approximately 13 minutes. Less-experienced physicians tend to spend longer consultation times with patients. Referred patients, those with psychological problems, and those with chronic diseases tend to have longer consultation times. Recommendations: Family physicians should be encouraged to keep to their optimal consultation time. Booking an adequate number of patients per shift would allow the family physician to provide enough consultation time for each patient.
Keywords: consultation, quality, medicine, clinics
Procedia PDF Downloads 287
18628 Universe at Zero Second and the Creation Process of the First Particle from the Absolute Void
Authors: Shivan Sirdy
Abstract:
In this study, we discuss the properties of absolute void space, or the universe at zero seconds, and how these properties play a vital role in creating a mechanism by which the very first particle gets created simultaneously everywhere. We find the limiting volume that, when reached by the absolute void, leads to the collapse that creates the first particle. This discussion follows the elementary dimensions theory study that was peer-reviewed at the end of 2020: everything in the universe is made from four elementary dimensions; these dimensions are the three spatial dimensions (X, Y, and Z) and the Void resistance as the factor of change among the four. Time itself was not considered as the fourth dimension. Rather, time corresponds to a factor of change, and during the research it was found that the Void resistance is the factor of change in absolute Void space, where time is a hypothetical concept that represents changes during certain events compared to a constant change-rate event. Therefore, time does exist, but as a factor of change, in the same way as the Void resistance: Time = factor of change = Void resistance.
Keywords: elementary dimensions, absolute void, time alternative, early universe, universe at zero second, Void resistance, Hydrogen atom, Hadron field, Lepton field
Procedia PDF Downloads 202
18627 Capacity Optimization in Cooperative Cognitive Radio Networks
Authors: Mahdi Pirmoradian, Olayinka Adigun, Christos Politis
Abstract:
Cooperative spectrum sensing is a crucial challenge in cognitive radio networks. Cooperative sensing can increase the reliability of spectrum hole detection, optimize sensing time, and reduce delay in cooperative networks. In this paper, an efficient central capacity optimization algorithm is proposed to minimize cooperative sensing time in a homogeneous sensor network using the OR decision rule, subject to the detection and false alarm probability constraints. The evaluation results reveal a significant improvement in the sensing time and normalized capacity of the cognitive sensors.
Keywords: cooperative networks, normalized capacity, sensing time
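Under the OR decision rule mentioned above, the cooperative detection and false alarm probabilities follow directly from the per-sensor values, which is what allows a trade-off between the number of cooperating sensors and the sensing time. The sketch below is a minimal illustration (not the paper's algorithm); the per-sensor probabilities and constraint targets are assumptions.

```python
# Cooperative sensing under the OR fusion rule (illustrative values).
def or_rule(p_single, n):
    """Cooperative probability under the OR decision rule for n sensors."""
    return 1.0 - (1.0 - p_single) ** n

def min_sensors(pd, pf, pd_target=0.9, pf_limit=0.1, n_max=50):
    """Smallest number of sensors meeting both detection and false alarm constraints."""
    for n in range(1, n_max + 1):
        if or_rule(pd, n) >= pd_target and or_rule(pf, n) <= pf_limit:
            return n
    return None

print(min_sensors(pd=0.6, pf=0.01))  # hypothetical per-sensor probabilities -> 3
```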
Procedia PDF Downloads 633
18626 Trauma Scores and Outcome Prediction After Chest Trauma
Authors: Mohamed Abo El Nasr, Mohamed Shoeib, Abdelhamid Abdelkhalik, Amro Serag
Abstract:
Background: Early assessment of the severity of chest trauma, either blunt or penetrating, is of critical importance in the prediction of patient outcome. Different trauma scoring systems are widely available and are based on anatomical or physiological parameters to predict patient morbidity or mortality. Up till now, there is no ideal, universally accepted trauma score that could be applied in all trauma centers and is suitable for assessment of the severity of chest trauma patients. Aim: Our aim was to compare various trauma scoring systems regarding their predictability of morbidity and mortality in chest trauma patients. Patients and Methods: This was a prospective study including 400 patients with chest trauma who were managed at Tanta University Emergency Hospital, Egypt, during a period of 2 years (March 2014 until March 2016). The patients were divided into 2 groups according to the mode of trauma: blunt or penetrating. The collected data included age, sex, hemodynamic status on admission, intrathoracic injuries, and associated extra-thoracic injuries. The patients' outcomes, including mortality, need for thoracotomy, need for ICU admission, need for mechanical ventilation, length of hospital stay, and development of acute respiratory distress syndrome, were also recorded. The relevant data were used to calculate the following trauma scores: 1. anatomical scores, including the abbreviated injury scale (AIS), injury severity score (ISS), new injury severity score (NISS), and chest wall injury scale (CWIS); 2. physiological scores, including the revised trauma score (RTS) and acute physiology and chronic health evaluation II (APACHE II) score; 3. a combined score, the trauma and injury severity score (TRISS); and 4. a chest-specific score, the thoracic trauma severity score (TTSS). All these scores were analyzed statistically to detect their sensitivity and specificity and were compared regarding their predictive power of mortality and morbidity in blunt and penetrating chest trauma patients. Results: The incidence of mortality was 3.75% (15/400). Eleven patients (11/230) died in the blunt chest trauma group, while four patients (4/170) died in the penetrating trauma group. The mortality rate increased more than threefold to reach 13% (13/100) in patients with severe chest trauma (ISS > 16). The physiological scores APACHE II and RTS had the highest predictive value for mortality in both blunt and penetrating chest injuries. The physiological score APACHE II, followed by the combined score TRISS, was more predictive of intensive care admission in penetrating injuries, while RTS was more predictive in blunt trauma. Also, RTS had a higher predictive value for the need for mechanical ventilation, followed by the combined score TRISS. The APACHE II score was more predictive of the need for thoracotomy in penetrating injuries, and the chest-specific score TTSS was higher in blunt injuries. The anatomical score ISS and the TTSS score were more predictive of prolonged hospital stay in penetrating and blunt injuries, respectively. Conclusion: Trauma scores including physiological parameters have a higher predictive power for mortality in both blunt and penetrating chest trauma. They are more suitable for assessment of injury severity and prediction of patient outcome.
Keywords: chest trauma, trauma scores, blunt injuries, penetrating injuries
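As an illustration of the anatomical scoring referenced above, the sketch below computes the Injury Severity Score from the worst AIS grade per body region. It is a minimal example, not the study's scoring software, and the example grades are hypothetical.

```python
# Injury Severity Score from per-region AIS grades (illustrative sketch).
def injury_severity_score(region_ais):
    """region_ais: dict mapping body region -> worst AIS grade (1-6) in that region."""
    if any(a == 6 for a in region_ais.values()):
        return 75  # any AIS 6 injury is assigned the maximum ISS by convention
    top3 = sorted(region_ais.values(), reverse=True)[:3]  # three most severe regions
    return sum(a * a for a in top3)

# e.g. chest AIS 4, abdomen AIS 3, extremities AIS 2 -> 16 + 9 + 4 = 29
print(injury_severity_score({"chest": 4, "abdomen": 3, "extremities": 2}))
```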
Procedia PDF Downloads 421
18625 Catchment Yield Prediction in an Ungauged Basin Using PyTOPKAPI
Authors: B. S. Fatoyinbo, D. Stretch, O. T. Amoo, D. Allopi
Abstract:
This study extends the use of the Drainage Area Regionalization (DAR) method in generating synthetic data and calibrating PyTOPKAPI stream yield for an ungauged basin at a daily time scale. The generation of runoff in determining a river yield is subject to various topographic and spatial meteorological variables, which together form the Catchment Characteristics Model (CCM). Many of the conventional CCM models adopted in Africa have been challenged by a paucity of adequate, relevant, and accurate data to parameterize and validate them. The purpose of generating synthetic flow is to test a hydrological model that will not suffer from the impact of very low flows or very high flows, thus allowing a check of whether the model is structurally sound or not. The employed physically-based, watershed-scale hydrologic model (PyTOPKAPI) was parameterized with GIS pre-processing parameters and remote sensing hydro-meteorological variables. The validation with the mean annual runoff ratio shows a decent graphical agreement between observed and simulated discharge. The Nash-Sutcliffe efficiency and coefficient of determination (R²) values of 0.704 and 0.739 indicate strong model efficiency. Given the current impact of climate variability, water planners now have a tool for flow quantification and sustainable planning purposes.
Keywords: catchment characteristics model, GIS, synthetic data, ungauged basin
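The two goodness-of-fit measures quoted above can be computed from paired observed and simulated daily flows, as in the sketch below (a minimal illustration, not part of PyTOPKAPI; the example series are hypothetical).

```python
# Nash-Sutcliffe efficiency and R^2 for observed vs simulated discharge (illustrative values).
import numpy as np

def nash_sutcliffe(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def r_squared(obs, sim):
    return float(np.corrcoef(obs, sim)[0, 1] ** 2)

obs = [12.0, 15.5, 9.8, 20.1, 17.3]   # hypothetical observed flows (m^3/s)
sim = [11.4, 16.2, 10.5, 19.0, 16.8]  # hypothetical simulated flows (m^3/s)
print(nash_sutcliffe(obs, sim), r_squared(obs, sim))
```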
Procedia PDF Downloads 327
18624 Delay-Dependent Passivity Analysis for Neural Networks with Time-Varying Delays
Authors: H. Y. Jung, Jing Wang, J. H. Park, Hao Shen
Abstract:
This brief addresses the passivity problem for neural networks with time-varying delays. The aim is to establish the passivity condition of the considered neural networks.
Keywords: neural networks, passivity analysis, time-varying delays, linear matrix inequality
Procedia PDF Downloads 570
18623 Forecast Financial Bubbles: Multidimensional Phenomenon
Authors: Zouari Ezzeddine, Ghraieb Ikram
Abstract:
Drawing on results from the academic literature that point out the limitations of previous studies, this article shows why the prediction of financial bubbles is multidimensional. A new framework for modeling the prediction of financial bubbles is presented, linking a set of variables across several dimensions that dictate its multidimensional character. It takes into account the preferences of financial actors. A multicriteria anticipation of the appearance of bubbles in international financial markets helps to fight against a possible crisis.
Keywords: classical measures, predictions, financial bubbles, multidimensional, artificial neural networks
Procedia PDF Downloads 577
18622 Compression Index Estimation by Water Content and Liquid Limit and Void Ratio Using Statistics Method
Authors: Lizhou Chen, Abdelhamid Belgaid, Assem Elsayed, Xiaoming Yang
Abstract:
The compression index is essential in foundation settlement calculations. The traditional method for determining the compression index is the consolidation test, which is expensive and time-consuming. Many researchers have used regression methods to develop empirical equations for predicting the compression index from soil properties. Based on a large number of compression index data collected from consolidation tests, the accuracy of some popular empirical equations was assessed. It was found that the primary compression index is significantly overestimated by some equations while it is underestimated by others. Sensitivity analyses of soil parameters, including water content, liquid limit, and void ratio, were performed. The results indicate that the compression index obtained from the void ratio is the most accurate. The ANOVA (analysis of variance) demonstrates that equations with multiple soil parameters cannot provide better predictions than equations with a single soil parameter. In other words, it is not necessary to develop relationships between the compression index and multiple soil parameters. Meanwhile, it was noted that the secondary compression index is approximately 0.7-5.0% of the primary compression index, with an average of 2.0%. In the end, prediction equations developed using a power regression technique are provided, which give more accurate predictions than those from existing equations.
Keywords: compression index, clay, settlement, consolidation, secondary compression index, soil parameter
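A power regression of the kind mentioned above can be fitted in log-log space, as in the sketch below (a minimal illustration; the data points and the resulting coefficients are hypothetical, not the paper's fitted values).

```python
# Power-law regression Cc = a * e0**b between compression index and void ratio (illustrative data).
import numpy as np

e0 = np.array([0.8, 1.0, 1.3, 1.7, 2.1])        # hypothetical initial void ratios
cc = np.array([0.18, 0.25, 0.36, 0.52, 0.68])   # hypothetical compression indices

b, log_a = np.polyfit(np.log(e0), np.log(cc), 1)  # linear fit in log-log space
a = np.exp(log_a)
print(f"Cc ~ {a:.3f} * e0^{b:.3f}")

predicted = a * e0 ** b  # predictions from the fitted power equation
```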
Procedia PDF Downloads 163
18621 Biomechanical Prediction of Veins and Soft Tissues beneath Compression Stockings Using Fluid-Solid Interaction Model
Authors: Chongyang Ye, Rong Liu
Abstract:
Elastic compression stockings (ECSs) have been widely applied in the prophylaxis and treatment of chronic venous insufficiency of the lower extremities. The medical function of an ECS is to improve venous return and increase the muscular pumping action to facilitate blood circulation, which is largely determined by the complex interaction between the ECS and lower limb tissues. Understanding the mechanical transmission of the ECS along the skin surface, deeper tissues, and vascular system is essential to assess the effectiveness of ECSs. In this study, a three-dimensional (3D) finite element (FE) model of the leg-ECS system, integrated with a 3D fluid-solid interaction (FSI) model of the leg-vein system, was constructed to analyze the biomechanical properties of veins and soft tissues under different ECS compressions. The magnetic resonance imaging (MRI) of the human leg was divided into three regions, including soft tissues, bones (tibia and fibula), and veins (peroneal vein, great saphenous vein, and small saphenous vein). ECSs with pressures ranging from 15 to 26 mmHg (Classes I and II) were adopted in the developed FE-FSI model. The soft tissue was assumed to follow a Neo-Hookean hyperelastic model with fixed bones, and the ECSs were regarded as an orthotropic elastic shell. The interfacial pressure and stress transmission were simulated by the FE model, and the venous hemodynamic properties were simulated by the FSI model. The experimental validation indicated that the simulated interfacial pressure distributions were in accordance with the pressure measurement results. The developed model can be used to predict the interfacial pressure, stress transmission, and venous hemodynamics exerted by ECSs and to optimize the structure and material properties of ECS designs, thus improving the efficiency of compression therapy.
Keywords: elastic compression stockings, fluid-solid interaction, tissue and vein properties, prediction
Procedia PDF Downloads 112
18620 Analyzing Time Lag in Seismic Waves and Its Effects on Isolated Structures
Authors: Faizan Ahmad, Jenna Wong
Abstract:
The time lag between the peak values of horizontal and vertical seismic waves is a well-known phenomenon. Horizontal and vertical seismic waves, secondary and primary waves in nature respectively, travel through different layers of soil, and the travel time depends upon the medium of wave transmission. In seismic analysis, many standardized codes do not require the actual vertical acceleration to be part of the analysis procedure. Instead, a load factor addition for a particular site is used to capture strength demands in the case of vertical excitation. This study reviews the effects of vertical accelerations to analyze the behavior of a linear rubber-isolated structure under different time lag situations and frequency contents, by applying historical and simulated ground motions in SAP2000. The response of the structure is reviewed under multiple sets of ground motions, and trends based on time lag and frequency variations are drawn. The accuracy of these results is discussed and evaluated to provide reasoning for the use of real vertical excitations in seismic analysis procedures, especially for isolated structures.
Keywords: seismic analysis, vertical accelerations, time lag, isolated structures
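The time lag itself follows from the different propagation speeds of the primary and secondary waves: for a given travel distance the lag grows as the two arrival times diverge. The sketch below is a minimal illustration; the wave velocities and distances are assumed representative values, not taken from the paper.

```python
# Approximate S-minus-P arrival lag for a travel distance (illustrative velocities).
def ps_time_lag(distance_km, vp_kms=6.0, vs_kms=3.5):
    """Lag in seconds between S-wave (horizontal) and P-wave (vertical) arrivals."""
    return distance_km / vs_kms - distance_km / vp_kms

for d in (10.0, 30.0, 60.0):
    print(f"{d:5.1f} km -> lag ~ {ps_time_lag(d):.1f} s")
```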
Procedia PDF Downloads 336
18619 Computational Simulations on Stability of Model Predictive Control for Linear Discrete-Time Stochastic Systems
Authors: Tomoaki Hashimoto
Abstract:
Model predictive control is a kind of optimal feedback control in which control performance over a finite future is optimized with a performance index that has a moving initial time and a moving terminal time. This paper examines the stability of model predictive control for linear discrete-time systems with additive stochastic disturbances. A sufficient condition for the stability of the closed-loop system with model predictive control is derived by means of a linear matrix inequality. The objective of this paper is to show the results of computational simulations in order to verify the validity of the obtained stability condition.
Keywords: computational simulations, optimal control, predictive control, stochastic systems, discrete-time systems
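To make the receding-horizon idea concrete, the sketch below implements an unconstrained finite-horizon quadratic MPC for a linear discrete-time system with an additive disturbance. It is a minimal illustration, not the paper's formulation; the system matrices, weights, horizon, and disturbance level are assumptions.

```python
# Receding-horizon MPC for x+ = A x + B u + w: minimize a finite-horizon quadratic
# cost at every step and apply only the first input (illustrative sketch).
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q, R, N = np.eye(2), np.array([[0.1]]), 10

def mpc_input(x, horizon=N):
    n, m = A.shape[0], B.shape[1]
    # Prediction matrices: stacked states X = F x + G U over the horizon
    F = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(horizon)])
    G = np.zeros((horizon * n, horizon * m))
    for i in range(horizon):
        for j in range(i + 1):
            G[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    Qbar, Rbar = np.kron(np.eye(horizon), Q), np.kron(np.eye(horizon), R)
    H = G.T @ Qbar @ G + Rbar          # quadratic cost Hessian
    f = G.T @ Qbar @ F @ x             # linear cost term
    U = np.linalg.solve(H, -f)         # unconstrained minimizer
    return U[:m]                       # receding horizon: apply only the first input

rng = np.random.default_rng(0)
x = np.array([1.0, 0.0])
for _ in range(50):
    u = mpc_input(x)
    x = A @ x + B @ u + 0.01 * rng.standard_normal(2)  # additive stochastic disturbance
print(x)
```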
Procedia PDF Downloads 432
18618 Pulmonary Disease Identification Using Machine Learning and Deep Learning Techniques
Authors: Chandu Rathnayake, Isuri Anuradha
Abstract:
Early detection and accurate diagnosis of lung diseases play a crucial role in improving patient prognosis. However, conventional diagnostic methods heavily rely on subjective symptom assessments and medical imaging, often causing delays in diagnosis and treatment. To overcome this challenge, we propose a novel lung disease prediction system that integrates patient symptoms and X-ray images to provide a comprehensive and reliable diagnosis. In this project, we develop a mobile application specifically designed for detecting lung diseases. Our application leverages both patient symptoms and X-ray images to facilitate diagnosis. By combining these two sources of information, our application delivers a more accurate and comprehensive assessment of the patient's condition, minimizing the risk of misdiagnosis. Our primary aim is to create a user-friendly and accessible tool, which is particularly important given the current circumstances where many patients face limitations in visiting healthcare facilities. To achieve this, we employ several state-of-the-art algorithms. Firstly, the decision tree algorithm is utilized for efficient symptom-based classification. It analyzes patient symptoms and creates a tree-like model to predict the presence of specific lung diseases. Secondly, we employ the random forest algorithm, which enhances predictive power by aggregating multiple decision trees. This ensemble technique improves the accuracy and robustness of the diagnosis. Furthermore, we incorporate a deep learning model using a convolutional neural network (CNN) with the pre-trained ResNet50 model. CNNs are well-suited for image analysis and feature extraction. By training the CNN on a large dataset of X-ray images, it learns to identify patterns and features indicative of lung diseases. The ResNet50 architecture, known for its excellent performance in image recognition tasks, enhances the efficiency and accuracy of our deep learning model. By combining the outputs of the decision-tree-based algorithms and the deep learning model, our mobile application generates a comprehensive lung disease prediction. The application provides users with an intuitive interface to input their symptoms and upload X-ray images for analysis. The prediction generated by the system offers valuable insights into the likelihood of various lung diseases, enabling individuals to take appropriate actions and seek timely medical attention. Our proposed mobile application has significant potential to address the rising prevalence of lung diseases, particularly among young individuals with smoking addictions. By providing a quick and user-friendly approach to assessing lung health, our application empowers individuals to monitor their well-being conveniently. This solution also offers immense value in the context of limited access to healthcare facilities, enabling timely detection and intervention. In conclusion, our research presents a comprehensive lung disease prediction system that combines patient symptoms and X-ray images using advanced algorithms. By developing a mobile application, we provide an accessible tool for individuals to assess their lung health conveniently. This solution has the potential to make a significant impact on the early detection and management of lung diseases, benefiting both patients and healthcare providers.
Keywords: CNN, random forest, decision tree, machine learning, deep learning
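The symptom-based branch described above can be sketched as a random forest over binary symptom indicators, as below. This is a minimal illustration on synthetic data; the feature names, labels, and data are hypothetical and are not the project's dataset.

```python
# Random forest on binary symptom indicators (synthetic, illustrative data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
# Hypothetical symptoms: cough, fever, dyspnea, chest pain, fatigue, smoking history
X = rng.integers(0, 2, size=(500, 6))
y = ((X[:, 0] & X[:, 2]) | X[:, 5]).astype(int)  # synthetic "disease" labeling rule

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```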
Procedia PDF Downloads 73
18617 Life Course Events, Residential and Job Relocation and Commute Time in Australian Cities
Authors: Solmaz Jahed Shiran, Elizabeth Taylor, John Hearne
Abstract:
Over the past decade, a growing body of research, known as the mobility biography approach, has emerged that focuses on changes in travel behaviour over the life course of individuals. Mobility biographies suggest that changes in travel behaviour have a certain relation to important key events in life courses, such as residential relocation, workplace changes, marriage, and the birth of children. Taking this approach as the theoretical background, this study uses data from the Household, Income and Labor Dynamics Survey in Australia (HILDA) to model a set of life course events and their interaction with commute time. By analysing longitudinal data, it is possible to link different key events during the life course to changes in a person's travel behaviour. Changes in journey-to-work travel time are used as an indication of travel behaviour change in this study. Results of a linear regression model for change in commute time show a significant influence from socio-demographic factors like income and age, the previous home-to-work commute time, and the remoteness of the residence. Residential relocation and job change have significant influences on commute time. Other life events, such as the birth of a child, marriage, and divorce or separation, also have a strong impact on commute time change. Overall, the research confirms previous studies of links between life course events and travel behaviour.
Keywords: life course events, residential mobility, travel behaviour, commute time, job change
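A regression of the type reported above can be sketched as an ordinary least squares model of commute time change on life-course events and controls, as below. This is a minimal illustration on synthetic data; the variable names are assumptions, not actual HILDA field names.

```python
# OLS of commute-time change on life-course events and controls (synthetic, illustrative data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 300
df = pd.DataFrame({
    "commute_change": rng.normal(0, 10, n),   # change in journey-to-work time (min)
    "moved_home": rng.integers(0, 2, n),      # residential relocation indicator
    "changed_job": rng.integers(0, 2, n),     # workplace change indicator
    "child_born": rng.integers(0, 2, n),      # birth of a child indicator
    "income": rng.normal(60, 15, n),          # income ('000s, hypothetical)
    "prev_commute": rng.normal(30, 12, n),    # previous home-to-work time (min)
})
model = smf.ols(
    "commute_change ~ moved_home + changed_job + child_born + income + prev_commute",
    data=df,
).fit()
print(model.params)
```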
Procedia PDF Downloads 205
18616 The Difference of Menstrual Cycle Profile and Urinary Luteinizing Hormone Changes In Polycystic Ovary Syndrome And Healthy Women
Authors: Ning Li, Jiacheng Zhang, Zheng Yang, Sylvia Kang
Abstract:
Introduction: Polycystic ovary syndrome (PCOS) is a common condition in women of reproductive age. Women with PCOS may have infrequent or prolonged menstrual periods and excess male hormone (androgen) levels. Mira analyzes the cycle profiles and the luteinizing hormone (LH) changes in urine, which are closely related to the fertility level of healthy women and PCOS women. From the difference between the two groups, Mira helps to understand the physiological state of PCOS women and their hormonal changes during the menstrual cycle. Methods: In this study, data from 1496 cycles and information from 342 women belonging to two groups (181 PCOS and 161 Healthy) were collected and analyzed. Women tested their luteinizing hormone (LH) in urine daily with the Mira fertility test wand and Mira analyzer, from the day after menstruation to the starting day of the next menstruation. All the collected data met Mira's user agreement, and user identification was removed. The cycle length, LH peak, and other cycle information of the PCOS group were compared with the Healthy group. Results: The average cycle length of PCOS women is 41 days and of Healthy women is 33 days. 91.4% of cycle lengths are within 40 days for the Healthy group, while this decreases to 71.9% for the PCOS group. This means PCOS women have longer menstrual cycles and more variation within the cycle. With more variation, ovulation prediction becomes more difficult for the PCOS group. The deviation between the LH surge day and the predicted ovulation day, calculated as the starting day of the next menstruation minus 14 days, is greater in the PCOS group compared with the Healthy group. Also, 46.96% of PCOS women have an irregular cycle, while only 19.25% of healthy women show an irregular cycle. Conclusion: PCOS women have longer menstrual cycles and more variation during the menstrual cycles. The traditional ovulation prediction is not suitable for PCOS women.
Keywords: menstrual cycle, PCOS, urinary luteinizing hormone, Mira
Procedia PDF Downloads 180
18615 Reactivity Study on South African Calcium Based Material Using a pH-Stat and Citric Acid: A Statistical Approach
Authors: Hilary Rutto, Mbali Chiliza, Tumisang Seodigeng
Abstract:
The study of the reactivity of calcined calcium-based material is very important in the dry flue gas desulphurisation (FGD) process, so as to produce an absorbent with high sulphur dioxide capture capacity during the hydration process. The effects of calcining temperature and time on the reactivity of calcined limestone material were investigated. In this study, the reactivity was measured using a pH-stat apparatus, and the result was confirmed by performing a citric acid reactivity test. The reactivity was calculated using the shrinking core model. Based on the experiments, a mathematical model is developed to correlate the effect of time and temperature with the reactivity of the absorbent. The calcination process variables were temperature (700-1000°C) and time (1-6 hrs). It was found that reactivity increases with an increase in time and temperature.
Keywords: reactivity, citric acid, calcination, time
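For reference, a commonly used form of the shrinking core model, written here assuming the surface chemical reaction is rate-controlling (the abstract does not state which controlling step the authors adopted), relates particle conversion X to reaction time t as:

```latex
\frac{t}{\tau} = 1 - \left(1 - X\right)^{1/3},
\qquad
\tau = \frac{\rho_B\,R_0}{b\,k_s\,C_A}
```

where τ is the time for complete conversion, R_0 the initial particle radius, ρ_B the molar density of the solid, k_s the surface rate constant, C_A the bulk reactant concentration, and b a stoichiometric coefficient.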
Procedia PDF Downloads 220
18614 Study of Unsteady Behaviour of Dynamic Shock Systems in Supersonic Engine Intakes
Authors: Siddharth Ahuja, T. M. Muruganandam
Abstract:
An analytical investigation is performed to study the unsteady response of a one-dimensional, non-linear dynamic shock system to external downstream pressure perturbations in a supersonic flow in a varying-area duct. For a given pressure ratio across a wind tunnel, the normal shock's location can be computed as per one-dimensional steady gas dynamics. Similarly, for some other pressure ratio, the location of the normal shock will change accordingly, again computed using one-dimensional gas dynamics. This investigation focuses on the short time interval between the first steady shock location and the new steady shock location (corresponding to different pressure ratios). In essence, this study aims to shed light on the motion of the shock from one steady location to another. Further, this study aims to create the foundation of the field of unsteady gas dynamics, enabling further insight in future research work. Corresponding to the new pressure ratio, a pressure pulse generated at the exit of the tunnel travels and perturbs the shock from its original position, setting it into motion. During such activity, numerous other physical phenomena also happen at the same time. However, three broad phenomena have been focused on in this study: traversal of a wave, fluid element interactions, and wave interactions. These three phenomena create, alter, and eliminate numerous waves under different conditions. The waves created by these phenomena eventually interact with the shock and set it into motion. Numerous such interactions with the shock will slowly make it settle into its final position, corresponding to the new pressure ratio across the duct, as estimated by one-dimensional gas dynamics. This analysis will be extremely helpful in the prediction of inlet 'unstart' of the flow in a supersonic engine intake. Its dependence on the incoming flow Mach number, incoming flow pressure, and external perturbation pressure is also studied to help design more efficient supersonic intakes for engines like ramjets and scramjets.
Keywords: analytical investigation, compression and expansion waves, fluid element interactions, shock trajectory, supersonic flow, unsteady gas dynamics, varying area duct, wave interactions
Procedia PDF Downloads 217
18613 Large Time Asymptotic Behavior to Solutions of a Forced Burgers Equation
Authors: Satyanarayana Engu, Ahmed Mohd, V. Murugan
Abstract:
We study the large time asymptotics of solutions to the Cauchy problem for a forced Burgers equation (FBE) with initial data that are continuous and summable on R. To this end, we first derive explicit solutions of the FBE in terms of Hermite polynomials, assuming a particular class of initial data. Later, relaxing this assumption, we prove the existence of a solution to the considered Cauchy problem. Finally, we give an asymptotic approximate solution and establish that the error is of order O(t^(-1/2)) with respect to the L^p norm, where 1≤p≤∞, for large time.
Keywords: Burgers equation, Cole-Hopf transformation, Hermite polynomials, large time asymptotics
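For context, in the unforced case the Cole-Hopf transformation named in the keywords linearizes the Burgers equation to the heat equation (the forced case adds further terms; the exact forced form studied by the authors is not reproduced in the abstract):

```latex
u_t + u\,u_x = \nu\,u_{xx},
\qquad
u(x,t) = -2\nu\,\frac{\varphi_x}{\varphi}
\quad\Longrightarrow\quad
\varphi_t = \nu\,\varphi_{xx}
```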
Procedia PDF Downloads 334
18612 A Fundamental Study for Real-Time Safety Evaluation System of Landing Pier Using FBG Sensor
Authors: Heungsu Lee, Youngseok Kim, Jonghwa Yi, Chul Park
Abstract:
A landing pier is subjected to safety assessment by visual inspection and design data, but it is difficult to check the damage in real time. In this study, real-time damage detection and safety evaluation methods were studied. As a result of structural analysis of an arbitrary landing pier structure, the inflection points of deformation and moment occurred at 10%, 50%, and 90% of the pile length. The critical value of the Fiber Bragg Grating (FBG) sensor was set according to the safety factor, and the FBG sensor application method for real-time safety evaluation was derived.
Keywords: FBG sensor, harbor structure, maintenance, safety evaluation system
Procedia PDF Downloads 218
18611 Multi-Source Data Fusion for Urban Comprehensive Management
Authors: Bolin Hua
Abstract:
In city governance, various data are involved, including city component data, demographic data, housing data, and all kinds of business data. These data reflect different aspects of people, events, and activities. Data generated by various systems differ in form, and their sources differ because they may come from different sectors. In order to reflect one or several facets of an event or rule, data from multiple sources need to be fused together. Data from different sources collected in different ways raise several issues which need to be resolved. Problems of data fusion include data update and synchronization, data exchange and sharing, file parsing and entry, duplicate data and its comparison, and resource catalogue construction. Governments adopt statistical analysis, time series analysis, extrapolation, monitoring analysis, value mining, and scenario prediction in order to achieve pattern discovery, law verification, root cause analysis, and public opinion monitoring. The result of multi-source data fusion is a uniform central database, which includes people data, location data, object data, institution data, business data, and space data. Metadata need to be referred to and read when an application needs to access, manipulate, and display the data. Uniform metadata management ensures the effectiveness and consistency of data in the processes of data exchange, data modeling, data cleansing, data loading, data storing, data analysis, data search, and data delivery.
Keywords: multi-source data fusion, urban comprehensive management, information fusion, government data
Procedia PDF Downloads 393
18610 Ambient Vibration Test and Numerical Modelling of Wind Turbine Towers including Soil Structure Interaction
Authors: Heba Kamal, Ghada Saudi
Abstract:
Due to the rapid expansion of wind energy and the growing number of wind turbines constructed in earthquake areas, a design method for simple and accurate evaluation of seismic loads is required to ensure structural integrity. In Egypt, some appropriate places to build wind turbine towers lie in seismically active regions, so accurate analysis is necessary for the prediction of seismic loads, with consideration of the intensity of the earthquake and the soil and structural characteristics. In this research, the seismic behavior of Gamesa Type G52 wind turbine towers at the Zafarana Wind Farm, Egypt, is investigated using experimental work by ambient vibration testing and fully dynamic analysis based on the time history of the 1995 Aqaba Earthquake, performed in 3D with the PLAXIS 3D software and including the soil-structure interaction effect. The results obtained from the dynamic analyses are discussed. From this study, it is concluded that the fully dynamic seismic analysis based on PLAXIS 3D, with the aid of the full-scale ambient vibration test, gives a good simulation of the seismic loads and can be applied to wind turbine tower design in Egypt.
Keywords: Wind turbine towers, Zafarana Wind Farm, Gamesa Type G52, ambient vibration test
Procedia PDF Downloads 208
18609 Analysis of Secondary Stage Creep in Thick-Walled Composite Cylinders Subjected to Rotary Inertia
Authors: Tejeet Singh, Virat Khanna
Abstract:
Composite materials have drawn considerable attention from engineers due to their light weight and application at high thermo-mechanical loads. With regard to the prediction of the life of high-temperature structural components like rotating cylinders and the evaluation of their deterioration with time, it is essential to have full knowledge of the creep characteristics of these materials. Therefore, in the present study, the secondary stage creep stresses and strain rates are estimated in thick-walled composite cylinders subjected to rotary inertia at different angular speeds. The composite cylinder is composed of an aluminum (Al) matrix reinforced with uniformly mixed silicon carbide (SiC) particles. The creep response of the material of the cylinder is described by a threshold-stress-based creep law. The study indicates that with the increase in angular speed, the radial, tangential, axial, and effective stresses increase to significant values. However, the radial stress remains zero at the inner and outer radii due to the imposed boundary conditions of zero pressure. Further, the stresses are tensile in nature throughout the entire radius of the composite cylinder. The strain rates are influenced in the same manner as the creep stresses. The creep rates increase significantly with the increase of centrifugal force on account of rotation.
Keywords: composite, creep, rotating cylinder, angular speed
Procedia PDF Downloads 445
18608 Spatial Analysis of Park and Ride Users' Dynamic Accessibility to Train Station: A Case Study in Perth
Authors: Ting (Grace) Lin, Jianhong (Cecilia) Xia, Todd Robinson
Abstract:
Accessibility analysis, examining people's ability to access facilities and destinations, is a fundamental assessment for transport planning, policy making, and social exclusion research. Dynamic accessibility, which measures accessibility in a real-time traffic environment, has been an advanced accessibility indicator in transport research. It is also a useful indicator that helps travelers understand the daily variability of travel time, assists traffic engineers in monitoring traffic congestion, and ultimately supports the development of effective strategies to mitigate congestion. This research involved real-time traffic information, collecting travel time data at 15-minute intervals via the TomTom® API. A framework for measuring dynamic accessibility was then developed, based on gravity theory and accessibility dichotomy theory, through space and time interpolation. Finally, dynamic accessibility can be derived at any given time and location under the dynamic accessibility spatial analysis framework.
Keywords: dynamic accessibility, hot spot, transport research, TomTom® API
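A gravity-based accessibility index of the kind referenced above can be evaluated at each time slice from the time-dependent travel times, as in the sketch below (a minimal illustration, not the study's implementation; the impedance parameter and the data are assumptions).

```python
# Gravity-type dynamic accessibility for one origin at one time slice (illustrative values).
import numpy as np

def gravity_accessibility(attractions, travel_times_min, beta=0.1):
    """A_i = sum_j O_j * exp(-beta * c_ij(t)), with c_ij(t) the travel time at time t."""
    attractions = np.asarray(attractions, float)
    travel_times_min = np.asarray(travel_times_min, float)
    return float(np.sum(attractions * np.exp(-beta * travel_times_min)))

stations = [1.0, 1.0, 1.0]          # park-and-ride train stations (unit weights, assumed)
peak_times = [18.0, 25.0, 40.0]     # minutes to each station at 08:00 (hypothetical)
offpeak_times = [12.0, 17.0, 28.0]  # minutes to each station at 11:00 (hypothetical)
print(gravity_accessibility(stations, peak_times),
      gravity_accessibility(stations, offpeak_times))
```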
Procedia PDF Downloads 388