Search results for: edge detection algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7074

3714 Effect of Internal Heat Generation on Free Convective Power Law Variable Temperature Past Vertical Plate Considering Exponential Variable Viscosity and Thermal Diffusivity

Authors: Tania Sharmin Khaleque, Mohammad Ferdows

Abstract:

The flow and heat transfer characteristics of free convection with temperature-dependent viscosity and thermal diffusivity along a vertical plate with an internal heat generation effect have been studied. The plate temperature is assumed to follow a power law of the distance from the leading edge. The resulting two-dimensional governing equations are transformed using suitable transformations and then solved numerically by a fifth-order Runge-Kutta-Fehlberg scheme with a modified Newton-Raphson shooting method. The effects of the various parameters, such as the variable viscosity parameter β_1, the thermal diffusivity parameter β_2, the heat generation parameter c and the Prandtl number Pr, on the velocity and temperature profiles, as well as on the local skin-friction coefficient and the local Nusselt number, are presented in tabular form. Our results suggest that the presence of the exponentially decaying internal heat generation term increases the flow compared with the case without it.
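
The shooting strategy described above can be illustrated on a toy two-point boundary value problem (this is not the paper's similarity system): an adaptive embedded Runge-Kutta integrator (SciPy's RK45, standing in for the fifth-order Fehlberg scheme) supplies the forward solves, and a Newton-Raphson iteration corrects the guessed initial slope.

```python
# Toy shooting-method sketch: solve y'' = -y with y(0) = 0, y(1) = 1,
# whose exact solution is y = sin(x)/sin(1). The unknown initial slope
# y'(0) is found by Newton-Raphson on the boundary residual.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(x, state):
    y, yp = state
    return [yp, -y]

def end_value(slope):
    # Integrate from 0 to 1 with guessed initial slope y'(0) = slope.
    sol = solve_ivp(rhs, (0.0, 1.0), [0.0, slope], rtol=1e-9, atol=1e-9)
    return sol.y[0, -1]  # y(1)

slope, target = 1.0, 1.0
for _ in range(10):
    F = end_value(slope) - target                       # boundary residual
    dF = (end_value(slope + 1e-6) - F - target) / 1e-6  # finite-difference slope
    slope -= F / dF                                     # Newton-Raphson update
    if abs(F) < 1e-10:
        break

exact_slope = 1.0 / np.sin(1.0)   # y'(0) of the exact solution
print(slope, exact_slope)
```

Because this toy problem is linear, the Newton step converges essentially in one iteration; for the nonlinear similarity equations of the paper, several iterations would be needed.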

Keywords: free convection, heat generation, thermal diffusivity, variable viscosity

Procedia PDF Downloads 341
3713 Application of Deep Learning Algorithms in Agriculture: Early Detection of Crop Diseases

Authors: Manaranjan Pradhan, Shailaja Grover, U. Dinesh Kumar

Abstract:

The farming community in India, as in other parts of the world, is highly stressed due to factors such as increasing input costs (seeds, fertilizers, pesticides), droughts, and reduced revenue, in some cases leading to farmer suicides. The lack of an integrated farm advisory system in India adds to the farmers' problems. Farmers need the right information during the early stages of a crop's lifecycle to prevent damage and loss of revenue. In this paper, we use deep learning techniques to develop an early warning system for the detection of crop diseases from images taken by farmers with their smartphones. The work leads to a smart assistant, built on analytics and big data, that could help farmers with early diagnosis of crop diseases and corrective actions. The classical approach to crop disease management has been to identify diseases at the crop level. Recently, ImageNet classification using convolutional neural networks (CNNs) has been successfully used to identify diseases at the individual plant level. Our model uses convolution filters, max pooling, dense layers and dropouts (to avoid overfitting). Models are built for binary classification (healthy or not healthy) and multi-class classification (identifying which disease). Transfer learning is used to adapt the weights learnt on the ImageNet dataset to crop diseases, which reduces the number of epochs needed to learn. One-shot learning is used to learn from very few images, while data augmentation techniques such as rotation, zoom, shift and blurring improve accuracy on images taken in farms. Models built using a combination of these techniques are more robust for deployment in the real world. Our model is validated on the tomato crop. In India, tomato is affected by 10 different diseases. Our model achieves an accuracy of more than 95% in correctly classifying the diseases.
The main contribution of our research is a personal assistant for farmers for managing plant disease; although the model was validated on the tomato crop, it can easily be extended to other crops. Advances in computing and the availability of large datasets have enabled the success of deep learning in computer vision, natural language processing, image recognition, and related fields. With these robust models and high smartphone penetration, implementation is highly feasible, resulting in timely advice to farmers, increased farmer income, and reduced input costs.
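
The two core CNN building blocks named in the abstract, convolution filters and max pooling, can be sketched minimally in plain NumPy. This didactic example (a hypothetical 6x6 "image" patch and a simple edge filter) is not the authors' transfer-learning model.

```python
# Minimal NumPy illustration of a convolution filter and 2x2 max pooling,
# the building blocks the abstract lists for its crop-disease CNN.
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping size x size max pooling."""
    H2, W2 = x.shape[0] // size, x.shape[1] // size
    return x[:H2 * size, :W2 * size].reshape(H2, size, W2, size).max(axis=(1, 3))

image = np.arange(36.0).reshape(6, 6)     # stand-in for a leaf-image patch
edge_kernel = np.array([[1., 0., -1.],    # simple vertical-edge filter
                        [1., 0., -1.],
                        [1., 0., -1.]])
feat = conv2d(image, edge_kernel)         # (4, 4) feature map
pooled = max_pool(feat)                   # (2, 2) after 2x2 pooling
print(feat.shape, pooled.shape)
```

In the real model, many such learned filters are stacked, followed by dense layers and dropout, with the early filter weights initialized from an ImageNet-pretrained network.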

Keywords: analytics in agriculture, CNN, crop disease detection, data augmentation, image recognition, one shot learning, transfer learning

Procedia PDF Downloads 111
3712 Resolution and Experimental Validation of the Asymptotic Model of a Viscous Laminar Supersonic Flow around a Thin Airfoil

Authors: Eddegdag Nasser, Naamane Azzeddine, Radouani Mohammed, Ensam Meknes

Abstract:

In this study, we are interested in the asymptotic modeling of the two-dimensional stationary supersonic flow of a viscous compressible fluid around a wing airfoil. The aim of this article is to solve the partial differential equations of the flow far from the leading edge and near the wall using the triple-deck technique, which recovers precision in accordance with the principle of least degeneration. To validate our theoretical model, the obtained results are compared with experimental results. The comparison shows that the model results are quantitatively acceptable relative to the experimental data. The experimental study was conducted using the AF300 supersonic wind tunnel and a reduced NACA airfoil model with two pressure taps on the extrados. In this experiment, we considered an incident supersonic upstream Mach number over a dissymmetric NACA airfoil wing. The validation and the accuracy of the results support our model.

Keywords: supersonic, viscous, triple deck technique, asymptotic methods, AF300 supersonic wind tunnel, reduced airfoil model

Procedia PDF Downloads 226
3711 Isolation and Molecular Detection of Marek’s Disease Virus from Outbreak Cases in Chicken in South Western Ethiopia

Authors: Abdela Bulbula

Abstract:

Background: Marek's disease virus causes a devastating infection with high morbidity and mortality in chickens in Ethiopia. Methods: The current study was conducted from March to November 2021 with the general objective of performing antemortem and postmortem examination, isolation, and molecular detection of Marek's disease virus from outbreak cases in southwestern Ethiopia. Based on outbreak information reported from the study sites, namely the Bedelle, Yayo, and Bonga towns in southwestern Ethiopia, 50 sick chickens were sampled. Both backyard and intensive chicken farming systems were included in the sampling, and priority was given to chickens showing clinical signs characteristic of Marek's disease. Results: On clinical examination, paralysis of the legs and wings, gray eye, loss of weight, difficulty in breathing, and depression were recorded in all chickens sampled for this study, and death of diseased chickens was observed. In addition, enlargement of the spleen and gross lesions of the liver and heart were recorded during postmortem examination. Death of infected chickens was observed in both vaccinated and non-vaccinated flocks. Out of 50 pooled feather follicle samples, Marek's disease virus was isolated by cell culture from 14/50 (28%), and out of six tissue samples, the virus was isolated from 5/6 (83.3%). By real-time polymerase chain reaction targeting the Meq gene, Marek's disease virus was detected in 18/50 feather follicle samples, accounting for 36% of sampled chickens. Conclusion: In general, the current study showed that the Marek's disease virus circulating in southwestern Ethiopia is the oncogenic Gallid herpesvirus 2 (serotype 1). Further research on the molecular characterization of the circulating virus in this and other regions is recommended for effective control of the disease through vaccination.

Keywords: Ethiopia, Marek's disease, isolation, molecular detection

Procedia PDF Downloads 55
3710 'Sea Power: Concept, Influence and Securitization'; the Nigerian Navy's Role in a Developing State like Nigeria

Authors: William Abiodun Duyile

Abstract:

It is common knowledge that marine food has always come from the sea, energy can be found beneath it, and, to a growing extent, other mineral resources come from the sea spaces. It is the importance of the sea and of the sea lines of communication to littoral nations that has made concepts such as sea power and naval power significant to them. The study relied on documentary data, sourced from government annual departmental reports, newspapers, and correspondence. The secondary sources used were subjected to internal and external criticism for authentication, and then to textual and contextual analyses. The study found that differential levels of seamanship among states defined their relationships: it was sea power that gave some states an edge over others. The study shows that over the ages sea power has been central to the development of states and empires. The study found that the Nigerian Navy was central to Nigeria's conquest of the littoral areas of Biafra, such as Bonny, Port Harcourt, and Calabar; this was also an important turning point of the Nigerian civil war, since by it Biafra became landlocked. The research succinctly identifies the Nigerian Navy's contribution to the security and development of the Nigerian state.

Keywords: sea power, naval power, land locked states, warship

Procedia PDF Downloads 131
3709 Novel Numerical Technique for Dusty Plasma Dynamics (Yukawa Liquids): Microfluidic and Role of Heat Transport

Authors: Aamir Shahzad, Mao-Gang He

Abstract:

Dusty plasmas have recently attracted widespread research interest. Over the last two decades, substantial efforts have been made by the scientific and technological community to investigate the transport properties, and their nonlinear behavior, of three-dimensional and two-dimensional nonideal complex (dusty plasma) liquids (NICDPLs). Different calculations have been made to sustain and utilize strongly coupled NICDPLs because of their remarkable scientific and industrial applications. Understanding the thermophysical properties of complex liquids under various conditions is of practical interest in science and technology. The determination of thermal conductivity is also a demanding question for thermophysical researchers; for several reasons, very few results are available for this significant property. The lack of thermal conductivity data for dense and complex liquids at parameters relevant to industrial developments is a major barrier to quantitative knowledge of the heat flux from one medium to another medium or surface. The exact numerical investigation of transport properties of complex liquids is a fundamental research task in thermophysics, as various transport data are closely related to the setup and confirmation of equations of state. Reliable transport data are also important for the optimized design of processes and apparatus in various engineering and science fields (thermoelectric devices); in particular, precise data for the parameters of heat, mass, and momentum transport are required. One of the promising computational techniques, the homogeneous nonequilibrium molecular dynamics (HNEMD) simulation, is reviewed with special emphasis on its application to transport problems of complex liquids.
This work is, to our knowledge, the first to modify the heat conduction problem so that it leads to a polynomial velocity- and temperature-profile algorithm for investigating transport properties, and their nonlinear behavior, in NICDPLs. The aim of the proposed work is to implement a NEMD algorithm (Poiseuille flow) and to deepen the understanding of thermal conductivity behavior in Yukawa liquids. The Yukawa system is equilibrated through a Gaussian thermostat in order to maintain a constant system temperature (canonical ensemble, NVT). The output steps will be developed between 3.0×10^5/ωp and 1.5×10^5/ωp simulation time steps for the computation of λ data. The HNEMD algorithm shows that the thermal conductivity depends on the plasma parameters, and the position of the minimum λmin shifts toward higher Γ with an increase in κ, as expected. The new investigations give more reliable simulated data for the plasma conductivity than earlier simulations, generally differing from the earlier plasma λ0 by 2%-20%, depending on Γ and κ. The results obtained at normalized force fields are in satisfactory agreement with various earlier simulation results. This shows that the new technique provides more accurate results, with fast convergence and small size effects, over a wide range of plasma states.
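
For reference, the Yukawa (screened Coulomb) pair interaction that defines these liquids can be written down in a few lines. The reduced-unit form below is the standard one, phi(r) = exp(-kappa*r)/r; the sample distances are arbitrary and only illustrate the functions.

```python
# Yukawa pair potential and the corresponding force magnitude, in reduced
# units (charge and screening length scaled out). Setting kappa = 0
# recovers the bare Coulomb interaction 1/r, a quick sanity check.
import numpy as np

def yukawa(r, kappa):
    """Reduced Yukawa potential between two dust grains at separation r."""
    return np.exp(-kappa * r) / r

def yukawa_force(r, kappa):
    """Magnitude of -d(phi)/dr: (kappa*r + 1) * exp(-kappa*r) / r**2."""
    return (kappa * r + 1.0) * np.exp(-kappa * r) / r ** 2

r = np.linspace(0.5, 5.0, 10)
print(yukawa(r, 2.0)[0], yukawa_force(r, 2.0)[0])
```

In an HNEMD run this pair force enters the equations of motion together with the thermostat and the homogeneous heat-flux perturbation; those parts are omitted here.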

Keywords: molecular dynamics simulation, thermal conductivity, nonideal complex plasma, Poiseuille flow

Procedia PDF Downloads 266
3708 An Efficient Process Analysis and Control Method for Tire Mixing Operation

Authors: Hwang Ho Kim, Do Gyun Kim, Jin Young Choi, Sang Chul Park

Abstract:

Since the tire production process is very complicated, company-wide management of it is very difficult, requiring considerable capital and labor. Thus, productivity should be enhanced and kept competitive by developing and applying effective production plans. Among the major processes for tire manufacturing, consisting of mixing, component preparation, building, and curing, the mixing process is an essential and important step because the main component of the tire, called the compound, is formed there. The compound, a rubber synthesis with various characteristics, plays its own role in the finished tire. Meanwhile, scheduling the tire mixing process resembles the flexible job shop scheduling problem (FJSSP), because the various compounds have their own orders of operations and a set of alternative machines can process each operation. In addition, the setup time required for different operations may differ due to the alteration of additives. In other words, each operation of the mixing process requires a different setup time depending on the previous one; this feature, called sequence-dependent setup time (SDST), is a very important issue in traditional scheduling problems such as the flexible job shop scheduling problem. However, despite its importance, few research works deal with the tire mixing process. Thus, in this paper, we consider the scheduling problem for the tire mixing process and suggest an efficient particle swarm optimization (PSO) algorithm to minimize the makespan for completing all required jobs in the process. Specifically, we design a particle encoding scheme for the considered scheduling problem, including a processing sequence for compounds and machine allocation information for each job operation, and a method for generating a tire mixing schedule from a given particle.
At each iteration, the coordinates and velocities of the particles are updated, and the current solution is compared with the new solution. This procedure is repeated until a stopping condition is satisfied. The performance of the proposed algorithm is validated through a numerical experiment using small-sized problem instances representing the tire mixing process. Furthermore, we compare the solution of the proposed algorithm with that obtained by solving a mixed integer linear programming (MILP) model developed in previous research. As a performance measure, we define an error rate that evaluates the difference between two solutions. As a result, we show that the PSO algorithm proposed in this paper outperforms the MILP model with respect to effectiveness and efficiency. As future work, we plan to consider scheduling problems in other processes, such as building and curing. We can also extend the current work by considering other performance measures, such as weighted makespan, or processing times affected by aging or learning effects.
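
The PSO-with-SDST idea can be sketched with a simpler encoding than the paper's (which also carries machine allocation): here each particle is a continuous random-key vector whose argsort decodes into a job sequence on a single machine with sequence-dependent setup times. The instance data are made up for illustration.

```python
# Random-key PSO sketch for single-machine sequencing with
# sequence-dependent setup times (SDST); instance data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_jobs = 6
proc = rng.uniform(2.0, 5.0, n_jobs)             # processing times
setup = rng.uniform(0.0, 2.0, (n_jobs, n_jobs))  # setup[i, j]: job i -> job j

def makespan(keys):
    order = np.argsort(keys)                 # decode particle -> job sequence
    t = proc[order[0]]
    for prev, cur in zip(order, order[1:]):
        t += setup[prev, cur] + proc[cur]    # SDST plus processing time
    return t

# Standard global-best PSO update loop.
n_particles, iters, w, c1, c2 = 20, 200, 0.7, 1.5, 1.5
X = rng.uniform(0, 1, (n_particles, n_jobs))
V = np.zeros_like(X)
pbest, pbest_val = X.copy(), np.array([makespan(x) for x in X])
g = pbest[pbest_val.argmin()].copy()
g_val = pbest_val.min()
for _ in range(iters):
    r1, r2 = rng.uniform(size=X.shape), rng.uniform(size=X.shape)
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (g - X)
    X = X + V
    vals = np.array([makespan(x) for x in X])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = X[better], vals[better]
    if vals.min() < g_val:
        g_val, g = vals.min(), X[vals.argmin()].copy()
print(g_val)
```

The paper's encoding would replace the single argsort decode with a decode into both an operation sequence and a machine assignment per operation.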

Keywords: compound, error rate, flexible job shop scheduling problem, makespan, particle encoding scheme, particle swarm optimization, sequence dependent setup time, tire mixing process

Procedia PDF Downloads 255
3707 Technical Parameters Evaluation for Caps to Apucarana/Parana - Brazil APL

Authors: Cruz, G. P., Nagamatsu, R. N., Scacchetti, F. A. P., Merlin, F. K.

Abstract:

This study aims to assess a set of technical parameters that ensure product quality for the cap-producing companies of the Apucarana/PR APL, the city that produces most Brazilian caps, in order to verify the potential of Brazilian caps to compete with international brands recognized for the quality of their products. The technical parameters were drawn from Brazilian textile standards (ABNT): six technical parameters in total, yielding eight tests for cotton caps. For the evaluation, a leading brand recognized worldwide (based on its sales volume in $) was used as a reference for comparison with three companies of the Apucarana APL. The results showed that in none of the eight tests did the Apucarana companies outperform the competitor; they obtained the same results in three tests and lower performance in five. Given these values, it is concluded that local caps are not far from reaching the quality of the leading brand. It is recommended that the APL companies use these parameters to evaluate their products and to support decision-making aimed at improving both product design and the production process, enabling faster international recognition. Thus, they may gain an edge over their main competitor.

Keywords: technical parameters, making caps, quality, evaluation

Procedia PDF Downloads 333
3706 Transformer Fault Diagnostic Predicting Model Using Support Vector Machine with Gradient Descent Optimization

Authors: R. O. Osaseri, A. R. Usiobaifo

Abstract:

The power transformer, which is responsible for voltage transformation, is of great relevance in the power system, and the oil-immersed transformer is widely used all over the world. Prompt and proper maintenance of the transformer is of utmost importance. The dissolved gas content in power transformer oil is of enormous importance in detecting incipient transformer faults. Accurate prediction of incipient faults from transformer oil is needed to facilitate prompt maintenance, reduce cost, and minimize error. Fault prediction and diagnosis have been the focus of many researchers, and many previous works have reported the use of artificial intelligence to predict incipient transformer failures. In this study, a machine learning technique was employed, using gradient descent algorithms and a Support Vector Machine (SVM), to predict incipient transformer faults. The method focuses on creating a system that improves its performance using previous results and historical data. The system design approach has two phases: training and testing. The gradient descent algorithm is trained with a training dataset, while the learned model is applied to a set of new data. These two datasets are used to establish the accuracy of the proposed model. In this study, a transformer fault diagnostic model based on a Support Vector Machine and gradient descent algorithms is presented, with satisfactory diagnostic capability and higher accuracy in predicting incipient transformer failures than existing diagnostic methods.
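
The SVM-plus-gradient-descent combination named above can be sketched as a linear SVM trained by subgradient descent on the regularized hinge loss. The toy 2-D data below stand in for dissolved-gas features; they are not DGA measurements.

```python
# Linear SVM trained by (sub)gradient descent on the regularized hinge
# loss. Two well-separated synthetic clusters play the roles of
# "healthy" (-1) and "incipient fault" (+1) transformers.
import numpy as np

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.5, (50, 2)),   # class -1 samples
               rng.normal(2, 0.5, (50, 2))])   # class +1 samples
y = np.array([-1.0] * 50 + [1.0] * 50)

w, b = np.zeros(2), 0.0
lam, lr, n = 0.01, 0.1, len(y)
for _ in range(200):
    margins = y * (X @ w + b)
    active = margins < 1                       # points violating the margin
    # Subgradient of (lam/2)||w||^2 + (1/n) sum max(0, 1 - y(wx + b)).
    grad_w = lam * w - (y[active, None] * X[active]).sum(axis=0) / n
    grad_b = -y[active].sum() / n
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = np.mean(np.sign(X @ w + b) == y)
print(accuracy)
```

A real DGA classifier would use the dissolved-gas concentrations (H2, CH4, C2H2, ...) as features, typically with a kernel SVM, but the training loop has the same shape.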

Keywords: diagnostic model, gradient descent, machine learning, support vector machine (SVM), transformer fault

Procedia PDF Downloads 312
3705 Digital Platform for Psychological Assessment Supported by Sensors and Efficiency Algorithms

Authors: Francisco M. Silva

Abstract:

Technology is evolving, creating an impact on our everyday lives and the telehealth industry. Telehealth encapsulates the provision of healthcare services and information via a technological approach. There are several benefits of using web-based methods to provide healthcare help. Nonetheless, few health and psychological help approaches combine this method with wearable sensors. This paper aims to create an online platform for users to receive self-care help and information using wearable sensors. In addition, researchers developing a similar project obtain a solid foundation as a reference. This study provides descriptions and analyses of the software and hardware architecture. Exhibits and explains a heart rate dynamic and efficient algorithm that continuously calculates the desired sensors' values. Presents diagrams that illustrate the website deployment process and the webserver means of handling the sensors' data. The goal is to create a working project using Arduino compatible hardware. Heart rate sensors send their data values to an online platform. A microcontroller board uses an algorithm to calculate the sensor heart rate values and outputs it to a web server. The platform visualizes the sensor's data, summarizes it in a report, and creates alerts for the user. Results showed a solid project structure and communication from the hardware and software. The web server displays the conveyed heart rate sensor's data on the online platform, presenting observations and evaluations.
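
The kind of continuous heart-rate computation described above can be sketched in a few lines: convert beat-to-beat intervals into BPM, smooth with a moving average, and raise alerts outside a configured range. The thresholds and sample intervals are illustrative assumptions, not the project's actual values.

```python
# Sketch of a continuous heart-rate pipeline: interval -> BPM,
# moving-average smoothing, and simple out-of-range alerting.
from collections import deque

def bpm_from_interval(ms):
    """Beats per minute from one inter-beat interval in milliseconds."""
    return 60000.0 / ms

def smooth_bpm(intervals_ms, window=4):
    """Moving-average BPM over the last `window` beats."""
    buf, out = deque(maxlen=window), []
    for ms in intervals_ms:
        buf.append(bpm_from_interval(ms))
        out.append(sum(buf) / len(buf))
    return out

def alerts(bpm_series, low=50.0, high=120.0):
    """Indices where the smoothed BPM leaves the configured safe range."""
    return [i for i, bpm in enumerate(bpm_series) if not low <= bpm <= high]

intervals = [800, 790, 810, 400, 420, 800]   # ms; the short ones spike BPM
series = smooth_bpm(intervals)
print([round(b, 1) for b in series], alerts(series))
```

On the actual device the interval stream would come from the wearable sensor via the microcontroller, and the alert indices would trigger notifications on the web platform.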

Keywords: Arduino, heart rate BPM, microcontroller board, telehealth, wearable sensors, web-based healthcare

Procedia PDF Downloads 116
3704 Scheduling Method for Electric Heater in HEMS considering User’s Comfort

Authors: Yong-Sung Kim, Je-Seok Shin, Ho-Jun Jo, Jin-O Kim

Abstract:

The Home Energy Management System (HEMS), which enables residential consumers to contribute to demand response, has attracted attention in recent years. An aim of HEMS is to minimize the consumer's electricity cost by controlling appliance use according to the electricity price. Appliance use in HEMS may be affected by conditions such as external temperature and electricity price. Therefore, the user's appliance usage pattern should be modeled according to the external conditions, and the resulting usage pattern is related to the user's comfort with the use of each appliance. This paper proposes a methodology to model the usage pattern from historical data with a copula function. Through the copula function, the usage range of each appliance can be obtained so as to satisfy the user's comfort under the external conditions expected for the next day. Within this usage range, an optimal schedule for appliances is computed to minimize the electricity cost while considering user comfort. Among home appliances, the electric heater (EH) is a representative appliance affected by the external temperature. In this paper, an optimal scheduling algorithm for the electric heater is developed based on the branch and bound method. As a result, scenarios for EH usage are obtained according to the user's comfort level, and the residential consumer then selects the best scenario. The case study shows the effects of the proposed algorithm compared with traditional operation of the EH, and it also shows the impact of the comfort level on the scheduling result.
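
The copula step can be sketched with a Gaussian copula coupling external temperature to heater-usage hours, as the abstract proposes for modeling the usage pattern. The marginals and correlation below are illustrative assumptions, not fitted HEMS data.

```python
# Gaussian-copula sketch: couple a temperature marginal with a usage
# marginal through a common correlation structure.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
rho = -0.7   # assumed: colder weather -> longer heater usage
cov = np.array([[1.0, rho], [rho, 1.0]])

# 1) Draw correlated standard normals, 2) map them to uniforms through
# the normal CDF (the copula step), 3) push each uniform through its
# marginal's inverse CDF.
z = rng.multivariate_normal([0.0, 0.0], cov, size=5000)
u = stats.norm.cdf(z)
temperature = stats.norm.ppf(u[:, 0], loc=5.0, scale=4.0)   # deg C marginal
usage_hours = stats.gamma.ppf(u[:, 1], a=2.0, scale=1.5)    # EH-hours marginal

r = np.corrcoef(temperature, usage_hours)[0, 1]
print(round(r, 2))
```

From such a fitted joint model, a usage range per temperature forecast (e.g. an interquantile band of usage hours given tomorrow's temperature) would feed the branch-and-bound scheduler as its comfort constraint.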

Keywords: load scheduling, usage pattern, user’s comfort, copula function, branch and bound, electric heater

Procedia PDF Downloads 574
3703 Solving the Economic Load Dispatch Problem Using Differential Evolution

Authors: Alaa Sheta

Abstract:

Economic Load Dispatch (ELD) is one of the vital optimization problems in power system planning. Solving the ELD problem means finding the best mixture of power outputs across all units of the power system network such that the total fuel cost is minimized while the operating limits are satisfied across the entire dispatch horizon. Many optimization techniques have been proposed to solve this problem. A famous one is Quadratic Programming (QP). QP is a very simple and fast method, but, like other gradient methods, it can be trapped at local minima and cannot handle complex nonlinear functions. A number of metaheuristic algorithms have been used to solve this problem, such as Genetic Algorithms (GAs) and Particle Swarm Optimization (PSO). In this paper, another metaheuristic search algorithm, Differential Evolution (DE), is used to solve the ELD problem in power system planning. The practicality of the proposed DE-based algorithm is verified for three- and six-generator test cases. The results are compared to existing results based on QP, GAs, and PSO. They show that differential evolution is superior in obtaining a combination of power loads that fulfills the problem constraints and minimizes the total fuel cost. DE was found to converge quickly to the optimal power generation loads and to handle the nonlinearity of the ELD problem. The proposed DE solution is able to minimize the cost of generated power, minimize the total power loss in transmission, and maximize the reliability of the power provided to customers.
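
A DE-based ELD solve can be sketched on a small three-generator instance with quadratic fuel-cost curves and the power-balance constraint handled by a penalty. The coefficients below are textbook-style illustrations, not the paper's test-case data.

```python
# Differential evolution on a toy 3-unit economic load dispatch:
# minimize total quadratic fuel cost subject to sum(P) == demand
# (enforced by a penalty) and generator limits (via bounds).
import numpy as np
from scipy.optimize import differential_evolution

# Cost of unit i at output P: a + b*P + c*P^2 ($/h), P in MW.
a = np.array([500.0, 400.0, 200.0])
b = np.array([5.3, 5.5, 5.8])
c = np.array([0.004, 0.006, 0.009])
demand = 800.0                       # MW to supply (losses neglected)
bounds = [(100.0, 450.0)] * 3        # generator output limits

def total_cost(P):
    fuel = np.sum(a + b * P + c * P ** 2)
    balance = abs(np.sum(P) - demand)
    return fuel + 1e4 * balance      # penalize power-balance violation

res = differential_evolution(total_cost, bounds, seed=3, tol=1e-8)
print(res.x, res.fun)
```

For this instance the equal-incremental-cost condition gives the optimum analytically (about 400, 250, and 150 MW), so the DE result can be checked directly; real ELD cases add losses and valve-point effects that break that analytic shortcut.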

Keywords: economic load dispatch, power systems, optimization, differential evolution

Procedia PDF Downloads 275
3702 Optimal Design of Storm Water Networks Using Simulation-Optimization Technique

Authors: Dibakar Chakrabarty, Mebada Suiting

Abstract:

Rapid urbanization, coupled with changes in land use patterns, results in increasing peak discharge and a shortening of the catchment time of concentration. The consequence is floods, which often inundate roads and inhabited areas of cities and towns. Management of storm water resulting from rainfall has therefore become an important issue for municipal bodies. Proper management of storm water obviously includes adequate design of storm water drainage networks. The design of a storm water network is a costly exercise, so least-cost design assumes significance, particularly when the available funds are limited. Optimal design of a storm water system is a difficult task, as it involves the design of various components, such as open or closed conduits, storage units, and pumps. In this paper, a methodology for least-cost design of storm water drainage systems is proposed, consisting of coupling a storm water simulator with an optimization method. The simulator used in this study is EPA's Storm Water Management Model (SWMM), which is linked with the Genetic Algorithm (GA) optimization method. The model proposed here is a mixed integer nonlinear optimization formulation that minimizes the sectional areas of the open conduits of storm water networks while satisfactorily conveying the runoff resulting from rainfall to the network outlet. Performance evaluations of the developed model show that the proposed method can be used for cost-effective design of open-conduit-based storm water networks.
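
The simulation-optimization loop can be sketched with a simple GA searching conduit cross-sectional areas, where a stand-in function replaces the SWMM call. The linear "capacity = k * area" rule, flows, and costs below are toy assumptions; in the real method, each fitness evaluation runs a SWMM simulation of the candidate network.

```python
# Elitist GA sketch for least-cost conduit sizing. evaluate() stands in
# for a SWMM run: cost of the areas plus a large penalty whenever a
# conduit's (linearized) capacity k*A falls short of its design flow.
import numpy as np

rng = np.random.default_rng(4)
q = np.array([0.8, 1.5, 2.3])              # design flows (m^3/s), synthetic
k = 1.6                                    # assumed conveyance per unit area
cost_coef = np.array([120.0, 90.0, 150.0]) # cost per unit conduit area

def evaluate(A):
    shortfall = np.maximum(q - k * A, 0.0)
    return cost_coef @ A + 1e5 * shortfall.sum()

pop = rng.uniform(0.1, 3.0, (40, 3))
for _ in range(150):
    fit = np.array([evaluate(ind) for ind in pop])
    elite = pop[fit.argmin()].copy()       # elitism: keep the best design

    def tournament():
        i, j = rng.integers(0, len(pop), 2)
        return pop[i] if fit[i] < fit[j] else pop[j]

    children = [elite]
    while len(children) < len(pop):
        p1, p2 = tournament(), tournament()
        alpha = rng.uniform(size=3)
        child = alpha * p1 + (1 - alpha) * p2           # blend crossover
        mutate = rng.uniform(size=3) < 0.2
        child = child + mutate * rng.normal(0.0, 0.05, 3)
        children.append(np.clip(child, 0.05, 3.0))
    pop = np.array(children)

fit = np.array([evaluate(ind) for ind in pop])
best = pop[fit.argmin()]
print(best, fit.min())   # areas should approach q / k elementwise
```

The toy capacity rule makes the true optimum checkable by hand (A = q / k per conduit); swapping evaluate() for a SWMM run recovers the paper's scheme.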

Keywords: genetic algorithm (GA), optimal design, simulation-optimization, storm water network, SWMM

Procedia PDF Downloads 235
3701 An Experience of HIV Testing and Counseling Services at a Tertiary Care Center of Bangladesh

Authors: S. M. Rashed Ul Islam, Shahina Tabassum, Afsana Anwar Miti

Abstract:

Objective: The HIV testing and counseling center (HTC) is an important component of HIV/AIDS detection, prevention, and control interventions. The service was first initiated at the Department of Virology, Bangabandhu Sheikh Mujib Medical University (BSMMU), after the first case detection in 1989. The present study aimed to describe the demographic profile of attendees who tested HIV positive. Methods: The study was carried out among 219 HIV-positive cases detected through screening at the Department of Virology of BSMMU during 2012-2016. Data were collected through a pre-structured written questionnaire during the counseling session, expressed as frequencies and percentages, and analyzed using the SPSS v20.0 program. Results: Of the 219 HIV cases detected, 77.6% were male and 22.4% female, with a mean age (mean±SD) of 35.46±9.46 years. Among them, 70.7% belonged to the 26-45 age group, representing the sexually active ages. The majority of cases were married (86.3%); 49.8% had a primary level of education, whereas 8.7% were illiterate. Nearly 42% of cases were referred from Chittagong division (the southeastern part of the country), followed by Dhaka division (35.6%). The bulk of the study population admitted to involvement in high-risk behaviour (90%) in the past, and 42% of them had worked overseas. Pearson chi-square (χ2) analysis revealed a significant relationship of gender with marital status (χ2=7.88) and occupation (χ2=120.48); however, no association was observed with risk behaviour or educational status. Recommendations: HIV risk behavior was found to be a prime source of HIV infection in the study population. There is therefore a need for health education and awareness programs to bring about behavioral changes and halt the yearly increase of new cases in the country, with special attention to overseas workers regarding HIV/AIDS risk and safety.
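
The Pearson chi-square test of independence used in this analysis can be illustrated on a made-up 2x2 gender-by-marital-status table (the counts below are illustrative, not the study's data).

```python
# Chi-square test of independence on a contingency table, the test the
# abstract applies to gender vs. marital status and occupation.
import numpy as np
from scipy.stats import chi2_contingency

#                 married  unmarried
table = np.array([[150,    20],       # male
                  [ 39,    10]])      # female
chi2, p, dof, expected = chi2_contingency(table)
print(round(chi2, 2), round(p, 4), dof)
```

`chi2_contingency` also returns the expected counts under independence, which is useful for checking the test's validity (all expected cells should be at least 5 for the asymptotic chi-square approximation).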

Keywords: Bangladesh, health education, HIV testing and counseling (HTC), HIV/AIDS, risk behavior

Procedia PDF Downloads 285
3700 Analytical Model of Multiphase Machines Under Electrical Faults: Application on Dual Stator Asynchronous Machine

Authors: Nacera Yassa, Abdelmalek Saidoune, Ghania Ouadfel, Hamza Houassine

Abstract:

The rapid advancement in electrical technologies has underscored the increasing importance of multiphase machines across various industrial sectors. These machines offer significant advantages in terms of efficiency, compactness, and reliability compared to their single-phase counterparts. However, early detection and diagnosis of electrical faults remain critical challenges to ensure the durability and safety of these complex systems. This paper presents an advanced analytical model for multiphase machines, with a particular focus on dual stator asynchronous machines. The primary objective is to develop a robust diagnostic tool capable of effectively detecting and locating electrical faults in these machines, including short circuits, winding faults, and voltage imbalances. The proposed methodology relies on an analytical approach combining electrical machine theory, modeling of magnetic and electrical circuits, and advanced signal analysis techniques. By employing detailed analytical equations, the developed model accurately simulates the behavior of multiphase machines in the presence of electrical faults. The effectiveness of the proposed model is demonstrated through a series of case studies and numerical simulations. In particular, special attention is given to analyzing the dynamic behavior of machines under different types of faults, as well as optimizing diagnostic and recovery strategies. The obtained results pave the way for new advancements in the field of multiphase machine diagnostics, with potential applications in various sectors such as automotive, aerospace, and renewable energies. By providing precise and reliable tools for early fault detection, this research contributes to improving the reliability and durability of complex electrical systems while reducing maintenance and operation costs.

Keywords: faults, diagnosis, modelling, multiphase machine

Procedia PDF Downloads 48
3699 Dispersion-Less All Reflective Split and Delay Unit for Ultrafast Metrology

Authors: Akansha Tyagi, Mehar S. Sidhu, Ankur Mandal, Sanjay Kapoor, Sunil Dahiya, Jan M. Rost, Thomas Pfeifer, Kamal P. Singh

Abstract:

An all-reflective split-and-delay unit is designed for dispersion-free measurement of broadband ultrashort pulses, using a pair of reflective knife-edge prisms for splitting and recombining the measured pulse. It is based on symmetrical wavefront splitting of the measured pulse and has two separate arms to shape both split parts independently. We have validated our delay line with NIR femtosecond pulse measurement centered at 800 nm using second-harmonic interferometric frequency-resolved optical gating (SH-IFROG). The delay line is compact and easy to align, and provides attosecond stability and precision, making it versatile for a wide range of ultrafast measurements. We envision that the present delay line will find applications in IR-IR control of high harmonic generation (HHG) and in attosecond IR-XUV pump-probe measurements with solids and gases, providing attosecond resolution and a wide delay range.

Keywords: HHG, nonlinear optics, pump-probe spectroscopy, ultrafast metrology

Procedia PDF Downloads 185
3698 Experimental Study of Damage in a Composite Structure by Vibration Analysis: Glass/Polyester

Authors: R. Abdeldjebar, B. Labbaci, L. Missoum, B. Moudden, M. Djermane

Abstract:

The basic components of a composite material make it very sensitive to damage, which calls for reliable and efficient damage-detection techniques. This work focuses on the detection of damage by vibration analysis, whose main objective is to exploit the dynamic response of a structure to detect and characterize the damage. The experimental results are compared with those predicted by numerical models to confirm the effectiveness of the approach.

Keywords: experimental, composite, vibration analysis, damage

Procedia PDF Downloads 663
3697 In-Situ Studies of Cyclohexane Oxidation Using Laser Raman Spectroscopy for the Refinement of Mechanism Based Kinetic Models

Authors: Christine Fräulin, Daniela Schurr, Hamed Shahidi Rad, Gerrit Waters, Günter Rinke, Roland Dittmeyer, Michael Nilles

Abstract:

The reaction mechanisms of many liquid-phase reactions in organic chemistry have not yet been sufficiently clarified. Process conditions of several hundred degrees Celsius and pressures up to ten megapascals complicate the sampling and the determination of kinetic data. Spatially resolved in-situ measurements promise new insights. A non-invasive in-situ measurement technique has the advantages that no sample preparation is necessary, there is no change in the sample mixture before analysis, and the sampling does not interfere with the flow. Thus, the goal of our research was the development of a contact-free, spatially resolved measurement technique for kinetic studies of liquid-phase reactions under process conditions. We therefore used laser Raman spectroscopy combined with an optically transparent microchannel reactor. To demonstrate the performance of the system, we chose the oxidation of cyclohexane as a sample reaction. Cyclohexane oxidation is an economically important process. The products are intermediates for caprolactam and adipic acid, which are starting materials for polyamide 6 and 6.6 production. To maintain high selectivities of 70 to 90%, the reaction is performed in industry at a low conversion of about six percent. As Raman spectroscopy is usually very selective but not very sensitive, the detection of the small product concentrations in cyclohexane oxidation is quite challenging. To meet these requirements, the optical experimental setup was optimized for good detection sensitivity with laser Raman spectroscopy. With this measurement technique, spatially resolved kinetic studies of uncatalyzed and homogeneously catalyzed cyclohexane oxidation were carried out to obtain details about the reaction mechanism.

Keywords: in-situ laser raman spectroscopy, space resolved kinetic measurements, homogeneous catalysis, chemistry

Procedia PDF Downloads 325
3696 Standard Protocol Selection for Acquisition of Breast Thermogram in Perspective of Early Breast Cancer Detection

Authors: Mrinal Kanti Bhowmik, Usha Rani Gogoi Jr., Anjan Kumar Ghosh, Debotosh Bhattacharjee

Abstract:

In the last few decades, breast thermography has achieved an average sensitivity and specificity of 90% for breast tumor detection. Breast thermography is a non-invasive, cost-effective, painless and radiation-free breast imaging modality which makes a significant contribution to the evaluation and diagnosis of patients suspected of having breast cancer. An abnormal breast thermogram may indicate significant biological risk for the existence or the development of breast tumors. Breast thermography can detect a breast tumor when the tumor is in its early stage or when the tumor is in a dense breast. Infrared breast thermography is very sensitive to environmental changes, so acquisition of breast thermograms should be performed under strictly controlled conditions following standard protocols. Several factors, such as air temperature and humidity, must be considered for characterizing thermal images as an imperative tool for detecting breast cancer. A detailed study of the various breast thermogram acquisition protocols adopted by different researchers is provided in this paper. After a rigorous study of these protocols, a new standard breast thermography acquisition setup is proposed for proper and accurate capturing of breast thermograms. The proposed setup is being built in the Radiology Department, Agartala Government Medical College (AGMC), Govt. of Tripura, Tripura, India. The breast thermograms are captured using a FLIR T650sc thermal camera with a thermal sensitivity of 20 mK at 30 °C. The paper highlights the importance of critical parameters of breast thermography, such as thermography views, patient preparation protocols, acquisition room requirements, and acquisition system requirements.
It makes an important contribution by providing a detailed survey and a new, efficient approach to breast thermogram capture.

Keywords: acquisition protocol, breast cancer, breast thermography, infrared thermography

Procedia PDF Downloads 388
3695 Frequent Pattern Mining for Digenic Human Traits

Authors: Atsuko Okazaki, Jurg Ott

Abstract:

Some genetic diseases (‘digenic traits’) are due to the interaction between two DNA variants. For example, certain forms of Retinitis Pigmentosa (a genetic form of blindness) occur in the presence of two mutant variants, one in the ROM1 gene and one in the RDS gene, while the occurrence of only one of these mutant variants leads to a completely normal phenotype. Detecting such digenic traits by genetic methods is difficult. A common approach to finding disease-causing variants is to compare 100,000s of variants between individuals with a trait (cases) and those without the trait (controls). Such genome-wide association studies (GWASs) have been very successful but hinge on genetic effects of single variants, that is, there should be a difference in allele or genotype frequencies between cases and controls at a disease-causing variant. Frequent pattern mining (FPM) methods offer an avenue for detecting digenic traits even in the absence of single-variant effects. The idea is to enumerate pairs of genotypes (genotype patterns), with each of the two genotypes originating from different variants that may be located at very different genomic positions. What is needed is for genotype patterns to be significantly more common in cases than in controls. Let Y = 2 refer to cases and Y = 1 to controls, with X denoting a specific genotype pattern. We are seeking association rules, ‘X → Y’, with high confidence, P(Y = 2|X), significantly higher than the proportion of cases, P(Y = 2), in the study. Clearly, generally available FPM methods are very suitable for detecting disease-associated genotype patterns. We used fpgrowth as the basic FPM algorithm and built a framework around it to enumerate high-frequency digenic genotype patterns and to evaluate their statistical significance by permutation analysis. Application to a published dataset on opioid dependence furnished results that could not be found with classical GWAS methodology.
There were 143 cases and 153 healthy controls, each genotyped for 82 variants in eight genes of the opioid system. The aim was to find out whether any of these variants were disease-associated. The single-variant analysis did not lead to significant results. Application of our FPM implementation resulted in one significant (p < 0.01) genotype pattern with both genotypes in the pattern being heterozygous and originating from two variants on different chromosomes. This pattern occurred in 14 cases and none of the controls. Thus, the pattern seems quite specific to this form of substance abuse and is also rather predictive of disease. An algorithm called Multifactor Dimension Reduction (MDR) was developed some 20 years ago and has been in use in human genetics ever since. This and our algorithms share some similar properties, but they are also very different in other respects. The main difference seems to be that our algorithm focuses on patterns of genotypes while the main object of inference in MDR is the 3 × 3 table of genotypes at two variants.
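The association-rule idea, scoring each two-variant genotype pattern X by its confidence P(Y = 2 | X), can be sketched in a few lines. The genotypes below are invented for illustration and have nothing to do with the opioid dataset (labels are coded 1 = case, 0 = control):

```python
from itertools import combinations

def pattern_confidence(samples, labels):
    """Enumerate two-variant genotype patterns and score each by its
    confidence, i.e. the fraction of carriers who are cases."""
    results = {}
    n_variants = len(samples[0])
    for i, j in combinations(range(n_variants), 2):
        counts = {}  # pattern -> [n_controls, n_cases]
        for geno, y in zip(samples, labels):
            key = (i, geno[i], j, geno[j])
            counts.setdefault(key, [0, 0])[y] += 1
        for key, (ctrl, case) in counts.items():
            results[key] = case / (ctrl + case)
    return results

# Toy data: genotypes coded 0/1/2 at three variants.
samples = [(1, 1, 0), (1, 1, 2), (0, 1, 0), (2, 0, 0)]
labels = [1, 1, 0, 0]
conf = pattern_confidence(samples, labels)
# Heterozygous at variants 0 and 1 occurs only in cases:
print(conf[(0, 1, 1, 1)])  # 1.0
```

A real analysis would add minimum-support filtering and a permutation test on the labels to assess significance, as the abstract describes.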

Keywords: digenic traits, DNA variants, epistasis, statistical genetics

Procedia PDF Downloads 112
3694 Accuracy of VCCT for Calculating Stress Intensity Factor in Metal Specimens Subjected to Bending Load

Authors: Sanjin Kršćanski, Josip Brnić

Abstract:

Virtual Crack Closure Technique (VCCT) is a method for calculating the stress intensity factor (SIF) of a cracked body that is easily implemented on top of basic finite element (FE) codes and as such can be applied to various component geometries. It is a relatively simple method that does not require any special finite elements and is usually used for calculating stress intensity factors at the crack tip for components made of brittle materials. This paper studies the applicability and accuracy of VCCT applied to standard metal specimens containing a through-thickness crack, subjected to an in-plane bending load. Finite element analyses were performed using regular 4-node, regular 8-node and modified quarter-point 8-node 2D elements. The stress intensity factor was calculated from the FE model results for a given crack length, using data available from the FE analysis and a custom-programmed algorithm based on the virtual crack closure technique. The influence of finite element size on the accuracy of the calculated SIF was also studied. The final part of this paper includes a comparison of the calculated stress intensity factors with results obtained from analytical expressions found in the available literature and in the ASTM standard. Results calculated by this VCCT-based algorithm were found to be in good correlation with the results obtained from the mentioned analytical expressions.
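The core SIF extraction can be illustrated with the textbook VCCT relation: the mode-I energy release rate is G_I = F_y·Δv / (2·B·Δa), from which K_I = sqrt(G_I·E′). The sketch below is a generic implementation of that standard formula with illustrative numbers, not the paper's FE data or its specific algorithm:

```python
import math

def vcct_mode_i_sif(F_y, delta_v, B, delta_a, E, plane_stress=True, nu=0.3):
    """Mode-I stress intensity factor from standard VCCT quantities:
    F_y      nodal force at the crack tip (N)
    delta_v  crack-opening displacement behind the tip (m)
    B        specimen thickness (m)
    delta_a  virtual crack extension, i.e. element length (m)
    """
    G_I = F_y * delta_v / (2.0 * B * delta_a)   # energy release rate, J/m^2
    E_eff = E if plane_stress else E / (1.0 - nu**2)
    return math.sqrt(G_I * E_eff)               # K_I in Pa*sqrt(m)

# Illustrative numbers: steel-like modulus, mm-scale crack-tip elements.
K_I = vcct_mode_i_sif(F_y=1000.0, delta_v=1e-5, B=0.01, delta_a=0.001, E=200e9)
print(K_I / 1e6)  # 10.0 MPa*sqrt(m)
```

In an FE post-processing loop, F_y and Δv would be read from the nodes at and behind the crack tip for each analyzed crack length.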

Keywords: VCCT, stress intensity factor, finite element analysis, 2D finite elements, bending

Procedia PDF Downloads 294
3693 Practice of Supply Chain Management in Local SMEs

Authors: Oualid Kherbach, Marian Liviu Mocan, Amine Ghoumrassi, Cristian Dumitrache

Abstract:

Globalization, economic development, e-business, and the introduction of new technologies create new challenges for all organizations, particularly for small and medium enterprises (SMEs). Many studies on supply chain management (SCM) focus on large companies with global operations employing advanced information technology. This leaves a gap in the knowledge of how SMEs use and practice supply chain management. In this context, successful SCM practices can give SMEs an edge over their competitors. However, SMEs in Romania and the Balkan countries face problems in SCM implementation and practice due to a lack of resources and direction. This research highlights the supply chain management practices of the small and medium enterprise sector in Romania and examines how SMEs manage and use SCM. The study checks for systematic differences between small and medium-sized businesses with regard to supply chain management practices and shows how the application of supply management has contributed to improving performance and increasing profitability, such as by increasing market share and improving client satisfaction.

Keywords: globalization, small and medium enterprises, supply chain management, practices

Procedia PDF Downloads 357
3692 Calpoly Autonomous Transportation Experience: Software for Driverless Vehicle Operating on Campus

Authors: F. Tang, S. Boskovich, A. Raheja, Z. Aliyazicioglu, S. Bhandari, N. Tsuchiya

Abstract:

Calpoly Autonomous Transportation Experience (CATE) is a driverless vehicle that we are developing to provide safe, accessible, and efficient transportation of passengers throughout the Cal Poly Pomona campus for events such as orientation tours. Unlike the other self-driving vehicles that are usually developed to operate with other vehicles and reside only on the road networks, CATE will operate exclusively on walk-paths of the campus (potentially narrow passages) with pedestrians traveling from multiple locations. Safety becomes paramount as CATE operates within the same environment as pedestrians. As driverless vehicles assume greater roles in today’s transportation, this project will contribute to autonomous driving with pedestrian traffic in a highly dynamic environment. The CATE project requires significant interdisciplinary work. Researchers from mechanical engineering, electrical engineering and computer science are working together to attack the problem from different perspectives (hardware, software and system). In this abstract, we describe the software aspects of the project, with a focus on the requirements and the major components. CATE shall provide a GUI interface for the average user to interact with the car and access its available functionalities, such as selecting a destination from any origin on campus. We have developed an interface that provides an aerial view of the campus map, the current car location, routes, and the goal location. Users can interact with CATE through audio or manual inputs. CATE shall plan routes from the origin to the selected destination for the vehicle to travel. We will use an existing aerial map for the campus and convert it to a spatial graph configuration where the vertices represent the landmarks and edges represent paths that the car should follow with some designated behaviors (such as stay on the right side of the lane or follow an edge). 
Graph search algorithms such as A* will be implemented as the default path planning algorithm. D* Lite will be explored to efficiently recompute the path when there are any changes to the map. CATE shall avoid any static obstacles and walking pedestrians within some safe distance. Unlike traveling along traditional roadways, CATE’s route directly coexists with pedestrians. To ensure the safety of the pedestrians, we will use sensor fusion techniques that combine data from both lidar and stereo vision for obstacle avoidance while also allowing CATE to operate along its intended route. We will also build prediction models for pedestrian traffic patterns. CATE shall improve its localization and operate even in GPS-denied situations. CATE relies on its GPS to give its current location, which has a precision of a few meters. We have implemented an Unscented Kalman Filter (UKF) that allows the fusion of data from multiple sensors (such as GPS, IMU, odometry) in order to increase the confidence of localization. We also noticed that GPS signals can easily get degraded or blocked on campus due to high-rise buildings or trees. The UKF can also help here to generate a better state estimate. In summary, CATE will provide an on-campus transportation experience that coexists with dynamic pedestrian traffic. In future work, we will extend it to multi-vehicle scenarios.
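A minimal version of the default planner can be sketched as plain A* over a landmark graph with a straight-line-distance heuristic. The campus graph, coordinates, and edge weights below are invented placeholders, not CATE's actual map:

```python
import heapq
import math

def a_star(graph, coords, start, goal):
    """A* over a spatial graph: vertices are landmarks, edge weights are
    path lengths, heuristic is straight-line distance to the goal."""
    def h(n):
        (x1, y1), (x2, y2) = coords[n], coords[goal]
        return math.hypot(x2 - x1, y2 - y1)
    # Entries: (f = g + h, cost so far g, node, path taken).
    open_set = [(h(start), 0.0, start, [start])]
    best = {}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path, g
        if g >= best.get(node, float('inf')):
            continue  # stale entry; a cheaper route was already expanded
        best[node] = g
        for nbr, w in graph.get(node, []):
            heapq.heappush(open_set, (g + w + h(nbr), g + w, nbr, path + [nbr]))
    return None, float('inf')

# Toy campus: four landmarks with edge weights >= straight-line distance.
coords = {'A': (0, 0), 'B': (1, 0), 'C': (1, 1), 'D': (2, 1)}
graph = {'A': [('B', 1.0)], 'B': [('C', 1.0), ('D', 1.5)],
         'C': [('D', 1.0)], 'D': []}
path, cost = a_star(graph, coords, 'A', 'D')
print(path, cost)  # ['A', 'B', 'D'] 2.5
```

D* Lite keeps essentially this search incremental, repairing only the affected part of the solution when edge costs change.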

Keywords: driverless vehicle, path planning, sensor fusion, state estimate

Procedia PDF Downloads 133
3691 Optimization of Multi Commodities Consumer Supply Chain: Part 1-Modelling

Authors: Zeinab Haji Abolhasani, Romeo Marian, Lee Luong

Abstract:

This paper and its companions (Part II, Part III) will concentrate on optimizing a class of supply chain problems known as the Multi-Commodities Consumer Supply Chain (MCCSC) problem. The MCCSC problem belongs to the production-distribution (P-D) planning category. It aims to determine facilities location, consumers’ allocation, and facilities configuration to minimize the total cost (CT) of the entire network. These facilities can be manufacturer units (MUs), distribution centres (DCs), and retailers/end-users (REs), but are not limited to them. To address this problem, three major tasks should be undertaken. First, a mixed-integer non-linear programming (MINLP) mathematical model is developed. Then, the system's behavior under different conditions is observed using a simulation modeling tool. Finally, the optimal solution (minimum CT) of the system is obtained using a multi-objective optimization technique. Due to the large size of the problem and the uncertainties in finding the optimal solution, an integration of modeling and simulation methodologies is proposed, followed by the development of a new approach known as GASG, a genetic algorithm based on granular simulation, which is the subject of this research's methodology. In Part II, MCCSC is simulated using discrete-event simulation (DES) within an integrated environment of SimEvents and Simulink of the MATLAB® software package, followed by a comprehensive case study to examine the given strategy. The effect of genetic operators on the optimal/near-optimal solution obtained by the simulation model will be discussed in Part III.
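As a rough sketch of the optimization layer, a bare-bones genetic algorithm is shown below. The granular-simulation fitness evaluation of GASG is replaced here by a plain cost function, and the chromosome encoding, operators, and parameter settings are all illustrative assumptions:

```python
import random

random.seed(1)  # deterministic toy run

def genetic_minimize(cost, n_genes, pop_size=30, generations=60, p_mut=0.1):
    """Minimal GA: binary chromosomes, elitism, selection from the
    fittest individuals, one-point crossover, bit-flip mutation."""
    pop = [[random.randint(0, 1) for _ in range(n_genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)          # ascending: best individuals first
        next_pop = pop[:2]          # elitism: keep the two best
        while len(next_pop) < pop_size:
            a, b = random.sample(pop[:10], 2)   # parents from the top 10
            cut = random.randrange(1, n_genes)  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < p_mut:         # occasional bit flip
                i = random.randrange(n_genes)
                child[i] ^= 1
            next_pop.append(child)
        pop = next_pop
    return min(pop, key=cost)

# Toy cost: number of ones, so the optimum is the all-zero chromosome.
best = genetic_minimize(sum, n_genes=12)
print(sum(best))
```

In GASG, `cost` would instead call the discrete-event simulation of the network to evaluate CT for a candidate facility configuration.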

Keywords: supply chain, genetic algorithm, optimization, simulation, discrete event system

Procedia PDF Downloads 306
3690 Analysis of Nonlinear Pulse Propagation Characteristics in Semiconductor Optical Amplifier for Different Input Pulse Shapes

Authors: Suchi Barua, Narottam Das, Sven Nordholm, Mohammad Razaghi

Abstract:

This paper presents nonlinear pulse propagation characteristics for different input optical pulse shapes at various input pulse energy levels in semiconductor optical amplifiers. For the simulation of nonlinear pulse propagation, the finite-difference beam propagation method is used to solve the nonlinear Schrödinger equation. In this equation, gain spectrum dynamics and gain saturation are taken into account, which depend on carrier depletion, carrier heating, spectral hole-burning, group velocity dispersion, self-phase modulation and two-photon absorption. From this analysis, we obtained the output waveforms and spectra for different input pulse shapes as well as for different input energies. The results show clearly that the peak position of the output waveforms is shifted toward the leading edge, which is due to gain saturation of the SOA at higher input pulse energies. We also analyzed and compared the normalized difference of full-width at half maximum for different input pulse shapes in the SOA.
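The leading-edge shift caused by gain saturation can be illustrated with a far cruder model than the paper's FD-BPM solver: give the pulse a gain that decays with the energy that has already passed, so the leading edge is amplified more than the trailing edge. All numbers below are arbitrary:

```python
import numpy as np

# Toy saturable-gain picture (not the paper's model): the gain seen at
# time t decreases with the cumulative pulse energy up to t.
t = np.linspace(-5, 5, 1001)
pulse = np.exp(-t**2)                         # Gaussian input intensity
E_sat = 2.0                                   # saturation energy (arbitrary)
g0 = 3.0                                      # small-signal gain (arbitrary)
cum_energy = np.cumsum(pulse) * (t[1] - t[0]) # energy already passed
gain = np.exp(g0 * np.exp(-cum_energy / E_sat))
out = pulse * gain

# Peak of the amplified pulse relative to the input peak at t = 0:
shift = t[np.argmax(out)] - t[np.argmax(pulse)]
print(shift < 0)  # True: the peak moves toward the leading edge
```

Because the gain is a monotonically decreasing function of time across the pulse, the output maximum necessarily occurs earlier than the input maximum, reproducing the qualitative effect reported in the abstract.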

Keywords: finite-difference beam propagation method, pulse shape, pulse propagation, semiconductor optical amplifier

Procedia PDF Downloads 596
3689 Bounds on the Laplacian Vertex PI Energy

Authors: Ezgi Kaya, A. Dilek Maden

Abstract:

A topological index is a number related to a graph which is invariant under graph isomorphism. In theoretical chemistry, molecular structure descriptors (also called topological indices) are used for modeling physicochemical, pharmacologic, toxicologic, biological and other properties of chemical compounds. Let G be a graph with n vertices and m edges. For a given edge uv, the quantity nu(e) denotes the number of vertices closer to u than to v; the quantity nv(e) is defined analogously. The vertex PI index is defined as the sum of nu(e) and nv(e), taken over all edges of G. The energy of a graph is defined as the sum of the absolute values of the eigenvalues of the adjacency matrix of G, and the Laplacian energy of a graph is defined as the sum of the absolute values of the differences between the Laplacian eigenvalues and the average degree of G. In theoretical chemistry, the π-electron energy of a conjugated carbon molecule, computed using the Hückel theory, coincides with the graph energy. Hence results on graph energy assume special significance. The Laplacian matrix of a graph G weighted by the vertex PI weighting is the Laplacian vertex PI matrix, and the Laplacian vertex PI eigenvalues of a connected graph G are the eigenvalues of its Laplacian vertex PI matrix. In this study, the Laplacian vertex PI energy of a graph G is defined. We also give some bounds for the Laplacian vertex PI energy of graphs in terms of the vertex PI index, the sum of the squares of the entries in the Laplacian vertex PI matrix, and the absolute value of the determinant of the Laplacian vertex PI matrix.
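The quantities nu(e) and nv(e) can be computed directly from BFS distances. The sketch below evaluates the vertex PI index on a 3-vertex path as a sanity check; the graph is a toy example, not one from the study:

```python
from collections import deque

def bfs_dist(adj, src):
    """Shortest-path distances from src in an unweighted graph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def vertex_pi_index(adj, edges):
    """Vertex PI index: sum over edges uv of n_u(e) + n_v(e), where
    n_u(e) counts vertices strictly closer to u than to v."""
    dists = {v: bfs_dist(adj, v) for v in adj}
    total = 0
    for u, v in edges:
        nu = sum(1 for w in adj if dists[u][w] < dists[v][w])
        nv = sum(1 for w in adj if dists[v][w] < dists[u][w])
        total += nu + nv
    return total

# Path graph P3 (1-2-3): in a tree no vertex is equidistant from the
# endpoints of an edge, so each edge contributes n = 3; total = 6.
adj = {1: [2], 2: [1, 3], 3: [2]}
edges = [(1, 2), (2, 3)]
pi = vertex_pi_index(adj, edges)
print(pi)  # 6
```

The Laplacian vertex PI matrix of the paper would then be built by weighting each edge with its nu(e) + nv(e) contribution before forming the Laplacian.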

Keywords: energy, Laplacian energy, laplacian vertex PI eigenvalues, Laplacian vertex PI energy, vertex PI index

Procedia PDF Downloads 232
3688 Utilizing Artificial Intelligence to Predict Post Operative Atrial Fibrillation in Non-Cardiac Transplant

Authors: Alexander Heckman, Rohan Goswami, Zachi Attia, Paul Friedman, Peter Noseworthy, Demilade Adedinsewo, Pablo Moreno-Franco, Rickey Carter, Tathagat Narula

Abstract:

Background: Postoperative atrial fibrillation (POAF) is associated with adverse health consequences, higher costs, and longer hospital stays. Utilizing existing predictive models that rely on clinical variables and circulating biomarkers, multiple societies have published recommendations on the treatment and prevention of POAF. Although reasonably practical, there is room for improvement and automation to help individualize treatment strategies and reduce associated complications. Methods and Results: In this retrospective cohort study of solid organ transplant recipients, we evaluated the diagnostic utility of a previously developed AI-based ECG prediction for silent AF on the development of POAF within 30 days of transplant. A total of 2261 non-cardiac transplant patients without a preexisting diagnosis of AF were found to have a 5.8% (133/2261) incidence of POAF. While there were no apparent sex differences in POAF incidence (5.8% males vs. 6.0% females, p=.80), there were differences by race and ethnicity (p<0.001 and 0.035, respectively). The incidence in white transplanted patients was 7.2% (117/1628), whereas the incidence in black patients was 1.4% (6/430). Lung transplant recipients had the highest incidence of postoperative AF (17.4%, 37/213), followed by liver (5.6%, 56/1002) and kidney (3.6%, 32/895) recipients. The AUROC in the sample was 0.62 (95% CI: 0.58-0.67). The relatively low discrimination may result from undiagnosed AF in the sample. In particular, 1,177 patients had at least 1 AI-ECG screen for AF pre-transplant above .10, a value slightly higher than the published threshold of 0.08. The incidence of POAF in the 1104 patients without an elevated prediction pre-transplant was lower (3.7% vs. 8.0%; p<0.001). While this supported the hypothesis that potentially undiagnosed AF may have contributed to the diagnosis of POAF, the utility of the existing AI-ECG screening algorithm remained modest. 
When the prediction for POAF was made using the first postoperative ECG in the sample without an elevated screen pre-transplant (n=1084, on account of 20 missing postoperative ECGs), the AUROC was 0.66 (95% CI: 0.57-0.75). While this discrimination is relatively low, at a threshold of 0.08, the AI-ECG algorithm had a 98% (95% CI: 97 – 99%) negative predictive value at a sensitivity of 66% (95% CI: 49-80%). Conclusions: This study's principal finding is that the incidence of POAF is rare, and a considerable fraction of the POAF cases may be latent and undiagnosed. The high negative predictive value of AI-ECG screening suggests utility for prioritizing monitoring and evaluation of transplant patients with a positive AI-ECG screen. Development and refinement of a post-transplant-specific algorithm may be warranted to further enhance the diagnostic yield of ECG-based screening.
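The negative-predictive-value arithmetic behind that conclusion is simple to reproduce. The sketch below applies a 0.08 threshold (the published cut-off mentioned in the abstract) to invented scores and outcomes, not the study's data:

```python
def screening_metrics(scores, outcomes, threshold=0.08):
    """NPV and sensitivity of a probability screen at a fixed threshold.
    outcomes: 1 if the patient developed POAF, else 0."""
    tp = sum(1 for s, y in zip(scores, outcomes) if s >= threshold and y)
    fp = sum(1 for s, y in zip(scores, outcomes) if s >= threshold and not y)
    fn = sum(1 for s, y in zip(scores, outcomes) if s < threshold and y)
    tn = sum(1 for s, y in zip(scores, outcomes) if s < threshold and not y)
    npv = tn / (tn + fn)            # fraction of negative screens that are truly negative
    sensitivity = tp / (tp + fn)    # fraction of true cases the screen catches
    return npv, sensitivity

# Illustrative scores/outcomes only:
scores = [0.02, 0.12, 0.05, 0.30, 0.01, 0.09]
outcomes = [0, 1, 0, 1, 0, 0]
npv, sens = screening_metrics(scores, outcomes)
print(npv, sens)  # 1.0 1.0
```

A high NPV with modest sensitivity, as the study reports, means a negative screen is reassuring even though some true cases are missed above the threshold.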

Keywords: artificial intelligence, atrial fibrillation, cardiology, transplant, medicine, ECG, machine learning

Procedia PDF Downloads 119
3687 Feature Selection of Personal Authentication Based on EEG Signal for K-Means Cluster Analysis Using Silhouettes Score

Authors: Jianfeng Hu

Abstract:

Personal authentication based on electroencephalography (EEG) signals is an important field in biometric technology. More and more researchers have used EEG signals as a data source for biometrics. However, biometrics based on EEG signals also have some disadvantages. The proposed method employs entropy measures for feature extraction from EEG signals. Four types of entropy measures were deployed as the feature set: sample entropy (SE), fuzzy entropy (FE), approximate entropy (AE) and spectral entropy (PE). In a silhouette calculation, the distances from each data point in a cluster to every other point within the same cluster, and to all data points in the closest other cluster, are determined. Silhouettes thus provide a measure of how well a data point was classified when it was assigned to a cluster, and of the separation between clusters. This renders silhouettes potentially well suited for assessing cluster quality in personal authentication methods. In this study, silhouette scores were used to assess the cluster quality of the k-means clustering algorithm and to compare the performance of each EEG dataset. The main goals of this study are: (1) to represent each target as a tuple of multiple feature sets, (2) to assign a suitable measure to each feature set, (3) to combine different feature sets, (4) to determine the optimal feature weighting. Using precision/recall evaluations, the effectiveness of feature weighting in clustering was analyzed. EEG data from 22 subjects were collected. Results showed that: (1) It is possible to use fewer electrodes (3-4) for personal authentication. (2) There were differences between electrodes for personal authentication (p<0.01). (3) There is no significant difference in authentication performance among feature sets (except feature PE).
Conclusion: The combination of the k-means clustering algorithm and the silhouette approach proved to be an accurate method for personal authentication based on EEG signals.
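A toy version of the silhouette computation described above (mean same-cluster distance a, mean nearest-other-cluster distance b, score (b − a)/max(a, b)) is sketched below on invented 2-D points rather than EEG entropy features:

```python
import math

def silhouette_scores(points, labels):
    """Per-point silhouette scores (Euclidean distance).
    Assumes every cluster contains at least two points."""
    clusters = {}
    for p, l in zip(points, labels):
        clusters.setdefault(l, []).append(p)
    scores = []
    for p, l in zip(points, labels):
        same = [q for q in clusters[l] if q is not p]
        a = sum(math.dist(p, q) for q in same) / len(same)
        b = min(sum(math.dist(p, q) for q in clusters[m]) / len(clusters[m])
                for m in clusters if m != l)
        scores.append((b - a) / max(a, b))
    return scores

# Two tight, well-separated clusters -> mean silhouette close to 1.
points = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
labels = [0, 0, 1, 1]
mean_s = sum(silhouette_scores(points, labels)) / len(points)
print(round(mean_s, 3))
```

In the study's setting, `points` would be the per-subject entropy feature tuples and `labels` the k-means cluster assignments, so the mean silhouette directly compares feature sets and weightings.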

Keywords: personal authentication, k-means clustering, electroencephalogram, EEG, silhouettes

Procedia PDF Downloads 270
3686 Forecast Based on an Empirical Probability Function with an Adjusted Error Using Propagation of Error

Authors: Oscar Javier Herrera, Manuel Angel Camacho

Abstract:

This paper addresses a cutting-edge method of business demand forecasting based on an empirical probability function, applicable when the historical behavior of the data is random. Additionally, it presents error determination based on the numerical technique of propagation of errors. The methodology began with a characterization and diagnosis of the demand-planning process as part of production management; new ways to predict demand through probability techniques, and to calculate the associated error, were then investigated using numerical methods, all based on the behavior of the data. The analysis considered the specific business circumstances of a company in the communications sector, located in the city of Bogota, Colombia. In conclusion, this application made it possible to obtain the adequate stock of the products required by the company to provide its services, helping the company reduce its service time, increase the client satisfaction rate, reduce stock that had not been in rotation for a long time, code its inventory, and plan reorder points for the replenishment of stock.
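The propagation-of-errors technique referred to can be illustrated for a product of two measured quantities, where the first-order rule gives sigma_f = |f|·sqrt((sx/x)² + (sy/y)²). The numbers below are illustrative, not the company's demand data:

```python
import math

def propagate_product(x, sx, y, sy):
    """First-order propagation of error for f = x * y:
    relative uncertainties add in quadrature."""
    f = x * y
    sf = abs(f) * math.sqrt((sx / x) ** 2 + (sy / y) ** 2)
    return f, sf

# Example: x = 10.0 +/- 0.1, y = 4.0 +/- 0.2
f, sf = propagate_product(10.0, 0.1, 4.0, 0.2)
print(f, round(sf, 3))  # 40.0 2.04
```

The same quadrature rule extends term by term to any differentiable function of the forecast inputs, which is how the adjusted forecast error in the paper's sense would be assembled.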

Keywords: demand forecasting, empirical distribution, propagation of error, Bogota

Procedia PDF Downloads 621
3685 An Advanced Approach to Detect and Enumerate Soil-Transmitted Helminth Ova from Wastewater

Authors: Vivek B. Ravindran, Aravind Surapaneni, Rebecca Traub, Sarvesh K. Soni, Andrew S. Ball

Abstract:

Parasitic diseases have a devastating, long-term impact on human health and welfare. More than two billion people are infected with soil-transmitted helminths (STHs), including the roundworms (Ascaris), hookworms (Necator and Ancylostoma) and whipworm (Trichuris), with the majority occurring in the tropical and subtropical regions of the world. Despite their low prevalence in developed countries, the removal of STHs from wastewater remains crucial to allow the safe use of sludge or recycled water in agriculture. Conventional methods such as incubation and optical microscopy are cumbersome; consequently, the results vary drastically from person to person when observing the ova (eggs) under a microscope. Although PCR-based methods are an alternative to conventional techniques, they lack the ability to distinguish between viable and non-viable helminth ova. As a result, the wastewater treatment industry is in major need of radically new and innovative tools to detect and quantify STH eggs with precision, accuracy and cost-effectiveness. In our study, we focus on the following novel and innovative techniques:
- Recombinase polymerase amplification combined with surface-enhanced Raman spectroscopy (RPA-SERS) for detection of helminth ova.
- Use of metal nanoparticles and their relative nanozyme activity.
- Colorimetric detection, differentiation and enumeration of genera of helminth ova using hydrolytic enzymes (chitinase and lipase).
- Propidium monoazide (PMA)-qPCR to detect viable helminth ova.
- A modified assay to recover and enumerate helminth eggs from fresh raw sewage.
- Transcriptome analysis of Ascaris ova in fresh raw sewage.
The aforementioned techniques have the potential to replace current conventional and molecular methods, thereby producing a standard protocol for the determination and enumeration of helminth ova in sewage sludge.

Keywords: colorimetry, helminth, PMA-QPCR, nanoparticles, RPA, viable

Procedia PDF Downloads 292