Search results for: edge detection algorithm

4044 Enhancing Throughput for Wireless Multihop Networks

Authors: K. Kalaiarasan, B. Pandeeswari, A. Arockia John Francis

Abstract:

Wireless multi-hop networks consist of one or more intermediate nodes along the path that receive and forward packets via wireless links. The backpressure algorithm provides throughput-optimal routing and scheduling decisions for multi-hop networks with dynamic traffic. Xpress, a cross-layer backpressure architecture, was designed to reach the capacity of wireless multi-hop networks; it provides close coordination between network layers by turning a mesh network into a wireless switch. Transmission over the network is scheduled using a throughput-optimal backpressure algorithm. However, this architecture operates well below capacity due to out-of-order packet delivery and variable packet sizes. In this paper, we present Xpress-T, a throughput-optimal backpressure architecture with TCP support designed to reach the maximum throughput of wireless multi-hop networks. Xpress-T operates at the IP layer, and therefore any transport protocol, including TCP, can run on top of it. The proposed design not only avoids bottlenecks but also handles out-of-order packet delivery and variable packet sizes, optimally load-balancing traffic across paths when needed and improving fairness among competing flows. Our simulation results show that Xpress-T delivers 65% more throughput than Xpress.
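
The scheduling core of both architectures is the differential-backlog (backpressure) rule. A minimal sketch of that rule is given below; the node queues, links, and link rates are illustrative assumptions, not the Xpress-T implementation.

```python
# Minimal sketch of backpressure (differential-backlog) link scheduling.
# Queues, links and rates below are illustrative, not the Xpress-T code.

def backpressure_schedule(queues, links, rates):
    """Pick, for each link, the flow with the largest backlog differential
    and activate the link only if that differential is positive."""
    decisions = {}
    for (u, v) in links:
        # weight of link (u, v) = max over flows of Q_u[f] - Q_v[f]
        best_flow, best_diff = None, 0
        for f in queues[u]:
            diff = queues[u][f] - queues[v].get(f, 0)
            if diff > best_diff:
                best_flow, best_diff = f, diff
        if best_flow is not None:
            decisions[(u, v)] = (best_flow, best_diff * rates[(u, v)])
    return decisions

# toy example: per-flow queues at three nodes, two outgoing links
queues = {"a": {"f1": 8, "f2": 3}, "b": {"f1": 2}, "c": {"f2": 1}}
links = [("a", "b"), ("a", "c")]
rates = {("a", "b"): 1.0, ("a", "c"): 1.0}
print(backpressure_schedule(queues, links, rates))
```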

Keywords: backpressure scheduling and routing, TCP, congestion control, wireless multihop network

Procedia PDF Downloads 511
4043 Automatic Adjustment of Thresholds via Closed-Loop Feedback Mechanism for Solder Paste Inspection

Authors: Chia-Chen Wei, Pack Hsieh, Jeffrey Chen

Abstract:

Surface mount technology (SMT) is widely used in electronic assembly, in which electronic components are mounted onto the surface of the printed circuit board (PCB). Most of the defects in the SMT process are related to the quality of solder paste printing, and these defects lead to considerable manufacturing costs in the electronics assembly industry. Therefore, the solder paste inspection (SPI) machine for controlling and monitoring the amount of solder paste printed has become an important part of the production process. So far, SPI thresholds have been set using statistical analysis and experts' experience. Because the production data are not normally distributed and the production processes exhibit various sources of variation, defects related to solder paste printing still occur. To solve this problem, this paper proposes an online machine learning algorithm, called the automatic threshold adjustment (ATA) algorithm, together with a closed-loop architecture in the SMT process to determine the best threshold settings. Simulation experiments show that the proposed threshold settings improve the accuracy from 99.85% to 100%.
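
The abstract does not specify the ATA update rule, so the sketch below only illustrates the closed-loop idea: thresholds on solder-paste volume are nudged using feedback from downstream inspection and the recent data distribution. The percentile rule, step size, and window are assumptions, not the authors' algorithm.

```python
import numpy as np

# Hedged sketch of a closed-loop threshold update for SPI volume readings.
# The percentile-based rule and window size are illustrative assumptions,
# not the ATA algorithm from the paper.
def update_thresholds(recent_volumes, confirmed_defect, lower, upper, step=0.5):
    """Tighten limits after an escaped defect, relax them slowly otherwise,
    keeping them anchored to the recent distribution of solder-paste volume (%)."""
    p_low, p_high = np.percentile(recent_volumes, [1, 99])
    if confirmed_defect:                        # feedback from downstream inspection
        lower = max(lower + step, p_low)        # tighten toward the data
        upper = min(upper - step, p_high)
    else:
        lower = min(lower, p_low) - 0.1 * step  # relax gently to cut false calls
        upper = max(upper, p_high) + 0.1 * step
    return lower, upper

window = np.random.normal(100, 8, size=500)     # simulated volume readings (%)
print(update_thresholds(window, confirmed_defect=True, lower=70, upper=130))
```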

Keywords: big data analytics, Industry 4.0, SPI threshold setting, surface mount technology

Procedia PDF Downloads 106
4042 Proposal Method of Prediction of the Early Stages of Dementia Using IoT and Magnet Sensors

Authors: João Filipe Papel, Tatsuji Munaka

Abstract:

With society aging and the number of elderly people with dementia rising, researchers have been actively studying how to support the elderly in the early stages of dementia, with the objective of giving them a better quality of life and as much independence as possible. To make this possible, most researchers in this field use the Internet of Things (IoT) to monitor the elderly's activities and assist them in performing them. The most common sensor used to monitor these activities is the camera, due to its easy installation and configuration; another commonly used sensor is the sound sensor. However, privacy needs to be considered when using these sensors. This research aims to develop a system capable of predicting the early stages of dementia based on monitoring and controlling the elderly's activities of daily living. To make this system possible, some issues need to be addressed. The first is the privacy of the elderly when detecting and monitoring their activities of daily living, which is a serious concern; one of the purposes of this research is to achieve this detection and monitoring without putting the privacy of the elderly at risk. To make this possible, the study focuses on an approach based on magnet sensors that collect binary data. The second is to use the data collected by monitoring activities of daily living to predict the early stages of dementia. To make this possible, the research team suggests developing a proprietary ontology combined with both data-driven and knowledge-driven approaches.

Keywords: dementia, activity recognition, magnet sensors, ontology, data driven and knowledge driven, IoT, activities of daily living

Procedia PDF Downloads 90
4041 A Stochastic Vehicle Routing Problem with Ordered Customers and Collection of Two Similar Products

Authors: Epaminondas G. Kyriakidis, Theodosis D. Dimitrakos, Constantinos C. Karamatsoukis

Abstract:

The vehicle routing problem (VRP) is a well-known problem in operations research and has been widely studied during the last fifty-five years. The context of the VRP is that of delivering or collecting products to or from customers who are scattered in a geographical area and have placed orders for these products. A vehicle or a fleet of vehicles start their routes from a depot and visit the customers in order to satisfy their demands. Special attention has been given to the capacitated VRP, in which the vehicles have limited carrying capacity for the goods that are delivered or collected. In the present work, we present a specific capacitated stochastic vehicle routing problem which has many realistic applications. We develop and analyze a mathematical model for a specific vehicle routing problem in which a vehicle starts its route from a depot and visits N customers according to a particular sequence in order to collect from them two similar but not identical products. We name these products product 1 and product 2. Each customer possesses items of either product 1 or product 2 with known probabilities. The number of items of product 1 or product 2 that each customer possesses is a discrete random variable with known distribution. The actual quantity and the actual type of product that each customer possesses are revealed only when the vehicle arrives at the customer's site. It is assumed that the vehicle has two compartments, named compartment 1 and compartment 2, where compartment 1 is suitable for loading product 1 and compartment 2 is suitable for loading product 2. However, it is permitted to load items of product 1 into compartment 2 and items of product 2 into compartment 1; these actions cause costs that are due to extra labor. The vehicle is allowed during its route to return to the depot to unload the items of both products. The travel costs between consecutive customers and the travel costs between the customers and the depot are known. The objective is to find the optimal routing strategy, i.e., the routing strategy that minimizes the total expected cost among all possible strategies for servicing all customers. It is possible to develop a suitable dynamic programming algorithm for the determination of the optimal routing strategy. It is also possible to prove that the optimal routing strategy has a specific threshold-type structure; specifically, it is shown that for each customer the optimal actions are characterized by some critical integers. This structural result enables us to design a special-purpose dynamic programming algorithm that operates only over the strategies having this structural property. Extensive numerical results provide strong evidence that the special-purpose dynamic programming algorithm is considerably more efficient than the initial dynamic programming algorithm. Furthermore, if we consider the same problem without the assumption that the customers are ordered, numerical experiments indicate that the optimal routing strategy can be computed if N is smaller than or equal to eight.
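
To make the flavor of the dynamic program concrete, here is a greatly simplified single-product sketch: customers are visited in a fixed order, each demand is revealed on arrival, and after each customer the vehicle either proceeds directly or detours via the depot to unload. The capacity, demand distributions, and travel costs are illustrative assumptions, and the two-product/two-compartment structure of the actual model is omitted.

```python
import functools

# Greatly simplified single-product sketch of the stochastic collection VRP with
# ordered customers; all costs and distributions below are illustrative.
Q = 3                                   # vehicle capacity
demand_pmf = [{1: 0.5, 2: 0.5}] * 4     # demand_pmf[j] = distribution of customer j+1
c_next = [1.0, 1.0, 1.0, 1.0, 2.0]      # travel cost j -> j+1 (last entry: N -> depot)
c_depot = [0.0, 2.0, 2.0, 2.0, 2.0]     # travel cost customer j <-> depot (hypothetical)
N = len(demand_pmf)

@functools.lru_cache(maxsize=None)
def f(j, load):
    """Minimal expected remaining cost after serving customer j with the given load."""
    if j == N:
        return c_next[N]                # return to the depot at the end of the route
    def arrive(load_on_arrival):
        # expected cost of serving customer j+1 given the load on arrival
        exp = 0.0
        for d, p in demand_pmf[j].items():
            if load_on_arrival + d <= Q:
                exp += p * f(j + 1, load_on_arrival + d)
            else:                       # fill up, unload at the depot, collect the rest
                overflow = load_on_arrival + d - Q
                exp += p * (2 * c_depot[j + 1] + f(j + 1, overflow))
        return exp
    direct = c_next[j] + arrive(load)
    via_depot = c_depot[j] + c_depot[j + 1] + arrive(0)   # detour to unload first
    return min(direct, via_depot)

print("expected optimal cost:", round(f(0, 0), 3))
```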

Keywords: dynamic programming, similar products, stochastic demands, stochastic preferences, vehicle routing problem

Procedia PDF Downloads 246
4040 CLOUD Japan: Prospective Multi-Hospital Study to Determine the Population-Based Incidence of Hospitalized Clostridium difficile Infections

Authors: Kazuhiro Tateda, Elisa Gonzalez, Shuhei Ito, Kirstin Heinrich, Kevin Sweetland, Pingping Zhang, Catia Ferreira, Michael Pride, Jennifer Moisi, Sharon Gray, Bennett Lee, Fred Angulo

Abstract:

Clostridium difficile (C. difficile) is the most common cause of antibiotic-associated diarrhea and infectious diarrhea in healthcare settings. Japan has an aging population; the elderly are at increased risk of hospitalization, antibiotic use, and C. difficile infection (CDI). Little is known about the population-based incidence and disease burden of CDI in Japan, although limited hospital-based studies have reported a lower incidence than in the United States. To understand the CDI disease burden in Japan, CLOUD (Clostridium difficile Infection Burden of Disease in Adults in Japan) was developed. CLOUD will derive population-based incidence estimates of the number of CDI cases per 100,000 population per year in Ota-ku (population 723,341), one of the districts of Tokyo, Japan. CLOUD will include approximately 14 of the 28 Ota-ku hospitals, including Toho University Hospital, which is a 1,000-bed tertiary care teaching hospital. During the 12-month patient enrollment period, which is scheduled to begin in November 2018, Ota-ku residents >50 years of age who are hospitalized at a participating hospital with diarrhea (>3 unformed stools (Bristol Stool Chart 5-7) in 24 hours) will be actively ascertained, consented, and enrolled by study surveillance staff. A stool specimen will be collected from enrolled patients and tested at a local reference laboratory (LSI Medience, Tokyo) using QUIK CHEK COMPLETE® (Abbott Laboratories), which simultaneously tests specimens for the presence of glutamate dehydrogenase (GDH) and C. difficile toxins A and B. A frozen stool specimen will also be sent to the Pfizer Laboratory (Pearl River, United States) for analysis using a two-step diagnostic testing algorithm that is based on detection of C. difficile strains/spores harboring the toxin B gene by PCR, followed by detection of free toxins (A and B) using a proprietary cell cytotoxicity neutralization assay (CCNA) developed by Pfizer. Positive specimens will be anaerobically cultured, and C. difficile isolates will be characterized by ribotyping and whole-genome sequencing. CDI patients enrolled in CLOUD will be contacted weekly for 90 days following diarrhea onset to describe clinical outcomes, including recurrence, reinfection, and mortality, and patient-reported economic, clinical and humanistic outcomes (e.g., health-related quality of life, worsening of comorbidities, and patient and caregiver work absenteeism). Studies will also be undertaken to fully characterize the catchment area to enable population-based estimates. The 12-month active ascertainment of CDI cases among hospitalized Ota-ku residents with diarrhea in CLOUD, and the characterization of the Ota-ku catchment area, including estimation of the proportion of all hospitalizations of Ota-ku residents that occur in the CLOUD-participating hospitals, will yield CDI population-based incidence estimates, which can be stratified by age group, risk group, and source (hospital-acquired or community-acquired). These incidence estimates will be extrapolated, following age standardization using national census data, to yield CDI disease burden estimates for Japan. CLOUD also serves as a model for studies in other countries that can use the CLOUD protocol to estimate CDI disease burden.

Keywords: Clostridium difficile, disease burden, epidemiology, study protocol

Procedia PDF Downloads 250
4039 Diagnosis of Induction Machine Faults by DWT

Authors: Hamidreza Akbari

Abstract:

In this paper, for the detection of inclined eccentricity in an induction motor, a time-frequency analysis of the stator startup current is carried out using the discrete wavelet transform. Data are obtained from simulations using the winding function approach. The results show the validity of the approach for detecting the fault and discriminating it from other faults.
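
As a hedged illustration of the signal-processing step, the sketch below decomposes a simulated startup current with PyWavelets and reports the energy of each detail band; the simulated signal, wavelet choice, and decomposition level are assumptions, not the paper's simulation data.

```python
import numpy as np
import pywt  # PyWavelets

# Hedged sketch: multilevel discrete wavelet decomposition of a simulated
# stator startup current; signal, wavelet and level are illustrative choices.
fs = 2000.0
t = np.arange(0, 1.0, 1 / fs)
current = np.exp(-3 * t) * np.sin(2 * np.pi * 50 * t) + 0.05 * np.random.randn(t.size)

coeffs = pywt.wavedec(current, wavelet="db8", level=5)   # [cA5, cD5, ..., cD1]
for k, d in enumerate(coeffs[1:], start=1):
    print(f"detail level D{6 - k}: energy = {np.sum(d**2):.4f}")
```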

Keywords: induction machine, fault, DWT, electric

Procedia PDF Downloads 341
4038 Classification of Myoelectric Signals Using Multilayer Perceptron Neural Network with Back-Propagation Algorithm in a Wireless Surface Myoelectric Prosthesis of the Upper-Limb

Authors: Kevin D. Manalo, Jumelyn L. Torres, Noel B. Linsangan

Abstract:

This paper focuses on a wireless myoelectric prosthesis of the upper limb that uses a multilayer perceptron neural network with back-propagation, an algorithm widely used in pattern recognition. The network can be trained on sample signals so that it can later perform a function on its own from new inputs. The paper uses the neural network to classify the electromyography (EMG) signal produced by the muscles at the amputee's skin surface. The gathered data are passed to the classification stage wirelessly using Zigbee technology, where the signal is classified and the network is trained to produce the arm positions of the prosthesis. The EMG signals are acquired and prepared for classification using a field programmable gate array (FPGA) programmed in Verilog and interfaced with Zigbee, and the classified signal is used to produce the corresponding hand movements (open, pick, hold, and grip) through the Zigbee controller. The data are then processed by the MLP neural network in MATLAB, which is then used for the surface myoelectric prosthesis. A Z-test is used to evaluate the output obtained from the neural network.
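
A hedged sketch of the classification stage is given below using scikit-learn's MLP (trained with back-propagation); the synthetic feature vectors, layer sizes, and the mapping to the four hand movements are illustrative assumptions, so the printed accuracy is only a placeholder.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Hedged sketch of an MLP classifier for EMG feature vectors; the random data and
# four class labels (open, pick, hold, grip) are illustrative assumptions.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 8))                      # e.g. RMS/MAV features per channel
y = rng.integers(0, 4, size=400)                   # 0=open, 1=pick, 2=hold, 3=grip

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(16, 8), solver="adam", max_iter=500)
mlp.fit(X_tr, y_tr)
print("test accuracy (placeholder, random labels):", mlp.score(X_te, y_te))
```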

Keywords: field programmable gate array, multilayer perceptron neural network, verilog, zigbee

Procedia PDF Downloads 382
4037 Waveguiding in an InAs Quantum Dots Nanomaterial for Scintillation Applications

Authors: Katherine Dropiewski, Michael Yakimov, Vadim Tokranov, Allan Minns, Pavel Murat, Serge Oktyabrsky

Abstract:

InAs Quantum Dots (QDs) in a GaAs matrix are a well-documented luminescent material with high light yield, as well as thermal and ionizing radiation tolerance due to quantum confinement. These benefits can be leveraged for high-efficiency, room temperature scintillation detectors. The proposed scintillator is composed of InAs QDs acting as luminescence centers in a GaAs stopping medium, which also acts as a waveguide. This system has appealing potential properties, including high light yield (~240,000 photons/MeV) and fast capture of photoelectrons (2-5 ps), orders of magnitude better than currently used inorganic scintillators, such as LYSO or BaF2. The high refractive index of the GaAs matrix (n=3.4) ensures light emitted by the QDs is waveguided, which can be collected by an integrated photodiode (PD). Scintillation structures were grown using Molecular Beam Epitaxy (MBE) and consist of thick GaAs waveguiding layers with embedded sheets of modulation p-type doped InAs QDs. An AlAs sacrificial layer is grown between the waveguide and the GaAs substrate for epitaxial lift-off to separate the scintillator film and transfer it to a low-index substrate for waveguiding measurements. One consideration when using a low-density material like GaAs (~5.32 g/cm³) as a stopping medium is the matrix thickness in the dimension of radiation collection. Therefore, luminescence properties of very thick (4-20 microns) waveguides with up to 100 QD layers were studied. The optimization of the medium included QD shape, density, doping, and AlGaAs barriers at the waveguide surfaces to prevent non-radiative recombination. To characterize the efficiency of QD luminescence, low temperature photoluminescence (PL) (77-450 K) was measured and fitted using a kinetic model. The PL intensity degrades by only 40% at RT, with an activation energy for electron escape from QDs to the barrier of ~60 meV. Attenuation within the waveguide (WG) is a limiting factor for the lateral size of a scintillation detector, so PL spectroscopy in the waveguiding configuration was studied. Spectra were measured while the laser (630 nm) excitation point was scanned away from the collecting fiber coupled to the edge of the WG. The QD ground state PL peak at 1.04 eV (1190 nm) was inhomogeneously broadened with FWHM of 28 meV (33 nm) and showed a distinct red-shift due to self-absorption in the QDs. Attenuation stabilized after traveling over 1 mm through the WG, at about 3 cm⁻¹. Finally, a scintillator sample was used to test detection and evaluate timing characteristics using 5.5 MeV alpha particles. With a 2D waveguide and a small area of integrated PD, the collected charge averaged 8.4 × 10⁴ electrons, corresponding to a collection efficiency of about 7%. The scintillation response had 80 ps noise-limited time resolution and a QD decay time of 0.6 ns. The data confirm the unique properties of this scintillation detector, which can potentially be much faster than any currently used inorganic scintillator.
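
The temperature dependence quoted above is commonly captured by a single-barrier thermal-quenching model; the sketch below fits such a model to synthetic PL data generated with the ~60 meV activation energy mentioned in the abstract. The functional form, prefactor, and noise level are illustrative assumptions, not the authors' kinetic model.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hedged sketch of a thermal-quenching fit for temperature-dependent PL intensity;
# the synthetic data assume Ea ~ 60 meV as quoted in the abstract.
kB = 8.617e-5                                        # Boltzmann constant, eV/K

def quenching(T, I0, A, Ea):
    return I0 / (1.0 + A * np.exp(-Ea / (kB * T)))

T = np.linspace(77, 450, 20)
I = quenching(T, 1.0, 7.0, 0.060) * (1 + 0.02 * np.random.randn(T.size))
popt, _ = curve_fit(quenching, T, I, p0=(1.0, 5.0, 0.05))
print(f"fitted activation energy = {popt[2] * 1e3:.0f} meV")
```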

Keywords: GaAs, InAs, molecular beam epitaxy, quantum dots, III-V semiconductor

Procedia PDF Downloads 246
4036 Diagnosis of Choledocholithiasis with Endosonography

Authors: A. Kachmazova, A. Shadiev, Y. Teterin, P. Yartcev

Abstract:

Introduction: Biliary calculi (gallstone) disease still occupies the leading position among urgent diseases of the abdominal cavity, manifesting itself in forms ranging from an asymptomatic course to life-threatening states. Nowadays the arsenal of diagnostic methods for choledocholithiasis is quite wide: ultrasound, hepatobiliary scintigraphy (HBSG), magnetic resonance imaging (MRI), and endoscopic retrograde cholangiopancreatography (ERCP). Among them, transabdominal ultrasound (TA ultrasound) is the most accessible and routine diagnostic method. ERCP is currently the 'gold standard' in the diagnosis and single-stage treatment of biliary tract obstruction. However, transpapillary techniques are accompanied by serious postoperative complications (post-manipulation pancreatitis (3-5%), bleeding after endoscopic papillosphincterotomy (2%), cholangitis (1%)), with a lethality of 0.4%. HBSG and MRI are also quite informative methods in the diagnosis of choledocholithiasis. The small size of concrements and their localization in the intrapancreatic and retroduodenal parts of the common bile duct significantly reduce the informativeness of all the diagnostic methods described above, which demands further study of this problem. Materials and Methods: 890 patients with a diagnosis of cholelithiasis (calculous cholecystitis) were admitted to the Sklifosovsky Scientific Research Institute of Hospital Medicine in the period from August 2020 to June 2021; of these, 115 had mechanical jaundice caused by concrements in the bile ducts. Results: A final EUS diagnosis was made in all patients (100.0%). In all patients in whom choledocholithiasis was revealed or confirmed by EUS, ERCP was performed urgently (within two days of detection), as soon as the X-ray operating room was available, and it confirmed the presence of concrements. All stones were removed by lithoextraction using a Dormia basket. The postoperative period in these patients was free of complications. Conclusions: EUS is the most informative and safe diagnostic method, allowing choledocholithiasis to be detected in the shortest time in patients with discrepancies between clinical-laboratory and instrumental findings, which in turn helps to decide promptly on further treatment tactics. We consider it reasonable to include EUS in the diagnostic algorithm for choledocholithiasis. Disclosure: Nothing to disclose.

Keywords: endoscopic ultrasonography, choledocholithiasis, common bile duct, concrement, ERCP

Procedia PDF Downloads 76
4035 Investigating Dynamic Transition Process of Issues Using Unstructured Text Analysis

Authors: Myungsu Lim, William Xiu Shun Wong, Yoonjin Hyun, Chen Liu, Seongi Choi, Dasom Kim, Namgyu Kim

Abstract:

The amount of real-time data generated through various mass media has been increasing rapidly. In this study, we performed topic analysis using unstructured text data distributed through news articles. As one of the most prevalent applications of topic analysis, the issue tracking technique investigates changes in the social issues identified through topic analysis. Traditional issue tracking is conducted by identifying the main topics of documents covering an entire period at once and then analyzing the occurrence of each topic period by period. However, this traditional approach has the limitation that it cannot discover the dynamic mutation process of complex social issues. The purpose of this study is to overcome this limitation. We first derive the core issues of each period and then discover the dynamic mutation process of the various issues. We further analyze the mutation process from the perspective of issue categories in order to identify patterns of issue flow, including their frequency and reliability. In other words, this study allows us to understand the components of complex issues by tracking their dynamic history. This methodology can facilitate a clearer understanding of complex social phenomena by providing their mutation history and related category information.
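
The first step, deriving the core topics of each period, can be sketched with a standard topic model; the toy documents, period labels, and number of topics below are illustrative assumptions rather than the news corpus used in the study.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hedged sketch of per-period topic extraction, the first step of issue tracking;
# documents, periods and topic counts are illustrative assumptions.
periods = {
    "2015-Q1": ["economy slows as exports fall", "export tariffs hit economy"],
    "2015-Q2": ["election campaign begins", "candidates debate economy and jobs"],
}
for period, docs in periods.items():
    vec = CountVectorizer(stop_words="english")
    X = vec.fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
    terms = vec.get_feature_names_out()
    for k, comp in enumerate(lda.components_):
        top = [terms[i] for i in comp.argsort()[-3:][::-1]]   # top-3 words per topic
        print(period, f"topic {k}:", ", ".join(top))
```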

Keywords: data mining, issue tracking, text mining, topic analysis, topic detection, trend detection

Procedia PDF Downloads 392
4034 MarginDistillation: Distillation for Face Recognition Neural Networks with Margin-Based Softmax

Authors: Svitov David, Alyamkin Sergey

Abstract:

The usage of convolutional neural networks (CNNs) in conjunction with the margin-based softmax approach demonstrates state-of-the-art performance for the face recognition problem. Recently, lightweight neural network models trained with the margin-based softmax have been introduced for the face identification task on edge devices. In this paper, we propose a distillation method for lightweight neural network architectures that outperforms other known methods for the face recognition task on the LFW, AgeDB-30 and Megaface datasets. The idea of the proposed method is to use the class centers from the teacher network for the student network. The student network is then trained so that the angles between the class centers and its face embeddings match those predicted by the teacher network.
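
A hedged NumPy sketch of this angle-matching idea is shown below; the embedding dimension, batch size, and squared-difference loss are illustrative choices rather than the exact MarginDistillation loss.

```python
import numpy as np

# Hedged sketch: reuse the teacher's class centers and push the student's
# embedding-to-center angles toward the teacher's angles. Shapes and the
# squared-difference loss are illustrative choices.
def angles(embeddings, centers):
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    c = centers / np.linalg.norm(centers, axis=1, keepdims=True)
    return np.arccos(np.clip(e @ c.T, -1.0, 1.0))        # (batch, n_classes) angles

def distillation_loss(student_emb, teacher_emb, teacher_centers):
    return np.mean((angles(student_emb, teacher_centers)
                    - angles(teacher_emb, teacher_centers)) ** 2)

rng = np.random.default_rng(1)
teacher_centers = rng.normal(size=(10, 128))              # one center per identity
teacher_emb = rng.normal(size=(4, 128))                   # teacher face embeddings
student_emb = teacher_emb + 0.1 * rng.normal(size=(4, 128))
print("angle-matching loss:", distillation_loss(student_emb, teacher_emb, teacher_centers))
```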

Keywords: ArcFace, distillation, face recognition, margin-based softmax

Procedia PDF Downloads 137
4033 Adaptive Power Control of the City Bus Integrated Photovoltaic System

Authors: Piotr Kacejko, Mariusz Duk, Miroslaw Wendeker

Abstract:

This paper presents an adaptive controller to track the maximum power point of photovoltaic (PV) modules under fast irradiation changes on a city-bus roof. Photovoltaic systems have become a prominent option as an additional energy source for vehicles. The Municipal Transport Company (MPK) in Lublin has installed photovoltaic panels on the roofs of its buses. The solar panels turn solar energy into electric energy and are used to supply the buses' electric equipment. This decreases the load on the buses' alternators, leading to lower fuel consumption and bringing both economic and ecological benefits. A DC-DC boost converter is selected as the power conditioning unit to coordinate the operating point of the system. In addition to the conversion efficiency of a photovoltaic panel, the maximum power point tracking (MPPT) method also plays a major role in harvesting the most energy from the sun. The MPPT unit on a moving vehicle must keep tracking accuracy high in order to compensate for rapid irradiation changes caused by the dynamic motion of the vehicle. Maximum power point tracking controllers should be used to increase the efficiency and power output of solar panels under changing environmental factors. Several control algorithms for maximum power point tracking have been developed in the literature; however, the energy performance of MPPT algorithms has not been clarified for vehicle applications, where environmental factors change rapidly. In this study, an adaptive MPPT algorithm is examined under real ambient conditions. PV modules are mounted on a moving city bus designed to test solar systems on a moving vehicle, and some problems of a PV system associated with a moving vehicle are addressed. The proposed algorithm uses a scanning technique to determine the maximum power delivering capacity of the panel at a given operating condition and controls the PV panel accordingly. The aim of the control algorithm was to match the impedance of the PV modules by controlling the duty cycle of the internal switch, regardless of changes in the parameters of the controlled object and its outer environment, and the presented algorithm was capable of reaching this aim. The structure of the adaptive controller was deliberately simplified; since such a simple controller, armed only with the ability to learn, meets the control objective, a more complex algorithm structure can only improve the result. The presented adaptive control system is a general solution and can be used for other types of PV systems of both high and low power. Experimental results obtained from a comparison of algorithms in a motion loop are presented and discussed, including results for fast changes in irradiation and partial shading conditions. The results obtained clearly show that the proposed method is simple to implement, with minimum tracking time and high tracking efficiency. This work has been financed by the Polish National Centre for Research and Development, PBS, under Grant Agreement No. PBS 2/A6/16/2013.
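
The scanning step described above can be sketched as a sweep of the converter duty cycle followed by locking onto the best point; the toy PV power curve and duty-to-voltage mapping below are illustrative assumptions, not a model of the bus-roof modules.

```python
import numpy as np

# Hedged sketch of a scanning MPPT step: sweep the duty cycle, record the PV power
# at each point and lock onto the best duty cycle. The power model is a toy.
def pv_power(duty, irradiance=800.0):
    v = 40.0 * (1.0 - duty)                        # toy mapping duty -> operating voltage
    i = irradiance / 1000.0 * 8.0 * (1 - np.exp((v - 42.0) / 3.0))
    return max(v * max(i, 0.0), 0.0)

def scan_mppt(step=0.02, irradiance=800.0):
    duties = np.arange(0.05, 0.95, step)
    powers = [pv_power(d, irradiance) for d in duties]
    best = int(np.argmax(powers))
    return duties[best], powers[best]

for g in (300.0, 800.0):                           # rapid irradiance change on a moving bus
    d, p = scan_mppt(irradiance=g)
    print(f"G = {g:4.0f} W/m^2 -> duty = {d:.2f}, P = {p:.1f} W")
```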

Keywords: adaptive control, photovoltaic energy, city bus electric load, DC-DC converter

Procedia PDF Downloads 203
4032 Prediction of Physical Properties and Sound Absorption Performance of Automotive Interior Materials

Authors: Un-Hwan Park, Jun-Hyeok Heo, In-Sung Lee, Seong-Jin Cho, Tae-Hyeon Oh, Dae-Kyu Park

Abstract:

The sound absorption coefficient is considered important in design because noise affects the perceived emotional quality of a car. In practice it is tuned through extensive field experiments, because predicting it for multi-layer materials is unreliable. In this paper, we present the design of sound absorption for multi-layer automotive interior materials using software that estimates the sound absorption coefficient for a reverberation chamber. Additionally, we introduce a method for estimating the physical properties required to predict the sound absorption coefficient of multi-layer car interior materials; these properties are calculated by an inverse algorithm, which is very economical because it provides information about the physical properties without expensive equipment. A correlation test is carried out to ensure reliability and accuracy, using sound absorption coefficients measured in the reverberation chamber as the reference data. In this way, automotive interior materials can be designed economically and efficiently, and design optimization of the sound absorption coefficient is also easy to implement.
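
The inverse step can be sketched as a curve-fitting problem: minimize the mismatch between a forward absorption model and measured coefficients. The two-parameter toy model below is an assumption standing in for the real multi-layer model.

```python
import numpy as np
from scipy.optimize import minimize

# Hedged sketch of an inverse algorithm: recover material parameters by minimizing
# the mismatch between a forward model and measured absorption coefficients.
freqs = np.array([250.0, 500.0, 1000.0, 2000.0, 4000.0])       # Hz

def forward_model(params, f):
    a_max, f_c = params
    # toy two-parameter absorption-vs-frequency model (illustrative only)
    return a_max * (1.0 - np.exp(-f / f_c))

measured = forward_model((0.9, 800.0), freqs) + 0.01 * np.random.randn(freqs.size)

def objective(params):
    return np.sum((forward_model(params, freqs) - measured) ** 2)

res = minimize(objective, x0=(0.5, 400.0), method="Nelder-Mead")
print("estimated (a_max, f_c):", res.x)
```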

Keywords: sound absorption coefficient, optimization design, inverse algorithm, automotive interior material, multiple layers nonwoven, scaled reverberation chamber, sound impedance tubes

Procedia PDF Downloads 296
4031 Stray Light Reduction Methodology by a Sinusoidal Light Modulation and Three-Parameter Sine Curve Fitting Algorithm for a Reflectance Spectrometer

Authors: Hung Chih Hsieh, Cheng Hao Chang, Yun Hsiang Chang, Yu Lin Chang

Abstract:

In spectrometer applications, stray light coming from the environment strongly affects the measurement results. Hence, environment and instrument quality control for stray light reduction is critical for spectral reflectance measurement. In this paper, a simple and practical method has been developed to correct a spectrometer's response for measurement errors arising from the environment's and the instrument's stray light. A sinusoidally modulated light intensity signal is incident on the tested sample, and the reflected light is collected by the spectrometer. Since the incident light is modulated by a sinusoidal signal, the reflected light is modulated at the same frequency as the incident signal. Using the three-parameter sine curve fitting algorithm, we can extract the primary reflectance signal from the total measured signal, which contains both the primary reflectance signal and the stray light from the environment. The spectra extracted by the proposed method under extreme environmental stray light are 99.98% similar to the spectra measured without environmental stray light. This result shows that we can measure reflectance spectra without being affected by the environment's stray light.
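
A hedged sketch of the three-parameter sine fit (with the modulation frequency known) is given below: the in-phase, quadrature and DC terms are solved by linear least squares, and the recovered amplitude is the stray-light-free reflectance signal. The sample rate, modulation frequency, and stray-light level are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of the three-parameter sine fit at a known modulation frequency;
# the offset term absorbs the (roughly constant) stray-light contribution.
fs, f_mod = 10_000.0, 37.0                 # sample rate and modulation frequency (assumed)
t = np.arange(0, 0.5, 1 / fs)
true_amp, stray = 0.80, 0.25               # modulated reflectance and constant stray light
signal = true_amp * np.sin(2 * np.pi * f_mod * t + 0.4) + stray \
         + 0.01 * np.random.randn(t.size)

# design matrix [cos, sin, 1]; amplitude = sqrt(A^2 + B^2), offset C absorbs stray light
M = np.column_stack([np.cos(2 * np.pi * f_mod * t),
                     np.sin(2 * np.pi * f_mod * t),
                     np.ones_like(t)])
A, B, C = np.linalg.lstsq(M, signal, rcond=None)[0]
print(f"recovered amplitude = {np.hypot(A, B):.3f}, offset (stray) = {C:.3f}")
```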

Keywords: spectrometer, stray light, three-parameter sine curve fitting, spectra extraction

Procedia PDF Downloads 229
4030 'Utsadhara': Rejuvenating the Dead River Edge into an Urban Activity Space along the Banks of River Hooghly

Authors: Aparna Saha, Tuhin Ahmed

Abstract:

West Bengal has a number of important rivers, each with its distinctive character and story. Traditionally, cities have opened out to rivers at the river edges, and rivers have been an inseparable part of the urban experience. The study area is taken in Barrackpore, a small but important outgrowth of the Kolkata Municipal Association, West Bengal. Barrackpore at present has only inadequate public open spaces at the neighborhood level where people of different socio-cultural, economic, and religious backgrounds can come together and engage in various leisure activities, and there is no opportunity for people to learn about and explore the rich history of the settlement. These issues form the backdrop of this research paper, which conceptualizes a place, made from space, that will bring people back to the river, increase community interactions, and celebrate and commemorate the historical importance of the river and its edges. The entire precinct bordering the river represents the transition from the pre-independence (Raj) era to the Sepoy phase (Swaraj era), finally culminating in the Gandhian philosophy, which is projected onto the already existing Gandhi Ghat. The ultimate aim of the paper, entitled 'Utsadhara: Rejuvenating the Dead River Edge into an Urban Activity Space along the Banks of River Hooghly', is to create a socio-cultural space that keeps the heritage identity intact through judicious use of the water body, while maintaining a balance between the natural ecosystem and the cosmetic development of the surrounding open spaces. This can be achieved by the methodology provided in the document, which focuses mainly on preserving the historic ethnicity of the place by holding its character through various facts, figures, and features. Most importantly, the natural topography of the place is left intact. The second priority is a hierarchy of well-connected public plazas and podiums where people from different socio-economic backgrounds, irrespective of age and sex, can socialize and build cordial relationships with one another. The third priority is to provide a platform for the general public to showcase their skills and talents through different art and craft forms, which in turn would uplift both individuals and the community as a whole through economic growth. Apart from this, some spaces are created for different age groups and classes of people. The paper intends to treat the river as a major multifunctional public space that attracts people for different activities and re-establishes the relationship of the river with the settlement. Hence, the paper is not intended as a simple riverfront conservation project; rather, it proposes a place created for the people, by the people and of the people, working towards holistic community development through a sustainable approach.

Keywords: holistic community development, public activity space, river-urban precinct, urban dead space

Procedia PDF Downloads 123
4029 A Sensitive Approach on Trace Analysis of Methylparaben in Wastewater and Cosmetic Products Using Molecularly Imprinted Polymer

Authors: Soukaina Motia, Nadia El Alami El Hassani, Alassane Diouf, Benachir Bouchikhi, Nezha El Bari

Abstract:

Parabens are antimicrobial molecules widely used in cosmetic products as preservative agents. Among them, methylparaben (MP) is the ingredient most frequently used in cosmetic preparations. Nevertheless, their potential dangers have led to the development of sensitive and reliable methods for their determination in environmental samples. Firstly, a sensitive and selective molecularly imprinted polymer (MIP) sensor based on a screen-printed gold electrode (Au-SPE), assembled on a polymeric layer of carboxylated poly(vinyl chloride) (PVC-COOH), was developed. After template removal, the obtained material was able to rebind MP and discriminate it among other interfering species such as glucose, sucrose, and citric acid. The behavior of the molecularly imprinted sensor was characterized by cyclic voltammetry (CV), differential pulse voltammetry (DPV) and electrochemical impedance spectroscopy (EIS). The biosensor was found to have a linear detection range from 0.1 pg/mL to 1 ng/mL and low limits of detection of 0.12 fg/mL and 5.18 pg/mL by DPV and EIS, respectively. For applications, this biosensor was employed to determine the MP content in four wastewater samples from the city of Meknes and in two cosmetic products (shower gel and shampoo). The operational reproducibility and stability of this biosensor were also studied. Secondly, another MIP biosensor, based on tungsten trioxide (WO3) functionalized with gold nanoparticles (Au-NPs) and assembled on a polymeric layer of PVC-COOH, was developed with the main goal of increasing the sensitivity of the biosensor. The developed MIP biosensor was successfully applied to MP determination in wastewater samples and cosmetic products.

Keywords: cosmetic products, methylparaben, molecularly imprinted polymer, wastewater

Procedia PDF Downloads 306
4028 Two Years Retrospective Study of Body Fluid Cultures Obtained from Patients in the Intensive Care Unit of General Hospital of Ioannina

Authors: N. Varsamis, M. Gerasimou, P. Christodoulou, S. Mantzoukis, G. Kolliopoulou, N. Zotos

Abstract:

Purpose: Body fluids (pleural, peritoneal, synovial, pericardial, cerebrospinal) are an important element in the detection of microorganisms. For this reason, it is important to examine them in intensive care unit (ICU) patients. Material and Method: Body fluids are transported in sterile containers and enriched as soon as possible with tryptic soy broth (TSB). After one day of incubation, the broth is subcultured onto selective media: blood, MacConkey No. 2, chocolate, Mueller-Hinton, Chapman and Sabouraud agar. These selective media are incubated for 2 days. After this period, if microbial colonies are detected, Gram staining is performed. The isolated organisms are then identified by biochemical techniques on the automated MicroScan system (Siemens), followed by a sensitivity test on the same system using the minimum inhibitory concentration (MIC) technique. The sensitivity test is verified by a Kirby-Bauer-based plate test. Results: In 2017 the Laboratory of Microbiology received 60 body fluid samples from the ICU: 6 peritoneal fluid specimens, 18 pleural fluid specimens and 36 cerebrospinal fluid specimens. Thirty-six cultures were positive. S. epidermidis was identified in 18 specimens, S. haemolyticus in 6, and E. faecium in 12. Conclusions: The results show a low detection rate of microorganisms in body fluid cultures.

Keywords: body fluids, culture, intensive care unit, microorganisms

Procedia PDF Downloads 192
4027 The Trajectory of the Ball in Football Game

Authors: Mahdi Motahari, Mojtaba Farzaneh, Ebrahim Sepidbar

Abstract:

Tracking of moving and flying targets is one of the most important issues in image processing. Estimating the trajectory of the desired object on short-term and long-term scales is even more important than the tracking itself. In this paper, a new way of identifying and estimating the future trajectory of a moving ball on a long-term scale is presented, based on the synthesis and interaction of image processing algorithms, including noise removal and image segmentation, a Kalman filter for estimating the trajectory of the ball in a football game on a short-term scale, and an intelligent adaptive neuro-fuzzy algorithm based on the time series of traversed distance. The proposed system attains more than 96% identification accuracy using these methods and algorithms on a video database. Although the present method has high precision, it is time-consuming; comparison with other methods demonstrates its accuracy and efficiency.
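
For the short-term part, a constant-velocity Kalman filter over 2-D image coordinates is the standard building block; the sketch below runs such a filter on simulated noisy detections, with the frame rate and noise covariances chosen as illustrative assumptions.

```python
import numpy as np

# Hedged sketch: constant-velocity Kalman filter smoothing noisy 2-D ball positions
# produced by the segmentation stage. Noise levels and measurements are simulated.
dt = 1 / 25.0                                         # video frame period
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
Q = 0.5 * np.eye(4)                                   # process noise
R = 4.0 * np.eye(2)                                   # measurement noise (pixels^2)

x = np.zeros(4)                                       # state: [px, py, vx, vy]
P = 100.0 * np.eye(4)
rng = np.random.default_rng(0)
for k in range(50):
    true_pos = np.array([10.0 + 300.0 * k * dt, 200.0 - 50.0 * k * dt])
    z = true_pos + rng.normal(0, 2.0, size=2)         # noisy detection
    x, P = F @ x, F @ P @ F.T + Q                     # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                    # Kalman gain
    x = x + K @ (z - H @ x)                           # update
    P = (np.eye(4) - K @ H) @ P
print("estimated position/velocity:", np.round(x, 1))
```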

Keywords: tracking, signal processing, moving and flying targets, artificial intelligence systems, trajectory estimation, Kalman filter

Procedia PDF Downloads 451
4026 Continuous Differential Evolution Based Parameter Estimation Framework for Signal Models

Authors: Ammara Mehmood, Aneela Zameer, Muhammad Asif Zahoor Raja, Muhammad Faisal Fateh

Abstract:

In this work, the strength of a bio-inspired computational intelligence technique is exploited for parameter estimation of periodic signals using Continuous Differential Evolution (CDE), by defining an error function in the mean-square sense. The multidimensional and nonlinear nature of the problem arising in sinusoidal signal models, along with noise, makes it a challenging optimization task, which is dealt with through the robustness and effectiveness of CDE to ensure convergence and avoid trapping in local minima. In the proposed scheme of Continuous Differential Evolution based Signal Parameter Estimation (CDESPE), the unknown adjustable weights of the signal system identification model are optimized using the CDE algorithm. The performance of the CDESPE model is validated through various statistics-based performance indices over a sufficiently large number of runs in terms of estimation error, mean squared error and Theil's inequality coefficient. The efficacy of CDESPE is examined by comparison with the actual parameters of the system, with Genetic Algorithm based outcomes and with various deterministic approaches at different signal-to-noise ratio (SNR) levels.
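
As a hedged stand-in for the CDE variant, the sketch below fits the amplitude, frequency and phase of a noisy sinusoid with SciPy's standard differential evolution under a mean-square error objective; the true parameters, bounds and noise level are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hedged sketch of evolutionary parameter estimation for a noisy sinusoid; SciPy's
# differential evolution stands in for the paper's CDE algorithm.
t = np.linspace(0, 1, 400)
true_a, true_f, true_phi = 2.0, 7.0, 0.6
y = true_a * np.sin(2 * np.pi * true_f * t + true_phi) + 0.2 * np.random.randn(t.size)

def mse(params):
    a, f, phi = params
    return np.mean((a * np.sin(2 * np.pi * f * t + phi) - y) ** 2)

result = differential_evolution(mse, bounds=[(0.1, 5.0), (1.0, 20.0), (-np.pi, np.pi)],
                                seed=0, tol=1e-8)
print("estimated (amplitude, frequency, phase):", np.round(result.x, 3))
```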

Keywords: parameter estimation, bio-inspired computing, continuous differential evolution (CDE), periodic signals

Procedia PDF Downloads 292
4025 Visual Inspection of Road Conditions Using Deep Convolutional Neural Networks

Authors: Christos Theoharatos, Dimitris Tsourounis, Spiros Oikonomou, Andreas Makedonas

Abstract:

This paper focuses on the problem of visually inspecting and recognizing the road conditions in front of moving vehicles, targeting automotive scenarios. The goal of road inspection is to identify whether the road is slippery or not, as well as to detect possible anomalies on the road surface, such as potholes or body bumps/humps. Our work is based on an artificial intelligence methodology for real-time monitoring of road conditions in autonomous driving scenarios, using state-of-the-art deep convolutional neural network (CNN) techniques. Initially, the road and ego lane are segmented within the field of view of the camera that is integrated into the front part of the vehicle. A novel classification CNN is utilized to distinguish between plain and slippery road textures (e.g., wet, snow, etc.). Simultaneously, a robust detection CNN identifies severe surface anomalies within the ego lane, such as potholes and speed bumps/humps, within a distance of 5 to 25 meters. The overall methodology is illustrated within the scope of an integrated application that can be incorporated into complete Advanced Driver-Assistance Systems (ADAS) providing a full range of functionalities. The proposed techniques yield state-of-the-art detection and classification results with real-time performance, running on AI accelerator devices like Intel's Myriad 2/X Vision Processing Unit (VPU).
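
A hedged PyTorch sketch of a small road-texture classification CNN is given below; the layer sizes, input resolution, and number of classes are illustrative assumptions rather than the network actually deployed on the Myriad VPU.

```python
import torch
import torch.nn as nn

# Hedged sketch of a small classification CNN for road-texture patches; layer sizes,
# input resolution and class count are illustrative assumptions.
class RoadTextureNet(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

logits = RoadTextureNet()(torch.randn(2, 3, 96, 96))      # two dummy RGB patches
print(logits.shape)                                        # torch.Size([2, 4])
```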

Keywords: deep learning, convolutional neural networks, road condition classification, embedded systems

Procedia PDF Downloads 121
4024 The Data-Driven Localized Wave Solution of the Fokas-Lenells Equation using PINN

Authors: Gautam Kumar Saharia, Sagardeep Talukdar, Riki Dutta, Sudipta Nandy

Abstract:

The physics-informed neural network (PINN) method opens up an approach for numerically solving nonlinear partial differential equations, leveraging the fast computation speed and high precision of modern computing systems. We construct the PINN based on the strong universal approximation theorem, apply the initial-boundary value data and residual collocation points to weakly impose the initial and boundary conditions on the neural network, and choose the optimization algorithms adaptive moment estimation (ADAM) and limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) to optimize the learnable parameters of the neural network. Next, we improve the PINN with a weighted loss function to obtain both the bright and dark soliton solutions of the Fokas-Lenells equation (FLE). We find that the proposed scheme of adjustable weight coefficients in the PINN has a better convergence rate and generalizability than the basic PINN algorithm. We believe that the PINN approach to solving the partial differential equations appearing in nonlinear optics will be useful for studying various optical phenomena.

Keywords: deep learning, optical soliton, neural network, partial differential equation

Procedia PDF Downloads 110
4023 An Insight into the Probabilistic Assessment of Reserves in Conventional Reservoirs

Authors: Sai Sudarshan, Harsh Vyas, Riddhiman Sherlekar

Abstract:

The oil and gas industry has been unwilling to adopt a stochastic definition of reserves. Nevertheless, Monte Carlo simulation methods have gained acceptance among engineers, geoscientists and other professionals who want to evaluate prospects or otherwise analyze problems that involve uncertainty. One of the common applications of Monte Carlo simulation is the estimation of recoverable hydrocarbons from a reservoir. Monte Carlo simulation makes use of random samples of parameters or inputs to explore the behavior of a complex system or process. It finds application whenever one needs to make an estimate, forecast or decision where there is significant uncertainty. First, the project performs Monte Carlo simulation on a given data set using the U.S. Department of Energy's MonteCarlo software, a freeware E&P tool. Further, an algorithm for the simulation has been developed in MATLAB; the program performs the simulation by prompting the user for input distributions and the parameters associated with each distribution (i.e., mean, standard deviation, minimum, maximum, most likely, etc.), as well as the desired probability for which reserves are to be calculated. The algorithm developed and tested in MATLAB is further implemented in Python, where existing statistics and plotting libraries are imported to generate better output. With PyQt Designer, code for a simple graphical user interface has also been written. The resulting plots are then validated against the results available from the U.S. DOE MonteCarlo software.
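
A hedged sketch of a volumetric Monte Carlo reserves estimate is given below: sample the input distributions, evaluate the volumetric equation, and read off P90/P50/P10. The distribution shapes and parameter values are illustrative assumptions, not the data set used with the DOE tool.

```python
import numpy as np

# Hedged sketch of a volumetric Monte Carlo reserves estimate; all distributions
# and parameter values are illustrative assumptions.
rng = np.random.default_rng(42)
n = 100_000
area      = rng.triangular(400, 650, 900, n)          # acres
thickness = rng.triangular(20, 35, 60, n)             # ft
porosity  = rng.normal(0.22, 0.03, n)
sw        = rng.normal(0.30, 0.05, n)                 # water saturation
bo        = rng.normal(1.2, 0.05, n)                  # formation volume factor
rf        = rng.triangular(0.15, 0.30, 0.45, n)       # recovery factor

stoiip = 7758 * area * thickness * porosity * (1 - sw) / bo   # stock-tank barrels
reserves = stoiip * rf
p90, p50, p10 = np.percentile(reserves, [10, 50, 90])  # P90 = exceeded 90% of the time
print(f"P90 = {p90:,.0f} stb, P50 = {p50:,.0f} stb, P10 = {p10:,.0f} stb")
```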

Keywords: simulation, probability, confidence interval, sensitivity analysis

Procedia PDF Downloads 371
4022 Testing Chat-GPT: An AI Application

Authors: Jana Ismail, Layla Fallatah, Maha Alshmaisi

Abstract:

ChatGPT, a cutting-edge language model built on the GPT-3.5 architecture, has garnered attention for its profound natural language processing capabilities, holding promise for transformative applications in customer service and content creation. This study delves into ChatGPT's architecture, aiming to comprehensively understand its strengths and potential limitations. Through systematic experiments across diverse domains, such as general knowledge and creative writing, we evaluated the model's coherence, context retention, and task-specific accuracy. While ChatGPT excels in generating human-like responses and demonstrates adaptability, occasional inaccuracies and sensitivity to input phrasing were observed. The study emphasizes the impact of prompt design on output quality, providing valuable insights for the nuanced deployment of ChatGPT in conversational AI and contributing to the ongoing discourse on the evolving landscape of natural language processing in artificial intelligence.

Keywords: artificial intelligence, ChatGPT, OpenAI, NLP

Procedia PDF Downloads 66
4021 Vehicle Gearbox Fault Diagnosis Based on Cepstrum Analysis

Authors: Mohamed El Morsy, Gabriela Achtenová

Abstract:

Research on the damage of gears and gear pairs using vibration signals remains very attractive, because vibration signals from a gear pair are complex in nature and not easy to interpret. Predicting gear pair defects by analyzing changes in the vibration signals of gear pairs in operation is a very reliable method. Therefore, a suitable vibration signal processing technique is necessary to extract defect information generally obscured by the noise from dynamic factors of other gear pairs. This article presents the value of cepstrum analysis in vehicle gearbox fault diagnosis. The cepstrum represents the overall power content of a whole family of harmonics and sidebands when more than one family of sidebands is present at the same time. The concepts behind the measurement and analysis involved in using the technique are briefly outlined. Cepstrum analysis is used for the detection of an artificial pitting defect in a vehicle gearbox loaded at different speeds and torques. The test stand is equipped with three dynamometers; the input dynamometer serves as the internal combustion engine, and the output dynamometers introduce the load on the flanges of the output joint shafts. The pitting defect is manufactured on the tooth side of a gear of the fifth speed on the secondary shaft. A method for the diagnosis of gear faults based on the order cepstrum is also presented, and the procedure is illustrated with the experimental vibration data of the vehicle gearbox. The results show the effectiveness of cepstrum analysis in the detection and diagnosis of the gear condition.
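
The sketch below computes a real cepstrum of a simulated amplitude-modulated gear-mesh signal and looks for the peak at the reciprocal of the modulating shaft frequency; the mesh and shaft frequencies, modulation depth, and noise level are illustrative assumptions, not measurements from the test stand.

```python
import numpy as np

# Hedged sketch of the real cepstrum of a simulated gear vibration signal: gear-mesh
# harmonics amplitude-modulated by a faulty-shaft frequency produce a sideband family
# whose spacing appears as a single cepstral peak near 1/f_shaft.
fs = 10_000.0
t = np.arange(0, 2.0, 1 / fs)
f_mesh, f_shaft = 600.0, 25.0
mod = 1.0 + 0.8 * np.sin(2 * np.pi * f_shaft * t)
x = sum(mod * np.sin(2 * np.pi * h * f_mesh * t) / h for h in (1, 2, 3))
x += 0.02 * np.random.randn(t.size)

log_mag = np.log(np.abs(np.fft.rfft(x)) + 1e-12)
cepstrum = np.abs(np.fft.irfft(log_mag))
quefrency = np.arange(cepstrum.size) / fs

band = (quefrency > 0.005) & (quefrency < 0.1)          # search 5-100 ms
peak_q = quefrency[band][np.argmax(cepstrum[band])]
print(f"dominant quefrency = {peak_q * 1e3:.1f} ms (1/f_shaft = {1e3 / f_shaft:.1f} ms)")
```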

Keywords: cepstrum analysis, fault diagnosis, gearbox, vibration signals

Procedia PDF Downloads 370
4020 Automatic Generating CNC-Code for Milling Machine

Authors: Chalakorn Chitsaart, Suchada Rianmora, Mann Rattana-Areeyagon, Wutichai Namjaiprasert

Abstract:

G-code is the main element in a computer numerical control (CNC) machine for controlling tool paths and generating the profile of an object's features. To obtain high accuracy of the surface finish, non-stop operation of the CNC machine is required. Recently, for designing new products, a strategy has been introduced that favors changes with low impact on the business and low resource consumption; the cost and time for designing minor changes can be reduced, since the traditional geometric details of the existing models are reused. In order to support this strategy as an alternative channel for the machining operation, this research proposes automatic code generation for the CNC milling operation. This technique assists the manufacturer in easily changing the size and geometric shape of the product during the operation, reducing the time spent on setting up or processing the machine. The algorithm, implemented on the MATLAB platform, is developed by analyzing and evaluating the geometric information of the part, and code is created rapidly to control the operations of the machine. Compared to the code obtained from CAM, the developed algorithm can quickly generate and simulate the cutting profile of the part.
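
As a hedged illustration of the code-generation idea (in Python rather than the MATLAB implementation described above), the sketch below converts a list of XY vertices into a simple G-code contour; the feed rate, cutting depth, and safety height are illustrative parameters, not the paper's post-processor settings.

```python
# Hedged sketch: generate a closed G-code contour from a list of XY vertices.
# Feed rate, depth and safety height are illustrative parameters.
def contour_to_gcode(points, depth=-2.0, feed=300.0, safe_z=5.0):
    lines = ["G21 ; millimetres", "G90 ; absolute coordinates",
             f"G00 Z{safe_z:.1f}",
             f"G00 X{points[0][0]:.3f} Y{points[0][1]:.3f}",
             f"G01 Z{depth:.3f} F{feed / 2:.0f}"]          # plunge at half feed
    for x, y in points[1:] + [points[0]]:                  # close the loop
        lines.append(f"G01 X{x:.3f} Y{y:.3f} F{feed:.0f}")
    lines += [f"G00 Z{safe_z:.1f}", "M30 ; end of program"]
    return "\n".join(lines)

rectangle = [(0.0, 0.0), (40.0, 0.0), (40.0, 25.0), (0.0, 25.0)]
print(contour_to_gcode(rectangle))
```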

Keywords: geometric shapes, milling operation, minor changes, CNC Machine, G-code, cutting parameters

Procedia PDF Downloads 342
4019 Post Growth Annealing Effect on Deep Level Emission and Raman Spectra of Hydrothermally Grown ZnO Nanorods Assisted by KMnO4

Authors: Ashish Kumar, Tejendra Dixit, I. A. Palani, Vipul Singh

Abstract:

Zinc oxide, with its interesting properties such as a large band gap (3.37 eV), high exciton binding energy (60 meV) and intense UV absorption, has been studied in the literature for various applications, viz. optoelectronics, biosensors, UV photodetectors, etc. The performance of ZnO devices is highly influenced by the morphology, size and crystallinity of the ZnO active layer and by the processing conditions. Recently, our group has shown the influence of the in situ addition of KMnO4 to the precursor solution during the hydrothermal growth of ZnO nanorods (NRs) on their near-band-edge (NBE) emission. In this paper, we have investigated the effect of post-growth annealing on the variations in the NBE and deep level (DL) emissions of the as-grown ZnO nanorods. The observed results have been explained on the basis of X-ray diffraction (XRD) and Raman spectroscopic analysis, which clearly show improved crystallinity and quantum confinement in the ZnO nanorods.

Keywords: ZnO, nanorods, hydrothermal, KMnO4

Procedia PDF Downloads 389
4018 An Improved Image Steganography Technique Based on Least Significant Bit Insertion

Authors: Olaiya Folorunsho, Comfort Y. Daramola, Joel N. Ugwu, Lawrence B. Adewole, Olufisayo S. Ekundayo

Abstract:

In today's world, there has been a tremendous rise in internet usage, since almost all communication and information sharing is done over the web. Conversely, there is continuous growth in unauthorized access to confidential data. This has posed a challenge to information security experts, whose major goal is to curtail the menace. One of the approaches to secure the safe delivery of data/information to the rightful destination without any modification is steganography, the art of hiding information inside other information. This research paper is aimed at designing a secure image steganographic algorithm that uses the least significant bit (LSB) technique for embedding data into a bitmap image (BMP) in order to enhance security and reliability. In the LSB approach, the basic idea is to replace the LSB of the pixels of the cover image with the bits of the message to be hidden, without significantly destroying the properties of the cover image. The system was implemented using the C# programming language on the Microsoft .NET framework. The performance of the proposed system was evaluated by conducting a benchmarking test analyzing parameters such as the mean squared error (MSE) and the peak signal-to-noise ratio (PSNR). The results showed that image steganography performed considerably well in securing data hiding and information transmission over networks.
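
A hedged Python/NumPy sketch of the LSB idea is given below (the paper's system itself is implemented in C# on .NET); the 32-bit length header and the random cover array are assumptions for illustration, and a real bitmap would be loaded with an imaging library.

```python
import numpy as np

# Hedged sketch of LSB embedding/extraction on a NumPy image array; the 32-bit
# length header and the random cover image are illustrative assumptions.
def embed(cover, message: bytes):
    bits = np.unpackbits(np.frombuffer(len(message).to_bytes(4, "big") + message,
                                       dtype=np.uint8))
    flat = cover.flatten()                               # flatten() returns a copy
    if bits.size > flat.size:
        raise ValueError("message too long for this cover image")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # overwrite least significant bit
    return flat.reshape(cover.shape)

def extract(stego):
    flat = stego.flatten() & 1
    length = int.from_bytes(np.packbits(flat[:32]).tobytes(), "big")
    return np.packbits(flat[32:32 + 8 * length]).tobytes()

cover = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
stego = embed(cover, b"hidden text")
print(extract(stego))                                    # b'hidden text'
```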

Keywords: steganography, image steganography, least significant bits, bit map image

Procedia PDF Downloads 255
4017 GPU-Accelerated Triangle Mesh Simplification Using Parallel Vertex Removal

Authors: Thomas Odaker, Dieter Kranzlmueller, Jens Volkert

Abstract:

We present an approach to triangle mesh simplification designed to be executed on the GPU. We use a quadric error metric to calculate an error value for each vertex of the mesh and order all vertices based on this value. This step is followed by the parallel removal of a number of vertices with the lowest calculated error values. To allow for the parallel removal of multiple vertices we use a set of per-vertex boundaries that prevent mesh foldovers even when simplification operations are performed on neighbouring vertices. We execute multiple iterations of the calculation of the vertex errors, ordering of the error values and removal of vertices until either a desired number of vertices remains in the mesh or a minimum error value is reached. This parallel approach is used to speed up the simplification process while maintaining mesh topology and avoiding foldovers at every step of the simplification.

Keywords: computer graphics, half edge collapse, mesh simplification, precomputed simplification, topology preserving

Procedia PDF Downloads 357
4016 Convolutional Neural Networks versus Radiomic Analysis for Classification of Breast Mammogram

Authors: Mehwish Asghar

Abstract:

Breast cancer (BC) is a common type of cancer among women. Its screening is usually performed using different imaging modalities such as magnetic resonance imaging, mammography, X-ray, CT, etc. Among these modalities, the mammogram is considered a powerful tool for the diagnosis and screening of breast cancer. Sophisticated machine learning approaches have shown promising results in complementing human diagnosis. Generally, machine learning methods can be divided into two major classes: one is radiomics analysis (RA), where image features are extracted manually, and the other is the concept of convolutional neural networks (CNN), in which the computer learns to recognize image features on its own. This research aims to improve the rate of early detection, thus reducing the mortality caused by breast cancer, through the latest advancements in computer science in general and machine learning in particular. It also aims to ease the burden on doctors by improving and automating the process of breast cancer detection. The research presents a comparative analysis of different techniques for implementing models that detect and classify breast cancer, with the main goal of providing a detailed view of the results and performance of the different techniques. The purpose of this paper is to explore the potential of a convolutional neural network (CNN) both as a feature extractor and as a classifier. A radiomics module is also added to compare its results with the deep learning techniques.

Keywords: breast cancer (BC), machine learning (ML), convolutional neural network (CNN), radiomics, magnetic resonance imaging, artificial intelligence

Procedia PDF Downloads 210
4015 Effect of an Interface Defect in a Patch/Layer Joint under Dynamic Time Harmonic Load

Authors: Elisaveta Kirilova, Wilfried Becker, Jordanka Ivanova, Tatyana Petrova

Abstract:

This study is a continuation of research on the hygrothermal piezoelectric response of a smart patch/layer joint with an undesirable interface defect (gap) under dynamic time-harmonic mechanical and electrical loads and environmental conditions. In order to find the axial displacements, shear stress and interface debond length in closed analytical form for different positions of the interface gap, a 1D modified shear lag analysis is used. The debond length is represented as a function of many parameters (frequency, magnitude, electric displacement, moisture and temperature, joint geometry, position of the gap along the interface, etc.). Then a genetic algorithm (GA) is implemented to find the position of the gap along the interface at which a vanishing or minimal debond length is ensured, i.e., the most harmless position for the safe operation of the structure. The illustrative example clearly shows that analytical shear-lag solutions and the GA method can be combined successfully to give an effective prognosis of interface shear stress and interface delamination in a patch/layer structure under combined loading with existing defects. To show the effect of the position of the interface gap, all obtained results are given in figures and discussed.
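
The optimization step can be sketched with a small real-coded genetic algorithm searching for the gap position that minimizes a debond-length function; the quadratic toy objective, population size, and operators below are illustrative assumptions standing in for the closed-form shear-lag expression.

```python
import numpy as np

# Hedged sketch of a real-coded GA minimizing a toy debond-length objective;
# the quadratic objective stands in for the closed-form shear-lag expression.
def debond_length(gap_pos, joint_length=100.0):
    # illustrative placeholder: shortest debond when the gap sits near 0.7 of the joint
    return 5.0 + 40.0 * (gap_pos / joint_length - 0.7) ** 2

def genetic_minimize(obj, lo, hi, pop=30, gens=60, mut=0.1, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, pop)
    for _ in range(gens):
        fitness = np.array([obj(v) for v in x])
        parents = x[np.argsort(fitness)[: pop // 2]]                      # selection
        kids = (parents + parents[rng.permutation(parents.size)]) / 2.0   # crossover
        kids += rng.normal(0.0, mut * (hi - lo), kids.size)               # mutation
        x = np.clip(np.concatenate([parents, kids]), lo, hi)
    best = x[np.argmin([obj(v) for v in x])]
    return best, obj(best)

pos, dl = genetic_minimize(debond_length, 0.0, 100.0)
print(f"best gap position = {pos:.1f} mm, debond length = {dl:.2f} mm")
```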

Keywords: genetic algorithm, minimal delamination, optimal gap position, shear lag solution

Procedia PDF Downloads 296