Search results for: subspace rotation algorithm
Paper Count: 4134

984 Detecting HCC Tumor in Three Phasic CT Liver Images with Optimization of Neural Network

Authors: Mahdieh Khalilinezhad, Silvana Dellepiane, Gianni Vernazza

Abstract:

The aim of the present work is to build a model, based on tissue characterization, that is able to discriminate pathological from non-pathological regions in three-phasic CT images. Based on feature selection in the different phases, we design a neural network system with an optimal number of neurons in the hidden layer. Our approach consists of three steps: feature selection, feature reduction, and classification. For each ROI, six distinct sets of texture features are extracted, namely first-order histogram parameters, absolute gradient, run-length matrix, co-occurrence matrix, autoregressive model, and wavelet features, for a total of 270 texture features. We show that with the injection of liquid and the analysis of additional phases, the most relevant features in each region change. Our results show that for detecting HCC tumors, phase 3 is the best for most of the features fed to the classification algorithm. Using first-order histogram parameters, the classification accuracy between the two classes is 85% in phase 1, 95% in phase 2, and 95% in phase 3.
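
To make the classification step more concrete, the following minimal Python sketch (scikit-learn) searches for an effective hidden-layer size in a single-hidden-layer network by cross-validation; the synthetic feature matrix, the labels, and the candidate neuron counts are stand-in assumptions, not the authors' actual data or settings.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 270))      # placeholder for 270 texture features per ROI
    y = rng.integers(0, 2, size=200)     # placeholder labels: pathological vs. non-pathological

    best = None
    for n_hidden in (5, 10, 20, 40):     # candidate hidden-layer sizes
        clf = make_pipeline(StandardScaler(),
                            MLPClassifier(hidden_layer_sizes=(n_hidden,),
                                          max_iter=2000, random_state=0))
        acc = cross_val_score(clf, X, y, cv=5).mean()
        if best is None or acc > best[1]:
            best = (n_hidden, acc)

    print("best hidden-layer size:", best[0], "cv accuracy: %.2f" % best[1])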

Keywords: multi-phasic liver images, texture analysis, neural network, hidden layer

Procedia PDF Downloads 262
983 Image Processing Approach for Detection of Three-Dimensional Tree-Rings from X-Ray Computed Tomography

Authors: Jorge Martinez-Garcia, Ingrid Stelzner, Joerg Stelzner, Damian Gwerder, Philipp Schuetz

Abstract:

Tree-ring analysis is an important part of the quality assessment and dating of (archaeological) wood samples. It provides quantitative data about the whole anatomical ring structure, which can be used, for example, to measure the impact of the fluctuating environment on tree growth, for the dendrochronological analysis of archaeological wooden artefacts, and to estimate the wood's mechanical properties. Despite advances in computer vision and edge recognition algorithms, detection and counting of annual rings are still limited to 2D datasets and performed in most cases manually, which is a time-consuming, tedious task that depends strongly on the operator's experience. This work presents an image processing approach to detect the whole 3D tree-ring structure directly from X-ray computed tomography imaging data. The approach relies on a modified Canny edge detection algorithm, which captures fully connected tree-ring edges throughout the measured image stack, and is validated on X-ray computed tomography data taken from six wood species.
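
As a rough illustration of the slice-wise edge-detection step, the sketch below applies the standard Canny detector from scikit-image to each slice of a placeholder CT stack; the random volume, the sigma, and the thresholds are assumptions, and the authors' modification for fully connected 3D ring edges is not reproduced here.

    import numpy as np
    from skimage import feature

    # Placeholder CT volume: (slices, rows, cols); in practice this would be the
    # reconstructed X-ray CT image stack of the wood sample.
    volume = np.random.rand(16, 256, 256)

    # Slice-wise Canny edge detection; sigma and the thresholds are assumed
    # values, not the tuned parameters of the modified detector in the paper.
    edges = np.stack([feature.canny(sl, sigma=2.0,
                                    low_threshold=0.1, high_threshold=0.3)
                      for sl in volume])

    print("edge voxels found:", int(edges.sum()))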

Keywords: ring recognition, edge detection, X-ray computed tomography, dendrochronology

Procedia PDF Downloads 220
982 Attendance Management System Implementation Using Face Recognition

Authors: Zainab S. Abdullahi, Zakariyya H. Abdullahi, Sahnun Dahiru

Abstract:

Student attendance is a very important aspect of school management records. In recent years, security systems have become one of the most demanded systems in schools. Every institute has its own method of taking attendance; many schools in Nigeria use the old-fashioned way, that is, writing the student's name and registration number on paper and submitting it to the lecturer at the end of the lecture, which is time-consuming and insecure, because some students can write for their friends without the lecturer's knowledge. In this paper, we propose a system that takes attendance using face recognition. There are other automatic methods available for this purpose, i.e., biometric attendance, but they waste time because the students have to queue to place their thumbs on a scanner. In the proposed system, attendance is recorded using a camera mounted at the front of the classroom that captures images of the students; faces are detected in the image, compared with the database, and attendance is marked. Principal component analysis was used to recognize the detected faces with a high accuracy rate. The paper reviews related work in the field of attendance systems, then describes the system architecture, software algorithm, and results.
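
A minimal sketch of the PCA-based recognition idea is given below (scikit-learn "eigenfaces" plus a nearest-neighbour match); the flattened face images, the student IDs, and the number of principal components are illustrative assumptions rather than the system's actual data or parameters.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsClassifier

    # Placeholder data: flattened face images (one row per detected face) and
    # student IDs; a real system would obtain these from the classroom camera
    # and the enrolment database.
    rng = np.random.default_rng(0)
    faces = rng.random((100, 64 * 64))
    student_ids = rng.integers(0, 10, size=100)

    pca = PCA(n_components=40).fit(faces)          # eigenfaces
    clf = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(faces), student_ids)

    new_face = rng.random((1, 64 * 64))            # a face detected in today's lecture
    predicted_id = clf.predict(pca.transform(new_face))[0]
    print("mark attendance for student:", predicted_id)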

Keywords: attendance system, face detection, face recognition, PCA

Procedia PDF Downloads 364
981 Saline Aspiration Negative Intravascular Test: Mitigating Risk with Injectable Fillers

Authors: Marcelo Lopes Dias Kolling, Felipe Ferreira Laranjeira, Guilherme Augusto Hettwer, Pedro Salomão Piccinini, Marwan Masri, Carlos Oscar Uebel

Abstract:

Introduction: Injectable fillers are among the most common nonsurgical cosmetic procedures, with significant growth yearly. Knowledge of the rheological and mechanical characteristics of fillers, facial anatomy, and injection technique is essential for safety. Concepts such as the use of cannula versus needle, aspiration before injection, and facial danger zones have been well discussed. In case of an accidental intravascular puncture, the pressure inside the vessel may not be sufficient to push blood into the syringe due to the characteristics of the filler product; this is especially true for calcium hydroxyapatite (CaHA) or hyaluronic acid (HA) fillers with high G'. Since the viscoelastic properties of normal saline are much lower than those of fillers, aspiration with saline prior to filler injection may decrease the risk of a false negative aspiration and subsequent catastrophic effects. We discuss a technique that adds a safety step to the procedure with saline aspiration prior to injection, a 'reverse Seldinger' technique for intravascular access, which we term SANIT: Saline Aspiration Negative Intravascular Test. Objectives: To demonstrate the author's (PSP) technique, which adds a safety step to the process of filler injection, with both CaHA and HA, in order to decrease the risk of intravascular injection. Materials and Methods: Normal skin cleansing and topical anesthesia with prilocaine/lidocaine cream are performed; the facial subunits to be treated are marked. A 3 mL Luer lock syringe is filled with 2 mL of 0.9% normal saline and fitted with a 27G needle, which is turned one half rotation. When a cannula is to be used, the Luer lock syringe is attached to a 27G 4 cm single-hole disposable cannula. After skin puncture, the 3 mL syringe is advanced with the plunger pulled back (negative pressure). Progress is made to the desired depth, all the while aspirating. Once the desired location of filler injection is reached, the syringe is exchanged for the syringe containing the filler, while securely grasping the hub of the needle and taking care not to dislodge the needle tip. Prior to this, we remove 0.1 mL of filler to allow space inside the syringe for aspiration. We again aspirate and inject retrograde. SANIT is especially useful for CaHA, since its G' is much higher than that of HA, and thus reflux of blood into the syringe is less likely to occur. Results: The technique has been used safely for the past two years with no adverse events; the increase in cost is negligible (only the cost of 2 mL of normal saline). Over 100 patients (over 300 syringes) have been treated with this technique. The risk of accidental intravascular puncture has been calculated to be between 1:6,410 and 1:40,882 syringes among expert injectors; however, the consequences of intravascular injection can be catastrophic even with board-certified physicians. Conclusions: While the risk of intravascular filler injection is low, the consequences can be disastrous. We believe that adding the SANIT technique can help further mitigate risk with no significant untoward effects, and it could be considered by all practitioners performing injectable filler procedures. Further follow-up is ongoing.

Keywords: injectable fillers, safety, saline aspiration, injectable filler complications, hyaluronic acid, calcium hydroxyapatite

Procedia PDF Downloads 150
980 Evaluation and Fault Classification for Healthcare Robot during Sit-To-Stand Performance through Center of Pressure

Authors: Tianyi Wang, Hieyong Jeong, An Guo, Yuko Ohno

Abstract:

Healthcare robots for assisting sit-to-stand (STS) performance have attracted considerable research interest. To the authors' best knowledge, how such a healthcare robot should be evaluated is still unknown. A robot should be labeled as faulty if users find robot-assisted STS demanding. In this research, we propose a method to evaluate a sit-to-stand assist robot through the center of pressure (CoP) and then classify different STS performances. Experiments were executed five times with ten healthy subjects under four conditions: two self-performed STSs with chair heights of 62 cm and 43 cm, and two robot-assisted STSs with a chair height of 43 cm and robot end-effector speeds of 2 s and 5 s. CoP was measured using a Wii Balance Board (WBB). Bayesian classification was utilized to classify STS performance. The results showed that faults occurred when the chair height was decreased and the robot assist speed was slowed. The proposed method for fault classification showed a high probability of separating fault classes from the others. It was concluded that faults of an STS assist robot can be detected by inspecting the center of pressure and classified through the proposed classification algorithm.
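
The following minimal sketch illustrates Bayesian classification of STS trials from CoP-derived features using a Gaussian naive Bayes model (scikit-learn); the two features, the synthetic trial data, and the class means are assumptions, not the measured WBB data.

    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    # Placeholder CoP features per STS trial (e.g., CoP path length and peak
    # anterior-posterior displacement from the Wii Balance Board) and labels
    # (0 = normal STS, 1 = fault); real values would come from the experiments.
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal([0.4, 0.05], 0.05, size=(40, 2)),    # normal trials
                   rng.normal([0.7, 0.12], 0.05, size=(40, 2))])   # fault trials
    y = np.array([0] * 40 + [1] * 40)

    clf = GaussianNB().fit(X, y)
    trial = np.array([[0.65, 0.11]])                # features of a new robot-assisted STS
    print("fault probability: %.2f" % clf.predict_proba(trial)[0, 1])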

Keywords: center of pressure, fault classification, healthcare robot, sit-to-stand movement

Procedia PDF Downloads 197
979 Bayesian Using Markov Chain Monte Carlo and Lindley's Approximation Based on Type-I Censored Data

Authors: Al Omari Moahmmed Ahmed

Abstract:

This paper describes the Bayesian estimator using Markov chain Monte Carlo and Lindley's approximation, and the maximum likelihood estimation, for the Weibull distribution with Type-I censored data. The maximum likelihood method cannot estimate the shape parameter in closed form, although it can be solved by numerical methods. Moreover, the Bayesian estimates of the parameters and of the survival and hazard functions cannot be obtained analytically. Hence, the Markov chain Monte Carlo method and Lindley's approximation are used: the full conditional distributions for the parameters of the Weibull distribution are obtained via Gibbs sampling and the Metropolis-Hastings (MH) algorithm, followed by estimation of the survival and hazard functions. The methods are compared to their maximum likelihood counterparts, and the comparisons are made with respect to the mean square error (MSE) and absolute bias to determine the better method for the scale and shape parameters and the survival and hazard functions.
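
As an illustration of the MCMC part, the sketch below runs a random-walk Metropolis-Hastings sampler on the Weibull shape and scale parameters under Type-I censoring with a flat prior on the log-parameters; it is a simplified stand-in and does not reproduce the paper's Gibbs steps or Lindley's approximation.

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated Type-I censored Weibull data (shape=1.5, scale=2.0, censoring at t=3).
    t_true = rng.weibull(1.5, size=100) * 2.0
    censor_time = 3.0
    observed = np.minimum(t_true, censor_time)
    is_event = t_true <= censor_time            # False = right-censored at censor_time

    def log_post(log_shape, log_scale):
        """Log-posterior with a flat prior on (log shape, log scale)."""
        k, lam = np.exp(log_shape), np.exp(log_scale)
        z = (observed / lam) ** k
        ll_event = np.log(k) - np.log(lam) + (k - 1) * (np.log(observed) - np.log(lam)) - z
        return np.where(is_event, ll_event, -z).sum()

    # Random-walk Metropolis-Hastings on (log shape, log scale).
    theta = np.array([0.0, 0.0])
    lp = log_post(*theta)
    samples = []
    for _ in range(20000):
        prop = theta + rng.normal(scale=0.1, size=2)
        lp_prop = log_post(*prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta)

    draws = np.exp(np.array(samples[5000:]))     # discard burn-in
    print("posterior mean shape %.2f, scale %.2f" % tuple(draws.mean(axis=0)))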

Keywords: weibull distribution, bayesian method, markov chain monte carlo, survival and hazard functions

Procedia PDF Downloads 479
978 Glucose Monitoring System Using Machine Learning Algorithms

Authors: Sangeeta Palekar, Neeraj Rangwani, Akash Poddar, Jayu Kalambe

Abstract:

Biomedical analysis is an indispensable procedure for identifying health-related diseases like diabetes. Monitoring the glucose level in our body regularly helps us identify hyperglycemia and hypoglycemia, which can cause severe medical problems like nerve damage or kidney disease. This paper presents a method for predicting the glucose concentration in blood samples using image processing and machine learning algorithms. The glucose solution is prepared by the glucose oxidase (GOD) and peroxidase (POD) method. An experimental database is generated based on the colorimetric technique. The image of the glucose solution is captured by a Raspberry Pi camera and analyzed using image processing by extracting the RGB, HSV, and LUX color space values. Regression algorithms like multiple linear regression, decision tree, random forest, and XGBoost were used to predict the unknown glucose concentration. The multiple linear regression algorithm predicts the results with 97% accuracy. The image processing and machine learning-based approach reduces the hardware complexity of existing platforms.
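
A minimal sketch of the regression step is shown below: a multiple linear regression (scikit-learn) mapping colour-channel values of the reaction mixture to glucose concentration; the synthetic calibration data and the assumed channel trends are placeholders, not the experimental colorimetric database.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Placeholder colorimetric readings: mean R, G, B values extracted from
    # images of the GOD/POD reaction mixture, with the known glucose
    # concentrations (mg/dL) of the calibration solutions.
    rng = np.random.default_rng(0)
    concentration = np.linspace(50, 400, 30)
    color_features = np.column_stack([
        255 - 0.4 * concentration + rng.normal(0, 3, 30),   # R channel (assumed trend)
        180 - 0.2 * concentration + rng.normal(0, 3, 30),   # G channel
        120 + 0.1 * concentration + rng.normal(0, 3, 30),   # B channel
    ])

    model = LinearRegression().fit(color_features, concentration)
    unknown_sample = np.array([[170, 140, 145]])             # features of an unknown solution
    print("predicted glucose: %.1f mg/dL" % model.predict(unknown_sample)[0])
    print("calibration R^2: %.3f" % model.score(color_features, concentration))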

Keywords: artificial intelligence glucose detection, glucose oxidase, peroxidase, image processing, machine learning

Procedia PDF Downloads 203
977 Vibration-Based Data-Driven Model for Road Health Monitoring

Authors: Guru Prakash, Revanth Dugalam

Abstract:

A road's condition often deteriorates due to harsh loading, such as overloading by trucks, and severe environmental conditions, such as heavy rain, snow load, and cyclic loading. In the absence of proper maintenance planning, this results in potholes, wide cracks, bumps, and increased road roughness. In this paper, a data-driven model is developed to detect these damages using vibration and image signals. The key idea of the proposed methodology is that road anomalies manifest in these signals and can be detected by training a machine learning algorithm. The use of various machine learning techniques, such as the support vector machine and the random forest method, will be investigated. The proposed model will first be trained and tested with artificially simulated data, and the model architecture will be finalized by comparing the accuracies of the various models. Once a model is fixed, a field study will be performed and data will be collected. The field data will be used to validate the proposed model and to predict the road's future health condition. The proposed model will help to automate the road condition monitoring, repair cost estimation, and maintenance planning processes.
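
The sketch below illustrates the intended training-on-simulated-data step with a random forest classifier (scikit-learn); the two vibration features and the synthetic healthy/damaged distributions are assumptions used only to show the workflow.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Artificially simulated vibration features per road segment, e.g. RMS
    # acceleration and peak-to-peak amplitude, with labels 0 = healthy,
    # 1 = pothole/bump; field data would later replace this synthetic set.
    rng = np.random.default_rng(0)
    healthy = rng.normal([0.2, 0.8], 0.05, size=(200, 2))
    damaged = rng.normal([0.5, 2.0], 0.15, size=(200, 2))
    X = np.vstack([healthy, damaged])
    y = np.array([0] * 200 + [1] * 200)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print("test accuracy: %.2f" % clf.score(X_te, y_te))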

Keywords: SVM, data-driven, road health monitoring, pot-hole

Procedia PDF Downloads 86
976 Design of Geochemical Maps of Industrial City Using Gradient Boosting and Geographic Information System

Authors: Ruslan Safarov, Zhanat Shomanova, Yuri Nossenko, Zhandos Mussayev, Ayana Baltabek

Abstract:

Geochemical maps of the distribution of the polluting elements V, Cr, Mn, Co, Ni, Cu, Zn, Mo, Cd, and Pb were designed for the territory of Pavlodar city (Kazakhstan), which is an industrial hub. Soil samples were taken from 100 locations, and elemental analysis was performed using XRF. The obtained data were used to train a computational model with a gradient boosting algorithm. The optimal model parameters as well as the loss function were selected. The computational model was then used to predict the polluting element concentrations at 1,000 evenly distributed points, and geochemical maps were created from the predicted data. Additionally, the total pollution index Zc was calculated for each of the 1,000 points, and its spatial distribution was visualized using GIS (QGIS). It was found that the largest share of the territory of Pavlodar city (89.7%) belongs to the moderately hazardous category. The visualization of the obtained data allowed us to conclude that the main source of contamination is the industrial zones where the strategic metallurgical and refining plants are located.
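
A minimal sketch of the prediction-and-mapping workflow is given below using scikit-learn's gradient boosting regressor (the study itself used CatBoost); the sampling coordinates, concentrations, background values, and the two-element subset are illustrative assumptions. The total pollution index is taken in its common form Zc = ΣKc − (n − 1), with Kc the ratio of predicted to background concentration.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    # Placeholder soil survey: (x, y) sampling coordinates and measured Zn and Pb
    # concentrations (mg/kg); the real study used 100 XRF-analysed samples and
    # ten elements, and CatBoost rather than scikit-learn.
    rng = np.random.default_rng(0)
    xy = rng.uniform(0, 10, size=(100, 2))
    zn = 60 + 40 * np.exp(-((xy - 5) ** 2).sum(axis=1)) + rng.normal(0, 5, 100)
    pb = 20 + 15 * np.exp(-((xy - 3) ** 2).sum(axis=1)) + rng.normal(0, 2, 100)

    models = {"Zn": GradientBoostingRegressor().fit(xy, zn),
              "Pb": GradientBoostingRegressor().fit(xy, pb)}

    # Predict on a regular grid of 1,000 points and compute the total pollution
    # index Zc = sum(Kc) - (n - 1), with Kc the ratio to background concentration.
    grid = np.column_stack([g.ravel() for g in
                            np.meshgrid(np.linspace(0, 10, 40), np.linspace(0, 10, 25))])
    background = {"Zn": 60.0, "Pb": 20.0}          # assumed regional background values
    kc = np.column_stack([models[e].predict(grid) / background[e] for e in models])
    zc = kc.sum(axis=1) - (kc.shape[1] - 1)
    print("Zc over the grid: mean %.2f, max %.2f" % (zc.mean(), zc.max()))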

Keywords: Pavlodar, geochemical map, gradient boosting, CatBoost, QGIS, spatial distribution, heavy metals

Procedia PDF Downloads 82
975 The Effect of Students’ Social and Scholastic Background and Environmental Impact on Shaping Their Pattern of Digital Learning in Academia: A Pre- and Post-COVID Comparative View

Authors: Nitza Davidovitch, Yael Yossel-Eisenbach

Abstract:

The purpose of the study was to inquire whether there was a change in the shaping of undergraduate students' digitally oriented study pattern in the pre-COVID (2016-2017) versus post-COVID (2022-2023) period, as affected by three factors: social background characteristics, high school background, and academic background characteristics. These two time points were characterized by dramatic changes in teaching and learning at institutions of higher education. The data were collected via cross-sectional surveys at two time points, in the 2016-2017 academic year (N=443) and in the 2022-2023 academic year (N=326). The questionnaire was distributed on social media and included questions on demographic background characteristics, previous studies in high school and present academic studies, and questions on learning and reading habits. Method of analysis: A. Descriptive statistical analysis. B. Mean comparison tests were conducted to analyze the variations in the mean score of the digitally oriented learning pattern variable at the two time points (pre- and post-COVID) in relation to each of the independent variables. C. Analysis of variance was performed to test the main effects and the interactions. D. Applying linear regression, the research examined the combined effect of the independent variables on shaping students' digitally oriented learning habits. The analysis includes four models; in all four, the dependent variable is students' perception of digitally oriented learning. The first model included social background variables; the second model added scholastic background; in the third model, the academic background variables were added; and the fourth model includes all the independent variables together with the period variable (pre- and post-COVID). E. Factor analysis using the principal component method with varimax rotation; the variables were constructed as a weighted mean of all the relevant statements merged to form a single variable denoting a shared content world. The research findings indicate a significant rise in students' perceptions of digitally oriented learning in the post-COVID period. From a gender perspective, the impact of COVID on shaping a digital learning pattern was much more significant for female students. The effect of socioeconomic status is eliminated when controlling for the period, and the student's job has an effect greater than that of all the other variables. It may be assumed that the student's work pattern mediates effects related to the convenience offered by digital learning in terms of distance and time. The significant effect of scholastic background on shaping students' digital learning patterns remained stable, even when controlling for all explanatory variables. The advantage that universities had over colleges in shaping a digital learning pattern in the pre-COVID period dissipated; therefore, it can be said that after COVID, there was a change in how colleges shape students' digital learning patterns, in such a way that no institutional differences are evident with regard to shaping the digital learning pattern. The study shows that the period has a significant independent effect on shaping students' digital learning patterns when controlling for the explanatory variables.

Keywords: learning pattern, COVID, socioeconomic status, digital learning

Procedia PDF Downloads 62
974 The Co-Simulation Interface SystemC/Matlab Applied in JPEG and SDR Application

Authors: Walid Hassairi, Moncef Bousselmi, Mohamed Abid

Abstract:

Functional verification is a major part of today's system design task. Several approaches are available for verification at a high abstraction level, where designs are often modeled using MATLAB/Simulink. However, the diversity of approaches is a barrier to a unified verification flow. In this paper, we propose a co-simulation interface between SystemC and MATLAB/Simulink to enable functional verification of multi-abstraction-level designs. The resulting verification flow is tested on the JPEG compression algorithm. The required synchronization of both simulation environments, as well as data type conversion, is solved using the proposed co-simulation flow. We divided the JPEG encoder into two parts: the first, the DCT, is implemented in SystemC and represents the HW part; the second, consisting of quantization and entropy encoding, is implemented in MATLAB and represents the SW part. For communication and synchronization between these two parts, we use an S-Function and the MATLAB engine in Simulink. On this premise, this study introduces a new SystemC hardware implementation of the DCT. We compare the results of our simulation to a SW/SW implementation and observe a reduction in simulation time of 88.15% for JPEG, while the design efficiency reaches 90% for SDR.

Keywords: hardware/software, co-design, co-simulation, systemc, matlab, s-function, communication, synchronization

Procedia PDF Downloads 405
973 IoT Continuous Monitoring Biochemical Oxygen Demand Wastewater Effluent Quality: Machine Learning Algorithms

Authors: Sergio Celaschi, Henrique Canavarro de Alencar, Claaudecir Biazoli

Abstract:

Effluent quality is of the highest priority for compliance with the permit limits of environmental protection agencies and ensures the protection of the local water system. Of the pollutants monitored, the biochemical oxygen demand (BOD) poses one of the greatest challenges. Delayed BOD5 results from the lab, which take 7 to 8 analysis days, hinder a wastewater treatment plant's (WWTP's) ability to react to different situations and meet treatment goals; reducing BOD turnaround time from days to hours is our quest. This work presents such a solution, based on a system of two BOD bioreactors associated with Digital Twin (DT) and Machine Learning (ML) methodologies via an Internet of Things (IoT) platform to monitor and control a WWTP and support decision making. A DT is a virtual and dynamic replica of a production process. A DT requires the ability to collect and store real-time sensor data related to the operating environment. Furthermore, it integrates and organizes the data on a digital platform and applies analytical models, allowing a deeper understanding of the real process in order to catch anomalies sooner. In our system for continuous monitoring of the BOD removed by the effluent treatment process, the DT algorithm analyzes the data using ML on a parameterized chemical kinetic model. The continuous BOD monitoring system, capable of providing results in a fraction of the time required by BOD5 analysis, is composed of two thermally isolated batch bioreactors. Each bioreactor contains input/output access for the wastewater sample (influent and effluent), hydraulic conduction tubes, pumps and valves for the batch sample and dilution water, an air supply for dissolved oxygen (DO) saturation, a cooler/heater for sample thermal stability, an optical DO sensor based on fluorescence quenching, pH, ORP, temperature, and atmospheric pressure sensors, and a local PLC/CPU with a TCP/IP data transmission interface. The dynamic BOD monitoring range covers 2 mg/L < BOD < 2,000 mg/L. In addition to the BOD monitoring system, there are many other operational WWTP sensors. The CPU data is transmitted to/from the digital platform, which in turn performs analyses at periodic intervals, aiming to feed the learning process. BOD bulletins and their credibility intervals are made available to web users at 12-hour intervals. The chemical kinetics ML algorithm is composed of a coupled system of four first-order ordinary differential equations for the molar masses of DO, the organic material present in the sample, the biomass, and the products (CO₂ and H₂O) of the reaction. This system is solved numerically subject to its initial conditions: DO (saturated) and the initial products of the kinetic oxidation process, CO₂ = H₂O = 0. The initial values for organic matter and biomass are estimated by minimization of the mean square deviations. A real case of continuous monitoring of BOD wastewater effluent quality is being conducted by deploying an IoT application on a large wastewater purification system located in S. Paulo, Brazil.
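
To illustrate the kinetic-fitting idea, the sketch below uses a deliberately simplified first-order substrate/DO/biomass/product model in place of the paper's four-equation molar-mass system, simulates a DO trace, and recovers the rate constant and initial organic load by least-squares minimisation (SciPy); all symbols, values, and the yield coefficient are assumptions.

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import least_squares

    def rhs(t, state, k, yield_coef):
        """Simplified first-order kinetics: substrate S, dissolved oxygen DO,
        biomass X and oxidation products P (all in mg/L)."""
        s, do, x, p = state
        r = k * s
        return [-r, -r, yield_coef * r, (1.0 - yield_coef) * r]

    def simulate(k, s0, t_eval, do0=8.0):
        sol = solve_ivp(rhs, (0.0, t_eval[-1]), [s0, do0, 0.0, 0.0],
                        t_eval=t_eval, args=(k, 0.4))
        return sol.y[1]                      # DO trace seen by the optical sensor

    # Synthetic "measured" DO decay from the bioreactor (true k=0.35 1/h, S0=5 mg/L).
    t_meas = np.linspace(0, 12, 25)
    do_meas = simulate(0.35, 5.0, t_meas) + np.random.default_rng(0).normal(0, 0.05, 25)

    # Estimate k and the initial organic load S0 by minimising the squared
    # deviations, analogous to the digital-twin step before reporting a fast
    # BOD estimate.
    fit = least_squares(lambda p: simulate(p[0], p[1], t_meas) - do_meas,
                        x0=[0.1, 2.0], bounds=([1e-3, 0.1], [5.0, 50.0]))
    k_hat, s0_hat = fit.x
    print("estimated k = %.2f 1/h, BOD estimate ~ %.1f mg/L" % (k_hat, s0_hat))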

Keywords: effluent treatment, biochemical oxygen demand, continuous monitoring, IoT, machine learning

Procedia PDF Downloads 73
972 New Approach for Minimizing Wavelength Fragmentation in Wavelength-Routed WDM Networks

Authors: Sami Baraketi, Jean Marie Garcia, Olivier Brun

Abstract:

Wavelength Division Multiplexing (WDM) is the dominant transport technology used in numerous high-capacity backbone networks based on optical infrastructures. Given the importance of the costs (CapEx and OpEx) associated with these networks, resource management is becoming increasingly important, especially how the optical circuits, called "lightpaths", are routed throughout the network. This requires efficient algorithms that provide routing strategies with the lowest cost. We focus on the lightpath routing and wavelength assignment problem, known as the RWA problem, while optimizing wavelength fragmentation over the network. Wavelength fragmentation poses a serious challenge for network operators, since it leads to misuse of the wavelength spectrum and hence to the rejection of new lightpath requests. In this paper, we first establish a new Integer Linear Program (ILP) for the problem based on a node-link formulation. This formulation follows a multilayer approach in which the original network is decomposed into several network layers, each corresponding to a wavelength. Furthermore, we propose an efficient heuristic for the problem based on a greedy algorithm followed by a post-processing procedure. The obtained results show that the optimal solution is often reached. We also compare our results with those of other RWA heuristic methods.
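
For contrast with the ILP and the proposed heuristic, the sketch below shows the classic greedy baseline for RWA: shortest-path routing with first-fit wavelength assignment under the wavelength-continuity constraint (networkx); the toy topology, wavelength count, and demands are assumptions.

    import networkx as nx

    # Toy topology: each undirected link carries W wavelengths; wavelength_use
    # records which wavelengths are already occupied on each link. This is only
    # an illustrative greedy baseline (shortest path + first-fit), not the ILP
    # or the heuristic with post-processing proposed in the paper.
    W = 4
    G = nx.Graph([("A", "B"), ("B", "C"), ("C", "D"), ("A", "D"), ("B", "D")])
    wavelength_use = {frozenset(e): set() for e in G.edges}

    def route_lightpath(src, dst):
        path = nx.shortest_path(G, src, dst)
        links = [frozenset((path[i], path[i + 1])) for i in range(len(path) - 1)]
        for w in range(W):                              # first-fit wavelength assignment
            if all(w not in wavelength_use[l] for l in links):
                for l in links:
                    wavelength_use[l].add(w)
                return path, w
        return None                                     # blocked: no common free wavelength

    for demand in [("A", "C"), ("A", "C"), ("B", "D")]:
        print(demand, "->", route_lightpath(*demand))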

Keywords: WDM, lightpath, RWA, wavelength fragmentation, optimization, linear programming, heuristic

Procedia PDF Downloads 527
971 Analysis of Three-Dimensional Longitudinal Rolls Induced by Double Diffusive Poiseuille-Rayleigh-Benard Flows in Rectangular Channels

Authors: O. Rahli, N. Mimouni, R. Bennacer, K. Bouhadef

Abstract:

This numerical study investigates the appearance of travelling waves and the behavior of Poiseuille-Rayleigh-Benard (PRB) flow induced in 3D thermosolutal mixed convection (TSMC) in horizontal rectangular channels. The governing equations are discretized using a control volume method with the third-order QUICK scheme for approximating the advection terms. The SIMPLER algorithm is used to handle the coupling between the momentum and continuity equations. To avoid excessively long computation times, full approximation storage (FAS) with the full multigrid (FMG) method is used to solve the problem. For a broad range of dimensionless controlling parameters, the contribution of this work is the analysis of the flow regimes of the steady longitudinal thermoconvective rolls (denoted R//) for both heat and mass transfer (TSMC). The transition from opposing volume forces to cooperating ones considerably affects the birth and development of the longitudinal rolls. The heat and mass transfer distributions are also examined.

Keywords: heat and mass transfer, mixed convection, poiseuille-rayleigh-benard flow, rectangular duct

Procedia PDF Downloads 298
970 A Brave New World of Privacy: Empirical Insights into the Metaverse’s Personalization Dynamics

Authors: Cheng Xu

Abstract:

As the metaverse emerges as a dynamic virtual simulacrum of reality, its implications for user privacy have become a focal point of interest. While previous discussions have ventured into metaverse privacy dynamics, a glaring empirical gap persists, especially concerning the effects of personalization in the context of news recommendation services. This study stands at the forefront of addressing this void, meticulously examining how users' privacy concerns shift within the metaverse's personalization context. Through a pre-registered randomized controlled experiment, participants engaged in a personalization task across both the metaverse and traditional online platforms; upon completion of this task, a comprehensive news recommendation service provider offered personalized news recommendations to the users. Our empirical findings reveal that the metaverse inherently amplifies privacy concerns compared to traditional settings. However, these concerns are notably mitigated when users have a say in shaping the algorithms that drive these recommendations. This pioneering research not only fills a significant knowledge gap but also offers crucial insights for metaverse developers and policymakers, emphasizing the nuanced role of user input in shaping algorithm-driven privacy perceptions.

Keywords: metaverse, privacy concerns, personalization, digital interaction, algorithmic recommendations

Procedia PDF Downloads 117
969 A Particle Swarm Optimal Control Method for DC Motor by Considering Energy Consumption

Authors: Yingjie Zhang, Ming Li, Ying Zhang, Jing Zhang, Zuolei Hu

Abstract:

In the actual start-up process of DC motors, the DC drive system often faces a conflict between energy consumption and acceleration performance. To resolve this conflict, this paper proposes a comprehensive performance index in which an energy consumption index is added to the classical control performance index for the DC motor starting process. Taking the comprehensive performance index as the cost function, a particle swarm optimization algorithm is designed to optimize the comprehensive performance. Simulations of the optimization of the comprehensive performance of the DC motor are then conducted, on the condition that the weight coefficient of the energy consumption index is properly designed. The simulation results show that, as the weight of energy consumption increased, energy efficiency was significantly improved at the expense of a slight sacrifice in the rapidity indicators under the comprehensive performance index method. Energy efficiency increased from 63.18% to 68.48%, while the response time was simultaneously reduced from 0.2875 s to 0.1736 s, compared with a traditional proportional-integral-derivative controller, in terms of energy saving.
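
A minimal sketch of the optimization loop is given below: a basic particle swarm optimizer tunes the proportional gain of a toy first-order motor model against a weighted sum of tracking error and an energy surrogate; the plant model, gain range, inertia and acceleration coefficients, and weight are illustrative assumptions, not the paper's motor parameters.

    import numpy as np

    def cost(kp, w_energy=0.3):
        """Weighted start-up cost for a toy first-order DC motor speed loop:
        integral absolute error (fastness) plus an energy surrogate (sum of u^2).
        The plant model, limits and weight are illustrative assumptions only."""
        dt, tau, omega_ref = 0.001, 0.05, 1.0
        omega, err_int, energy = 0.0, 0.0, 0.0
        for _ in range(1000):                       # 1 s of simulated start-up
            u = np.clip(kp * (omega_ref - omega), -10.0, 10.0)
            omega += dt * (u - omega) / tau
            err_int += abs(omega_ref - omega) * dt
            energy += u * u * dt
        return err_int + w_energy * energy

    # Minimal particle swarm optimization over the proportional gain kp.
    rng = np.random.default_rng(0)
    n, iters = 20, 50
    pos = rng.uniform(0.1, 20.0, n)
    vel = np.zeros(n)
    pbest, pbest_val = pos.copy(), np.array([cost(p) for p in pos])
    gbest = pbest[pbest_val.argmin()]

    for _ in range(iters):
        r1, r2 = rng.random(n), rng.random(n)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.1, 20.0)
        vals = np.array([cost(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()]

    print("optimized kp = %.2f, cost = %.3f" % (gbest, cost(gbest)))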

Keywords: comprehensive performance index, energy consumption, acceleration performance, particle swarm optimal control

Procedia PDF Downloads 163
968 Massively-Parallel Bit-Serial Neural Networks for Fast Epilepsy Diagnosis: A Feasibility Study

Authors: Si Mon Kueh, Tom J. Kazmierski

Abstract:

About 1% of the world's population suffers from the hidden disability known as epilepsy, and major developing countries are not fully equipped to counter this problem. In order to reduce the inconvenience and danger of epilepsy, different methods have been researched that use artificial neural network (ANN) classification to distinguish epileptic waveforms from normal brain waveforms. This paper outlines the aim of achieving massive ANN parallelization through dedicated hardware using bit-serial processing. The design of this bit-serial Neural Processing Element (NPE) is presented, which implements the functionality of a complete neuron using variable accuracy. The proposed design has been tested taking into consideration the non-idealities of a hardware ANN. The NPE consists of a bit-serial multiplier, which uses only 16 logic elements on an Altera Cyclone IV FPGA, a bit-serial ALU, and a look-up table. Arrays of NPEs can be driven by a single controller which executes the neural processing algorithm. In conclusion, the proposed compact NPE design allows the construction of complex hardware ANNs that can be implemented in portable equipment suited to the needs of a single epileptic patient in his or her daily activities, to predict the occurrence of impending tonic-clonic seizures.

Keywords: Artificial Neural Networks (ANN), bit-serial neural processor, FPGA, Neural Processing Element (NPE)

Procedia PDF Downloads 321
967 Eco-Friendly Cultivation

Authors: Shah Rucksana Akhter Urme

Abstract:

Agriculture is the main source of food for human consumption, and with the world's huge population to feed, the pressure on the food supply is increasing day by day. Undoubtedly, quality strains, improved plantation and farming technology, synthetic fertilizers, readily available irrigation, insecticides, and harvesting technology are the main factors used to meet the huge demand for food all over the world. However, dependence on these limited resources and excessive consumption of land, water, and fertilizers deplete the resources and leave severe climate effects for our future generations. Agriculture is one of the sectors most responsible for global warming, emitting more greenhouse gases than all vehicles, largely from nitrous oxide released from fertilized fields and carbon dioxide from the cutting of rain forests to grow crops. Farming is the thirstiest user of our precious water supplies and a major polluter, as runoff from fertilizers disrupts fragile lakes, rivers, and coastal ecosystems across the globe, which accelerates the loss of biodiversity and crucial habitat and is a major driver of wildlife extinction. Needless to say, we have to be more concerned with how we can preserve soil nutrients, store water, and avoid excessive dependence on synthetic fertilizers and insecticides. In this respect, eco-friendly cultivation could be a potential alternative to minimize the effects of agriculture on our environment. This review paper focuses on organic cultivation, in particular biotechnological processes centered on bio-fertilizers and bio-pesticides. Intense use of chemical pesticides and insecticides has severe effects on both human life and biodiversity; this cultivation process introduces farmers to an alternative that is non-hazardous, cost-effective, and eco-friendly. Organic fertilizers such as tea residue and ashes might be the best alternatives to synthetic fertilizers, as they play an important role in increasing soil nutrients and fertility; ashes contain various essential and non-essential minerals that are required for plant growth. Organic pesticides such as neem spray are beneficial for crops, as they are toxic to pests and insects. Recycled and composted crop wastes and animal manures, crop rotation, green manures, legumes, etc., maintain soil fertility without the use of hazardous chemicals. Finally, water hyacinth and algae are potential sources of nutrients, and even alternatives to soil for cultivation, along with providing water storage for a continuous supply. With inorganic agricultural practices, consuming fruits and vegetables becomes a threat to both human life and the ecosystem, and synthetic fertilizers and pesticides are responsible for this. Farmers who practice eco-friendly farming have to implement steps to protect the environment, particularly by severely limiting the use of pesticides and avoiding synthetic chemical fertilizers, which is necessary for organic systems to achieve reduced environmental harm and health risk.

Keywords: organic farming, biopesticides, organic nutrients, water storage, global warming

Procedia PDF Downloads 60
966 An Inverse Heat Transfer Algorithm for Predicting the Thermal Properties of Tumors during Cryosurgery

Authors: Mohamed Hafid, Marcel Lacroix

Abstract:

This study aimed at developing an inverse heat transfer approach for predicting the time-varying freezing front and the temperature distribution of tumors during cryosurgery. Using a temperature probe pressed against the tumor layer, the inverse approach is able to predict simultaneously the metabolic heat generation and the blood perfusion rate of the tumor. Once these parameters are predicted, the temperature field and the time-varying freezing fronts are determined with the direct model. The direct model rests on the one-dimensional Pennes bioheat equation, and the phase change problem is handled with the enthalpy method. The Levenberg-Marquardt method (LMM), combined with the Broyden method (BM), is used to solve the inverse model. The effects (a) of the thermal properties of the diseased tissues, (b) of the initial guesses for the unknown thermal properties, (c) of the data capture frequency, and (d) of the noise on the recorded temperatures are examined. It is shown that the proposed inverse approach remains accurate for all the cases investigated.
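
The sketch below illustrates the Levenberg-Marquardt estimation step with SciPy's least_squares(method='lm'), recovering a metabolic heat source and a perfusion coefficient from synthetic probe temperatures; the lumped single-node forward model is a deliberate simplification of the one-dimensional Pennes phase-change model used in the study, and all numerical values are assumptions.

    import numpy as np
    from scipy.optimize import least_squares

    # Lumped, illustrative stand-in for the Pennes bioheat forward model: the
    # tissue temperature relaxes under metabolic heating q_m (W/m^3) and blood
    # perfusion w_b (1/s). The full study uses a 1-D phase-change model instead.
    RHO_C = 3.6e6          # tissue volumetric heat capacity, J/(m^3 K)
    RHOC_BLOOD = 3.8e6     # blood volumetric heat capacity, J/(m^3 K)
    T_ARTERIAL = 37.0

    def forward(q_m, w_b, times, t0=30.0):
        temps, temp, dt = [], t0, 1.0
        for _ in times:
            dTdt = (q_m + w_b * RHOC_BLOOD * (T_ARTERIAL - temp)) / RHO_C
            temp += dt * dTdt
            temps.append(temp)
        return np.array(temps)

    # Synthetic probe readings (true q_m = 4000 W/m^3, w_b = 0.002 1/s) plus noise.
    t = np.arange(0, 600, 1.0)
    measured = forward(4000.0, 0.002, t) + np.random.default_rng(0).normal(0, 0.02, t.size)

    fit = least_squares(lambda p: forward(p[0], p[1], t) - measured,
                        x0=[1000.0, 0.0005], method="lm")   # Levenberg-Marquardt
    print("estimated q_m = %.0f W/m^3, w_b = %.4f 1/s" % tuple(fit.x))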

Keywords: cryosurgery, inverse heat transfer, Levenberg-Marquardt method, thermal properties, Pennes model, enthalpy method

Procedia PDF Downloads 200
965 Non-Invasive Imaging of Tissue Using Near Infrared Radiations

Authors: Ashwani Kumar Aggarwal

Abstract:

NIR light is non-ionizing and can pass easily through living tissues such as the breast without any harmful effects. Therefore, using NIR light to image biological tissue and quantify its optical properties is a good choice over other, invasive methods. Optical tomography involves two steps. One is the forward problem and the other is the reconstruction problem. The forward problem consists of finding the measurements of light transmitted through the tissue from source to detector, given the spatial distribution of the absorption and scattering properties. The second step is the reconstruction problem. In X-ray tomography, there are standard methods for reconstruction, such as the filtered back projection method or algebraic reconstruction methods, but these cannot be applied as such in optical tomography due to the highly scattering nature of biological tissue. A hybrid reconstruction algorithm has been implemented in this work, which takes into account the highly scattered paths taken by photons when back-projecting the forward data obtained from Monte Carlo simulation. The reconstructed image suffers from blurring due to the point spread function; this blurred reconstructed image has been enhanced using a digital filter which is optimal in the mean-square sense.

Keywords: least-squares optimization, filtering, tomography, laser interaction, light scattering

Procedia PDF Downloads 316
964 Periodical System of Isotopes

Authors: Andriy Magula

Abstract:

With the help of a special algorithm based on the principle of multilevel periodicity, a periodic change of properties at the nuclear level of the chemical elements was discovered, and a variant of the periodic system of isotopes is presented. The periodic change in the properties of isotopes, as well as the vertical symmetry of subgroups, was checked for consistency against the following ten types of experimental data: mass ratio of fission fragments; quadrupole moment values; magnetic moment; lifetime of radioactive isotopes; neutron scattering; thermal neutron radiative capture cross-sections (n, γ); α-particle yield cross-sections (n, α); isotope abundance on Earth, in the Solar System, and in other stellar systems; and features of ore formation and stellar evolution. For all ten cases, correspondences with the proposed periodic structure of the nucleus were obtained. The system is laid out in the usual 2D table, similar to the periodic system of elements, and the mass series of isotopes is divided into 8 periods and 4 types of 'nuclear' orbitals: sn, dn, pn, fn. The origin of the 'magic' numbers as a set of filled charge shells of the nucleus is explained. With the isotope system, periodic structure is shown at a new level of the universe, and the prospects for its practical use are opened up.

Keywords: periodic system, isotope, period, subgroup, “nuclear” orbital, nuclear reaction

Procedia PDF Downloads 17
963 Fuzzy Multi-Objective Approach for Emergency Location Transportation Problem

Authors: Bidzina Matsaberidze, Anna Sikharulidze, Gia Sirbiladze, Bezhan Ghvaberidze

Abstract:

In the modern world, emergency management decision support systems are actively used by state organizations that deal with extreme and abnormal processes and provide optimal and safe management of the supplies needed for civil and military facilities in geographical areas affected by disasters, earthquakes, fires and other accidents, weapons of mass destruction, terrorist attacks, etc. Obviously, these kinds of extreme events cause significant losses and damage to the infrastructure. In such cases, the use of intelligent support technologies is very important for the quick and optimal location-transportation of emergency services in order to avoid new losses caused by these events. Timely servicing from emergency service centers to the affected disaster regions (the response phase) is a key task of the emergency management system, and scientific research in this field plays an important role in decision-making problems. Our goal was to create an expert knowledge-based intelligent support system, which will serve as an assistant tool to provide optimal solutions for the above-mentioned problem. The inputs to the mathematical model of the system are objective data as well as expert evaluations. The outputs of the system are solutions of the Fuzzy Multi-Objective Emergency Location-Transportation Problem (FMOELTP) for disaster regions. The development and testing of the intelligent support system were done on the example of an experimental disaster region (a geographical zone of Georgia), which was generated using simulation modeling. Four objectives are considered in our model. The first objective is to minimize the expectation of the total transportation duration of the needed products. The second objective is to minimize the total selection unreliability index of the opened humanitarian aid distribution centers (HADCs). The third objective minimizes the number of agents needed to operate the opened HADCs. The fourth objective minimizes the non-covered demand over all demand points. Possibility chance constraints and objective constraints were constructed based on objective-subjective data. The FMOELTP was constructed in a static and fuzzy environment, since the decisions to be made are taken immediately after the disaster (within a few hours) with the information available at that moment. It is assumed that the requests for products are estimated by homeland security organizations, or their experts, based upon their experience and their evaluation of the disaster's seriousness. Estimated transportation times take into account the routing access difficulty of the region and the infrastructure conditions. We propose an epsilon-constraint method for finding the exact solutions of the problem, and it is proved that this approach generates the exact Pareto front of the multi-objective location-transportation problem addressed. For large problem dimensions, the exact method can require long computing times; thus, we propose an approximate method that imposes a number of stopping criteria on the exact method. For large dimensions of the FMOELTP, an Estimation of Distribution Algorithm (EDA) approach is also developed.

Keywords: epsilon-constraint method, estimation of distribution algorithm, fuzzy multi-objective combinatorial programming problem, fuzzy multi-objective emergency location/transportation problem

Procedia PDF Downloads 321
962 Modelling a Hospital as a Queueing Network: Analysis for Improving Performance

Authors: Emad Alenany, M. Adel El-Baz

Abstract:

In this paper, the flow of different classes of patients into a hospital is modelled and analyzed using the queueing network analyzer (QNA) algorithm and discrete-event simulation. The input data for QNA are the rate and variability parameters of the arrival and service times, in addition to the number of servers in each facility. The patient flows mostly match the real flows of a hospital in Egypt. Based on the analysis of the waiting times, two approaches are suggested for improving performance: separating patients into service groups, and adopting different service policies for sequencing patients through hospital units. Separating a specific group of patients with a higher performance target, to be served apart from the rest of the patients requiring a lower performance target, requires the same capacity while improving performance for the selected group with the higher target. Moreover, it is shown that adopting the shortest processing time and shortest remaining processing time service policies, among the other tested policies, would result in 11.47% and 13.75% reductions in average waiting time, respectively, relative to the first-come-first-served policy.
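
The benefit of shortest-processing-time sequencing can be seen in the small sketch below, which compares average waiting time under FCFS and SPT ordering for a single server with all jobs present at time zero; the exponential service times are assumed values, and the sketch is not the QNA or simulation model of the paper.

    import numpy as np

    def avg_wait(service_times):
        """Average waiting time when one server processes the given jobs in
        order (all jobs assumed present at time zero)."""
        completion = np.cumsum(service_times)
        waits = completion - service_times            # each job waits for its predecessors
        return waits.mean()

    # Illustrative service times (e.g., minutes of treatment for queued patients);
    # these are assumed values, not the hospital data analysed in the paper.
    rng = np.random.default_rng(0)
    service = rng.exponential(scale=20.0, size=50)

    fcfs = avg_wait(service)                          # first come, first served
    spt = avg_wait(np.sort(service))                  # shortest processing time first
    print("FCFS average wait %.1f min, SPT %.1f min (%.1f%% reduction)"
          % (fcfs, spt, 100 * (fcfs - spt) / fcfs))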

Keywords: queueing network, discrete-event simulation, health applications, SPT

Procedia PDF Downloads 187
961 Method of False Alarm Rate Control for Cyclic Redundancy Check-Aided List Decoding of Polar Codes

Authors: Dmitry Dikarev, Ajit Nimbalker, Alexei Davydov

Abstract:

Polar coding is a novel example of error-correcting codes, which can achieve the Shannon limit at block length N→∞ with log-linear complexity. Active research is being carried out to adapt this theoretical concept for use in practical applications such as 5th-generation wireless communication systems. The cyclic redundancy check (CRC) error-detection code is broadly used in conjunction with the successive cancellation list (SCL) decoding algorithm to improve finite-length polar code performance. However, there are two issues: an increase in code block payload overhead due to the CRC bits, and a decrease in CRC error-detection capability. This paper proposes a method to control the CRC overhead and the false alarm rate of polar decoding. As shown by the computer simulation results, the proposed method provides the ability to use any set of CRC polynomials with any list size while maintaining the desired level of false alarm rate. This level of flexibility allows using polar codes in the 5G New Radio standard.

Keywords: 5G New Radio, channel coding, cyclic redundancy check, list decoding, polar codes

Procedia PDF Downloads 238
960 Hate Speech Detection Using Machine Learning: A Survey

Authors: Edemealem Desalegn Kingawa, Kafte Tasew Timkete, Mekashaw Girmaw Abebe, Terefe Feyisa, Abiyot Bitew Mihretie, Senait Teklemarkos Haile

Abstract:

Currently, hate speech is a growing challenge for society, individuals, policymakers, and researchers, as social media platforms make it easy to anonymously create and grow online friends and followers and provide an online forum for debate about specific issues of community life, culture, politics, and others. Despite this, research on identifying and detecting hate speech has not achieved satisfactory performance, and this is why further research on this issue is constantly called for. This paper provides a systematic review of the literature in this field, with a focus on approaches like word embedding techniques, machine learning, deep learning technologies, and hate speech terminology, along with other state-of-the-art technologies and their challenges. We systematically reviewed the last six years of literature from ResearchGate and Google Scholar. Furthermore, limitations, algorithm selection and usage challenges, data collection and cleaning challenges, and future research directions are discussed in detail.

Keywords: Amharic hate speech, deep learning approach, hate speech detection review, Afaan Oromo hate speech detection

Procedia PDF Downloads 177
959 Study on Errors in Estimating the 3D Gaze Point for Different Pupil Sizes Using Eye Vergences

Authors: M. Pomianek, M. Piszczek, M. Maciejewski

Abstract:

Binocular eye tracking technology is increasingly being used in industry, entertainment, and marketing analysis. In the case of virtual reality, eye tracking systems are already the basis for user interaction with the environment. In such systems, high accuracy in determining the user's eye fixation point is very important due to the specificity of the virtual reality head-mounted display (HMD). Often, however, there are unknown errors in the eye tracking technology used, as well as errors resulting from the positioning of the devices in relation to the user's eyes. Can the virtual environment itself also influence estimation errors? The paper presents mathematical analyses and empirical studies of the determination of the fixation point and of the errors resulting from the change in pupil size in response to the intensity of the displayed scene. The article covers both static laboratory tests and tests on a real user. Based on the research results, optimization solutions were proposed to reduce gaze estimation errors. The studies show that errors in estimating the fixation point can be minimized both by improving the pupil positioning algorithm in the video image and by using more precise methods to calibrate the eye tracking system in three-dimensional space.
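
For reference, the sketch below shows the standard vergence-based triangulation of a 3D fixation point as the midpoint of the shortest segment between the two gaze rays; the eye positions, interocular distance, and target are assumed values, and this is not the paper's specific estimation algorithm.

    import numpy as np

    def fixation_point(p_left, d_left, p_right, d_right):
        """Estimate the 3D fixation point as the midpoint of the shortest segment
        between the two gaze rays p + t*d (a standard triangulation, given here
        only to illustrate the vergence-based estimate discussed in the paper)."""
        d_left = d_left / np.linalg.norm(d_left)
        d_right = d_right / np.linalg.norm(d_right)
        w0 = p_left - p_right
        a, b, c = d_left @ d_left, d_left @ d_right, d_right @ d_right
        d, e = d_left @ w0, d_right @ w0
        denom = a * c - b * b                      # ~0 when the rays are parallel
        t_left = (b * e - c * d) / denom
        t_right = (a * e - b * d) / denom
        return 0.5 * ((p_left + t_left * d_left) + (p_right + t_right * d_right))

    # Eye centres ~6.4 cm apart, both verging on a target 1.5 m straight ahead.
    left_eye, right_eye = np.array([-0.032, 0.0, 0.0]), np.array([0.032, 0.0, 0.0])
    target = np.array([0.0, 0.0, 1.5])
    point = fixation_point(left_eye, target - left_eye, right_eye, target - right_eye)
    print("estimated fixation point (m):", np.round(point, 3))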

Keywords: eye tracking, fixation point, pupil size, virtual reality

Procedia PDF Downloads 132
958 A Real Time Ultra-Wideband Location System for Smart Healthcare

Authors: Mingyang Sun, Guozheng Yan, Dasheng Liu, Lei Yang

Abstract:

Driven by the demand for intelligent monitoring in rehabilitation centers and hospitals, a high-accuracy real-time location system based on UWB (ultra-wideband) technology is proposed. The system measures the precise location of a specific person, traces his movement, and visualizes his trajectory on a screen for doctors or administrators. Therefore, doctors can view the position of the patient at any time and find them immediately and exactly when an emergency happens. In our design process, different algorithms were discussed and their errors were analyzed. In addition, we discuss a simple but effective way of correcting the antenna delay error, which turned out to work well. By choosing the best algorithm and correcting errors with the corresponding methods, the system attained good accuracy. Experiments indicated that the ranging error of the system is lower than 7 cm, the locating error is lower than 20 cm, and the refresh rate exceeds 5 times per second. In future work, by embedding the system in wearable IoT (Internet of Things) devices, it could provide not only physical parameters but also the activity status of the patient, which would help doctors greatly in providing healthcare.
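
As a minimal illustration of the location step, the sketch below estimates a tag position from UWB anchor ranges by nonlinear least squares (SciPy); the anchor layout, the true position, and the 5 cm ranging noise are assumptions, not the deployed system's parameters or its chosen algorithm.

    import numpy as np
    from scipy.optimize import least_squares

    # Known UWB anchor positions in the ward (metres) and the ranges measured to
    # the patient's tag; the anchor layout and ranges here are made-up values.
    anchors = np.array([[0.0, 0.0], [8.0, 0.0], [8.0, 6.0], [0.0, 6.0]])
    true_pos = np.array([3.0, 2.5])
    ranges = np.linalg.norm(anchors - true_pos, axis=1)
    ranges += np.random.default_rng(0).normal(0, 0.05, ranges.size)   # ~5 cm ranging noise

    def residuals(p):
        return np.linalg.norm(anchors - p, axis=1) - ranges

    estimate = least_squares(residuals, x0=np.array([4.0, 3.0])).x
    print("estimated tag position:", np.round(estimate, 2),
          "error: %.2f m" % np.linalg.norm(estimate - true_pos))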

Keywords: intelligent monitoring, ultra-wideband technology, real-time location, IoT devices, smart healthcare

Procedia PDF Downloads 140
957 Neighborhood Graph-Optimized Preserving Discriminant Analysis for Image Feature Extraction

Authors: Xiaoheng Tan, Xianfang Li, Tan Guo, Yuchuan Liu, Zhijun Yang, Hongye Li, Kai Fu, Yufang Wu, Heling Gong

Abstract:

Image data collected in practice often have high dimensionality and contain noise and redundant information. Therefore, it is necessary to extract a compact feature representation of the original perceived image. In this process, effective use of prior knowledge, such as the data structure distribution and sample labels, is the key to enhancing the discrimination and robustness of image features. Based on the above considerations, this paper proposes a locality-preserving discriminant feature learning model based on graph optimization. The model has the following characteristics: (1) The locality-preserving constraint can effectively mine and preserve the local structural relationships in the data. (2) The flexibility of graph learning can be improved by constructing a new local geometric structure graph using label information and a nearest-neighbor threshold. (3) The L₂,₁ norm is used to redefine LDA, a diagonal matrix is introduced as the scale factor of LDA, and the samples are selected accordingly, which improves the robustness of feature learning. The validity and robustness of the proposed algorithm are verified by experiments on two public image datasets.

Keywords: feature extraction, graph optimization local preserving projection, linear discriminant analysis, L₂,₁ norm

Procedia PDF Downloads 149
956 Reliability Analysis of Computer Centre at Yobe State University Using LRU Algorithm

Authors: V. V. Singh, Yusuf Ibrahim Gwanda, Rajesh Prasad

Abstract:

In this paper, we focus on the reliability and performance analysis of the Computer Centre (CC) at Yobe State University, Damaturu, Nigeria. The CC consists of three servers: one database mail server, one redundant server, and one shared with the client computers in the CC (called the local server). Observing the different possibilities for the functioning of the CC, an analysis has been done to evaluate various popular measures of reliability, such as availability, reliability, mean time to failure (MTTF), and profit due to the operation of the system. The system can ultimately fail due to failure of the router, failure of the redundant server before the mail server is repaired, or switch failure. The system can also partially fail when the local server fails. Failed devices are restored according to the Least Recently Used (LRU) technique. The system can also fail entirely due to a cooling failure of the server, an electricity failure, or some natural calamity like an earthquake, fire, tsunami, etc. All the failure rates are assumed to be constant and to follow an exponential time distribution, while the repairs follow two types of distributions: general and Gumbel-Hougaard family copula distributions.

Keywords: reliability, availability, Gumbel-Hougaard family copula, MTTF, internet data centre

Procedia PDF Downloads 530
955 Tape-Shaped Multiscale Fiducial Marker: A Design Prototype for Indoor Localization

Authors: Marcell Serra de Almeida Martins, Benedito de Souza Ribeiro Neto, Gerson Lima Serejo, Carlos Gustavo Resque Dos Santos

Abstract:

Indoor positioning systems use sensors such as Bluetooth, ZigBee, and Wi-Fi, as well as cameras for image capture, which can be fixed or mobile. These computer vision-based positioning approaches are low-cost to implement, especially when a mobile camera is used. The present study aims to create a fiducial marker design for a low-cost indoor localization system. The marker is tape-shaped to allow continuous reading, employing two detection algorithms: one for greater distances and another for smaller distances. Therefore, the location service is always operational, even with variations in capture distance. A minimal localization and reading algorithm was implemented for the proposed marker design in order to validate it. The accuracy tests consider readings with the capture distance varying between 0.5 and 10 meters, comparing the proposed marker with others. The tests showed that the proposed marker has a broader capture range than ArUco and QR Code markers of the same size, thus reducing visual pollution and maximizing tracking, since the environment can be covered entirely.

Keywords: multiscale recognition, indoor localization, tape-shaped marker, fiducial marker

Procedia PDF Downloads 134