Search results for: light weight algorithm
9574 Assessing the Prevalence of Accidental Iatrogenic Paracetamol Overdose in Adult Hospital Patients Weighing <50kg: A Quality Improvement Project
Authors: Elisavet Arsenaki
Abstract:
Paracetamol overdose is associated with significant and possibly permanent consequences including hepatotoxicity, acute and chronic liver failure, and death. This quality improvement project explores the prevalence of accidental iatrogenic paracetamol overdose in hospital patients with a low body weight, defined as <50kg, and assesses the impact of educational posters in trying to reduce it. The study included all adult inpatients on the admissions ward, a short stay ward for patients requiring 12-72 hour treatment, and consisted of three cycles. Each cycle consisted of 3 days of data collection in a given month (data collection for cycle 1 occurred in January 2022, February 2022 for cycle 2 and March 2022 for cycle 3). All patients given paracetamol had their prescribed dose checked against their charted weight to identify the percentage of adult inpatients <50kg who were prescribed 1g of paracetamol instead of 500mg. In the first cycle of the audit, data were collected from 83 patients who were prescribed paracetamol on the admissions ward. Subsequently, four A4 educational posters were displayed across the ward, on two separate occasions and with a one-month interval between each poster display. The aim of this was to remind prescribing doctors of their responsibility to check patient body weight prior to prescribing paracetamol. Data were collected again one week after each round of poster display, from 72 and 70 patients respectively. Over the 3 cycles with a cumulative 225 patients, 15 weighed <50kg (6.67%) and of those, 5 were incorrectly prescribed 1g of paracetamol, yielding a 33.3% prevalence of accidental iatrogenic paracetamol overdose in adult inpatients. In cycle 1 of the project, 3 out of 6 adult patients weighing <50kg were overdosed on paracetamol, meaning that 50% of low weight patients were prescribed the wrong dose of paracetamol for their weight. In the second data collection cycle, 1 out of 5 <50kg patients was overdosed (20%) and in the third cycle, 1 out of 4 (25%). The use of educational posters resulted in a lower prevalence of accidental iatrogenic paracetamol overdose in low body weight adult inpatients. However, the differences observed were statistically insignificant (p values 0.993 and 0.995, respectively). Educational posters did not induce a significant decrease in the prevalence of accidental iatrogenic paracetamol overdose. More robust strategies need to be employed to further decrease paracetamol overdose in patients weighing <50kg.
Keywords: iatrogenic, overdose, paracetamol, patient, safety
9573 Mean Shift-Based Preprocessing Methodology for Improved 3D Buildings Reconstruction
Authors: Nikolaos Vassilas, Theocharis Tsenoglou, Djamchid Ghazanfarpour
Abstract:
In this work, we explore the capability of the mean shift algorithm as a powerful preprocessing tool for improving the quality of spatial data, acquired from airborne scanners, from densely built urban areas. On one hand, high resolution image data corrupted by noise caused by lossy compression techniques are appropriately smoothed while at the same time preserving the optical edges and, on the other, low resolution LiDAR data in the form of a normalized Digital Surface Map (nDSM) are upsampled through the joint mean shift algorithm. Experiments on both the edge-preserving smoothing and upsampling capabilities using synthetic RGB-z data show that the mean shift algorithm is superior to bilateral filtering as well as to other classical smoothing and upsampling algorithms. Application of the proposed methodology for 3D reconstruction of buildings of a pilot region of Athens, Greece results in a significant visual improvement of the 3D building block model.
Keywords: 3D buildings reconstruction, data fusion, data upsampling, mean shift
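As a rough illustration of the edge-preserving smoothing step described above (not the authors' joint mean shift over RGB-z data), the sketch below applies OpenCV's mean shift filtering to an RGB tile; the bandwidths and file name are assumed values.

```python
import cv2

def mean_shift_smooth(rgb_image, spatial_radius=10, color_radius=20):
    # Edge-preserving smoothing via mean shift; the bandwidths are assumed
    # illustrative values, not those used in the paper.
    return cv2.pyrMeanShiftFiltering(rgb_image, spatial_radius, color_radius)

if __name__ == "__main__":
    img = cv2.imread("aerial_tile.png")      # hypothetical aerial RGB tile
    if img is not None:
        smoothed = mean_shift_smooth(img)
        cv2.imwrite("aerial_tile_smoothed.png", smoothed)
```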
9572 Seed Yield and Quality of Late Planted Rabi Wheat Crop as Influenced by Basal and Foliar Application of Urea
Authors: Omvati Verma, Shyamashrre Roy
Abstract:
A field experiment was conducted with three basal nitrogen levels (90, 120 and 150 kg N/ha) and five foliar applications of urea (absolute control, water spray, 3% urea spray at anthesis, 7 and 14 days after anthesis) at G.B. Pant University of Agriculture & Technology, Pantnagar, U.S. Nagar (Uttarakhand) during the rabi season in a factorial randomized block design with three replications. Results revealed that nitrogen application of 150 kg/ha produced the highest seed yield, straw and biological yield; it was significantly superior to 90 kg N/ha and at par with 120 kg N/ha. The number of tillers increased significantly with increase in nitrogen doses up to 150 kg N/ha. Spike length, number of grains per spike, grain weight per spike and thousand seed weight showed significantly higher values with 120 kg N/ha than 90 kg N/ha and were at par with that of 150 kg N/ha. Plant height also showed a similar trend. Leaf area index and chlorophyll content showed significant increases with an increase in nitrogen levels at different stages. In the case of foliar spray treatments, urea spray at anthesis showed the highest value for yield and yield attributes. For spike length and thousand seed weight, it was similar to the urea spray at 7 and 14 days after anthesis, but for the rest of the yield attributes, it was significantly higher than the rest of the treatments. Among seed quality parameters, protein and sedimentation value showed significant increases due to increase in nitrogen rates, whereas starch and hectolitre weight had a decreasing trend. Wet gluten content was not influenced by nitrogen levels. Foliar urea spray at anthesis resulted in the highest value of protein and hectolitre weight, whereas urea spray at 7 days after anthesis showed the highest sedimentation value and wet gluten content.
Keywords: foliar application, nitrogenous fertilizer, seed quality, yield
9571 Impact of Drainage Defect on the Railway Track Surface Deflections; A Numerical Investigation
Authors: Shadi Fathi, Moura Mehravar, Mujib Rahman
Abstract:
The railway transportation network in the UK is over 100 years old and is known as one of the oldest mass transit systems in the world. This aged track network requires frequent closure for maintenance. One of the main reasons for closure is inadequate drainage due to leakage in the buried drainage pipes. The leaking water can cause localised subgrade weakness, which subsequently can lead to major ground/substructure failure. Different condition assessment methods are available to assess the railway substructure. However, the existing condition assessment methods are not able to detect any local ground weakness/damage and provide details of the damage (e.g. size and location). To tackle this issue, a hybrid back-analysis technique based on an artificial neural network (ANN) and genetic algorithm (GA) has been developed to predict the substructure layers' moduli and identify any soil weaknesses. At first, a finite element (FE) model of a railway track section under Falling Weight Deflectometer (FWD) testing was developed and validated against a field trial. Then a drainage pipe and various scenarios of the local defect/soil weakness around the buried pipe with various geometries and physical properties were modelled. The impact of the local soil weakness on the track surface deflection was also studied. The FE simulation results were used to generate a database for ANN training, and then a GA was employed as an optimisation tool to optimise and back-calculate layers' moduli and soil weakness moduli (the ANN's input). The hybrid ANN-GA back-analysis technique is a computationally efficient method with no dependency on seed modulus values. The model can estimate substructure layer moduli and the presence of any localised foundation weakness.
Keywords: finite element (FE) model, drainage defect, falling weight deflectometer (FWD), hybrid ANN-GA
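A minimal sketch of the hybrid ANN-GA back-analysis idea described above, assuming a scikit-learn MLP as the surrogate trained on FE-generated (moduli to deflections) pairs and a simple hand-rolled GA; the layer count, bounds, influence weights and GA settings are illustrative, not the paper's.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for the FE database: 3 layer moduli -> 3 surface deflections.
def fe_surrogate_truth(E):                       # E shape (n, 3), MPa
    w = np.array([[1.0, 0.6, 0.3],               # assumed influence weights
                  [0.5, 0.8, 0.4],
                  [0.2, 0.5, 0.9]])
    return (1.0 / E) @ w.T                       # deflections grow as moduli drop

E_train = rng.uniform(50, 500, size=(2000, 3))
d_train = fe_surrogate_truth(E_train)

# ANN trained on the FE-generated database (moduli -> deflections).
ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
ann.fit(E_train, d_train)

# GA back-calculates moduli that reproduce a "measured" FWD deflection bowl.
d_measured = fe_surrogate_truth(np.array([[300.0, 120.0, 80.0]]))  # hidden truth

def fitness(pop):                                # lower = better match
    return np.linalg.norm(ann.predict(pop) - d_measured, axis=1)

pop = rng.uniform(50, 500, size=(60, 3))
for _ in range(100):
    f = fitness(pop)
    parents = pop[np.argsort(f)[:20]]                                     # selection
    kids = parents[rng.integers(0, 20, 60)]
    alpha = rng.random((60, 3))
    kids = alpha * kids + (1 - alpha) * parents[rng.integers(0, 20, 60)]  # crossover
    kids += rng.normal(0, 10, kids.shape) * (rng.random((60, 3)) < 0.2)   # mutation
    pop = np.clip(kids, 50, 500)

best = pop[np.argmin(fitness(pop))]
print("back-calculated moduli (MPa):", best.round(1))
```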
9570 Using Machine Learning to Classify Different Body Parts and Determine Healthiness
Authors: Zachary Pan
Abstract:
Our general mission is to solve the problem of classifying images into different body part types and deciding if each of them is healthy or not. However, for now, we will determine healthiness for only one-sixth of the body parts, specifically the chest. We will detect pneumonia in X-ray scans of those chest images. With this type of AI, doctors can use it as a second opinion when they are taking CT or X-ray scans of their patients. Another advantage of using this machine learning classifier is that it has no human weaknesses like fatigue. The overall approach to this problem is to split the problem into two parts: first, classify the image, then determine if it is healthy. In order to classify the image into a specific body part class, the body parts dataset must be split into test and training sets. We can then use many models, like neural networks or logistic regression models, and fit them using the training set. Now, using the test set, we can obtain a realistic estimate of the accuracy the models will have on images in the real world, since these testing images have never been seen by the models before. In order to increase this testing accuracy, we can also apply many complex algorithms to the models, like multiplicative weight update. For the second part of the problem, to determine if the body part is healthy, we can have another dataset consisting of healthy and non-healthy images of the specific body part and once again split that into test and training sets. We then use another neural network to train on those training set images and use the testing set to figure out its accuracy. We will do this process only for the chest images. A major conclusion reached is that convolutional neural networks are the most reliable and accurate at image classification. In classifying the images, the logistic regression model, the neural network, neural networks with multiplicative weight update, neural networks with the black box algorithm, and the convolutional neural network achieved 96.83 percent, 97.33 percent, 97.83 percent, 96.67 percent, and 98.83 percent accuracy, respectively. On the other hand, the overall accuracy of the model that determines if the images are healthy or not is around 78.37 percent.
Keywords: body part, healthcare, machine learning, neural networks
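A minimal Keras sketch of the kind of convolutional classifier the abstract reports as most accurate; the input size, layer sizes and class count are assumed for illustration and are not taken from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_BODY_PARTS = 6          # assumed number of body part classes
IMG_SHAPE = (128, 128, 1)   # assumed grayscale X-ray/CT input size

def build_body_part_cnn():
    # Small convolutional classifier: conv/pool blocks followed by a dense head.
    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=IMG_SHAPE),
        layers.MaxPooling2D(),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(NUM_BODY_PARTS, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage with real arrays: model.fit(x_train, y_train, validation_data=(x_test, y_test))
model = build_body_part_cnn()
model.summary()
```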
9569 Comprehensive Evaluation of Thermal Environment and Its Countermeasures: A Case Study of Beijing
Authors: Yike Lamu, Jieyu Tang, Jialin Wu, Jianyun Huang
Abstract:
With the development of the economy and of science and technology, the urban heat island effect becomes more and more serious. Taking Beijing as an example, this paper divides the value of each influence index of heat island intensity and establishes a mathematical model, a neural network system based on the fuzzy comprehensive evaluation index of the heat island effect. After data preprocessing, the algorithm for the weight of each factor affecting the heat island effect is generated, the data of the six indexes affecting heat island intensity of Shenyang, Shanghai, Beijing, and Hangzhou are input, and the result is automatically output by the neural network system. It is of practical significance to show the intensity of the heat island effect by a visual method, which is simple, intuitive and can be dynamically monitored.
Keywords: heat island effect, neural network, comprehensive evaluation, visualization
9568 Normal Weight Obesity among Female Students: BMI as a Non-Sufficient Tool for Obesity Assessment
Authors: Krzysztof Plesiewicz, Izabela Plesiewicz, Krzysztof Chiżyński, Marzenna Zielińska
Abstract:
Background: Obesity is an independent risk factor for cardiovascular diseases. There are several anthropometric parameters proposed to estimate the level of obesity, but until now there is no agreement on which one is the best predictor of cardiometabolic risk. Scientists have defined metabolically obese normal weight individuals, who suffer from metabolic abnormalities the same as obese individuals, and defined this syndrome as normal weight obesity (NWO). Aim of the study: The aim of our study was to determine the occurrence of overweight and obesity in a cohort of young, adult women, using standard and complementary methods of obesity assessment, and to indicate those who are at risk of obesity. The second aim of our study was to test additional methods of obesity assessment and prove that body mass index used alone is not a sufficient parameter of obesity assessment. Materials and methods: 384 young women, aged 18-32, were enrolled into the study. Standard anthropometric parameters (waist to hips ratio (WTH), waist to height ratio (WTHR)) and two other methods of body fat percentage measurement (BFPM) were used in the study: electrical bioimpedance analysis (BIA) and the skinfold measurement test by digital body fat caliper (SFM). Results: In the study group, 5% and 7% of participants had waist to hips ratio and, accordingly, waist to height ratio values connected with visceral obesity. According to BMI, 14% of participants were overweight and obese. Using additional methods of body fat assessment, there were 54% and 43% obese for the BIA and SFM methods, respectively. In the group of participants with normal BMI and underweight (not overweight, n=340) there were individuals with a level of BFPM above the upper limit: for the BIA 49% (n=164) and for the SFM 36% (n=125). Statistical analysis revealed a strong correlation between the BIA and SFM methods. Conclusion: BMI used alone is not a sufficient parameter of obesity assessment. A high percentage of young women with normal BMI values seem to be normal weight obese.
Keywords: electrical bioimpedance, normal weight obesity, skin-fold measurement test, women
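A small sketch of the point the abstract makes: compute BMI from height and weight, and flag normal weight obesity when BMI is in the normal range but measured body fat percentage is high. The 30% body fat cut-off for women is an assumed illustrative threshold, not one taken from the study.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    # Body Mass Index: weight divided by height squared.
    return weight_kg / (height_m ** 2)

def classify(weight_kg: float, height_m: float, body_fat_pct: float,
             fat_cutoff_pct: float = 30.0) -> str:
    # fat_cutoff_pct is an assumed threshold for excess body fat in women.
    b = bmi(weight_kg, height_m)
    if b >= 25.0:
        return "overweight/obese by BMI"
    if body_fat_pct > fat_cutoff_pct:
        return "normal weight obesity (normal BMI, high body fat)"
    return "normal"

print(classify(55.0, 1.68, 34.0))   # BMI ~19.5 but high body fat -> NWO
```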
9567 Uses and Manufacturing of Beech Corrugated Plywood
Authors: Prochazka Jiri, Beranek Tomas, Podlena Milan, Zeidler Ales
Abstract:
The poster deals with the issue of ISO shipping containers' sheathing made of corrugated plywood instead of traditional corrugated metal sheets. It was found that corrugated plywood is a suitable material for the sheathing due to its great flexural strength perpendicular to the course of the wave, sufficient impact resistance, surface compressive strength and low weight. Three sample sets of different thicknesses (5, 8 and 10 mm) were tested in the experiments. The tests have shown that 5 mm corrugated plywood is the most suitable thickness for sheathing. Experiments showed that to increase the bending strength to the needed value, it was necessary to increase the weight of the timber by only 1.6%. The flat crush test showed that 5 mm corrugated plywood is a sufficient material for sheathing from a mechanical point of view. The angle of corrugation was found to be a very important factor which massively affects the mechanical properties. The impact strength test has shown that the plywood is a relatively tough material in the direction of corrugation. It was calculated that the use of corrugated plywood sheathing for the containers can reduce the weight of the walls by up to 75%. Corrugated plywood is also a suitable material for the webs of I-joists and wide interior design applications.
Keywords: corrugated plywood, veneer, beech plywood, ISO shipping container, I-joist
9566 Growth Pattern, Condition Factor and Relative Condition Factor of Twenty Important Demersal Marine Fish Species in Nigerian Coastal Water
Authors: Omogoriola Hannah Omoloye
Abstract:
Fish is a key ingredient on the global menu, a vital factor in the global environment and an important basis for livelihood worldwide [1]. The length-weight relationships (LWRs) are of great importance in fishery assessment [2,3]. Their importance is pronounced in estimating the average weight at a given length group [4] and in assessing the relative well-being of a fish population [5]. Length and weight measurements in conjunction with age data can give information on the stock composition, age at maturity, life span, mortality, growth and production [4-7]. In addition, data on length and weight can also provide important clues to climatic and environmental changes and changes in human consumption practices [8,9]. However, the size attained by an individual fish may also vary because of variation in food supply, and this in turn may reflect variation in climatic parameters, in the supply of nutrients or in the degree of competition for food. Environmental deterioration, for example, may reduce growth rates and will cause a decrease in the average age of the fish. The condition factor and the relative condition factor [10] are quantitative parameters of the state of well-being of the fish and reflect its recent feeding condition. They are based on the hypothesis that heavier fish of a given length are in better condition [11]. The condition factor varies according to the influence of physiological factors, fluctuating according to the different stages of development. It has been used as an index of growth and feeding intensity [12]. The condition factor decreases with increase in length [12,13] and also influences the reproductive cycle in fish [14]. The objective here is to determine the length-weight relationships and condition factors for direct use in fishery assessment and for future comparisons between populations of the same species at different locations, and to provide quantitative information on the biology of marine fish species trawled from Nigerian coastal waters.
Keywords: condition factor, growth pattern, marine fish species, Nigerian Coastal water
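The quantities named above follow standard formulas: the length-weight relationship W = aL^b (fitted on log-transformed data), Fulton's condition factor K = 100W/L^3, and the relative condition factor Kn = W/(aL^b). The sketch below fits and computes them with NumPy on made-up example data, not measurements from the study.

```python
import numpy as np

# Hypothetical length (cm) and weight (g) measurements for one species.
L = np.array([12.0, 15.5, 18.2, 21.0, 24.3, 27.8])
W = np.array([20.1, 44.0, 71.5, 110.0, 172.0, 255.0])

# Length-weight relationship W = a * L**b, fitted as log W = log a + b log L.
b, log_a = np.polyfit(np.log(L), np.log(W), 1)
a = np.exp(log_a)

K = 100.0 * W / L**3          # Fulton's condition factor
Kn = W / (a * L**b)           # relative condition factor (Le Cren)

print(f"a = {a:.4f}, b = {b:.3f} (b close to 3 indicates isometric growth)")
print("K :", K.round(3))
print("Kn:", Kn.round(3))
```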
9565 Sorting Fish by Hu Moments
Authors: J. M. Hernández-Ontiveros, E. E. García-Guerrero, E. Inzunza-González, O. R. López-Bonilla
Abstract:
This paper presents the implementation of an algorithm that identifies and counts different fish species: Catfish, Sea bream, Sawfish, Tilapia, and Totoaba. The main contribution of the method is the fusion of the invariance to position, rotation and scale of the Hu moments with the proper counting of fish. The identification and counting are performed from an image under different noise conditions. From the experimental results obtained, the potential of the proposed algorithm to be applied in different scenarios of aquaculture production is inferred.
Keywords: counting fish, digital image processing, invariant moments, pattern recognition
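A rough OpenCV sketch of the invariant-feature step described above: compute the seven Hu moments of a segmented fish silhouette and compare them to stored per-species templates with a nearest-neighbour rule. The log scaling and the template names are illustrative assumptions, not the paper's exact classifier.

```python
import cv2
import numpy as np

def hu_features(binary_mask: np.ndarray) -> np.ndarray:
    # Seven Hu moments, log-scaled so their very different magnitudes are comparable.
    hu = cv2.HuMoments(cv2.moments(binary_mask)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def classify(mask: np.ndarray, templates: dict) -> str:
    # Nearest template (per-species mean Hu vector) wins.
    f = hu_features(mask)
    return min(templates, key=lambda name: np.linalg.norm(f - templates[name]))

# Hypothetical usage: templates would be built from labelled training silhouettes.
# templates = {"tilapia": mean_hu_tilapia, "catfish": mean_hu_catfish, ...}
# img = cv2.imread("tank_frame.png", cv2.IMREAD_GRAYSCALE)
# _, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# print(classify(mask, templates))
```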
9564 Spatio-Temporal Data Mining with Association Rules for Lake Van
Authors: Tolga Aydin, M. Fatih Alaeddinoğlu
Abstract:
People, throughout history, have made estimates and inferences about the future by using their past experiences. Developing information technologies and the improvements in database management systems make it possible to extract useful information from the knowledge in hand for strategic decisions. Therefore, different methods have been developed. Data mining by association rules learning is one such method. The Apriori algorithm, one of the well-known association rules learning algorithms, is not commonly used on spatio-temporal data sets. However, it is possible to embed time and space features into the data sets and make the Apriori algorithm a suitable data mining technique for learning spatio-temporal association rules. Lake Van, the largest lake of Turkey, is a closed basin. This feature causes the volume of the lake to increase or decrease as a result of changes in the amount of water it holds. In this study, evaporation, humidity, lake altitude, amount of rainfall and temperature parameters recorded in the Lake Van region throughout the years are used by the Apriori algorithm, and a spatio-temporal data mining application is developed to identify overflows and newly-formed soil regions (underflows) occurring in the coastal parts of Lake Van. Identifying possible reasons for overflows and underflows may be used to alert experts to take precautions and make the necessary investments.
Keywords: apriori algorithm, association rules, data mining, spatio-temporal data
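As an illustration of mining association rules from discretised spatio-temporal observations (not the authors' exact encoding), the sketch below runs Apriori on one-hot records, assuming the mlxtend library; each row stands for a month/shoreline-station record and the items are binned weather and lake-level conditions, with made-up data and thresholds.

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Each row: one (month, shoreline station) record; items are discretised conditions.
records = [
    {"rain_high": 1, "evap_low": 1, "temp_low": 1, "overflow": 1},
    {"rain_high": 1, "evap_low": 1, "temp_low": 0, "overflow": 1},
    {"rain_high": 0, "evap_low": 0, "temp_low": 0, "overflow": 0},
    {"rain_high": 0, "evap_low": 1, "temp_low": 1, "overflow": 0},
    {"rain_high": 1, "evap_low": 0, "temp_low": 1, "overflow": 1},
]
df = pd.DataFrame(records).astype(bool)

frequent = apriori(df, min_support=0.4, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.8)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```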
9563 Regularizing Software for Aerosol Particles
Authors: Christine Böckmann, Julia Rosemann
Abstract:
We present an inversion algorithm that is used in the European Aerosol Lidar Network for the inversion of data collected with multi-wavelength Raman lidar. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. The algorithm is based on manually controlled inversion of optical data, which allows for detailed sensitivity studies and thus provides us with comparably high quality of the derived data products. The algorithm allows us to derive particle effective radius and volume and surface-area concentrations with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index is still a challenge in view of the accuracy required for these parameters in climate change studies, in which light absorption needs to be known with high accuracy. The single-scattering albedo (SSA) can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into high and low absorbing aerosols. From a mathematical point of view, the algorithm is based on the concept of using truncated singular value decomposition as the regularization method. This method was adapted to work for the retrieval of the particle size distribution function (PSD) and is called a hybrid regularization technique since it uses a triple of regularization parameters. The inversion of an ill-posed problem, such as the retrieval of the PSD, is always a challenging task because very small measurement errors will most often be amplified hugely during the solution process unless an appropriate regularization method is used. Even using a regularization method is difficult, since appropriate regularization parameters have to be determined. Therefore, in the next stage of our work we decided to use two regularization techniques in parallel for comparison purposes. The second method is an iterative regularization method based on Pade iteration. Here, the number of iteration steps serves as the regularization parameter. We successfully developed semi-automated software for spherical particles which is able to run even on a parallel processor machine. From a mathematical point of view, it is also very important (as a selection criterion for an appropriate regularization method) to investigate the degree of ill-posedness of the problem, which we found to be moderate. We computed the optical data from mono-modal logarithmic PSDs and investigated particles of spherical shape in our simulations. We considered particle radii as large as 6 nm, which does not only cover the size range of particles in the fine-mode fraction of naturally occurring PSDs but also covers a part of the coarse-mode fraction. We considered errors of 15% in the simulation studies. For the SSA, 100% of all cases achieve relative errors below 12%. In more detail, 87% of all cases for 355 nm and 88% of all cases for 532 nm are well below 6%. With respect to the absolute error, for non- and weak-absorbing particles with real parts 1.5 and 1.6, in all modes the accuracy limit +/- 0.03 is achieved. In sum, 70% of all cases stay below +/- 0.03, which is sufficient for climate change studies.
Keywords: aerosol particles, inverse problem, microphysical particle properties, regularization
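A minimal NumPy sketch of the regularization idea named above: solve a discretised ill-posed system Ax = b by truncated singular value decomposition, keeping only the k largest singular values. The kernel here is a generic smoothing matrix standing in for the lidar kernel functions, and the choices of k are assumed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Generic ill-posed forward model: a smoothing kernel maps the size distribution
# x onto measured optical data b (a stand-in for the lidar kernel functions).
n = 60
t = np.linspace(0, 1, n)
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / 0.005)
x_true = np.exp(-((t - 0.4) ** 2) / 0.01)         # mono-modal "PSD"
b = A @ x_true + rng.normal(0, 1e-3, n)           # noisy measurements

def tsvd_solve(A, b, k):
    # Keep the k largest singular values; discarding the rest suppresses
    # the noise amplification typical of ill-posed problems.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

for k in (5, 10, 20):
    err = np.linalg.norm(tsvd_solve(A, b, k) - x_true) / np.linalg.norm(x_true)
    print(f"k = {k:2d}  relative error = {err:.3f}")
```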
9562 Robust Quantum Image Encryption Algorithm Leveraging 3D-BNM Chaotic Maps and Controlled Qubit-Level Operations
Authors: Vivek Verma, Sanjeev Kumar
Abstract:
This study presents a novel quantum image encryption algorithm, using a 3D chaotic map and controlled qubit-level scrambling operations. The newly proposed 3D-BNM chaotic map effectively reduces the degradation of chaotic dynamics resulting from the finite word length effect. It facilitates the generation of highly unpredictable random sequences and enhances chaotic performance. The system’s efficacy is additionally enhanced by the inclusion of a SHA-256 hash function. Initially, classical plain images are converted into their quantum equivalents using the Novel Enhanced Quantum Representation (NEQR) model. The Generalized Quantum Arnold Transformation (GQAT) is then applied to disrupt the coordinate information of the quantum image. Subsequently, to diffuse the pixel values of the scrambled image, XOR operations are performed using pseudorandom sequences generated by the 3D-BNM chaotic map. Furthermore, to enhance the randomness and reduce the correlation among the pixels in the resulting cipher image, a controlled qubit-level scrambling operation is employed. The encryption process utilizes fundamental quantum gates such as C-NOT and CCNOT. Both theoretical and numerical simulations validate the effectiveness of the proposed algorithm against various statistical and differential attacks. Moreover, the proposed encryption algorithm operates with low computational complexity.
Keywords: 3D chaotic map, SHA-256, quantum image encryption, qubit level scrambling, NEQR
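A classical toy sketch of the diffusion step described above: pixel values are XOR-ed with a pseudorandom byte stream generated by a chaotic map. A 1D logistic map stands in for the proposed 3D-BNM map, whose definition is specific to the paper, and the initial condition and parameters are illustrative.

```python
import numpy as np

def logistic_keystream(length: int, x0: float = 0.7131, r: float = 3.9999) -> np.ndarray:
    # Stand-in chaotic generator (logistic map), NOT the paper's 3D-BNM map.
    x, out = x0, np.empty(length, dtype=np.uint8)
    for i in range(length):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256
    return out

def xor_diffuse(image: np.ndarray, x0: float = 0.7131) -> np.ndarray:
    # XOR every pixel with the keystream; applying the same operation again
    # restores the original image.
    ks = logistic_keystream(image.size, x0)
    return (image.flatten() ^ ks).reshape(image.shape)

img = np.random.randint(0, 256, (8, 8), dtype=np.uint8)   # toy "plain image"
cipher = xor_diffuse(img)
assert np.array_equal(xor_diffuse(cipher), img)            # decryption recovers it
```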
9561 Nitrogen/Platinum Co-Doped TiO₂ for Enhanced Visible Light Photocatalytic Degradation of Brilliant Black
Authors: Sarre Nzaba, Bulelwa Ntsendwana, Bekkie Mamba, Alex Kuvarega
Abstract:
Elimination of toxic organic compounds from wastewater is currently one of the most important subjects in water pollution control. The discharge of azo dyes such as Brilliant Black (BB) into water bodies has carcinogenic and mutagenic effects on humankind and the ecosystem. Conventional water treatment techniques fail to degrade these dyes completely, thereby posing more problems. Advanced oxidation processes (AOPs) are promising technologies for solving the problem. Anatase-type nitrogen-platinum (N,Pt) co-doped TiO₂ photocatalysts were prepared by a modified sol-gel method using amine-terminated polyamidoamine generation 1 (PG1) as a template and source of nitrogen. SEM/EDX, TEM, XRD, XPS, TGA, FTIR, RS, PL and UV-Vis were used to characterize the prepared nanomaterials. The synthesized photocatalysts exhibited lower band gap energies compared to commercial TiO₂, revealing a shift of the band gap towards the visible light absorption region. The photocatalytic activity of N,Pt co-doped TiO₂ was measured by the reaction of photocatalytic degradation of the BB dye. Enhanced photodegradation efficiency of BB was achieved after 180 min reaction time with an initial concentration of 50 ppm BB solution. This was attributed to the rod-like shape of the materials, the larger surface area, and the enhanced absorption of visible light induced by N,Pt co-doping. The N,Pt co-doped TiO₂ also exhibited pseudo-first order kinetic behaviour, with a half-life and rate constant of 0.37 min and 0.1984 min⁻¹, respectively. N doped TiO₂ and N,Pt co-doped TiO₂ exhibited enhanced photocatalytic performance for the removal of BB from water.
Keywords: N,Pt co-doped TiO₂, dendrimer, photodegradation, visible-light
9560 Machine Learning-Enabled Classification of Climbing Using Small Data
Authors: Nicholas Milburn, Yu Liang, Dalei Wu
Abstract:
Athlete performance scoring within the climbing domain presents interesting challenges, as the sport does not have an objective way to assign skill. Assessing skill levels within any sport is valuable, as it can be used to mark progress while training, and it can help an athlete choose appropriate climbs to attempt. Machine learning-based methods are popular for complex problems like this. The dataset available was composed of dynamic force data recorded during climbing; however, this dataset came with challenges such as data scarcity and imbalance, and it was temporally heterogeneous. Investigated solutions to these challenges include data augmentation, temporal normalization, conversion of time series to the spectral domain, and cross-validation strategies. The investigated solutions to the classification problem included lightweight machine learning classifiers, KNN and SVM, as well as deep learning with a CNN. The best performing model had 80% accuracy. In conclusion, there seems to be enough information within climbing force data to accurately categorize climbers by skill.
Keywords: classification, climbing, data imbalance, data scarcity, machine learning, time sequence
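A small sketch of one pipeline the abstract mentions: convert force time series to the spectral domain and classify with a lightweight KNN under cross-validation. The feature choice (leading FFT magnitudes) and the synthetic traces are assumptions for illustration, not the study's data.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

def spectral_features(force_trace: np.ndarray, n_bins: int = 16) -> np.ndarray:
    # Magnitude of the first n_bins rFFT coefficients as a fixed-length feature.
    spec = np.abs(np.fft.rfft(force_trace))
    return spec[:n_bins] / (np.linalg.norm(spec[:n_bins]) + 1e-12)

# Synthetic stand-in for climbing force traces: two "skill" classes with
# different dominant frequencies.
def make_trace(skilled: bool, n: int = 256) -> np.ndarray:
    t = np.arange(n)
    f = 0.02 if skilled else 0.06
    return np.sin(2 * np.pi * f * t) + 0.3 * rng.normal(size=n)

X = np.array([spectral_features(make_trace(k)) for k in [True, False] * 40])
y = np.array([1, 0] * 40)

clf = KNeighborsClassifier(n_neighbors=5)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))
```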
9559 Effect of Variable Fluxes on Optimal Flux Distribution in a Metabolic Network
Authors: Ehsan Motamedian
Abstract:
Finding all optimal flux distributions of a metabolic model is an important challenge in systems biology. In this paper, a new algorithm is introduced to identify all alternate optimal solutions of a large-scale metabolic network. The algorithm reduces the model to decrease the computations needed for finding optimal solutions. The algorithm was implemented on the Escherichia coli metabolic model to find all optimal solutions for lactate and acetate production. There were more optimal flux distributions when acetate production was optimized. The model was reduced from 1076 to 80 variable fluxes for lactate, while it was reduced to 91 variable fluxes for acetate. These 11 additional variable fluxes resulted in about three times more optimal flux distributions. Variable fluxes came from 12 different metabolic pathways, and most of them belonged to nucleotide salvage and extracellular transport pathways.
Keywords: flux variability, metabolic network, mixed-integer linear programming, multiple optimal solutions
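A toy sketch of why alternate optima arise in flux balance models (not the authors' reduction or MILP enumeration algorithm): in the four-reaction network below, two parallel reactions can split the flux arbitrarily while the objective stays optimal, which a simple flux variability check with scipy's linear programming solver exposes. The network and bounds are invented.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: R1 takes up A, R2 and R3 both convert A -> B, R4 secretes B.
S = np.array([[ 1, -1, -1,  0],    # metabolite A balance
              [ 0,  1,  1, -1]])   # metabolite B balance
bounds = [(0, 10)] * 4

# Step 1: maximise the "product" flux v4 (linprog minimises, hence the sign).
res = linprog(c=[0, 0, 0, -1], A_eq=S, b_eq=[0, 0], bounds=bounds, method="highs")
v4_opt = -res.fun

# Step 2: with v4 fixed at its optimum, v2 can still range freely -> alternate optima.
S_fix = np.vstack([S, [0, 0, 0, 1]])
b_fix = [0, 0, v4_opt]
lo = linprog(c=[0,  1, 0, 0], A_eq=S_fix, b_eq=b_fix, bounds=bounds, method="highs")
hi = linprog(c=[0, -1, 0, 0], A_eq=S_fix, b_eq=b_fix, bounds=bounds, method="highs")
print(f"optimal product flux = {v4_opt:.1f}, v2 range = [{lo.fun:.1f}, {-hi.fun:.1f}]")
```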
9558 A Quinary Coding and Matrix Structure Based Channel Hopping Algorithm for Blind Rendezvous in Cognitive Radio Networks
Authors: Qinglin Liu, Zhiyong Lin, Zongheng Wei, Jianfeng Wen, Congming Yi, Hai Liu
Abstract:
The multi-channel blind rendezvous problem in distributed cognitive radio networks (DCRNs) refers to how users in the network can hop to the same channel at the same time slot without any prior knowledge (i.e., each user is unaware of other users' information). The channel hopping (CH) technique is a typical solution to this blind rendezvous problem. In this paper, we propose a quinary coding and matrix structure-based CH algorithm called QCMS-CH. The QCMS-CH algorithm can guarantee the rendezvous of users using only one cognitive radio in the scenario of asynchronous clocks (i.e., arbitrary time drift between the users), heterogeneous channels (i.e., the available channel sets of users are distinct), and symmetric roles (i.e., all users play the same role). The QCMS-CH algorithm first represents a randomly selected channel (denoted by R) as a fixed-length quaternary number. Then it encodes the quaternary number into a quinary bootstrapping sequence according to a carefully designed quaternary-quinary coding table with the prefix "R00". Finally, it builds a CH matrix column by column according to the bootstrapping sequence and six different types of elaborately generated subsequences. The user can access the CH matrix row by row and accordingly perform its channel hopping to attempt rendezvous with other users. We prove the correctness of QCMS-CH and derive an upper bound on its Maximum Time-to-Rendezvous (MTTR). Simulation results show that the QCMS-CH algorithm outperforms the state-of-the-art in terms of the MTTR and the Expected Time-to-Rendezvous (ETTR).
Keywords: channel hopping, blind rendezvous, cognitive radio networks, quaternary-quinary coding
9557 iCCS: Development of a Mobile Web-Based Student Integrated Information System using Hill Climbing Algorithm
Authors: Maria Cecilia G. Cantos, Lorena W. Rabago, Bartolome T. Tanguilig III
Abstract:
This paper describes a conducive and structured information exchange environment for the students of the College of Computer Studies in Manuel S. Enverga University Foundation. The system was developed to help the students check their academic results, manage their profiles, make self-enlistment and manage their academic status, which can also be viewed on mobile phones. Developing class schedules in a traditional way is a long process that involves making a large number of choices. With the Hill Climbing Algorithm, however, the process of class scheduling, particularly with regard to courses to be taken by the student aligned with the curriculum, can be performed so as to end up with an optimum solution. The proponent used Rapid Application Development (RAD) as the system development method. The proponent also used PHP as the programming language and MySQL as the database.
Keywords: hill climbing algorithm, integrated system, mobile web-based, student information system
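A generic hill climbing sketch for a course-selection problem of the kind described, written in Python rather than the paper's PHP: start from a random selection and keep accepting neighbouring selections that do not worsen a penalty score. The penalty terms (prerequisites, overload, repeats) are illustrative assumptions, not the system's actual rules.

```python
import random

COURSES = ["CS101", "CS102", "MATH1", "ENG1", "PE1", "CS201"]
PREREQS = {"CS201": {"CS102"}}          # assumed curriculum constraint
MAX_COURSES = 4                          # assumed load limit
COMPLETED = {"CS101"}                    # courses the student already passed

def penalty(selection):
    # Lower is better: penalise overload, repeats and missing prerequisites,
    # and reward progress through the curriculum.
    p = max(0, len(selection) - MAX_COURSES) * 10
    p += sum(3 for c in selection if c in COMPLETED)
    p += sum(5 for c in selection if not PREREQS.get(c, set()) <= COMPLETED)
    p -= len([c for c in selection if c not in COMPLETED])
    return p

def neighbour(selection):
    # Toggle one random course in or out of the selection.
    s = set(selection)
    s.symmetric_difference_update({random.choice(COURSES)})
    return frozenset(s)

def hill_climb(steps=500):
    current = frozenset(random.sample(COURSES, 3))
    for _ in range(steps):
        cand = neighbour(current)
        if penalty(cand) <= penalty(current):
            current = cand
    return current

print(sorted(hill_climb()))
```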
9556 Comparative Study on Different Type of Shear Connectors in Composite Slabs
Authors: S. Subrmanian, A. Siva, R. Raghul
Abstract:
In the modern construction industry, the cold-formed composite slab is widely used due to its light weight, high structural properties and economy. To enhance the structural integrity, mechanical or frictional interlocking was introduced. The role of mechanical or frictional interlocking is to increase the longitudinal shear between the profiled sheet and the concrete. This paper deals with the experimental evaluation of three types of mechanical interlocking devices, namely the normal stud shear connector, the J-type shear connector and the U-type shear connector. An attempt was made to evolve the shear connector most suitable for the composite slab as an interlocking device. In total, six composite slabs were tested with the three types of shear connectors and a comparison study was made. The outcome was compared with a numerical model created in ABAQUS software and analysed for comparative purposes. The results showed that the U-type shear connector provided better performance and resistance.
Keywords: composite slabs, shear connector, end slip, longitudinal shear
9555 Application of Artificial Neural Network and Background Subtraction for Determining Body Mass Index (BMI) in Android Devices Using Bluetooth
Authors: Neil Erick Q. Madariaga, Noel B. Linsangan
Abstract:
Body Mass Index (BMI) is one of the different ways to monitor the health of a person. It is based on the height and weight of the person. This study aims to compute the BMI using an Android tablet by obtaining the height of the person using a camera and measuring the weight of the person using a weighing scale or load cell. The height of the person was estimated by applying background subtraction to the captured image and applying different processes, such as finding the vanishing point and applying an Artificial Neural Network. The weight was measured using a Wheatstone bridge load cell configuration, and the value was sent to the computer using a Gizduino microcontroller and Bluetooth technology after amplification with an AD620 instrumentation amplifier. The application processes the images, reads the measured values and shows the BMI of the person. The study met all the objectives, and further studies will be needed to improve the design project.
Keywords: body mass index, artificial neural network, vanishing point, bluetooth, wheatstone bridge load cell
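A rough OpenCV sketch of the background-subtraction step only: subtract a reference frame of the empty scene from the frame with the subject, threshold the difference, and take the tallest contour's pixel height. Converting pixels to metres here uses an assumed calibration factor rather than the paper's vanishing-point and ANN approach.

```python
import cv2
import numpy as np

PIXELS_PER_METRE = 410.0        # assumed calibration factor for illustration

def estimate_height_m(background_gray: np.ndarray, frame_gray: np.ndarray) -> float:
    # Foreground = large absolute difference from the empty background frame.
    diff = cv2.absdiff(frame_gray, background_gray)
    _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0.0
    largest = max(contours, key=cv2.contourArea)
    _, _, _, h_px = cv2.boundingRect(largest)   # bounding-box height in pixels
    return h_px / PIXELS_PER_METRE

# Usage: height_m = estimate_height_m(bg, frame); bmi = weight_kg / height_m ** 2
```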
9554 Thermal Processing of Zn-Bi Layered Double Hydroxide ZnO Doped Bismuth for a Photo-Catalytic Efficiency under Visible Light
Authors: Benyamina Imane, Benalioua Bahia, Mansour Meriem, Bentouami Abdelhadi
Abstract:
The objective of this study is to use a synthetic route based on the layered double hydroxide as a method of preparing zinc oxide doped with a transition metal. The material is heat-treated at different temperatures and then tested on the photo-fading of an acid dye, indigo carmine, under visible radiation in comparison with ZnO. The photocatalytic efficiency of Bi-ZnO under a 500 W visible light was tested on the photo-bleaching of an indigoid dye in comparison with commercial ZnO. Indeed, complete discoloration of a 16 mg/L indigo carmine solution was obtained after 40 and 120 minutes of irradiation in the presence of ZnO and Bi-ZnO, respectively.
Keywords: LDH, POA, photo-catalysis, Bi-ZnO doping
9553 Computer Simulations of Stress Corrosion Studies of Quartz Particulate Reinforced ZA-27 Metal Matrix Composites
Authors: K. Vinutha
Abstract:
The stress corrosion resistance of ZA-27/TiO2 metal matrix composites (MMCs) in high temperature acidic media has been evaluated using an autoclave. The liquid melt metallurgy technique using the vortex method was used to fabricate the MMCs. TiO2 particulates of 50-80 µm in size were added to the matrix. ZA-27 containing 2, 4 and 6 weight percent of TiO2 was prepared. Stress corrosion tests were conducted by the weight loss method for different exposure times, normalities and temperatures of the acidic medium. The corrosion rates of the composites were lower than that of the matrix ZA-27 alloy under all conditions.
Keywords: autoclave, MMCs, stress corrosion, vortex method
9552 Optimizing Boiler Combustion System in a Petrochemical Plant Using Neuro-Fuzzy Inference System and Genetic Algorithm
Authors: Yul Y. Nazaruddin, Anas Y. Widiaribowo, Satriyo Nugroho
Abstract:
The boiler is one of the critical units in a petrochemical plant. Steam produced by the boiler is used for various processes in the plant, such as the urea and ammonia plants. An alternative method to optimize the boiler combustion system is presented in this paper. An Adaptive Neuro-Fuzzy Inference System (ANFIS) approach is applied to model the boiler using real-time operational data collected from a boiler unit of the petrochemical plant. The nonlinear equation obtained is then used to optimize the air-to-fuel ratio using a Genetic Algorithm, resulting in an optimal ratio of 15.85. This optimal ratio is then kept constant by a ratio controller designed using inverse dynamics based on ANFIS. As a result, a constant value of oxygen content in the flue gas is obtained, which indicates a more efficient combustion process.
Keywords: ANFIS, boiler, combustion process, genetic algorithm, optimization
9551 A Parallel Algorithm for Solving the PFSP on the Grid
Authors: Samia Kouki
Abstract:
Solving NP-hard combinatorial optimization problems by exact search methods, such as Branch-and-Bound, may degenerate into complete enumeration. For that reason, exact approaches limit us to solving only small or moderate size problem instances, due to the exponential increase in CPU time when the problem size increases. One of the most promising ways to significantly reduce the computational burden of sequential versions of Branch-and-Bound is to design parallel versions of these algorithms which employ several processors. This paper describes a parallel Branch-and-Bound algorithm called GALB for solving the classical permutation flowshop scheduling problem, as well as its implementation on a Grid computing infrastructure. The experimental study of our distributed parallel algorithm gives promising results and shows clearly the benefit of the parallel paradigm for solving large-scale instances in moderate CPU time.
Keywords: grid computing, permutation flow shop problem, branch and bound, load balancing
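For context on the objective such a Branch-and-Bound explores, the sketch below evaluates the makespan of a job permutation in a permutation flowshop (the quantity that each node's bound approximates) and brute-forces the tiny instance; the processing-time matrix is made-up example data, and this is not the GALB algorithm itself.

```python
from itertools import permutations

# processing_times[j][m]: time of job j on machine m (illustrative data).
processing_times = [
    [3, 6, 2],
    [5, 1, 4],
    [2, 4, 3],
    [4, 2, 5],
]

def makespan(order, p):
    # Completion-time recursion of the permutation flowshop:
    # C[j][m] = max(C[j-1][m], C[j][m-1]) + p[job][m]
    n_machines = len(p[0])
    finish = [0] * n_machines
    for job in order:
        for m in range(n_machines):
            prev = finish[m - 1] if m > 0 else 0
            finish[m] = max(finish[m], prev) + p[job][m]
    return finish[-1]

# Brute force over all permutations (what Branch-and-Bound avoids doing in full).
best = min(permutations(range(len(processing_times))),
           key=lambda o: makespan(o, processing_times))
print(best, makespan(best, processing_times))
```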
9550 An Improved OCR Algorithm on Appearance Recognition of Electronic Components Based on Self-adaptation of Multifont Template
Authors: Zhu-Qing Jia, Tao Lin, Tong Zhou
Abstract:
The Optical Character Recognition method has been extensively utilized, but it is rarely employed specifically in the recognition of electronic components. This paper suggests a highly effective algorithm for appearance identification of integrated circuit components based on the existing methods of character recognition, and analyzes the pros and cons.
Keywords: optical character recognition, fuzzy page identification, mutual correlation matrix, confidence self-adaptation
9549 Aqueous Extract of Argemone Mexicana Roots for Effective Corrosion Inhibition of Mild Steel in HCl Environment
Authors: Gopal Ji, Priyanka Dwivedi, Shanthi Sundaram, Rajiv Prakash
Abstract:
The inhibition effect of aqueous Argemone mexicana root extract (AMRE) on mild steel corrosion in 1 M HCl has been studied by weight loss, Tafel polarization curves, electrochemical impedance spectroscopy (EIS), scanning electron microscopy (SEM) and atomic force microscopy (AFM) techniques. Results indicate that the inhibition ability of AMRE increases with the increasing amount of the extract. A maximum corrosion inhibition of 94% is acknowledged at an extract concentration of 400 mg L-1. Polarization curves and impedance spectra reveal that both cathodic and anodic reactions are suppressed due to passive layer formation at the metal-acid interface. This is also confirmed by SEM micrographs and FTIR studies. Furthermore, the effects of acid concentration (1-5 M), immersion time (120 hours) and temperature (30-60˚C) on the inhibition potential of AMRE have been investigated by the weight loss method and electrochemical techniques. An adsorption mechanism is also proposed on the basis of the weight loss results, which show good agreement with the Langmuir isotherm.
Keywords: mild steel, polarization, SEM, acid corrosion, EIS, green inhibition
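As background to the weight loss analysis mentioned above, inhibition efficiency is commonly computed as IE% = (W_blank - W_inh)/W_blank x 100, and Langmuir behaviour is usually checked by fitting C/theta against C, with theta = IE/100. The sketch below does both on made-up weight-loss data; it is not the study's dataset.

```python
import numpy as np

# Hypothetical weight losses (g) of mild steel coupons in 1 M HCl.
w_blank = 0.250                                    # without inhibitor
conc = np.array([50., 100., 200., 300., 400.])     # AMRE concentration, mg/L
w_inh = np.array([0.110, 0.080, 0.045, 0.025, 0.015])

ie = (w_blank - w_inh) / w_blank * 100.0           # inhibition efficiency, %
theta = ie / 100.0                                  # surface coverage

# Langmuir isotherm: C/theta = 1/K_ads + C (a slope near 1 supports Langmuir adsorption).
slope, intercept = np.polyfit(conc, conc / theta, 1)
print("IE%:", ie.round(1))
print(f"Langmuir fit: slope = {slope:.3f}, K_ads = {1.0 / intercept:.4f} L/mg")
```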
9548 A Proposed Algorithm for Obtaining the Map of Subscribers’ Density Distribution for a Mobile Wireless Communication Network
Authors: C. Temaneh-Nyah, F. A. Phiri, D. Karegeya
Abstract:
This paper presents an algorithm for obtaining the map of the subscribers' density distribution for a mobile wireless communication network, based on actual subscriber traffic data obtained from the base stations. This is useful in the statistical characterization of the mobile wireless network.
Keywords: electromagnetic compatibility, statistical analysis, simulation of communication network, subscriber density
9547 Evaluation of the Inhibitive Effect of Novel Quinoline Schiff Base on Corrosion of Mild Steel in HCl Solution
Authors: Smita Jauhari, Bhupendra Mistry
Abstract:
The Schiff base (E)-2-methyl-N-(tetrazolo[1,5-a]quinolin-4-ylmethylene)aniline (QMA) was synthesized, and its inhibitive effect for mild steel in 1 M HCl solution was investigated by weight loss measurements and electrochemical tests. From the weight loss measurements and electrochemical tests, it was observed that the inhibition efficiency increases with the increase in Schiff base concentration and reaches a maximum at the optimum concentration. This is further confirmed by the decrease in corrosion rate. It was found that the system follows the Langmuir adsorption isotherm.
Keywords: Schiff base, acid corrosion, electrochemical impedance spectroscopy, polarization
9546 Development of a Practical Screening Measure for the Prediction of Low Birth Weight and Neonatal Mortality in Upper Egypt
Authors: Prof. Ammal Mokhtar Metwally, Samia M. Sami, Nihad A. Ibrahim, Fatma A. Shaaban, Iman I. Salama
Abstract:
Objectives: Reducing neonatal mortality by 2030 is still a challenging goal in developing countries. Low birth weight (LBW) is a significant contributor to this, especially where weighing newborns is not routinely possible. The present study aimed to determine simple, easy, reliable anthropometric measure(s) that can predict LBW and neonatal mortality. Methods: A prospective cohort study of 570 babies born in districts of El Menia governorate, Egypt (where most deliveries occurred at home) was examined at birth. Newborn weight, length, head, chest, mid-arm, and thigh circumferences were measured. Follow-up of the examined neonates took place during their first four weeks of life to report any mortalities. The most predictive anthropometric measures were determined using the statistical package SPSS, and multiple logistic regression analysis was performed. Results: Head and chest circumferences with cut-off points < 33 cm and ≤ 31.5 cm, respectively, were the significant predictors of LBW. They carried the best combination of the highest sensitivity (89.8% and 86.4%) and the least false negative predictive value (1.4% and 1.7%). Chest circumference with a cut-off point ≤ 31.5 cm was the significant predictor of neonatal mortality, with 83.3% sensitivity and a 0.43% false negative predictive value. Conclusion: Using chest circumference with a cut-off point ≤ 31.5 cm is recommended as a single simple anthropometric measurement for the prediction of both LBW and neonatal mortality. The proposed measure could act as a substitute for weighing newborns in communities where scales are not routinely available.
Keywords: low birth weight, neonatal mortality, anthropometric measures, practical screening
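A small sketch of how the reported screening metrics can be computed from a 2x2 table of screening result (e.g. chest circumference ≤ 31.5 cm) versus outcome (LBW). Here "false negative predictive value" is taken to mean the share of screen-negative babies who actually have the outcome (1 - NPV), which is an interpretation, and the counts are invented, not the study's data.

```python
def screening_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    # tp: screen-positive with the outcome, fn: screen-negative with the outcome, etc.
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    npv = tn / (tn + fn)
    return {
        "sensitivity_%": round(100 * sensitivity, 1),
        "specificity_%": round(100 * specificity, 1),
        "false_negative_predictive_value_%": round(100 * (1 - npv), 2),
    }

# Invented counts for illustration only.
print(screening_metrics(tp=30, fp=80, fn=5, tn=455))
```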
9545 An Efficient Algorithm of Time Step Control for Error Correction Method
Authors: Youngji Lee, Yonghyeon Jeon, Sunyoung Bu, Philsu Kim
Abstract:
The aim of this paper is to construct an algorithm of time step control for the error correction method most recently developed by one of the authors for solving stiff initial value problems. It is achieved with the generalized Chebyshev polynomial and the corresponding error correction method. The main idea of the proposed scheme is the usage of duplicated node points in the generalized Chebyshev polynomials of two different degrees, adding the necessary sample points instead of re-sampling all points. At each integration step, the proposed method comprises two equations, for the solution and the error, respectively. The constructed algorithm controls both the error and the time step size simultaneously and possesses good performance in computational cost compared to the original method. Two stiff problems are numerically solved to assess the effectiveness of the proposed scheme.
Keywords: stiff initial value problem, error correction method, generalized Chebyshev polynomial, node points
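As a generic illustration of how a local error estimate drives the step size in adaptive integrators (a standard controller, not the authors' Chebyshev-based scheme), the sketch below accepts or rejects each step and rescales h from the estimated error and a tolerance.

```python
def adapt_step(h, err, tol, order=2, safety=0.9, h_min=1e-8, h_max=1.0):
    """Return (accept, new_h) from a local error estimate.

    Standard controller: h_new = safety * h * (tol/err)**(1/(order+1)),
    clipped to [h_min, h_max]; the step is accepted when err <= tol.
    """
    accept = err <= tol
    factor = safety * (tol / max(err, 1e-16)) ** (1.0 / (order + 1))
    new_h = min(max(h * min(max(factor, 0.2), 5.0), h_min), h_max)
    return accept, new_h

# Example: a too-large error shrinks the step, a small error grows it.
print(adapt_step(h=0.1, err=5e-3, tol=1e-4))
print(adapt_step(h=0.1, err=1e-6, tol=1e-4))
```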