Search results for: minimum root mean square (RMS) error matching algorithm
7296 Unmanned Aerial Vehicle Landing Based on Ultra-Wideband Localization System and Optimal Strategy for Searching Optimal Landing Point
Authors: Meng Wu
Abstract:
Unmanned aerial vehicle (UAV) landing is a common task that flying robots are required to perform. In this paper, a Crazyflie 2.0 is located by an ultra-wideband (UWB) localization system that contains four UWB anchors. Another UWB anchor is introduced and installed on a stationary platform. A cost function is designed to find the minimum distance between the Crazyflie 2.0 and the anchor installed on the stationary platform. The coordinates of this anchor are unknown in advance, and the goal of the cost function is to determine its location, which can be considered the optimal landing point. When the cost function reaches its minimum value, the corresponding coordinates of the UWB anchor fixed on the stationary platform are calculated and defined as the landing point. Simulations show the effectiveness of the proposed method.
Keywords: UAV landing, UWB localization system, UWB anchor, cost function, stationary platform
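To make the cost-function idea concrete, here is a minimal sketch (an assumed illustration, not the authors' implementation): given UWB ranges measured from several known UAV positions to the unknown platform anchor, the anchor coordinates are recovered by minimizing the sum of squared range residuals. All positions and range values are invented for the example.

```python
import numpy as np
from scipy.optimize import minimize

# Known UAV positions (tracked by the 4-anchor UWB system) and measured
# ranges to the unknown platform anchor; all values are illustrative.
uav_positions = np.array([[0.0, 0.0, 1.0],
                          [2.0, 0.0, 1.2],
                          [0.0, 2.0, 0.8],
                          [2.0, 2.0, 1.1]])
measured_ranges = np.array([2.45, 1.30, 1.95, 1.05])

def cost(p):
    """Sum of squared differences between predicted and measured ranges."""
    predicted = np.linalg.norm(uav_positions - p, axis=1)
    return np.sum((predicted - measured_ranges) ** 2)

result = minimize(cost, x0=np.zeros(3))
print("Estimated landing point:", np.round(result.x, 3))
```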
Procedia PDF Downloads 87
7295 Ultra-Reliable Low Latency V2X Communication for Express Way Using Multiuser Scheduling Algorithm
Authors: Vaishali D. Khairnar
Abstract:
The main aim is to provide low-latency and highly reliable communication for vehicles in the automobile industry; vehicle-to-everything (V2X) communication is intended to increase expressway road safety and effectiveness. The Ultra-Reliable Low-Latency Communications (URLLC) algorithm and cellular networks are applied in combination with Mobile Broadband (MBB). This is particularly used in expressway safety-critical driving applications. Expressway vehicle drivers (humans) will communicate in V2X systems using sixth-generation (6G) communication systems, which support very high-speed mobility. As a result, we need to determine how to ensure reliable and consistent wireless communication links and improve their quality to increase channel gain, which is a challenge that needs to be addressed. To overcome this challenge, we propose a unique multi-user scheduling algorithm for ultra-massive multiple-input multiple-output (MIMO) systems using 6G. Providing wideband wireless network access under high- and medium-traffic conditions, while offering quality of service (QoS) to distinct service groups with synchronized contemporaneous traffic on a highway such as the Mumbai-Pune expressway, is a critical problem. Opportunistic MAC (OMAC) is a way of scheduling communication across a wireless link that varies in space and time and may overcome the above-mentioned challenge. Therefore, a multi-user scheduling algorithm is proposed for MIMO systems using a cross-layered MAC protocol to achieve URLLC and high reliability in V2X communication.
Keywords: ultra-reliable low latency communications, vehicle-to-everything communication, multiple-input multiple-output systems, multi-user scheduling algorithm
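The paper's cross-layered scheduler is not specified here, but a generic opportunistic multi-user scheduler of the kind OMAC exploits can be sketched as follows: in each slot, serve the user whose instantaneous rate is highest relative to its running average throughput (proportional fairness). The Rayleigh channel model and all constants are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_slots = 8, 1000
avg_throughput = np.full(n_users, 1e-3)  # exponentially smoothed averages

for _ in range(n_slots):
    # Instantaneous achievable rates from per-user channel gains (toy Rayleigh model).
    gains = rng.rayleigh(scale=1.0, size=n_users)
    rates = np.log2(1.0 + 10.0 * gains**2)   # SNR-scaled Shannon rates
    # Proportional-fair metric: favour good channels without starving users.
    user = np.argmax(rates / avg_throughput)
    served = np.zeros(n_users)
    served[user] = rates[user]
    avg_throughput = 0.99 * avg_throughput + 0.01 * served

print("Long-run average throughput per user:", np.round(avg_throughput, 3))
```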
Procedia PDF Downloads 88
7294 Frequency of Consonant Production Errors in Children with Speech Sound Disorder: A Retrospective-Descriptive Study
Authors: Amulya P. Rao, Prathima S., Sreedevi N.
Abstract:
Speech sound disorders (SSD) are a major concern in the younger population of India, with the highest prevalence rate among the speech disorders. Children with SSD, if not identified and rehabilitated early, are at risk of academic difficulties. This necessitates early identification using screening tools that assess the frequently misarticulated speech sounds. The literature on frequently misarticulated speech sounds is ample in English and other Western languages, targeting individuals with various communication disorders. Articulation is language specific, and there are limited studies reporting the same in Kannada, a Dravidian language. Hence, the present study aimed to identify the frequently misarticulated consonants in Kannada and to examine the error types. A retrospective, descriptive study was carried out using secondary data analysis of 41 participants (34 phonetic type and 7 phonemic type) with SSD in the age range of 3 to 12 years. All the consonants of Kannada were analyzed by considering three words for each speech sound from the Kannada Diagnostic Photo Articulation Test (KDPAT). A picture naming task was carried out, and responses were audio recorded. The recorded data were transcribed using IPA 2018 broad transcription. A criterion of 2/3 or 3/3 error productions was set for a speech sound to be considered an error. The number of error productions was calculated for each consonant in each participant. Then, the percentage of participants meeting the criterion was documented for each consonant to identify the frequently misarticulated speech sounds. Overall results indicated that the velars /k/ (48.78%) and /g/ (43.90%) were frequently misarticulated, followed by the voiced retroflex /ɖ/ (36.58%) and trill /r/ (36.58%). The lateral retroflex /ɭ/ was misarticulated by 31.70% of the children with SSD. Dentals (/t/, /n/), bilabials (/p/, /b/, /m/) and the labiodental /v/ were produced correctly by all the participants. The highly misarticulated velars /k/ and /g/ were frequently substituted by the dentals /t/ and /d/, respectively, or omitted. Participants with SSD-phonemic type had multiple substitutions for one speech sound, whereas those with SSD-phonetic type had consistent single-sound substitutions. Intra- and inter-judge reliability for 10% of the data using Cronbach's alpha revealed good reliability (0.8 ≤ α < 0.9). Analyzing a larger sample by replicating such studies will validate the present study's results.
Keywords: consonant, frequently misarticulated, Kannada, SSD
Procedia PDF Downloads 135
7293 Effect of Tree Age on Fruit Quality of Different Cultivars of Sweet Orange
Authors: Muhammad Imran, Faheem Khadija, Zahoor Hussain, Raheel Anwar, M. Nawaz Khan, M. Raza Salik
Abstract:
Amongst citrus species, sweet orange (Citrus sinensis L. Osbeck) occupies a dominant position in the orange-producing countries of the world. Sweet orange is widely consumed both as fresh fruit and as juice, and its global demand is attributed to its high vitamin C and antioxidant content. Fruit quality is most important for the external appearance and marketability of sweet orange fruit, especially for fresh consumption. Many factors affect fruit quality; tree age is among the most important, but it remains unexplored so far. In the present study, we investigated the role of tree age in the fruit quality of different cultivars of sweet orange. The difference in fruit quality between 5-year-old young trees and 15-year-old trees is discussed. In the case of fruit weight, the maximum fruit weight (238 g) was recorded in the 15-year-old trees of sweet orange cv. Sallustiana, while the minimum fruit weight (142 g) was recorded in the 5-year-old trees of sweet orange cv. Succari. The results for fruit diameter showed that the maximum fruit diameter (77.142 mm) was recorded in the 15-year-old Sallustiana orange, while the minimum fruit diameter (66.046 mm) was observed in the 5-year-old trees of sweet orange cv. Succari. The minimum rind thickness (4.142 mm) was noted in the 15-year-old trees of cv. Red Blood. On the other hand, the maximum rind thickness was observed in the 5-year-old trees of cv. Sallustiana. Data regarding total soluble solids (TSS), acidity (TA), TSS/TA, juice content, rind and flavedo thickness, pH and fruit diameter are also discussed.
Keywords: age, cultivars, fruit, quality, sweet orange (Citrus sinensis L. Osbeck)
Procedia PDF Downloads 228
7292 Persistent Ribosomal In-Frame Mis-Translation of Stop Codons as Amino Acids in Multiple Open Reading Frames of a Human Long Non-Coding RNA
Authors: Leonard Lipovich, Pattaraporn Thepsuwan, Anton-Scott Goustin, Juan Cai, Donghong Ju, James B. Brown
Abstract:
Two-thirds of human genes do not encode any known proteins. Aside from long non-coding RNA (lncRNA) genes with recently discovered functions, the ~40,000 non-protein-coding human genes remain poorly understood, and a role for their transcripts as de-facto unconventional messenger RNAs has not been formally excluded. Ribosome profiling (Riboseq) predicts translational potential, but without independent evidence of proteins from lncRNA open reading frames (ORFs), ribosome binding of lncRNAs does not prove translation. Previously, we mass-spectrometrically documented translation of specific lncRNAs in human K562 and GM12878 cells. We now examined lncRNA translation in human MCF7 cells, integrating strand-specific Illumina RNAseq, Riboseq, and deep mass spectrometry in biological quadruplicates performed at two core facilities (BGI, China; City of Hope, USA). We excluded known-protein matches. UCSC Genome Browser-assisted manual annotation of imperfect (tryptic-digest-peptides)-to-(lncRNA-three-frame-translations) alignments revealed three peptides hypothetically explicable by 'stop-to-nonstop' in-frame replacement of stop codons by amino acids in two ORFs of the lncRNA MMP24-AS1. To search for this phenomenon genome-wide, we designed and implemented a novel pipeline, matching tryptic-digest spectra to wildcard-instead-of-stop versions of repeat-masked, six-frame, whole-genome translations. Along with singleton putative stop-to-nonstop events affecting four other lncRNAs, we identified 24 additional peptides with stop-to-nonstop in-frame substitutions from multiple positive-strand MMP24-AS1 ORFs. Only UAG and UGA, never UAA, stop codons were impacted. All MMP24-AS1-matching spectra met the same significance thresholds as high-confidence known-protein signatures. Targeted resequencing of MMP24-AS1 genomic DNA and cDNA from the same samples did not reveal any mutations, polymorphisms, or sequencing-detectable RNA editing. This unprecedented apparent gene-specific violation of the genetic code highlights the importance of matching peptides to whole-genome, not known-genes-only, ORFs in mass-spectrometry workflows, and suggests a new mechanism enhancing the combinatorial complexity of the proteome. Funding: NIH Director's New Innovator Award 1DP2-CA196375 to LL.
Keywords: genetic code, lncRNA, long non-coding RNA, mass spectrometry, proteogenomics, ribo-seq, ribosome, RNAseq
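A minimal sketch of the 'wildcard-instead-of-stop' translation step (assuming Biopython is available; this is a simplified illustration, not the authors' pipeline): every stop codon in each of the six reading frames is rendered as the wildcard 'X' so that peptide matching can read through it.

```python
from Bio.Seq import Seq

def six_frame_wildcard_translations(dna: str):
    """Translate all six reading frames, replacing stop codons with 'X'
    so downstream peptide matching can run through them."""
    seq = Seq(dna)
    frames = []
    for strand in (seq, seq.reverse_complement()):
        for offset in range(3):
            # Trim so the frame length is a multiple of three.
            trimmed = strand[offset: len(strand) - (len(strand) - offset) % 3]
            frames.append(str(trimmed.translate()).replace("*", "X"))
    return frames

for frame in six_frame_wildcard_translations("ATGGCCTGATTTGGGTAGACC"):
    print(frame)
```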
Procedia PDF Downloads 235
7291 A Comparison of Methods for Estimating Dichotomous Treatment Effects: A Simulation Study
Authors: Jacqueline Y. Thompson, Sam Watson, Lee Middleton, Karla Hemming
Abstract:
Introduction: The odds ratio (estimated via logistic regression) is a well-established and common approach for estimating covariate-adjusted binary treatment effects when comparing a treatment and a control group with dichotomous outcomes. Its popularity is primarily because of its stability and robustness to model misspecification. However, the situation is different for the relative risk and risk difference, which are arguably easier to interpret and better suited to specific designs such as non-inferiority studies. So far, there is no equivalent, widely accepted approach to estimate an adjusted relative risk and risk difference when conducting clinical trials. This is partly due to the lack of a comprehensive evaluation of available candidate methods. Methods/Approach: A simulation study is designed to evaluate the performance of relevant candidate methods for estimating relative risks, representing conditional and marginal estimation approaches. We consider the log-binomial generalised linear model (GLM) with iteratively weighted least squares (IWLS) and model-based standard errors (SEs); the log-binomial GLM with convex optimisation and model-based SEs; the log-binomial GLM with convex optimisation and permutation tests; the modified-Poisson GLM with IWLS and robust SEs; log-binomial generalised estimating equations (GEE) with robust SEs; marginal standardisation with delta-method SEs; and marginal standardisation with permutation-test SEs. Independent and identically distributed datasets are simulated from a randomised controlled trial to evaluate these candidate methods. Simulations are replicated 10,000 times for each scenario across all possible combinations of sample sizes (200, 1,000, and 5,000), outcomes (10%, 50%, and 80%), and covariates (ranging from -0.05 to 0.7) representing weak, moderate or strong relationships. Treatment effects (0, -0.5, and 1 on the log scale) cover null (H0) and alternative (H1) hypotheses to evaluate coverage and power in realistic scenarios. Performance measures (bias, mean square error (MSE), relative efficiency, and convergence rates) are evaluated across scenarios covering a range of sample sizes, event rates, covariate prognostic strengths, and model misspecifications. Potential Results, Relevance & Impact: There are several methods for estimating unadjusted and adjusted relative risks. However, it is unclear which method(s) is the most efficient, preserves the type-I error rate, is robust to model misspecification, or is the most powerful when adjusting for non-prognostic and prognostic covariates. GEE estimates may be biased when the outcome distributions are not from marginal binary data. Also, it seems that marginal standardisation and convex optimisation may perform better than the GLM IWLS log-binomial.
Keywords: binary outcomes, statistical methods, clinical trials, simulation study
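As one example of the candidate methods, here is a minimal sketch of the modified-Poisson approach (a Poisson GLM on a binary outcome with robust sandwich SEs, via statsmodels); the simulated data and coefficients are purely illustrative:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000
treat = rng.integers(0, 2, n)
x = rng.normal(size=n)
# True relative risk for treatment is exp(0.4) in this toy model.
p = np.clip(0.2 * np.exp(0.4 * treat + 0.3 * x), 0, 0.95)
y = rng.binomial(1, p)

X = sm.add_constant(np.column_stack([treat, x]))
# Modified Poisson: Poisson GLM on the binary outcome with robust SEs.
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC1")
print("Adjusted relative risk for treatment:", np.exp(fit.params[1]))
```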
Procedia PDF Downloads 115
7290 An Explorative Research on the Cook and Stewards Employment: Turkish Flagged Ship's Perspective
Authors: Mehmet Yahsi, Ozkan Ugurlu
Abstract:
The cabin department, comprising the stewards and cooks on ships, plays an important role in ensuring sufficient and high-quality nutrition for seafarers. From this perspective, ships must be manned with a sufficient number of cabin department personnel. In this study, in order to investigate cook and steward employment on Turkish-flagged ships, our national manning regulation was compared with international regulations. The data used in this study were collected via ship visits. The crew lists of 181 Turkish-flagged ships of 3,000 gross tonnage and above engaged in international voyages were compared with their Minimum Safety Manning Certificates. According to the findings, the employment rates were 95.6% for cooks and 50.8% for stewards. According to the results of the study, cooks and stewards were employed on Turkish-flagged ships even though the ships are not obliged to carry them.
Keywords: manning, cabin department, minimum safety manning certificate, Turkish flag
Procedia PDF Downloads 397
7289 Effects of Deficit Watering and Potassium Fertigation on Growth and Yield Response of Cassava
Authors: Daniel O. Wasonga, Jouko Kleemola, Laura Alakukku, Pirjo Makela
Abstract:
Cassava (Manihot esculenta Crantz) is a major food crop for millions of people in the tropics. Growth and yield of cassava in the arid tropics are seriously constrained by intermittent water deficit and low soil K content. Therefore, experiments were conducted to investigate the effects of the interaction between water deficit and K fertigation on the growth and yield response of biofortified cassava at the early growth phase. A yellow cassava cultivar was grown under controlled glasshouse conditions in 5-L pots containing 1.7 kg of pre-fertilized potting mix. Plants were watered daily for 30 days after planting. Treatments were three watering levels (30%, severe water deficit; 60%, mild water deficit; 100%, well-watered), across which K (0.01, 1, 4, 16 and 32 mM) was split. Plants were harvested 90 days after planting. Leaf area was smallest in plants grown with 30% watering and 0.01 mM K, and largest in plants grown with 100% watering and 32 mM K. Leaf, root, and total dry mass decreased in water-stressed plants. However, dry mass was markedly higher when plants were grown with 16 mM K under all watering levels in comparison to the other K concentrations. The highest leaf, root and total dry mass were in plants with 100% watering and 16 mM K. In conclusion, K improved the growth of plants under water deficit; thus, K application on soils with low moisture and low K may improve the productivity of cassava.
Keywords: dry mass, interaction, leaf area, Manihot esculenta
Procedia PDF Downloads 117
7288 Quantitative Analysis of Multiprocessor Architectures for Radar Signal Processing
Authors: Deepak Kumar, Debasish Deb, Reena Mamgain
Abstract:
Radar signal processing requires high number-crunching capability. Most often this is achieved using a multiprocessor platform. Though a multiprocessor platform provides the capability to meet real-time computational challenges, its architecture, along with the mapping of the algorithm onto the architecture, plays a vital role in using the platform efficiently. Towards this, along with standard performance metrics, a few additional metrics are defined which help in evaluating the multiprocessor platform together with the algorithm mapping. A generic multiprocessor architecture cannot suit all processing requirements. Depending on the system requirements and the type of algorithms used, the most suitable architecture for the given problem is decided. In the paper, we study different architectures and quantify the different performance metrics, which enables comparison of architectures on their merits. We also carried out case studies of different architectures and their efficiency depending on whether parallelism is exploited over the algorithm, the data, or both.
Keywords: radar signal processing, multiprocessor architecture, efficiency, load imbalance, buffer requirement, pipeline, parallel, hybrid, cluster of processors (COPs)
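A minimal sketch of how such metrics might be computed from per-processor timing data (the metric definitions below are common textbook forms and are assumptions, not necessarily the paper's exact definitions):

```python
import numpy as np

# Per-processor busy times (seconds) for one processing interval (toy values).
busy = np.array([0.92, 0.88, 0.95, 0.61])
serial_time = 3.40          # runtime of the best serial implementation
parallel_time = busy.max()  # interval ends when the slowest processor finishes

speedup = serial_time / parallel_time
efficiency = speedup / busy.size
# Load imbalance: how far the busiest processor sits above the mean load.
load_imbalance = busy.max() / busy.mean() - 1.0

print(f"speedup={speedup:.2f}, efficiency={efficiency:.2f}, "
      f"imbalance={load_imbalance:.1%}")
```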
Procedia PDF Downloads 412
7287 Influence of Tactile Symbol Size on Its Perceptibility in Consideration of Effect of Aging
Authors: T. Nishimura, K. Doi, H. Fujimoto, T. Wada
Abstract:
We conducted perception experiments on tactile symbols to elucidate the impact of symbol size on perceptibility. This study was based on the accessible design perspective and aimed at expanding the availability of tactile symbols for visually impaired people who are unable to read Braille characters. In particular, this study targeted people with acquired visual impairments as users of the tactile symbols. The subjects (young and elderly individuals) in this study had normal vision. They were asked to participate in the experiments and identify tactile symbols without being able to see their hands during the experiments. The study investigated the relationship between the size and perceptibility of tactile symbols based on an examination using test pieces of these symbols in different sizes. The results revealed that the error rates for both young and elderly subjects converged to almost 0% when 12 mm tactile symbols were used. The findings also showed that the error rate was low and subjects could identify the symbols within 5 s when 16 mm tactile symbols were introduced.
Keywords: accessible design, tactile sense, tactile symbols, bioinformatic
Procedia PDF Downloads 351
7286 Topology Optimization of the Interior Structures of Beams under Various Load and Support Conditions with Solid Isotropic Material with Penalization Method
Authors: Omer Oral, Y. Emre Yilmaz
Abstract:
Topology optimization is an approach that optimizes the material distribution within a given design space for given load and boundary conditions in order to meet performance goals. It uses various restrictions, such as boundary conditions, sets of loads, and constraints, to maximize the performance of the system. It differs from size and shape optimization methods but retains some features of both. In this study, the interior structures of parts were optimized using the SIMP (Solid Isotropic Material with Penalization) method. The volume of the part was a preassigned parameter, and minimum deflection was the objective function. The basic idea behind the theory was considered, and different methods were discussed. The Rhinoceros 3D design tool was used with the Grasshopper and TopOpt plugins to create and optimize parts. A Grasshopper algorithm was designed and tested for different beams, sets of arbitrarily located forces, and support types such as pinned, fixed, etc. Finally, 2.5D shapes were obtained and verified by observing the changes in the density function.
Keywords: Grasshopper, lattice structure, microstructures, Rhinoceros, solid isotropic material with penalization method, TopOpt, topology optimization
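A minimal sketch of the SIMP core (modelled on the classic compliance-minimization formulation, not the authors' Grasshopper implementation): the power-law interpolation that penalizes intermediate densities, plus one optimality-criteria density update under a volume constraint. The sensitivity field below is a stand-in for values a finite-element solve would supply.

```python
import numpy as np

def simp_youngs_modulus(rho, e0=1.0, e_min=1e-9, p=3.0):
    """SIMP interpolation: intermediate densities are penalized so the
    optimizer is pushed toward solid (1) or void (0) elements."""
    return e_min + rho**p * (e0 - e_min)

def oc_update(rho, sensitivity, volume_fraction, step=0.2):
    """One optimality-criteria density update under a volume constraint
    (bisection on the Lagrange multiplier)."""
    lo, hi = 1e-9, 1e9
    while hi - lo > 1e-6 * (lo + hi):
        lam = 0.5 * (lo + hi)
        new = np.clip(rho * np.sqrt(np.maximum(-sensitivity, 0) / lam),
                      np.maximum(rho - step, 0.0), np.minimum(rho + step, 1.0))
        lo, hi = (lam, hi) if new.mean() > volume_fraction else (lo, lam)
    return new

rho = np.full(100, 0.4)                          # initial uniform densities
fake_sensitivity = -np.linspace(2.0, 0.1, 100)   # stand-in for dC/drho from FEA
rho = oc_update(rho, fake_sensitivity, volume_fraction=0.4)
print("Mean density after update:", rho.mean())
```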
Procedia PDF Downloads 137
7285 Multiobjective Economic Dispatch Using Optimal Weighting Method
Authors: Mandeep Kaur, Fatehgarh Sahib
Abstract:
The purpose of economic load dispatch is to allocate the required load demand among the available generation units such that the cost of operation is minimized. It is an optimization problem of finding the most economical schedule of the generating units while satisfying the load demand and operational constraints. In a multiobjective optimization problem, the engineer's goal is to maximize or minimize not a single objective function but several objective functions simultaneously. The purpose of multiobjective problems in the mathematical programming framework is to optimize the different objective functions together. Many approaches and methods have been proposed in recent years to solve multiobjective optimization problems. The weighting method has been applied to convert the multiobjective optimization problem into a scalar optimization problem. MATLAB 7.10 has been used to write the code for the complete algorithm with the help of a genetic algorithm (GA). The validity of the proposed method has been demonstrated on a three-unit power system.
Keywords: economic load dispatch, genetic algorithm, generating units, multiobjective optimization, weighting method
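A minimal sketch of the weighting method on a two-objective, two-generator toy system (the paper uses MATLAB with a GA; this Python/scipy version and all cost coefficients are assumptions for illustration). Sweeping the weight w traces out trade-off solutions between the two objectives.

```python
import numpy as np
from scipy.optimize import minimize

def fuel_cost(p):   # quadratic cost curves (assumed coefficients, $/h)
    return 0.004 * p[0]**2 + 2.0 * p[0] + 0.006 * p[1]**2 + 1.5 * p[1]

def emissions(p):   # assumed emission curves (kg/h)
    return 0.001 * p[0]**2 + 0.003 * p[1]**2

demand = 400.0      # MW that the two units must jointly supply
constraints = [{"type": "eq", "fun": lambda p: p.sum() - demand}]
bounds = [(50, 300), (50, 300)]

for w in (0.0, 0.5, 1.0):  # weighting method: scalarize the two objectives
    scalar = lambda p, w=w: w * fuel_cost(p) + (1 - w) * emissions(p)
    res = minimize(scalar, x0=np.array([200.0, 200.0]),
                   bounds=bounds, constraints=constraints)
    print(f"w={w:.1f}: dispatch={np.round(res.x, 1)} MW")
```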
Procedia PDF Downloads 150
7284 Teleconsultations and The Need of Onsite Additional Medical Services
Authors: Cristina Hotoleanu
Abstract:
Introduction: The recent Covid-19 pandemic accelerated the development of e-health, including telemedicine, smartphone applications, and medical wearable devices. Providing remote teleconsultations poses challenges that may require further face-to-face medical interactions. The aim of this study was to assess the correlation between the types of teleconsultations and the need for onsite medical services (investigations and medical visits) for diagnosis and treatment. Methods: A retrospective study was performed, including all the teleconsultations conducted between May 1, 2021 and April 30, 2022 using the platform offered by a telehealth provider in Romania (Telios Care SA). Binary data were analysed using the chi-square test with a significance level of p < 0.05. Results: Out of 7,163 consultations, 3,961 were phone calls, 1,981 were online messages, and 1,221 were video calls. Onsite medical services were indicated in 3,327 (46.44%) cases; onsite investigations or onsite visits were recommended for 2,908 patients as follows: 2,326 in the case of phone calls, 582 in the case of online messages, and none in the case of video calls. Both onsite investigations and visits were indicated for 419 patients. The need for onsite additional medical services was significantly higher in the case of phone calls than in the other two types of teleconsultations (chi-square = 1207.06, p = 0.00001). The indication for onsite services was given mainly after teleconsultations covering medical specialties (87.34%), significantly more often than for the other specialties (chi-square = 914.59, p = 0.00001). Teleconsultations in surgical specialties and in other fields (pharmacy, dentistry, psychology, wellbeing: nutrition, fitness) resulted in 12.13% and less than 1%, respectively, of indications for onsite investigations or visits, explained by the use of video calls in most of those cases. Conclusion: A further onsite medical service was necessary in less than half of the teleconsultations. This indication was given mainly after phone calls and after teleconsultations in medical specialties. Video calls were used mostly in psychology, nutrition, and fitness teleconsultations and did not require a further onsite medical service. Further studies are necessary to better assess the types of teleconsultations and the specialties bringing the greatest benefit for patients.
Keywords: onsite medical services, phone calls, teleconsultations, telemedicine
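A minimal sketch of the chi-square analysis (the 2x3 table below is reconstructed from the counts quoted in the abstract, splitting each consultation type into indicated/not indicated; it is illustrative, not the study's exact cross-tabulation):

```python
from scipy.stats import chi2_contingency

# Rows = consultation type, columns = [onsite service indicated, not indicated].
table = [
    [2326, 1635],  # phone calls   (3,961 total)
    [582, 1399],   # online messages (1,981 total)
    [0, 1221],     # video calls   (1,221 total)
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.2e}")
```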
Procedia PDF Downloads 101
7283 Using Support Vector Machines for Measuring Democracy
Authors: Tommy Krieger, Klaus Gruendler
Abstract:
We present a novel approach for measuring democracy, which enables a very detailed and sensitive index. The method is based on Support Vector Machines, a mathematical algorithm for pattern recognition. Our implementation evaluates 188 countries in the period between 1981 and 2011. The Support Vector Machines Democracy Index (SVMDI) is continuous on the [0, 1] interval and robust to variations in the numerical process parameters. The algorithm introduced here can be used for every concept of democracy without additional adjustments, and due to its flexibility it is also a valuable tool for comparative studies.
Keywords: democracy, democracy index, machine learning, support vector machines
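A conceptual sketch of an SVM-based index (the real SVMDI uses different inputs and a more elaborate procedure; the data, features, and hyperparameters below are assumptions): support vector regression maps country-year indicators to a continuous score clipped to [0, 1].

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Toy panel: rows = country-years, columns = democracy indicators
# (e.g., election quality, press freedom); targets = reference ratings in [0, 1].
X = rng.normal(size=(500, 6))
y = np.clip(0.5 + 0.3 * np.tanh(X[:, :3].sum(axis=1))
            + 0.05 * rng.normal(size=500), 0, 1)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X, y)
index = np.clip(model.predict(X[:5]), 0.0, 1.0)  # continuous scores on [0, 1]
print("SVM-based index for five country-years:", np.round(index, 3))
```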
Procedia PDF Downloads 379
7282 Contactless Heart Rate Measurement System based on FMCW Radar and LSTM for Automotive Applications
Authors: Asma Omri, Iheb Sifaoui, Sofiane Sayahi, Hichem Besbes
Abstract:
Future vehicle systems demand advanced capabilities, notably in-cabin life detection and driver monitoring systems, with a particular emphasis on drowsiness detection. To meet these requirements, several techniques employ artificial intelligence methods based on real-time vital sign measurements. In parallel, Frequency-Modulated Continuous-Wave (FMCW) radar technology has garnered considerable attention in the domains of healthcare and biomedical engineering for non-invasive vital sign monitoring. FMCW radar offers a multitude of advantages, including its non-intrusive nature, continuous monitoring capacity, and ability to penetrate clothing. In this paper, we propose a system utilizing the AWR6843AOP radar from Texas Instruments (TI) to extract precise vital sign information. The radar allows us to estimate ballistocardiogram (BCG) signals, which capture the mechanical movements of the body, particularly the ballistic forces generated by heartbeats and respiration. These signals are rich sources of information about the cardiac cycle, rendering them suitable for heart rate estimation. The process begins with real-time subject positioning, followed by clutter removal, computation of Doppler phase differences, and the use of various filtering methods to accurately capture subtle physiological movements. To address the challenges associated with FMCW radar-based vital sign monitoring, including motion artifacts due to subjects' movement or radar micro-vibrations, Long Short-Term Memory (LSTM) networks are implemented. LSTM's adaptability to different heart rate patterns and ability to handle real-time data make it suitable for continuous monitoring applications. Several crucial steps were taken, including feature extraction (involving amplitude, time intervals, and signal morphology), sequence modeling, heart rate estimation through the analysis of detected cardiac cycles and their temporal relationships, and performance evaluation using metrics such as the Root Mean Square Error (RMSE) and correlation with reference heart rate measurements. For dataset construction and LSTM training, a comprehensive data collection system was established, integrating the AWR6843AOP radar, a heart rate belt, and a smartwatch for ground truth measurements. Rigorous synchronization of these devices ensured data accuracy. Twenty participants engaged in various scenarios, encompassing indoor and real-world conditions within a moving vehicle equipped with the radar system. Static and dynamic subject conditions were considered. Heart rate estimation through the LSTM outperforms traditional signal processing techniques that rely on filtering, the Fast Fourier Transform (FFT), and thresholding, delivering an average accuracy of approximately 91% with an RMSE of 1.01 beats per minute (bpm). In conclusion, this paper underscores the promising potential of FMCW radar technology integrated with artificial intelligence algorithms in the context of automotive applications. This innovation not only enhances road safety but also paves the way for its integration into the automotive ecosystem to improve driver well-being and overall vehicular safety.
Keywords: ballistocardiogram, FMCW radar, vital sign monitoring, LSTM
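A minimal sketch of the LSTM regression stage (layer sizes, window length, and the synthetic data are assumptions; the real model is trained on radar-derived BCG features against belt/smartwatch ground truth):

```python
import numpy as np
from tensorflow import keras

# Toy training set: windows of radar-derived BCG samples -> heart rate (bpm).
rng = np.random.default_rng(0)
x_train = rng.normal(size=(256, 200, 1)).astype("float32")  # 200-sample windows
y_train = rng.uniform(55, 95, size=(256, 1)).astype("float32")

model = keras.Sequential([
    keras.layers.Input(shape=(200, 1)),
    keras.layers.LSTM(64, return_sequences=True),
    keras.layers.LSTM(32),
    keras.layers.Dense(1),  # regression head: heart rate in bpm
])
model.compile(optimizer="adam", loss="mse",
              metrics=[keras.metrics.RootMeanSquaredError()])
model.fit(x_train, y_train, epochs=2, batch_size=32, verbose=0)
print(model.evaluate(x_train, y_train, verbose=0))  # [MSE, RMSE]
```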
Procedia PDF Downloads 72
7281 Integrating Deterministic and Probabilistic Safety Assessment to Decrease Risk & Energy Consumption in a Typical PWR
Authors: Ebrahim Ghanbari, Mohammad Reza Nematollahi
Abstract:
Integrated deterministic and probabilistic safety assessment (IDPSA) is one of the most commonly used approaches in the safety analysis of power plant accidents. It is also recognized today that the role of human error in causing these accidents is no smaller than that of systemic errors, so representing both human interference and system errors in fault and event sequences is necessary. The integration of these analytical topics is reflected in the frequency of core damage and also in the study of the use of water resources in an accident such as the loss of all electrical power of the plant. In this regard, a station blackout (SBO) accident was simulated for a pressurized water reactor in the deterministic analysis, and by analyzing the operator's behavior in controlling the accident, the results of the combined deterministic and probabilistic assessment were identified. The results showed that the best performance of the plant operator would reduce the risk of the accident by 10%, as well as save 6.82 liters/second of the plant's water resources.
Keywords: IDPSA, human error, SBO, risk
Procedia PDF Downloads 129
7280 Optimizing Load Shedding Schedule Problem Based on Harmony Search
Authors: Almahd Alshereef, Ahmed Alkilany, Hammad Said, Azuraliza Abu Bakar
Abstract:
From time to time, the electrical power grid is directed by the National Electricity Operator to conduct load shedding, which involves hours-long power outages in the area of this study, the Southern Electrical Grid of Libya (SEGL). Load shedding is conducted in order to alleviate pressure on the national electricity grid at times of peak demand. This study has chosen a set of categories for the load-shedding problem, considering the effect of demand priorities on the operation of the power system during emergencies. The classification of category regions for the load-shedding problem is solved by a new algorithm (the harmony algorithm) based on a 'random generation list of category regions', which is a possible solution with a degree of proximity to the optimum. The obtained results show additional enhancements compared to other heuristic approaches. The case studies are carried out on the SEGL.
Keywords: optimization, harmony algorithm, load shedding, classification
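A minimal sketch of the harmony search core loop (the objective below is a simple stand-in; the real SEGL objective would encode category priorities and grid constraints): harmony memory consideration, pitch adjustment, and worst-member replacement.

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(x):
    """Stand-in objective: penalty for a candidate shedding schedule."""
    return np.sum((x - 0.3) ** 2)

dim, hms, hmcr, par, iters = 10, 20, 0.9, 0.3, 2000
memory = rng.random((hms, dim))            # harmony memory of candidates
fitness = np.apply_along_axis(cost, 1, memory)

for _ in range(iters):
    new = np.empty(dim)
    for j in range(dim):
        if rng.random() < hmcr:            # memory consideration
            new[j] = memory[rng.integers(hms), j]
            if rng.random() < par:         # pitch adjustment
                new[j] = np.clip(new[j] + rng.normal(0, 0.05), 0, 1)
        else:                              # random selection
            new[j] = rng.random()
    worst = np.argmax(fitness)
    if cost(new) < fitness[worst]:         # replace the worst harmony
        memory[worst], fitness[worst] = new, cost(new)

print("Best cost found:", fitness.min())
```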
Procedia PDF Downloads 397
7279 An Evolutionary Approach for QAOA for Max-Cut
Authors: Francesca Schiavello
Abstract:
This work aims to create a hybrid algorithm, combining the Quantum Approximate Optimization Algorithm (QAOA) with an Evolutionary Algorithm (EA) in place of traditional gradient-based optimization processes. QAOAs were first introduced in 2014, when their algorithm performed better than the best known traditional classical algorithm for Max-Cut graphs. Whilst classical algorithms have improved since then and have returned to being faster and more efficient, this was a huge milestone for quantum computing, and the original work is often used as a benchmark and a foundation for exploring variants of QAOA. This, alongside other famous algorithms like Grover's or Shor's, highlights to the world the potential that quantum computing holds. It also points to a real quantum advantage which, if the hardware continues to improve, could usher in a revolutionary era. Given that the hardware is not there yet, many scientists are working on the software side of things in the hope of future progress. Some of the major limitations holding back quantum computing are the quality of qubits and the noisy interference they generate in creating solutions, the barren plateaus that effectively hinder the optimization search in the latent space, and the limited number of available qubits constraining the scale of the problems that can be solved. These three issues are intertwined and are part of the motivation for using EAs in this work. Firstly, EAs are not based on gradient or linear optimization methods for the search in the latent space, and because of their freedom from gradients, they should suffer less from barren plateaus. Secondly, given that this algorithm performs a search in the solution space through a population of solutions, it can also be parallelized to speed up the search and the optimization problem. The evaluation of the cost function, as in many other algorithms, is notoriously slow, and the ability to parallelize it can drastically improve the competitiveness of QAOAs with respect to purely classical algorithms. Thirdly, because of the nature and structure of EAs, solutions can be carried forward in time, making them more robust to noise and uncertainty. Preliminary results show that the EA attached to QAOA can perform on par with the traditional QAOA using a COBYLA optimizer, which is a linear-approximation-based method, and in some instances it can even produce a better Max-Cut. Whilst the final objective of the work is to create an algorithm that can consistently beat the original QAOA, or its variants, through either speedups or solution quality, this initial result is promising and shows the potential of EAs in this field. Further tests need to be performed on an array of different graphs, with the parallelization aspect of the work commencing in October 2023 and tests on real hardware scheduled for early 2024.
Keywords: evolutionary algorithm, max cut, parallel simulation, quantum optimization
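A minimal sketch of the hybrid idea (a toy surrogate stands in for the quantum circuit's Max-Cut expectation, so the EA loop itself is runnable without a simulator; the population size, mutation scale, and selection scheme are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def qaoa_expectation(params):
    """Stand-in for the Max-Cut expectation a quantum simulator would
    return for angles (gamma_1, beta_1, ..., gamma_p, beta_p)."""
    return -np.sum(np.sin(params[0::2]) * np.cos(params[1::2]))

pop_size, p_layers, generations, sigma = 30, 2, 100, 0.2
pop = rng.uniform(0, np.pi, size=(pop_size, 2 * p_layers))

for _ in range(generations):
    scores = np.apply_along_axis(qaoa_expectation, 1, pop)
    elite = pop[np.argsort(scores)[: pop_size // 3]]   # keep lowest energies
    # Reproduce elites with Gaussian mutation; no gradients are involved,
    # and each individual's evaluation could run on a separate worker.
    children = elite[rng.integers(len(elite), size=pop_size)]
    pop = children + rng.normal(0, sigma, size=children.shape)

best = pop[np.argmin(np.apply_along_axis(qaoa_expectation, 1, pop))]
print("Best angles found:", np.round(best, 3))
```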
Procedia PDF Downloads 60
7278 Development of Trigger Tool to Identify Adverse Drug Events From Warfarin Administered to Patient Admitted in Medical Wards of Chumphae Hospital
Authors: Puntarikorn Rungrattanakasin
Abstract:
Objectives: To develop a trigger tool to warn of the risk of bleeding as an adverse event of warfarin use during admission to the Medical Wards of Chumphae Hospital. Methods: A retrospective study was performed by reviewing the medical records of patients admitted between June 1, 2020 and May 31, 2021. ADEs were evaluated by Naranjo's algorithm. The international normalized ratio (INR) and bleeding events during admissions were collected. Statistical analyses, including the chi-square test and a Receiver Operating Characteristic (ROC) curve for the optimal INR threshold, were used in the study. Results: Among the 139 admissions, the INR ranged between 0.86 and 14.91; there was a total of 15 bleeding events, of which 9 were mild and 6 were severe. Bleeding occurred whenever the INR was greater than 2.5, reaching statistical significance (p < 0.05), in concordance with the ROC curve, which yielded 100% sensitivity and 60% specificity in the detection of a bleeding event. In this regard, an INR greater than 2.5 was considered the optimal threshold for a prompt alert of bleeding tendency. Conclusions: An INR value greater than 2.5 would be an appropriate trigger tool to warn of the risk of bleeding for patients taking warfarin in Chumphae Hospital.
Keywords: trigger tool, warfarin, risk of bleeding, medical wards
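A minimal sketch of the ROC-based threshold selection (the INR values below are invented to echo the abstract, not the actual patient records; Youden's J is one common way to pick the operating point):

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
# 124 admissions without bleeding, 15 with bleeding (only above INR 2.5).
inr_no_bleed = rng.uniform(0.9, 3.5, size=124)
inr_bleed = rng.uniform(2.6, 9.0, size=15)
inr = np.concatenate([inr_no_bleed, inr_bleed])
bled = np.concatenate([np.zeros(124), np.ones(15)])

fpr, tpr, thresholds = roc_curve(bled, inr)
youden = tpr - fpr                      # maximize sensitivity + specificity - 1
best = thresholds[np.argmax(youden)]
print(f"Optimal INR trigger threshold: {best:.2f}")
```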
Procedia PDF Downloads 148
7277 An Algorithm Based on Control Indexes to Increase the Quality of Service on Cellular Networks
Authors: Rahman Mofidi, Sina Rahimi, Farnoosh Darban
Abstract:
Communication plays a key role in today's world, and to support it, quality of service has the highest priority. It is very important to differentiate between traffic classes based on priority level: some traffic classes should have a higher priority than others. It is also necessary to give high priority to customers who pay more for better service, but without affecting other customers. To realize that, we require effective quality-of-service methods. Ensuring optimal network performance in accordance with the quality of service is an important goal for all mobile network operators. In this work, we propose an algorithm based on control parameters and user feedback that aims at minimizing the system transmit power, thus improving the network's key performance indicators and increasing the quality of service. This feedback, known as the channel quality indicator (CQI), indicates the received signal level of the user. We propose an algorithm with a control-parameter criterion and study its improvement of the quality of service and throughput of a cellular network in a simulated environment. In this work, we kept the parameter values close to their actual levels. Simulation results show that the proposed algorithm improves system throughput, thereby satisfying users' throughput requirements and improving service for successful call setup.
Keywords: quality of service, key performance indicators, control parameter, channel quality indicator
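One way to picture a CQI-driven control loop (a toy closed-loop sketch, not the paper's algorithm; the CQI model and step size are assumptions): transmit power is nudged down when the reported CQI exceeds the target and up when it falls short, keeping power near the minimum that sustains the desired quality.

```python
import numpy as np

rng = np.random.default_rng(0)
target_cqi = 10          # CQI level that sustains the desired throughput
tx_power_dbm = 40.0      # initial downlink transmit power
step_db = 0.5            # control-parameter step per feedback report

for report in range(20):
    # CQI feedback: rises with power, perturbed by fading (toy model).
    cqi = int(np.clip(0.4 * tx_power_dbm + rng.normal(0, 1), 1, 15))
    # Closed loop: shed power above target quality, add power below it.
    tx_power_dbm += step_db if cqi < target_cqi else -step_db

print(f"Converged transmit power: {tx_power_dbm:.1f} dBm")
```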
Procedia PDF Downloads 203
7276 Lexical-Semantic Processing by Chinese as a Second Language Learners
Authors: Yi-Hsiu Lai
Abstract:
The present study aimed to elucidate lexical-semantic processing in learners of Chinese as a second language (CSL). Twenty L1 speakers of Chinese and twenty CSL learners in Taiwan participated in a picture naming task and a category fluency task. Based on their Chinese proficiency levels, the CSL learners were further divided into two sub-groups: ten CSL learners at the elementary Chinese proficiency level and ten at the intermediate Chinese proficiency level. The instruments for the naming task were sixty black-and-white pictures: thirty-five object pictures and twenty-five action pictures. Object pictures were divided into two categories: living objects and non-living objects. Action pictures comprised two categories: action verbs and process verbs. As in the naming task, the category fluency task consisted of two semantic categories: objects (i.e., living and non-living objects) and actions (i.e., action and process verbs). Participants were asked to report as many items within a category as possible in one minute. Oral productions were tape-recorded and transcribed for further analysis. Both error types and error frequency were calculated. Statistical analysis was further conducted to examine the error types and frequencies produced by the CSL learners. Additionally, category effects, pictorial effects and L2 proficiency were discussed. The findings of the present study help characterize the lexical-semantic processing of Chinese naming in CSL learners of different Chinese proficiency levels and contribute to Chinese vocabulary teaching and learning in the future.
Keywords: lexical-semantic processing, Mandarin Chinese, naming, category effects
Procedia PDF Downloads 462
7275 Batch and Fixed-Bed Studies of Ammonia Treated Coconut Shell Activated Carbon for Adsorption of Benzene and Toluene
Authors: Jibril Mohammed, Usman Dadum Hamza, Muhammad Idris Misau, Baba Yahya Danjuma, Yusuf Bode Raji, Abdulsalam Surajudeen
Abstract:
Volatile organic compounds (VOCs) have been reported to be responsible for many acute and chronic health effects and for environmental degradation such as global warming. In this study, a renewable and low-cost coconut shell activated carbon (PHAC) was synthesized and treated with ammonia (PHAC-AM) to improve its hydrophobicity and affinity towards VOCs. The removal efficiencies and adsorption capacities of the ammonia-treated activated carbon (PHAC-AM) for benzene and toluene were investigated through batch and fixed-bed studies, respectively. The Langmuir, Freundlich and Temkin adsorption isotherms were tested for the adsorption process; the experimental data were best fitted by the Langmuir model and least fitted by the Temkin model. The favourability and suitability of fit were validated by the equilibrium parameter (RL) and the root mean square deviation (RMSD). Judging by the deviation of the predicted values from the experimental values, the pseudo-second-order kinetic model described the adsorption kinetics better than the pseudo-first-order model for the two VOCs on PHAC and PHAC-AM. In the fixed-bed study, the effects of initial VOC concentration, bed height and flow rate on benzene and toluene adsorption were studied. The highest bed capacities of 77.30 and 69.40 mg/g were recorded for benzene and toluene, respectively, at 250 mg/L initial VOC concentration, 2.5 cm bed height and 4.5 mL/min flow rate. The results of this study revealed that the ammonia-treated activated carbon (PHAC-AM) is a sustainable adsorbent for the treatment of VOCs in polluted waters.
Keywords: volatile organic compounds, equilibrium and kinetics studies, batch and fixed bed study, bio-based activated carbon
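A minimal sketch of fitting the Langmuir isotherm and computing the equilibrium parameter RL and RMSD (the equilibrium data below are invented for illustration, not the paper's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, qmax, kl):
    """Langmuir isotherm: qe = qmax * KL * Ce / (1 + KL * Ce)."""
    return qmax * kl * ce / (1.0 + kl * ce)

# Illustrative equilibrium data: Ce in mg/L, qe in mg/g.
ce = np.array([10, 25, 50, 100, 150, 250], dtype=float)
qe = np.array([18, 35, 52, 65, 70, 76], dtype=float)

(qmax, kl), _ = curve_fit(langmuir, ce, qe, p0=[80.0, 0.02])
c0 = 250.0                      # initial concentration (mg/L)
rl = 1.0 / (1.0 + kl * c0)      # 0 < RL < 1 indicates favourable adsorption
rmsd = np.sqrt(np.mean((langmuir(ce, qmax, kl) - qe) ** 2))
print(f"qmax={qmax:.1f} mg/g, KL={kl:.3f} L/mg, RL={rl:.3f}, RMSD={rmsd:.2f}")
```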
Procedia PDF Downloads 226
7274 Hand Symbol Recognition Using Canny Edge Algorithm and Convolutional Neural Network
Authors: Harshit Mittal, Neeraj Garg
Abstract:
Hand symbol recognition is a pivotal component in the domain of computer vision, with far-reaching applications spanning sign language interpretation, human-computer interaction, and accessibility. This research paper presents an approach integrating the Canny edge algorithm and a convolutional neural network. The significance of this study lies in its potential to enhance communication and accessibility for individuals with hearing impairments or those engaged in gesture-based interactions with technology. In the experiment, the data were collected manually by the authors from a webcam using Python code; to enlarge the dataset, augmentation was applied to the original images, which makes the model more robust. Further, the dataset of about 6,000 colour images, distributed equally among 5 classes (i.e., 1, 2, 3, 4, 5), is pre-processed first to grayscale images and then with the Canny edge algorithm, with thresholds 1 and 2 set to 150 each. After the dataset is built, it is used to train a convolutional neural network model, giving an accuracy of 0.97834, precision of 0.97841, recall of 0.9783, and F1 score of 0.97832. For end users, a block of Python code opens a window for live hand symbol recognition. This research, at its core, seeks to advance the field of computer vision by providing an advanced perspective on hand sign recognition. By leveraging the capabilities of the Canny edge algorithm and convolutional neural networks, this study contributes to ongoing efforts to create more accurate, efficient, and accessible solutions for individuals with diverse communication needs.
Keywords: hand symbol recognition, computer vision, Canny edge algorithm, convolutional neural network
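A minimal sketch of the preprocessing and model stages (the Canny thresholds of 150/150 follow the abstract; the CNN layer sizes and the 128x128 input are assumptions):

```python
import cv2
import numpy as np
from tensorflow import keras

def preprocess(frame):
    """Grayscale, then Canny with both thresholds at 150, as in the paper."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 150, 150)
    return edges.astype("float32")[..., None] / 255.0

# A small CNN for the 5 hand-symbol classes.
model = keras.Sequential([
    keras.layers.Input(shape=(128, 128, 1)),
    keras.layers.Conv2D(32, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(64, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(5, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Smoke test on a random frame standing in for a webcam capture.
dummy = (np.random.rand(128, 128, 3) * 255).astype("uint8")
print(model.predict(preprocess(dummy)[None, ...]).shape)  # (1, 5)
```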
Procedia PDF Downloads 65
7273 Projection of Climate Change over the Upper Ping River Basin Using Regional Climate Model
Authors: Chakrit Chotamonsak, Eric P. Salathé Jr, Jiemjai Kreasuwan
Abstract:
Dynamical downscaling of the ECHAM5 global climate model is applied at 20-km horizontal resolution using the WRF regional climate model (WRF-ECHAM5) to project changes in temperature and precipitation over the Upper Ping River Basin from 1990–2009 to 2045–2064. The analysis found monthly changes in daily temperature and precipitation over the basin for 2045–2064 compared with 1990–2009 in all months, with the largest warming in December and the smallest in February. The future simulated precipitation is smaller than the baseline value in May, July and August, while increases in precipitation appear during the pre-monsoon (April) and late monsoon (September and October) periods. This means that the rainy season is likely to become longer and less intense. During the cool-dry season and the hot-dry season, precipitation increases substantially over the basin. In the annual cycle of changes in daily temperature and precipitation over the Upper Ping River Basin, the largest warming in the mean temperature is 1.93 °C in December and the smallest is 0.77 °C in February. The increase in nighttime temperature (minimum temperature) is larger than that of daytime temperature (maximum temperature) during the dry season, especially in wintertime (November to February), resulting in a decreased diurnal temperature range. The annual and seasonal changes in daily temperature and precipitation were averaged over the basin. The annual mean increases are 1.43, 1.54 and 1.30 °C for the mean, maximum and minimum temperature, respectively. The increase in maximum temperature is larger than that of minimum temperature in all months during the dry season (November to April).
Keywords: climate change, regional climate model, upper Ping River basin, WRF
Procedia PDF Downloads 383
7272 Ensemble Machine Learning Approach for Estimating Missing Data from CO₂ Time Series
Authors: Atbin Mahabbati, Jason Beringer, Matthias Leopold
Abstract:
To address the global challenges of climate and environmental change, there is a need to quantify and reduce uncertainties in environmental data, including observations of carbon, water, and energy. Global eddy covariance flux tower networks (FLUXNET) and their regional counterparts (i.e., OzFlux, AmeriFlux, China Flux, etc.) were established in the late 1990s and early 2000s to address this demand. Despite the capability of eddy covariance in validating process-modelling analyses, field surveys and remote sensing assessments, there are some serious concerns regarding the challenges associated with the technique, e.g., data gaps and uncertainties. To address these concerns, this research has developed an ensemble model to fill the data gaps of CO₂ flux, avoiding the limitations of using a single algorithm and therefore producing less error and lower uncertainty in the gap-filling process. In this study, the data of five towers in the OzFlux Network (Alice Springs Mulga, Calperum, Gingin, Howard Springs and Tumbarumba) during 2013 were used to develop an ensemble machine learning model, using five feedforward neural networks (FFNNs) with different structures combined with an eXtreme Gradient Boosting (XGB) algorithm. The former, the FFNNs, provided the primary estimations in the first layer, while the latter, XGB, used the outputs of the first layer as its input to provide the final estimations of CO₂ flux. The introduced model showed slight superiority over each single FFNN and over XGB when each of these two methods was used individually, with overall RMSEs of 2.64, 2.91, and 3.54 g C m⁻² yr⁻¹ respectively (3.54 provided by the best FFNN). The most significant improvement occurred in the estimation of the extreme diurnal values (during midday and sunrise), as well as nocturnal estimations, which are generally considered among the most challenging parts of CO₂ flux gap-filling. The towers, as well as seasonality, showed different levels of sensitivity to the improvements provided by the ensemble model. For instance, Tumbarumba showed more sensitivity than Calperum, where the differences between the ensemble model on the one hand and the FFNNs and XGB on the other were the smallest of all five sites. Besides, the performance difference between the ensemble model and its components used individually was more significant during the warm season (Jan, Feb, Mar, Oct, Nov, and Dec) than the cold season (Apr, May, Jun, Jul, Aug, and Sep) due to the higher photosynthesis of plants, which led to a larger range of CO₂ exchange. In conclusion, the introduced ensemble model slightly improved the accuracy of CO₂ flux gap-filling and the robustness of the model. Therefore, ensemble machine learning models are potentially capable of improving data estimation and regression outcomes when there seems to be no more room for improvement with a single algorithm.
Keywords: carbon flux, eddy covariance, extreme gradient boosting, gap-filling comparison, hybrid model, OzFlux network
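A minimal sketch of the stacking idea (five differently shaped FFNNs feeding an XGB meta-learner; in practice the level-1 predictions would be produced out-of-fold to avoid leakage, and the synthetic data here are only illustrative):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8))  # met drivers (radiation, temperature, VPD, ...)
y = 2 * X[:, 0] + np.sin(X[:, 1]) + 0.3 * rng.normal(size=2000)  # toy CO2 flux

# Layer 1: five FFNNs with different structures.
hidden = [(32,), (64,), (32, 16), (64, 32), (128,)]
ffnns = [MLPRegressor(hidden_layer_sizes=h, max_iter=500,
                      random_state=i).fit(X, y)
         for i, h in enumerate(hidden)]
level1 = np.column_stack([m.predict(X) for m in ffnns])

# Layer 2: XGB meta-learner stacked on the FFNN outputs.
meta = XGBRegressor(n_estimators=200, max_depth=3).fit(level1, y)
pred = meta.predict(level1)
print("RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```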
Procedia PDF Downloads 140
7271 Inversion of Electrical Resistivity Data: A Review
Authors: Shrey Sharma, Gunjan Kumar Verma
Abstract:
High-density electrical prospecting has been widely used in groundwater investigation, civil engineering and environmental surveys. For efficient inversion, the forward modeling routine, sensitivity calculation, and inversion algorithm must all be efficient. This paper attempts to provide a brief summary of the past and ongoing developments of the method. It includes reviews of the procedures used for data acquisition, processing and inversion of electrical resistivity data, based on a compilation of academic literature. In recent times there has been a significant evolution in field survey designs and data inversion techniques for the resistivity method. In general, 2-D inversion of resistivity data is carried out using the linearized least-squares method with a local optimization technique. Multi-electrode and multi-channel systems have made it possible to conduct large 2-D, 3-D and even 4-D surveys efficiently, resolving complex geological structures that were not accessible with traditional 1-D surveys. 3-D surveys play an increasingly important role in very complex areas, where 2-D models suffer from artifacts due to off-line structures. Continued developments in computation technology, as well as fast data inversion techniques and software, have made it possible to use optimization techniques to obtain model parameters to higher accuracy. A brief discussion of the limitations of the electrical resistivity method is also presented.
Keywords: inversion, limitations, optimization, resistivity
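A minimal sketch of one linearized least-squares (Gauss-Newton) update with Tikhonov smoothing, the workhorse of 2-D resistivity inversion (the sensitivity matrix and data misfit below are random stand-ins for quantities a forward-modeling routine would supply):

```python
import numpy as np

def gauss_newton_step(jacobian, residual, model, lam, smooth):
    """Solve (J^T J + lam * W^T W) dm = J^T r and update the model
    (a textbook regularized least-squares step)."""
    lhs = jacobian.T @ jacobian + lam * (smooth.T @ smooth)
    rhs = jacobian.T @ residual
    return model + np.linalg.solve(lhs, rhs)

n_data, n_cells = 120, 60
rng = np.random.default_rng(0)
J = rng.normal(size=(n_data, n_cells))      # sensitivity matrix (stand-in)
r = rng.normal(size=n_data)                 # data misfit (log-resistivity)
W = (np.eye(n_cells, k=1) - np.eye(n_cells))[:-1]  # first-difference roughness
m = np.zeros(n_cells)

m_new = gauss_newton_step(J, r, m, lam=0.1, smooth=W)
print("Model update norm:", np.linalg.norm(m_new - m))
```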
Procedia PDF Downloads 365
7270 Anthropomorphism in the Primate Mind-Reading Debate: A Critique of Sober's Justification Argument
Authors: Boyun Lee
Abstract:
This study aims to discuss whether the anthropomorphism some scientists tend to use in cross-species comparison can be justified epistemologically, especially in the primate mind-reading debate. Concretely, this study critically analyzes Elliott Sober's argument about the mind-reading hypothesis (MRH), an anthropomorphic hypothesis which states that nonhuman primates (e.g., chimpanzees) are mind-readers like humans. Although many scientists consider anthropomorphism an error and regard choosing an anthropomorphic hypothesis like MRH without any definite evidence as invalid, Sober argues that anthropomorphism is supported by cladistic parsimony, which suggests choosing the simplest hypothesis postulating the minimum number of evolutionary changes, and that it can be justified epistemologically in the mind-reading debate. However, his argument has several problems. First, Reichenbach's theorem, which Sober uses in the process of showing that MRH has a higher likelihood than its competing hypothesis, the behavior-reading hypothesis (BRH), does not fit the context of inferring evolutionary relationships. Second, the phylogenetic tree Sober supports is only one of the possible scenarios of MRH, and even setting this problem aside, it is difficult to prove that the possibility that nonhuman primate species and humans share the mind-reading ability is higher than the possibility of the other case, considering how evolution occurs. Consequently, it seems hard to justify the anthropomorphism of MRH under Sober's argument. Some scientists and philosophers say that anthropomorphism sometimes helps in observing interesting phenomena or in making hypotheses in comparative biology. Nonetheless, we cannot conclude that it provides answers about why and how the interesting phenomena appear, or about which of the hypotheses is better, at least in the mind-reading debate, under the current state of evidence.
Keywords: anthropomorphism, cladistic parsimony, comparative biology, mind-reading debate
Procedia PDF Downloads 172
7269 Computer-Aided Detection of Liver and Spleen from CT Scans using Watershed Algorithm
Authors: Belgherbi Aicha, Bessaid Abdelhafid
Abstract:
In recent years a great deal of research work has been devoted to the development of semi-automatic and automatic techniques for the analysis of abdominal CT images. The first and fundamental step in all these studies is the semi-automatic liver and spleen segmentation, which is still an open problem. In this paper, a semi-automatic liver and spleen segmentation method based on mathematical morphology and the watershed algorithm is proposed. Our algorithm runs in two parts. In the first, we determine the region of interest by applying morphological operators to extract the liver and spleen. The second step consists of improving the quality of the image gradient. In this step, we propose a method for improving the image gradient to reduce the over-segmentation problem, by applying spatial filters followed by morphological filters. Thereafter we proceed to the segmentation of the liver and spleen. The aim of this work is to develop a semi-automatic segmentation method for the liver and spleen based on the watershed algorithm, to improve the accuracy and robustness of the segmentation, and to evaluate the new semi-automatic approach against manual liver segmentation. To validate the proposed segmentation technique, we tested it on several images. Our segmentation approach is evaluated by comparing our results with manual segmentation performed by an expert. The experimental results are described in the last part of this work. The system has been evaluated by computing the sensitivity and specificity between the semi-automatically segmented (liver and spleen) contours and the contours manually traced by radiological experts. Liver segmentation achieved a sensitivity and specificity of sens(Liver) = 96% and specif(Liver) = 99%, respectively. Spleen segmentation achieved similar, promising results: sens(Spleen) = 95% and specif(Spleen) = 99%.
Keywords: CT images, liver and spleen segmentation, anisotropic diffusion filter, morphological filters, watershed algorithm
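A minimal sketch of the gradient-plus-morphology watershed idea on a toy image (skimage-based; the authors' actual pipeline, filters, and marker selection differ):

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import filters, morphology, segmentation

# Toy 2-D "CT slice": two bright organs on a dark background.
img = np.zeros((128, 128))
img[30:70, 20:60] = 1.0      # stand-in liver
img[80:110, 80:110] = 0.8    # stand-in spleen
img += 0.1 * np.random.default_rng(0).normal(size=img.shape)

smoothed = filters.gaussian(img, sigma=2)    # spatial filter
gradient = filters.sobel(smoothed)           # image gradient
# Morphological opening of the gradient suppresses spurious minima,
# one way to curb over-segmentation before flooding.
gradient = morphology.opening(gradient, morphology.disk(2))

markers = ndi.label(smoothed > 0.4)[0]       # interior seeds per organ
markers[0, 0] = markers.max() + 1            # background seed
labels = segmentation.watershed(gradient, markers)
print("Regions found:", len(np.unique(labels)))
```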
Procedia PDF Downloads 325
7268 Electricity Price Forecasting: A Comparative Analysis with Shallow-ANN and DNN
Authors: Fazıl Gökgöz, Fahrettin Filiz
Abstract:
Electricity prices have sophisticated features such as high volatility, nonlinearity and high frequency, which make forecasting quite difficult. Electricity prices have a volatile but non-random character, so it is possible to identify patterns based on historical data. Intelligent decision-making requires accurate price forecasting for market traders, retailers, and generation companies. So far, many shallow ANN (artificial neural network) models have been published in the literature and have shown adequate forecasting results. In recent years, neural networks with many hidden layers, referred to as DNNs (deep neural networks), have been used in the machine learning community. The goal of this study is to investigate the electricity price forecasting performance of shallow ANN and DNN models for the Turkish day-ahead electricity market. The forecasting accuracy of the models has been evaluated with publicly available data from the Turkish day-ahead electricity market. Both the shallow ANN and DNN approaches give successful results in forecasting problems. Historical load, price and weather temperature data are used as the input variables for the models. The data set includes power consumption measurements gathered between January 2016 and December 2017 with one-hour resolution. In this regard, forecasting studies have been carried out comparatively with shallow ANN and DNN models for the Turkish electricity market in the related time period. The main contribution of this study is the investigation of different shallow ANN and DNN models in the field of electricity price forecasting. All models are compared with regard to their MAE (mean absolute error) and MSE (mean square error) results. The DNN models give better forecasting performance than the shallow ANNs. The best five MAE results for the DNN models are 0.346, 0.372, 0.392, 0.402 and 0.409.
Keywords: deep learning, artificial neural networks, energy price forecasting, Turkey
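A minimal sketch of the shallow-versus-deep comparison (sklearn MLPs; the synthetic features stand in for historical load, price, and temperature, and the layer sizes are assumptions, not the paper's architectures):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Toy hourly features: [load, lagged price, temperature] -> day-ahead price.
X = rng.normal(size=(17520, 3))  # roughly two years of hourly rows
y = 2 * X[:, 0] + np.sin(X[:, 1]) - 0.5 * X[:, 2] + 0.2 * rng.normal(size=17520)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "shallow-ANN": MLPRegressor(hidden_layer_sizes=(32,), max_iter=300),
    "DNN": MLPRegressor(hidden_layer_sizes=(64, 64, 32), max_iter=300),
}
for name, m in models.items():
    pred = m.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: MAE={mean_absolute_error(y_te, pred):.3f}, "
          f"MSE={mean_squared_error(y_te, pred):.3f}")
```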
Procedia PDF Downloads 292
7267 Fast Fourier Transform-Based Steganalysis of Covert Communications over Streaming Media
Authors: Jinghui Peng, Shanyu Tang, Jia Li
Abstract:
Steganalysis seeks to detect the presence of secret data embedded in cover objects, and there is an imminent demand to detect hidden messages in streaming media. This paper shows how a steganalysis algorithm based on the Fast Fourier Transform (FFT) can be used to detect the existence of secret data embedded in streaming media. The proposed algorithm uses machine parameter characteristics and a network sniffer to determine whether the Internet traffic contains streaming channels. The detected streaming data are then transferred from the time domain to the frequency domain through the FFT. The distributions of power spectra in the frequency domain between original VoIP streams and stego VoIP streams are compared in turn using the t-test, achieving a p-value of 7.5686E-176, which is below the threshold. The results indicate that the proposed FFT-based steganalysis algorithm is effective in detecting secret data embedded in VoIP streaming media.
Keywords: steganalysis, security, Fast Fourier Transform, streaming media
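A minimal sketch of the detection statistic (the frame length, the embedding perturbation, and the comparison below are assumptions; the idea is to compare cover and stego power spectra with a t-test):

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
frame = 512
cover = rng.normal(size=(200, frame))                 # stand-in VoIP frames
stego = cover.copy()
stego += 0.05 * rng.integers(0, 2, size=stego.shape)  # LSB-like perturbation

def power_spectrum(frames):
    """Mean power per frequency bin via a one-sided FFT."""
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    return spec.mean(axis=0)

t_stat, p_value = ttest_ind(power_spectrum(cover), power_spectrum(stego))
print(f"t={t_stat:.3f}, p={p_value:.3e}")  # a small p flags a stego stream
```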
Procedia PDF Downloads 147