Search results for: packet loss probability estimation
6019 The Study of Periodontal Health Status in Menopausal Women with Osteoporosis Referred to Rheumatology Clinics in Yazd and Healthy People
Authors: Mahboobe Daneshvar
Abstract:
Introduction: Clinical studies on the effect of systemic conditions on periodontal diseases have shown that some systemic deficiencies may provide grounds for the onset of periodontal diseases. One of these systemic problems is osteoporosis, which may be a risk factor for the onset and exacerbation of periodontitis. This study aims to evaluate periodontal indices in osteoporotic menopausal women and compare them with healthy controls. Materials and Methods: In this case-control study, participants included 45-75-year-old menopausal women referred to rheumatology wards of the Khatamolanbia Clinic and Shahid Sadoughi Hospital in Yazd; their bone density was determined by DEXA scan with imaging of the femoral-lumbar bone. Thirty patients with osteoporosis and 30 subjects with normal BMD were selected, and informed consent was obtained for participation in the study. During the clinical examinations, tooth loss (TL), plaque index (PI), gingival recession, pocket probing depth (PPD), clinical attachment loss (CAL), and tooth mobility (TM) were measured to evaluate the periodontal status. These clinical examinations were performed using catheter, mirror, and probe. Results: There was no significant difference in PPD, PI, TM, gingival recession, or CAL between the case and control groups (P-value > 0.05); that is, osteoporosis had no effect on these factors, which were almost the same in both the healthy and patient groups. For missing teeth, the mean was 22.173% of the total teeth in the case group and 18.583% of the total teeth in the control group, a significant difference between the two groups (P-value = 0.025).
Conclusion: Since periodontal disease is multifactorial and microbial plaque is its main cause, osteoporosis is considered a predisposing factor in the exacerbation or persistence of periodontal disease. In patients with osteoporosis, pathological fractures, hormonal changes, and aging typically lead to reduced physical activity and affect oral health, which leads to the manifestation of periodontal disease. The disease also increases tooth loss by changing the shape and structure of bone trabeculae and weakening them. Osteoporosis does not seem to be a deterministic factor in the incidence of periodontal disease, since it affects bone quality rather than bone quantity.
Keywords: plaque index, osteoporosis, tooth mobility, periodontal pocket
Procedia PDF Downloads 72
6018 Knowledge Loss Risk Assessment for Departing Employees: An Exploratory Study
Authors: Muhammad Saleem Ullah Khan Sumbal, Eric Tsui, Ricky Cheong, Eric See To
Abstract:
Organizations are exposed to the threat of valuable knowledge loss when employees leave, whether due to retirement, resignation, job change, or disability and death. Due to changing economic conditions, globalization, and an aging workforce, organizations face challenges in retaining valuable knowledge. On the one hand, a large number of employees are going to retire; on the other hand, the younger generation does not want to work in one company for a long time, and there is an increasing trend of frequent job changes among the new generation. Because of these factors, organizations need to make sure that they capture the knowledge of an employee before he or she walks out of the door. The first step in this process is to know what type of knowledge the employee possesses and whether this knowledge is important for the organization. The literature reveals that despite the serious consequences of knowledge loss in terms of organizational productivity and competitive advantage, there has not been much work done in the area of knowledge loss assessment of departing employees. An important step in the knowledge retention process is to determine the critical ‘at risk’ knowledge. Thus, knowledge loss risk assessment is a process by which organizations can gauge the importance of the knowledge of a departing employee. The purpose of this study is to explore knowledge loss risk assessment through a qualitative study in the oil and gas sector. By engaging in dialogues with managers and executives of the organizations through in-depth interviews and adopting a grounded methodology approach, the research will explore: i) Are there any measures adopted by organizations to assess the risk of knowledge loss from departing employees? ii) Which factors are crucial for knowledge loss assessment in the organizations? iii) How can employees be prioritized for knowledge retention according to their criticality?
The grounded theory approach is used when little knowledge is available in the area under research; new knowledge about the topic is generated through in-depth exploration using methods such as interviews and a systematic approach to data analysis. The outcome of the study will be a model of knowledge loss risk built on factors such as the likelihood of knowledge loss, the consequence/impact of knowledge loss, and the quality of the knowledge of departing employees. Initial results show that knowledge loss assessment is quite crucial for organizations and helps in determining what types of knowledge employees possess, e.g., organizational knowledge, subject matter expertise, or relationship knowledge. Based on that, it can be assessed which employees are more important to the organization and how to prioritize the knowledge retention process for departing employees.
Keywords: knowledge loss, risk assessment, departing employees, Hong Kong organizations
Procedia PDF Downloads 408
6017 An Energy Holes Avoidance Routing Protocol for Underwater Wireless Sensor Networks
Authors: A. Khan, H. Mahmood
Abstract:
In Underwater Wireless Sensor Networks (UWSNs), sensor nodes close to the water surface (the final destination) are often preferred for selection as forwarders. However, their frequent selection depletes their limited battery power. In consequence, these nodes die during the early stage of network operation and create energy holes where no forwarders are available for packet forwarding. These holes severely affect network throughput, and system performance significantly degrades. In this paper, a routing protocol is proposed to avoid energy holes during packet forwarding. The proposed protocol does not require the conventional position information (localization) of holes to avoid them; localization is cumbersome, energy inefficient, and difficult to achieve in an underwater environment where sensor nodes change their positions with water currents. Forwarders with the lowest water pressure level and the maximum number of neighbors are preferred to forward packets. Together, these two parameters minimize packet drop by following the paths where the most forwarders are available. To avoid interference along the paths with the maximum forwarders, a packet holding time is defined for each forwarder. Simulation results reveal the superior performance of the proposed scheme over the counterpart technique.
Keywords: energy holes, interference, routing, underwater
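The forwarder-selection rule described above can be sketched as follows. The data model (dicts with `pressure` and `neighbors` fields) and the linear holding-time rule are illustrative assumptions, not the paper's exact formulas.

```python
# Sketch: prefer the candidate with the lowest water-pressure level (closest
# to the surface) and, among ties, the largest number of neighbors.

def select_forwarder(candidates):
    """candidates: list of dicts with 'id', 'pressure', 'neighbors'.
    Returns the id of the preferred forwarder."""
    best = min(candidates, key=lambda c: (c["pressure"], -c["neighbors"]))
    return best["id"]

def holding_time(rank, slot=0.05):
    """Simple per-forwarder packet holding time to avoid interference: the
    rank-th ranked candidate waits rank * slot seconds (an assumed rule)."""
    return rank * slot
```

In this sketch a lower-ranked forwarder overhears the higher-ranked one transmitting during its holding time and suppresses its own copy of the packet.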
Procedia PDF Downloads 408
6016 Recent Advancement in Fetal Electrocardiogram Extraction
Authors: Savita, Anurag Sharma, Harsukhpreet Singh
Abstract:
Fetal electrocardiography (fECG) is a widely used technique to assess fetal well-being, identify changes that might be associated with problems during pregnancy, and evaluate the health and condition of the fetus. Various techniques have been employed to extract the fECG from the abdominal signal. This paper describes a facile approach for the estimation of the fECG known as the Adaptive Comb Filter (ACF). The ACF adjusts itself to temporal variations in the fundamental frequency, which makes it suitable for estimating the quasi-periodic ECG signal.
Keywords: aECG, ACF, fECG, mECG
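The core idea of a comb filter (here in a fixed-period, non-adaptive simplification; the ACF additionally tracks the fundamental frequency over time) is that subtracting the signal delayed by one period cancels any component exactly periodic with that period, such as the dominant maternal ECG:

```python
def comb_filter_remove(x, period):
    """Fixed-period comb filter y[n] = x[n] - x[n - period]: cancels any
    component that is exactly periodic with the given period (in samples).
    The first `period` outputs are zero-padded."""
    return [x[n] - x[n - period] if n >= period else 0.0
            for n in range(len(x))]
```

An adaptive version would re-estimate `period` continuously from the signal's fundamental frequency rather than keeping it fixed.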
Procedia PDF Downloads 408
6015 Methods for Restricting Unwanted Access on the Networks Using Firewall
Authors: Bhagwant Singh, Sikander Singh Cheema
Abstract:
This paper examines firewall mechanisms routinely implemented for network security in depth. A firewall cannot protect against all the hazards of unauthorized networks; consequently, many kinds of infrastructure are employed to establish a secure network. Firewall strategies have already been the subject of significant analysis. This study's primary purpose is to avoid unwanted connections by combining the capability of the firewall with additional firewall mechanisms, including packet filtering, NAT, VPNs, and backdoor solutions. Studies of firewall potential and combined approaches remain insufficient. The research team's goal is to build a safe network by integrating firewall strength and firewall methods, and the study's findings indicate that the recommended concept can form a reliable network. This study examines the characteristics of network security and its primary dangers, synthesizes existing domestic and foreign firewall technologies, and discusses the theories, benefits, and disadvantages of different firewalls. Through synthesis and comparison of various techniques, as well as an in-depth examination of the primary factors that affect firewall effectiveness, this study investigated the current application of firewall technology in computer network security and then introduced a new technique named "tight coupling firewall." Eventually, the article discusses the current state of firewall technology as well as the direction in which it is developing.
Keywords: firewall strategies, firewall potential, packet filtering, NAT, VPN, proxy services, firewall techniques
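Stateless packet filtering, one of the mechanisms discussed above, can be illustrated with a toy first-match rule table under a default-deny policy (the rule format is invented for illustration, not taken from any firewall product):

```python
# Illustrative packet filter: rules are matched first-to-last and the first
# match decides the verdict; unmatched packets fall through to default-deny.

RULES = [
    {"action": "deny",  "proto": "tcp", "port": 23},   # block telnet
    {"action": "allow", "proto": "tcp", "port": 443},  # allow HTTPS
    {"action": "allow", "proto": "udp", "port": 53},   # allow DNS
]
DEFAULT_ACTION = "deny"  # default-deny policy

def filter_packet(packet, rules=RULES):
    """packet: dict with 'proto' and 'port'. Returns 'allow' or 'deny'."""
    for rule in rules:
        if packet["proto"] == rule["proto"] and packet["port"] == rule["port"]:
            return rule["action"]
    return DEFAULT_ACTION
```

Real filters also match on source/destination address and connection state, but the first-match-wins evaluation order is the same.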
Procedia PDF Downloads 101
6014 Parameter Estimation of Gumbel Distribution with Maximum-Likelihood Based on Broyden Fletcher Goldfarb Shanno Quasi-Newton
Authors: Dewi Retno Sari Saputro, Purnami Widyaningsih, Hendrika Handayani
Abstract:
Extreme data in an observation can occur due to unusual circumstances. Such data can provide important information that cannot be provided by other data, so their existence needs to be further investigated. One method for obtaining extreme data is the block maxima method, and the distribution of extreme data sets taken with this method is called the extreme value distribution: the Gumbel distribution with two parameters. Parameter estimation of the Gumbel distribution with the maximum likelihood (ML) method has no closed-form solution, so a numerical approach is necessary. The purpose of this study was to estimate the parameters of the Gumbel distribution with the quasi-Newton BFGS method. The quasi-Newton BFGS method is a numerical method for unconstrained nonlinear optimization, so it can be used for parameter estimation of the Gumbel distribution, whose distribution function has a double exponential form. The quasi-Newton BFGS method is a development of the Newton method. The Newton method uses the second derivative to calculate the parameter value changes in each iteration; it is then modified with the addition of a step length to provide a guarantee of convergence when the second derivative requires complex calculations. In the quasi-Newton BFGS method, the Newton method is modified by updating both derivatives in each iteration. The parameter estimation of the Gumbel distribution by this numerical approach is done by calculating the parameter values that maximize the likelihood function, which requires the gradient vector and the Hessian matrix. This research combines theory and application, drawing on several journals and textbooks. The results of this study are the quasi-Newton BFGS algorithm and the estimation of the Gumbel distribution parameters.
The estimation method is then applied to daily rainfall data in Purworejo District to estimate the distribution parameters. The results indicate that the high rainfall that occurred in Purworejo District decreased in intensity and that the range of rainfall that occurred decreased.
Keywords: parameter estimation, Gumbel distribution, maximum likelihood, Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton
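The approach above can be sketched end to end: minimize the Gumbel negative log-likelihood with a minimal hand-rolled BFGS (inverse-Hessian update plus Armijo backtracking). This is an illustrative implementation, not the authors' code; the scale parameter is optimized as log(beta) to keep it positive, and moment estimates seed the search.

```python
import math

def safe_exp(v):
    return math.exp(min(v, 700.0))  # guard against overflow during line search

def gumbel_nll_and_grad(params, xs):
    """Negative log-likelihood of the Gumbel distribution and its gradient
    w.r.t. (mu, log_beta), where beta = exp(log_beta) > 0."""
    mu, log_beta = params
    beta = safe_exp(log_beta)
    n = len(xs)
    zs = [(x - mu) / beta for x in xs]
    ez = [safe_exp(-z) for z in zs]
    nll = n * log_beta + sum(zs) + sum(ez)
    d_mu = (sum(ez) - n) / beta
    d_beta = n / beta - sum(zs) / beta + sum(z * e for z, e in zip(zs, ez)) / beta
    return nll, [d_mu, d_beta * beta]  # chain rule for the log_beta coordinate

def bfgs(f, x0, tol=1e-8, max_iter=100):
    """Minimal BFGS with Armijo backtracking; f returns (value, gradient)."""
    n = len(x0)
    H = [[float(i == j) for j in range(n)] for i in range(n)]  # inverse Hessian approx.
    x = list(x0)
    fx, g = f(x)
    for _ in range(max_iter):
        p = [-sum(H[i][j] * g[j] for j in range(n)) for i in range(n)]  # search direction
        gTp = sum(gi * pi for gi, pi in zip(g, p))
        t = 1.0
        while True:  # backtracking line search (Armijo condition)
            x_new = [xi + t * pi for xi, pi in zip(x, p)]
            f_new, g_new = f(x_new)
            if f_new <= fx + 1e-4 * t * gTp or t < 1e-12:
                break
            t *= 0.5
        s = [t * pi for pi in p]
        y = [gn - gi for gn, gi in zip(g_new, g)]
        sy = sum(si * yi for si, yi in zip(s, y))
        if sy > 1e-12:  # curvature condition: keep H positive definite
            rho = 1.0 / sy
            Hy = [sum(H[i][j] * y[j] for j in range(n)) for i in range(n)]
            yHy = sum(y[i] * Hy[i] for i in range(n))
            for i in range(n):
                for j in range(n):
                    H[i][j] += ((sy + yHy) * rho * rho * s[i] * s[j]
                                - rho * (Hy[i] * s[j] + s[i] * Hy[j]))
        x, fx, g = x_new, f_new, g_new
        if max(abs(gi) for gi in g) < tol:
            break
    return x

def fit_gumbel(xs):
    """ML fit of (mu, beta), seeded by the method-of-moments estimates."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    beta0 = math.sqrt(6.0 * var) / math.pi
    mu0 = mean - 0.5772 * beta0  # Euler-Mascheroni correction
    mu, log_beta = bfgs(lambda p: gumbel_nll_and_grad(p, xs), [mu0, math.log(beta0)])
    return mu, math.exp(log_beta)
```

In practice one would call an optimization library's BFGS routine; the hand-rolled version is shown only to make the update formulas concrete.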
Procedia PDF Downloads 323
6013 Improved Rare Species Identification Using Focal Loss Based Deep Learning Models
Authors: Chad Goldsworthy, B. Rajeswari Matam
Abstract:
The use of deep learning for species identification in camera trap images has revolutionised our ability to study, conserve, and monitor species in a highly efficient and unobtrusive manner, with state-of-the-art models achieving accuracies surpassing those of manual human classification. The high imbalance of camera trap datasets, however, results in poor accuracies for minority (rare or endangered) species due to their relative insignificance to the overall model accuracy. This paper investigates the use of Focal Loss, in comparison to the traditional Cross Entropy loss function, to improve the identification of minority species in the "255 Bird Species" dataset from Kaggle. The results show that, although Focal Loss slightly decreased the accuracy on the majority species, it increased the F1-score by 0.06 and improved the identification of the bottom two, five, and ten (minority) species by 37.5%, 15.7%, and 10.8%, respectively, as well as improving the overall accuracy by 2.96%.
Keywords: convolutional neural networks, data imbalance, deep learning, focal loss, species classification, wildlife conservation
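The mechanism behind the comparison above is visible in the scalar form of the two losses: focal loss multiplies cross entropy by a modulating factor (1 - p_t)^gamma, so confident, easy (majority-class) examples are down-weighted and rare classes dominate the gradient. A minimal binary sketch, using the commonly cited defaults gamma = 2 and alpha = 0.25:

```python
import math

def cross_entropy(p, y):
    """Binary cross entropy for one prediction: p = P(class 1), y in {0, 1}."""
    p_t = p if y == 1 else 1.0 - p
    return -math.log(p_t)

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss: the (1 - p_t)^gamma factor shrinks the loss of
    well-classified examples; alpha re-weights the positive class."""
    p_t = p if y == 1 else 1.0 - p
    a_t = alpha if y == 1 else 1.0 - alpha
    return -a_t * (1.0 - p_t) ** gamma * math.log(p_t)
```

For a confident correct prediction with p_t = 0.9 and gamma = 2, the modulating factor is 0.01, so the example contributes 100 times less loss than under plain cross entropy.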
Procedia PDF Downloads 190
6012 A Transformer-Based Approach for Multi-Human 3D Pose Estimation Using Color and Depth Images
Authors: Qiang Wang, Hongyang Yu
Abstract:
Multi-human 3D pose estimation is a challenging task in computer vision, which aims to recover the 3D joint locations of multiple people from multi-view images. In contrast to traditional methods, which typically use only color (RGB) images as input, our approach utilizes both the color and depth (D) information contained in RGB-D images. We also employ a transformer-based model as the backbone of our approach, which is able to capture long-range dependencies and has been shown to perform well on various sequence modeling tasks. Our method is trained and tested on the Carnegie Mellon University (CMU) Panoptic dataset, which contains a diverse set of indoor and outdoor scenes with multiple people in varying poses and clothing. We evaluate the performance of our model on the standard 3D pose estimation metric of mean per-joint position error (MPJPE). Our results show that the transformer-based approach outperforms traditional methods and achieves competitive results on the CMU Panoptic dataset. We also perform an ablation study to understand the impact of different design choices on the overall performance of the model. In summary, our work demonstrates the effectiveness of using a transformer-based approach with RGB-D images for multi-human 3D pose estimation and has potential applications in real-world scenarios such as human-computer interaction, robotics, and augmented reality.
Keywords: multi-human 3D pose estimation, RGB-D images, transformer, 3D joint locations
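The evaluation metric mentioned above, MPJPE, is simply the mean Euclidean distance between predicted and ground-truth joint positions; a minimal sketch (per person, joints as (x, y, z) tuples):

```python
import math

def mpjpe(pred, gt):
    """Mean per-joint position error: average Euclidean distance between
    predicted and ground-truth 3D joint locations, in the input's units
    (typically millimetres)."""
    assert len(pred) == len(gt) and pred
    total = 0.0
    for (px, py, pz), (gx, gy, gz) in zip(pred, gt):
        total += math.sqrt((px - gx) ** 2 + (py - gy) ** 2 + (pz - gz) ** 2)
    return total / len(pred)
```

Reported MPJPE is usually averaged further over all people and frames in the test set.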
Procedia PDF Downloads 79
6011 Presentation of a Mix Algorithm for Estimating the Battery State of Charge Using Kalman Filter and Neural Networks
Authors: Amin Sedighfar, M. R. Moniri
Abstract:
Determination of the state of charge (SOC) is becoming an increasingly important issue in all applications that include a battery. In fact, estimation of the SOC is a fundamental need for the battery, which is the most important energy storage component in hybrid electric vehicles (HEVs), smart grid systems, drones, UPS, and so on. For those applications, the SOC estimation algorithm is expected to be precise and easy to implement. This paper presents an online method for the estimation of the SOC of valve-regulated lead acid (VRLA) batteries. The proposed method uses the well-known Kalman filter (KF) and neural networks (NNs), and all of the simulations were done with MATLAB software. The NN is trained offline using data collected from the battery discharging process. A generic cell model is used, whose underlying dynamic behavior comprises two capacitors (bulk and surface) and three resistors (terminal, surface, and end); the SOC determined from the voltage represents the bulk capacitor. The aim of this work is to compare the performance of conventional integration-based SOC estimation methods with the mixed algorithm. Moreover, by including the effect of temperature, the final result becomes more accurate.
Keywords: Kalman filter, neural networks, state of charge, VRLA battery
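The Kalman-filter half of the mixed algorithm can be illustrated with a deliberately simplified scalar filter (not the paper's bulk/surface capacitor model): coulomb counting provides the prediction, and a hypothetical voltage-derived SOC reading provides the measurement update.

```python
# Toy 1-D Kalman filter step for SOC tracking. State: SOC in [0, 1].
# Prediction = coulomb counting; z = a voltage-based SOC measurement
# (in the paper's scheme, such a reading could come from the trained NN).

def kf_soc_step(soc, P, current_a, dt_s, capacity_ah, z, Q=1e-7, R=1e-3):
    # Predict: discharge current drains SOC (coulomb counting).
    soc_pred = soc - current_a * dt_s / (capacity_ah * 3600.0)
    P_pred = P + Q                      # process noise inflates uncertainty
    # Update: blend prediction and measurement by their uncertainties.
    K = P_pred / (P_pred + R)           # Kalman gain in (0, 1)
    soc_new = soc_pred + K * (z - soc_pred)
    P_new = (1.0 - K) * P_pred
    return soc_new, P_new
```

The gain K automatically trusts coulomb counting when the measurement is noisy (large R) and the measurement when the model drifts (large Q).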
Procedia PDF Downloads 192
6010 Pure Economic Loss: A Trouble Child
Authors: Isabel Mousinho de Figueiredo
Abstract:
Pure economic loss can be brought into the 21st century and become a useful tool to keep the tort of negligence within reasonable limits, provided the concept is minutely reexamined. The term came about when wealth was physical, and Law wanted to be a modern science. As a tool to draw the line, it leads to satisfactory decisions in most cases, but needlessly creates distressing conundrums in others, and these are the ones parties bother to litigate about. Economic loss is deemed to be pure based on a blind negative criterion of physical harm, that inadvertently smelts vastly disparate problems into an indiscernible mass, with arbitrary outcomes. These shortcomings are usually dismissed as minor byproducts, for the lack of a better formula. Law could instead stick to the sound paradigms of the intended rule, and be more specific in identifying the losses deserving of compensation. This would provide a better service to Bench and Bar, and effectively assist everyone navigating the many challenges of Accident Law.
Keywords: accident law, comparative tort law, negligence, pure economic loss
Procedia PDF Downloads 116
6009 Synthesis and Electromagnetic Property of Li₀.₃₅Zn₀.₃Fe₂.₃₅O₄ Grafted with Polyaniline Fibers
Authors: Jintang Zhou, Zhengjun Yao, Tiantian Yao
Abstract:
Li₀.₃₅Zn₀.₃Fe₂.₃₅O₄ (LZFO) grafted with polyaniline (PANI) fibers was synthesized by in situ polymerization. FTIR, XRD, SEM, and a vector network analyzer were used to investigate the chemical composition, micro-morphology, electromagnetic properties, and microwave absorbing properties of the composite. The results show that PANI fibers were grafted onto the surfaces of LZFO particles. The reflection loss exceeds 10 dB in the frequency ranges from 2.5 to 5 GHz and from 15 to 17 GHz, and the maximum reflection loss reaches -33 dB at 15.9 GHz. The enhanced microwave absorption properties of LZFO/PANI-fiber composites are mainly ascribed to the combined effect of dielectric loss and magnetic loss and the improved impedance matching.
Keywords: Li₀.₃₅Zn₀.₃Fe₂.₃₅O₄, polyaniline, electromagnetic properties, microwave absorbing properties
Procedia PDF Downloads 430
6008 Mutations in the GJB2 Gene Are the Cause of an Important Number of Non-Syndromic Deafness Cases
Authors: Habib Onsori, Somayeh Akrami, Mohammad Rahmati
Abstract:
Deafness is the most common sensory disorder, with a frequency of 1/1000 in many populations. Mutations in the GJB2 (CX26) gene at the DFNB1 locus on chromosome 13q12 are associated with congenital hearing loss. Approximately 80% of congenital hearing loss cases are recessively inherited and 15% dominantly inherited. Mutations of the GJB2 gene, encoding the gap junction protein connexin 26 (Cx26), are the most common cause of hereditary congenital hearing loss in many countries. This report presents two different mutations from Iranian patients with bilateral hearing loss. DNA studies of the GJB2 gene were performed by PCR and sequencing methods. In one patient, direct sequencing of the gene showed a heterozygous T→C transition at nucleotide 604, resulting in a cysteine-to-arginine amino acid substitution at codon 202 (C202R) in the fourth transmembrane domain (TM4) of the protein. The analyses indicate that the C202R mutation appeared de novo in the proband, with a possible dominant effect (GenBank: KF 638275). In the other patient, DNA sequencing revealed a compound heterozygous mutation (35delG, 363delC) in the Cx26 gene that is strongly associated with congenital non-syndromic hearing loss (NSHL). Screening for these mutations in individuals with hearing loss referred to genetic counseling centers before marriage and/or pregnancy is therefore recommended.
Keywords: CX26, deafness, GJB2, mutation
Procedia PDF Downloads 487
6007 Improved Estimation Strategies of Sensitive Characteristics Using Scrambled Response Techniques in Successive Sampling
Authors: S. Suman, G. N. Singh
Abstract:
This research work analyses the consequences of the scrambled response technique for estimating the current population mean in two-occasion successive sampling when the characteristic of interest is sensitive in nature. Generalized estimation procedures are proposed using sensitive auxiliary variables under additive and multiplicative scramble models. The properties of the resultant estimators are examined in depth. Simulation as well as empirical studies are carried out to evaluate the performance of the proposed estimators with respect to other competent estimators. The results of our studies suggest that the proposed estimation procedures are highly effective in the presence of non-response. They also suggest that the additive scrambled response model is the better choice from the perspective of survey cost and respondent privacy.
Keywords: scrambled response, sensitive characteristic, successive sampling, optimum replacement strategy
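The additive scramble model referenced above works as follows: each respondent reports y_i + s_i, where s_i is drawn from a scrambling distribution with known mean, so the analyst recovers the population mean without ever seeing an individual's true y_i. A minimal single-occasion sketch (the paper's two-occasion successive-sampling estimators are more elaborate):

```python
# Additive scrambled response: reported_i = y_i + s_i, with E[s] known.
# Since E[reported] = E[y] + E[s], mean(reported) - E[s] is unbiased for E[y].

def estimate_mean_additive(reported, scramble_mean):
    """Unbiased estimator of the sensitive mean under additive scrambling."""
    return sum(reported) / len(reported) - scramble_mean
```

Privacy comes from the fact that any individual report is the true value plus unobserved noise; the trade-off is extra variance in the estimator, which is why the choice between additive and multiplicative models matters.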
Procedia PDF Downloads 177
6006 Improving Flash Flood Forecasting with a Bayesian Probabilistic Approach: A Case Study on the Posina Basin in Italy
Authors: Zviad Ghadua, Biswa Bhattacharya
Abstract:
The Flash Flood Guidance (FFG) provides the rainfall amount of a given duration necessary to cause flooding. The approach is based on the development of rainfall-runoff curves, which help find the rainfall amount that would cause flooding. An alternative approach, mostly tested on Italian Alpine catchments, is based on determining threshold discharges from past events and on checking whether an oncoming flood exceeds critical discharge thresholds found beforehand. Both approaches suffer from large uncertainties in forecasting flash floods as, due to the simplistic approach followed, the same rainfall amount may or may not cause flooding. This uncertainty raises the question of whether a probabilistic model is preferable to a deterministic one in forecasting flash floods. We propose the use of a Bayesian probabilistic approach in flash flood forecasting. A prior probability of flooding is derived from historical data. Additional information, such as the antecedent moisture condition (AMC) and rainfall amounts over given thresholds, is used to compute the likelihood of observing these conditions given that a flash flood has occurred. Finally, the posterior probability of flooding is computed from the prior probability and the likelihood. The variation of the computed posterior probability with rainfall amount and AMC demonstrates the suitability of the approach for decision making in an uncertain environment. The methodology has been applied to the Posina basin in Italy. From the promising results obtained, we conclude that the Bayesian approach provides more realistic flash flood forecasting than the FFG.
Keywords: flash flood, Bayesian, flash flood guidance, FFG, forecasting, Posina
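The prior-likelihood-posterior chain described above is a direct application of Bayes' rule; a sketch with made-up illustrative numbers (a 5% historical prior, and evidence such as wet antecedent conditions plus over-threshold rainfall that is far more likely during flood events):

```python
# Bayes' rule for the flood/no-flood hypothesis:
# P(flood | E) = P(E | flood) P(flood) / [P(E | flood) P(flood)
#                                         + P(E | no flood) P(no flood)]

def posterior_flood(prior, lik_flood, lik_no_flood):
    """prior: P(flood) from historical frequency; lik_*: probability of the
    observed evidence (AMC, rainfall over threshold) under each hypothesis."""
    evidence = lik_flood * prior + lik_no_flood * (1.0 - prior)
    return lik_flood * prior / evidence
```

With these illustrative numbers the observed conditions raise the flood probability from 5% to roughly 30%, which is the kind of graded output a deterministic threshold cannot provide.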
Procedia PDF Downloads 136
6005 Satellite Derived Evapotranspiration and Turbulent Heat Fluxes Using Surface Energy Balance System (SEBS)
Authors: Muhammad Tayyab Afzal, Muhammad Arslan, Mirza Muhammad Waqar
Abstract:
One of the key components of the water cycle is evapotranspiration (ET), which represents water consumption by vegetated and non-vegetated surfaces. Conventional techniques for measuring ET are point based and representative of the local scale only. Satellite remote sensing data, with large area coverage and high temporal frequency, provide representative measurements of several relevant biophysical parameters required for the estimation of ET at regional scales. The objective of this research is to exploit satellite data in order to estimate evapotranspiration. This study uses the Surface Energy Balance System (SEBS) model to calculate daily actual evapotranspiration (ETa) in Larkana District, Sindh, Pakistan, using Landsat TM data for cloud-free days. As there is no flux tower in the study area for direct measurement of latent heat flux or evapotranspiration and sensible heat flux, the model-estimated values of ET were compared with reference evapotranspiration (ETo) computed by the FAO-56 Penman-Monteith method using meteorological data. For a country like Pakistan, irrigated agriculture in the river basins is the largest user of fresh water. For better assessment and management of irrigation water requirements, estimating the consumptive use of water for agriculture is very important because agriculture is the main consumer of water. ET is also an essential term of the water balance, since a major share of irrigation water and precipitation on cropland is lost through it, and its accurate estimation can therefore help in the efficient management of irrigation water. The results of this study can be used to analyse surface conditions, i.e., temperature, energy budgets, and relevant characteristics. Through this information, vegetation health and suitable agricultural conditions can be monitored, and controlling steps can be taken to increase agricultural production.
Keywords: SEBS, remote sensing, evapotranspiration, ETa
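The FAO-56 Penman-Monteith reference evapotranspiration used for validation above follows a standard published formula; a minimal sketch, assuming sea-level pressure and a simplified actual-vapour-pressure term derived from mean relative humidity (the full FAO-56 procedure derives it from dew point or min/max humidity):

```python
import math

def sat_vapour_pressure(t_c):
    """Saturation vapour pressure (kPa) at air temperature t_c (deg C),
    per the FAO-56 Tetens-type formula."""
    return 0.6108 * math.exp(17.27 * t_c / (t_c + 237.3))

def eto_penman_monteith(t_c, u2, rn, g, rh_mean, pressure_kpa=101.3):
    """FAO-56 Penman-Monteith reference ET (mm/day).
    t_c: mean air temperature (deg C); u2: wind speed at 2 m (m/s);
    rn, g: net radiation and soil heat flux (MJ m-2 day-1);
    rh_mean: mean relative humidity (%)."""
    es = sat_vapour_pressure(t_c)
    ea = es * rh_mean / 100.0                  # actual vapour pressure (simplified)
    delta = 4098.0 * es / (t_c + 237.3) ** 2   # slope of vapour pressure curve (kPa/degC)
    gamma = 0.665e-3 * pressure_kpa            # psychrometric constant (kPa/degC)
    num = (0.408 * delta * (rn - g)
           + gamma * (900.0 / (t_c + 273.0)) * u2 * (es - ea))
    den = delta + gamma * (1.0 + 0.34 * u2)
    return num / den
```

Comparing this weather-station ETo against SEBS ETa pixel values is what stands in for flux-tower validation in the study area.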
Procedia PDF Downloads 333
6004 Determining the Effects of Wind-Aided Midge Movement on the Probability of Coexistence of Multiple Bluetongue Virus Serotypes in Patchy Environments
Authors: Francis Mugabi, Kevin Duffy, Joseph J. Y. T Mugisha, Obiora Collins
Abstract:
Bluetongue virus (BTV) has 27 serotypes, some of which coexist in patchy (different) environments, which makes its control difficult. Wind-aided midge movement is a known mechanism in the spread of BTV; however, its effects on the probability of coexistence of multiple BTV serotypes are not clear. Deterministic and stochastic models for r BTV serotypes in n discrete patches connected by midge and/or cattle movement are formulated and analyzed. For the deterministic model without midge and cattle movement, using the comparison principle, it is shown that if the patch reproduction number R^j_i0 < 1 for all i = 1, 2, ..., n and j = 1, 2, ..., r, all serotypes go extinct; if R^j_i0 > 1, competitive exclusion takes place. Using numerical simulations, it is shown that when the n patches are connected by midge movement, coexistence takes place. To account for demographic and movement variability, the deterministic model is transformed into a continuous-time Markov chain stochastic model. Utilizing a multitype branching process, it is shown that midge movement can have a large effect on the probability of coexistence of multiple BTV serotypes. The probability of coexistence can be brought to zero when control interventions that directly kill adult midges are applied. These results indicate the significance of wind-aided midge movement and vector control interventions for the coexistence and control of multiple BTV serotypes in patchy environments.
Keywords: bluetongue virus, coexistence, multiple serotypes, midge movement, branching process
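The branching-process machinery behind the extinction/coexistence probabilities above reduces, in the single-type case, to a classical fixed-point computation: the extinction probability is the smallest solution of q = g(q), where g is the offspring probability-generating function. A sketch with an illustrative three-point offspring distribution (the paper's multitype version iterates a vector of such pgfs):

```python
# Extinction probability of a branching process: iterate q <- g(q) from q = 0;
# the sequence increases monotonically to the minimal fixed point in [0, 1].

def extinction_probability(pgf, iters=200):
    q = 0.0
    for _ in range(iters):
        q = pgf(q)
    return q

def make_pgf(p0, p1, p2):
    """pgf of an offspring distribution on {0, 1, 2}: g(s) = p0 + p1 s + p2 s^2.
    Mean offspring number is p1 + 2 * p2."""
    return lambda s: p0 + p1 * s + p2 * s * s
```

A supercritical process (mean offspring > 1) has extinction probability strictly below 1, which is what makes persistence, and hence coexistence, possible; driving the mean below 1 (e.g. by killing adult midges) forces extinction with probability 1.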
Procedia PDF Downloads 150
6003 Algorithm Research on Traffic Sign Detection Based on Improved EfficientDet
Authors: Ma Lei-Lei, Zhou You
Abstract:
Aiming at the low detection accuracy of deep learning algorithms in traffic sign detection, this paper proposes an improved EfficientDet-based traffic sign detection algorithm. Multi-head self-attention is introduced in the minimum-resolution layer of the EfficientDet backbone to achieve effective aggregation of local and global depth information. This study also proposes an improved feature fusion pyramid with additional vertical cross-layer connections, which improves the performance of the model while introducing only a small amount of complexity. Finally, Balanced L1 Loss is introduced to replace the original regression loss function, Smooth L1 Loss, which addresses the balance problem in the loss function. Experimental results show that the algorithm proposed in this study is suitable for the task of traffic sign detection. Compared with other models, the improved EfficientDet has the best detection accuracy; although its test speed is not completely dominant, it still meets the real-time requirement.
Keywords: convolutional neural network, transformer, feature pyramid networks, loss function
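For context on the loss swap above, the baseline Smooth L1 regression loss can be sketched as follows (Balanced L1 reshapes its gradient to promote inliers; its exact constants are not reproduced here):

```python
def smooth_l1(x, beta=1.0):
    """Smooth L1 loss on a box-regression residual x: quadratic for
    |x| < beta, linear in the tails. The gradient is clipped at +/-1,
    which keeps outlier boxes from dominating training."""
    ax = abs(x)
    if ax < beta:
        return 0.5 * ax * ax / beta
    return ax - 0.5 * beta
```

The balance problem it leaves open is that, with the linear tails, hard examples (large residuals) still contribute most of the total gradient relative to the many accurate inliers, which is what Balanced L1 is designed to correct.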
Procedia PDF Downloads 97
6002 Behavior Loss Aversion Experimental Laboratory of Financial Investments
Authors: Jihene Jebeniani
Abstract:
We propose an approach combining the techniques of experimental economics with the flexibility of discrete choice models in order to test loss aversion. Our main objective was to test the loss aversion of Cumulative Prospect Theory (CPT). We developed a laboratory experiment in the context of financial investments aimed at analyzing investors' attitudes towards risk. The study uses lotteries and is based on econometric modeling; the estimated model was the ordered probit.
Keywords: risk aversion, behavioral finance, experimental economics, lotteries, cumulative prospect theory
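The loss-aversion hypothesis being tested comes from the CPT value function, under which losses loom larger than equal-sized gains; a sketch using the Tversky-Kahneman functional form with their commonly cited parameter estimates (alpha = 0.88, lambda = 2.25), which are illustrative defaults rather than this experiment's estimates:

```python
def cpt_value(x, alpha=0.88, lam=2.25):
    """CPT piecewise power value function over gains and losses relative to
    a reference point: v(x) = x^alpha for x >= 0, -lam * (-x)^alpha for x < 0.
    lam > 1 encodes loss aversion."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha
```

An experiment of this kind estimates lambda from observed lottery choices; lambda significantly greater than 1 is the loss-aversion finding.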
Procedia PDF Downloads 471
6001 Comparative Analysis of Spectral Estimation Methods for Brain-Computer Interfaces
Authors: Rafik Djemili, Hocine Bourouba, M. C. Amara Korba
Abstract:
In this paper, we present a method to classify EEG signals for brain-computer interfaces (BCI). The EEG signals are first processed by means of spectral estimation methods to derive reliable features before the classification step. The spectral estimation methods used are the standard periodogram and the periodogram calculated by the Welch method; both are compared with logarithm of band power (logBP) features. In the proposed method, we apply linear discriminant analysis (LDA) followed by a support vector machine (SVM). The classification accuracy reached could be as high as 85%, which proves the effectiveness of spectral methods for classifying EEG signals in BCI.
Keywords: brain-computer interface, motor imagery, electroencephalogram, linear discriminant analysis, support vector machine
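The standard periodogram used as the first feature extractor above is just the normalised squared magnitude of the DFT; a minimal sketch (O(N^2) DFT, fine for illustration; in practice an FFT-based library routine would be used, and the Welch variant averages periodograms of overlapping windowed segments to reduce variance):

```python
import math

def periodogram(x, fs=1.0):
    """One-sided standard periodogram: |DFT|^2 / (fs * N).
    Returns (frequencies in Hz, power) for bins 0..N//2."""
    N = len(x)
    freqs, power = [], []
    for k in range(N // 2 + 1):
        re = sum(x[n] * math.cos(2 * math.pi * k * n / N) for n in range(N))
        im = -sum(x[n] * math.sin(2 * math.pi * k * n / N) for n in range(N))
        freqs.append(k * fs / N)
        power.append((re * re + im * im) / (fs * N))
    return freqs, power
```

For BCI features, the power values in motor-imagery-relevant bands (e.g. mu and beta) would then be log-transformed and fed to the LDA/SVM stage.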
Procedia PDF Downloads 499
6000 Estimation of the State of Charge of the Battery Using EFK and Sliding Mode Observer in MATLAB-Arduino/Labview
Authors: Mouna Abarkan, Abdelillah Byou, Nacer M'Sirdi, El Hossain Abarkan
Abstract:
This paper presents the estimation of the battery state of charge using two types of observers. The battery model used combines a voltage source representing the open-circuit battery voltage, a resistance corresponding to the connections and electrolyte, and a series of parallel RC circuits representing charge transfer and diffusion phenomena. An adaptive observer applied to this model is proposed to estimate the battery state of charge; it is based on the EKF and sliding mode techniques, which are known for their robustness and simplicity of implementation. The results are validated by simulation under MATLAB/Simulink and implemented in Arduino-LabVIEW.
Keywords: battery model, adaptive sliding mode observer, EKF observer, estimation of state of charge, SOC, implementation in Arduino/LabVIEW
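The sliding-mode half of the scheme can be illustrated on a deliberately simplified first-order plant (not the paper's full RC battery model): the observer copies the model dynamics and adds a signum correction term that drives the estimation error to a neighbourhood of zero.

```python
# Toy first-order sliding-mode observer for x' = -a*x + b*u, measured y = x:
#   x_hat' = -a*x_hat + b*u + L*sign(y - x_hat)
# The discontinuous L*sign(.) term forces convergence, at the cost of
# small chattering of amplitude ~ L*dt after the sliding surface is reached.

def smo_simulate(a, b, u, y_meas, L, dt, x0_hat=0.0):
    """Euler-integrated observer run over a list of measurements y_meas.
    Returns the list of state estimates."""
    x_hat = x0_hat
    est = []
    for y in y_meas:
        err = y - x_hat
        sgn = (err > 0) - (err < 0)  # sign of the output error
        x_hat += dt * (-a * x_hat + b * u + L * sgn)
        est.append(x_hat)
    return est
```

In the battery application the same structure is applied to the RC-network state, with the terminal voltage as the measured output.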
Procedia PDF Downloads 304
5999 Avian and Rodent Pest Infestations of Lowland Rice (Oryza sativa L.) and Evaluation of Attributable Losses in Savanna Transition Environment
Authors: Okwara O. S., Osunsina I. O. O., Pitan O. R., Afolabi C. G.
Abstract:
Rice (Oryza sativa L.) belongs to the family Poaceae and has become the most popular food crop. Globally, this crop faces the menace of vertebrate pests, of which birds and rodents are the most implicated. This study of avian and rodent infestations and the evaluation of attributable losses was carried out in 2020 and 2021, with the objectives of identifying the bird and rodent species associated with lowland rice and determining the infestation levels, damage intensity, and crop loss induced by these pests. The experiment was laid out in a split-plot arrangement fitted into a Randomized Complete Block Design (RCBD), with the main plots being protected and unprotected groups and the sub-plots being four rice varieties: Ofada, WITA-4, NERICA L-34, and Arica-3. Data collection was done over a 16-week period, and the data obtained were transformed using a square-root transformation before Analysis of Variance (ANOVA) was performed at the 5% probability level. The results showed that the infestation levels of both birds and rodents across all treatment means of the varieties were not significantly different (p > 0.05) in both seasons. The damage intensity by these pests in both years was also not significantly different (p > 0.05) among the means of the varieties, which reflects the indiscriminate feeding of birds and rodents across varieties. The infestation level under the protected group was significantly lower (p < 0.05) than that recorded under the unprotected group. Consequently, estimated crop losses of 91.94% and 90.75% were recorded in 2020 and 2021, respectively, and the identified pest birds were Ploceus melanocephalus, Ploceus cucullatus, and Spermestes cucullatus. In conclusion, vertebrate pests cause damage to lowland rice that can result in a high percentage of crop loss if left uncontrolled.
Keywords: pests, infestations, evaluation, losses, rodents, avian
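The two quantitative steps named above — the square-root transformation of count data before ANOVA and the percent loss attributable to pests — can be sketched as follows. The `sqrt(x + 0.5)` variant and the loss formula relative to protected plots are common conventions assumed here, with made-up numbers:

```python
import math

def sqrt_transform(counts):
    # sqrt(x + 0.5) stabilizes the variance of count data before ANOVA
    return [math.sqrt(x + 0.5) for x in counts]

def attributable_loss_pct(yield_protected, yield_unprotected):
    # Percent crop loss attributable to pests, relative to the protected plots
    return 100.0 * (yield_protected - yield_unprotected) / yield_protected
```

For example, an unprotected yield of 8.06 units against a protected yield of 100 units gives the 91.94% loss figure reported for 2020.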
Procedia PDF Downloads 125
5998 A Novel Approach to Design of EDDR Architecture for High Speed Motion Estimation Testing Applications
Authors: T. Gangadhararao, K. Krishna Kishore
Abstract:
Motion Estimation (ME) plays a critical role in a video coder, so testing such a module is of priority concern. Focusing on the testing of ME in a video coding system, this work presents an error detection and data recovery (EDDR) design, based on the residue-and-quotient (RQ) code, to embed into ME for video coding testing applications. An error in the processing elements (PEs), i.e., the key components of an ME, can be detected and recovered effectively by using the proposed EDDR design. The proposed EDDR design for ME testing can detect errors and recover data with an acceptable area overhead and timing penalty.
Keywords: area overhead, data recovery, error detection, motion estimation, reliability, residue-and-quotient (RQ) code
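The residue-and-quotient idea can be illustrated with a minimal sketch. The modulus and the interfaces below are assumptions for illustration, not the paper's circuit design:

```python
def rq_encode(value, m=64):
    # Residue-and-quotient (RQ) code: represent value as (quotient, residue)
    # with value = quotient * m + residue
    return divmod(value, m)

def rq_check(value, q, r, m=64):
    # Error detection: a faulty PE output no longer matches its stored RQ pair
    return divmod(value, m) == (q, r)

def rq_recover(q, r, m=64):
    # Data recovery: rebuild the original datum from its RQ pair
    return q * m + r
```

A PE output that disagrees with its RQ pair flags an error, and the datum is restored from the pair.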
Procedia PDF Downloads 431
5997 Moderating Effect of Owner's Influence on the Relationship between the Probability of Client Failure and Going Concern Opinion Issuance
Authors: Mohammad Noor Hisham Osman, Ahmed Razman Abdul Latiff, Zaidi Mat Daud, Zulkarnain Muhamad Sori
Abstract:
The problem that Malaysian auditors do not issue going concern opinions (GC opinions) to seriously financially distressed companies is still a pressing issue. Policy makers, particularly the Financial Statement Review Committee (FSRC) of the Malaysian Institute of Accountants, raised this issue as early as 2009, and similar problems have occurred in the US, the UK, and many developing countries. It is important for auditors to issue GC opinions properly because such an opinion is a signal about the viability of a company that stakeholders much need. There are at least two unanswered questions or research gaps in the literature on the determinants of GC opinions. Firstly, is a client's probability of failure associated with GC opinion issuance? Secondly, to what extent do influential owners (management, family, and institutions) moderate the association between a client's probability of failure and GC opinion issuance? The objective of this study is, therefore, twofold: (1) to examine the extent of the relationship between the probability of client failure and the issuance of GC opinions, and (2) to examine the extent to which the levels of management, family, and institutional ownership moderate this association. This study is quantitative in nature, and the sources of data are secondary (mainly companies' annual reports). A total of four hypotheses were developed and tested on data accumulated from the annual reports of seriously financially distressed Malaysian public listed companies. Data from 2006 to 2012, comprising a sample of 644 observations, were analyzed using panel logistic regression. It is found that certainty (rather than probability) of client failure affects the issuance of GC opinions. In addition, it is found that only the level of family ownership positively moderates the relationship between a client's probability of failure and GC opinion issuance.
This study contributes to the auditing literature, as its findings enhance our understanding of audit quality, particularly the variables associated with the issuance of GC opinions. The findings shed light on the roles of family owners in the GC opinion issuance process, opening the way for researchers to suggest measures to tackle auditors' reluctance to issue GC opinions to financially distressed clients. Such measures can be useful to policy makers in formulating future promulgations.
Keywords: audit quality, auditing, auditor characteristics, going concern opinion, Malaysia
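A moderation effect of this kind is typically tested with an interaction term in a logistic regression. The sketch below uses synthetic data and plain gradient ascent — not the study's panel model, dataset, or sample size — to show how a positive interaction (failure risk × family ownership) is recovered:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000                                   # synthetic sample (illustrative)
prob_fail = rng.uniform(0.0, 1.0, n)       # client probability of failure
family_own = rng.uniform(0.0, 0.6, n)      # family ownership level
# Moderation built into the data: family ownership strengthens the
# failure -> GC-opinion link through the interaction term
true_logit = -2.0 + 1.0 * prob_fail + 0.5 * family_own + 3.0 * prob_fail * family_own
y = (rng.uniform(0.0, 1.0, n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)

# Design matrix: intercept, main effects, interaction
X = np.column_stack([np.ones(n), prob_fail, family_own, prob_fail * family_own])
beta = np.zeros(4)
for _ in range(20000):                     # gradient ascent on the log-likelihood
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += 0.5 * X.T @ (y - p) / n
```

A positive fitted interaction coefficient (`beta[3]`) is the signature of positive moderation, analogous to the family-ownership finding above.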
Procedia PDF Downloads 260
5996 Reasons for the Slow Uptake of Embodied Carbon Estimation in the Sri Lankan Building Sector
Authors: Amalka Nawarathna, Nirodha Fernando, Zaid Alwan
Abstract:
Global carbon reduction is not merely a responsibility of environmentally advanced developed countries, but also a responsibility of developing countries, regardless of their lesser impact on global carbon emissions. In recognition of that, Sri Lanka, as a developing country, has initiated the promotion of green building construction as one reduction strategy. However, notwithstanding the increasing attention to Embodied Carbon (EC) reduction, the global building sector still mostly focuses on Operational Carbon (OC) reduction (through improving operational energy); adequate attention has not yet been given to EC estimation and reduction. Therefore, this study aims to identify the reasons for the slow uptake of EC estimation in the Sri Lankan building sector. To achieve this aim, 16 global barriers to estimating EC were identified through the existing literature. They were then subjected to a pilot survey to identify the significant reasons for the slow uptake of EC estimation in the Sri Lankan building sector. A questionnaire with a three-point Likert scale was used to this end, and the collected data were analysed using descriptive statistics. The findings revealed that 11 of the 16 challenges/barriers are highly relevant as reasons for the slow uptake of EC estimation in buildings in Sri Lanka, while the other five remain moderately relevant; no reasons were of low relevance. The paper concludes that all the known reasons are significant to the Sri Lankan building sector and that it is necessary to address them in order to increase attention to EC reduction.
Keywords: embodied carbon emissions, embodied carbon estimation, global carbon reduction, Sri Lankan building sector
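One common way to bucket such barriers from three-point Likert responses is by mean score with equal-width cutoffs. The cutoffs below are an assumption for illustration, not necessarily the ones used in the study:

```python
def relevance_level(scores):
    # Mean score on a 1-3 Likert scale, bucketed into equal-width thirds
    # (cutoffs 1.00-1.66 low, 1.67-2.33 moderate, 2.34-3.00 high; assumed)
    mean = sum(scores) / len(scores)
    if mean >= 2.34:
        return "highly relevant"
    if mean >= 1.67:
        return "moderately relevant"
    return "low relevance"
```

Applied to each barrier's responses, this yields the high/moderate/low split the abstract reports.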
Procedia PDF Downloads 206
5995 Cognitive Relaying in Interference Limited Spectrum Sharing Environment: Outage Probability and Outage Capacity
Authors: Md Fazlul Kader, Soo Young Shin
Abstract:
In this paper, we consider a cognitive relay network (CRN) in which the primary receiver (PR) is protected by peak transmit power $\bar{P}_{ST}$ and/or peak interference power $Q$ constraints. In addition, the interference effect from the primary transmitter (PT) is considered to show its impact on the performance of the CRN. We investigate the outage probability (OP) and outage capacity (OC) of the CRN by deriving closed-form expressions over the Rayleigh fading channel. Results show that both the OP and OC improve as the number of cooperative relay nodes increases, as well as when the PT is far away from the secondary receiver (SR).
Keywords: cognitive relay, outage, interference limited, decode-and-forward (DF)
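For intuition on such closed forms: under Rayleigh fading the received SNR of a single link is exponentially distributed, so its outage probability is P_out = 1 - exp(-snr_th / mean_snr). The sketch below checks this against Monte Carlo for a simplified single link, not the paper's full multi-relay CRN model:

```python
import math
import random

def outage_prob_rayleigh(mean_snr, snr_th):
    # Closed form: SNR over Rayleigh fading is exponential with mean mean_snr,
    # so P(SNR < snr_th) = 1 - exp(-snr_th / mean_snr)
    return 1.0 - math.exp(-snr_th / mean_snr)

def outage_prob_mc(mean_snr, snr_th, trials=200_000, seed=1):
    # Monte Carlo estimate: count fading realizations below the SNR threshold
    rng = random.Random(seed)
    outages = sum(rng.expovariate(1.0 / mean_snr) < snr_th for _ in range(trials))
    return outages / trials
```

Relaying improves on this single-link figure because an outage then requires all relay paths to fail simultaneously, which is why OP falls as relays are added.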
Procedia PDF Downloads 511
5994 Experimental Investigation of On-Body Channel Modelling at 2.45 GHz
Authors: Hasliza A. Rahim, Fareq Malek, Nur A. M. Affendi, Azuwa Ali, Norshafinash Saudin, Latifah Mohamed
Abstract:
This paper presents an experimental investigation of on-body channel fading at 2.45 GHz considering two user body movement conditions: stationary and mobile. A pair of body-worn antennas was utilized in this measurement campaign. A statistical analysis was performed by comparing the measured on-body path loss to five well-known distributions: lognormal, normal, Nakagami, Weibull, and Rayleigh. The results showed that the average path loss of the moving arm was up to 3.5 dB higher than the path loss in the sitting position for the upper-arm-to-left-chest link. The analysis also concluded that the Nakagami distribution provided the best fit for most static on-body link path loss in the standing and sitting positions, while arm movement is best described by the lognormal distribution.
Keywords: on-body channel communications, fading characteristics, statistical model, body movement
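Goodness-of-fit comparisons of this kind are often made by maximized log-likelihood (or AIC). A minimal sketch for two of the five candidate distributions — normal vs. lognormal, both of which have closed-form maximum-likelihood fits — is shown below; this is illustrative, not the authors' fitting code:

```python
import math

def normal_loglik(data):
    # Maximized Gaussian log-likelihood: -n/2 * (log(2*pi*var_hat) + 1)
    n = len(data)
    mu = sum(data) / n
    var = sum((x - mu) ** 2 for x in data) / n
    return -0.5 * n * (math.log(2 * math.pi * var) + 1)

def lognormal_loglik(data):
    # Fit a normal on the log scale and add the change-of-variables
    # (Jacobian) term -sum(log x)
    logs = [math.log(x) for x in data]
    return normal_loglik(logs) - sum(logs)
```

Whichever candidate attains the higher maximized log-likelihood (with an AIC penalty when parameter counts differ) is declared the best fit, as done here for Nakagami vs. lognormal.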
Procedia PDF Downloads 355
5993 The Analysis of Loss-of-Excitation Algorithm for Synchronous Generators
Authors: Pavle Dakić, Dimitrije Kotur, Zoran Stojanović
Abstract:
This paper presents the results of a study in which an excitation system fault of a synchronous generator is simulated. In the case of an excitation system fault (loss of field), a distance relay is used to prevent further damage. The loss-of-field relay calculates the complex impedance using the voltage and current measured at the generator terminals. In order to obtain phasors from the sampled measured values, the discrete Fourier transform is used. All simulations are conducted using the MATLAB and Simulink software package. The analysis is conducted on a two-machine system supplying an equivalent load. While simulating loss of excitation on one generator under different conditions (at idle operation, weakly loaded, and fully loaded), diagrams of active power, reactive power, and measured impedance are analyzed and monitored. Moreover, the effect of generator load on relay tripping time is investigated in the simulations. In conclusion, the performed tests confirm that a fault in the excitation system can be detected by measuring the impedance.
Keywords: loss-of-excitation, synchronous generator, distance protection, Fourier transformation
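The phasor-from-samples step reduces to a single-bin DFT over one cycle, after which the relay impedance is the ratio of the voltage and current phasors. A minimal illustration (samples per cycle and signal magnitudes are arbitrary):

```python
import cmath
import math

def dft_phasor(samples):
    # Fundamental-frequency phasor from exactly one cycle of N samples:
    # correlate with e^{-j*2*pi*k/N} and scale to peak amplitude
    N = len(samples)
    acc = sum(s * cmath.exp(-2j * math.pi * k / N) for k, s in enumerate(samples))
    return 2.0 * acc / N

def measured_impedance(v_samples, i_samples):
    # Complex impedance seen by the relay: Z = V / I (phasor ratio)
    return dft_phasor(v_samples) / dft_phasor(i_samples)
```

During loss of field, the trajectory of this Z in the impedance plane enters the relay's mho characteristic, which is what triggers the trip.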
Procedia PDF Downloads 331
5992 Macular Ganglion Cell Inner Plexiform Layer Thinning
Authors: Hye-Young Shin, Chan Kee Park
Abstract:
Background: To compare the thinning patterns of the ganglion cell-inner plexiform layer (GCIPL) and peripapillary retinal nerve fiber layer (pRNFL), as measured using Cirrus high-definition optical coherence tomography (HD-OCT), in patients with visual field (VF) defects that respect the vertical meridian. Methods: Twenty eyes of eleven patients with VF defects that respect the vertical meridian were enrolled retrospectively. The thicknesses of the macular GCIPL and pRNFL were measured using Cirrus HD-OCT. The 5% and 1% thinning area index (TAI) was calculated as the proportion of abnormally thin sectors at the 5% and 1% probability levels within the area corresponding to the affected VF, and the 5% and 1% TAI were compared between the GCIPL and pRNFL measurements. Results: The color-coded GCIPL deviation map showed a characteristic vertical thinning pattern of the GCIPL, which is also seen in the VF of patients with brain lesions. The 5% and 1% TAI were significantly higher in the GCIPL measurements than in the pRNFL measurements (all P < 0.01). Conclusions: Macular GCIPL analysis clearly visualized a characteristic topographic pattern of retinal ganglion cell (RGC) loss in patients with VF defects that respect the vertical meridian, unlike pRNFL measurements. Macular GCIPL measurements provide more valuable information than pRNFL measurements for detecting the loss of RGCs in patients with retrograde degeneration of the optic nerve fibers.
Keywords: brain lesion, macular ganglion cell, inner plexiform layer, spectral-domain optical coherence tomography
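The thinning area index reduces to the fraction of sectors falling below a normative percentile cutoff within the affected area. A schematic sketch (the sector thicknesses and cutoffs here are invented, not normative-database values):

```python
def thinning_area_index(thicknesses_um, cutoffs_um):
    # TAI: proportion of sectors thinner than their normative cutoff
    # (e.g. the device's 5th- or 1st-percentile limit) in the affected area
    flags = [t < c for t, c in zip(thicknesses_um, cutoffs_um)]
    return sum(flags) / len(flags)
```

Computing the TAI separately from GCIPL and pRNFL sector maps gives the per-layer values compared in the Results.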
Procedia PDF Downloads 337
5991 Development of Visual Working Memory Precision: A Cross-Sectional Study of Simultaneously Delayed Responses Paradigm
Authors: Yao Fu, Xingli Zhang, Jiannong Shi
Abstract:
Visual working memory (VWM) capacity is the ability to maintain and manipulate short-term information that is no longer perceptually available. It is well known both for forming the basis of numerous cognitive abilities and for its limited capacity for holding information. VWM span, the most popular measurable indicator, is found to reach the adult level (3-4 items) around 12-13 years of age, while less is known about the development of the precision of VWM capacity. Using a simultaneously delayed responses paradigm, the present study investigates the development of VWM precision among 6-18-year-old children and young adults, as well as its possible relationships with fluid intelligence and span. Results showed that precision and span both increased with age, and precision reached its maximum in the 16-17 age range. Moreover, when remembering 3 simultaneously presented items, the probability of remembering the target item correlated with fluid intelligence, and the probability of wrap errors (misbinding target and non-target items) correlated with age. When remembering more items, children performed worse than adults due to their wrap errors. Compared to span, VWM precision was an effective predictor of intelligence even after controlling for age. These results suggest that, unlike VWM span, precision develops in a slower yet longer-lasting fashion. Moreover, a decreasing probability of wrap errors might be the main reason for the development of precision. Last, precision correlated more closely with intelligence than span did in childhood and adolescence, which might be driven by the probability of remembering the target item.
Keywords: fluid intelligence, precision, visual working memory, wrap errors
Procedia PDF Downloads 276
5990 Monocular Depth Estimation Benchmarking with Thermal Dataset
Authors: Ali Akyar, Osman Serdar Gedik
Abstract:
Depth estimation is a challenging computer vision task that involves estimating the distance between objects in a scene and the camera; it predicts how far each pixel in the 2D image is from the capturing point. Several important Monocular Depth Estimation (MDE) studies are based on Vision Transformers (ViT), and we benchmark three major ones. The first aims to build a simple and powerful foundation model that deals with any image under any condition. The second proposes a method that mixes multiple datasets during training together with a robust training objective. The third combines generalization performance with state-of-the-art results on specific datasets. Although there are studies on thermal images as well, we benchmark these three non-thermal, state-of-the-art studies on a hybrid image dataset captured with Multi-Spectral Dynamic Imaging (MSX) technology. MSX technology produces detailed thermal images by bringing together the thermal and visual spectrums. Thanks to this technology, our dataset images are neither blurry nor poorly detailed like normal thermal images; on the other hand, they are not taken under the ideal lighting conditions of RGB images. We compared the three methods under test on our thermal dataset, which had not been done before. Additionally, we propose an image enhancement deep learning model for thermal data that helps extract the features required for monocular depth estimation. The experimental results demonstrate that, after using our proposed model, the performance of the three methods under test increases significantly for thermal image depth prediction.
Keywords: monocular depth estimation, thermal dataset, benchmarking, vision transformers
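MDE benchmarking conventionally relies on a small set of standard metrics — absolute relative error, RMSE, and the delta < 1.25 threshold accuracy. The abstract does not list the exact metrics used, so the sketch below is an assumption based on common practice, shown over flat lists of per-pixel depths:

```python
import math

def depth_metrics(pred, gt):
    # Standard monocular-depth evaluation metrics over per-pixel depths:
    # AbsRel (mean relative error), RMSE, and delta<1.25 threshold accuracy
    n = len(pred)
    abs_rel = sum(abs(p - g) / g for p, g in zip(pred, gt)) / n
    rmse = math.sqrt(sum((p - g) ** 2 for p, g in zip(pred, gt)) / n)
    delta1 = sum(max(p / g, g / p) < 1.25 for p, g in zip(pred, gt)) / n
    return abs_rel, rmse, delta1
```

Running the same metrics with and without the proposed enhancement model would quantify the improvement the abstract reports.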
Procedia PDF Downloads 32