Search results for: standard error
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6566

5696 Dimensionality Reduction in Modal Analysis for Structural Health Monitoring

Authors: Elia Favarelli, Enrico Testi, Andrea Giorgetti

Abstract:

Autonomous structural health monitoring (SHM) of structures and bridges has become a topic of paramount importance for maintenance purposes and safety reasons. This paper proposes a set of machine learning (ML) tools to perform automatic feature selection and anomaly detection in a bridge from vibrational data, and compares different feature extraction schemes to increase accuracy and reduce the amount of data collected. As a case study, the Z-24 bridge is considered because of its extensive database of accelerometric data in both standard and damaged conditions. The proposed framework starts from the first four fundamental frequencies extracted through operational modal analysis (OMA) and clustering, followed by density-based time-domain filtering (tracking). The extracted fundamental frequencies are then fed to a dimensionality reduction block implemented through two different approaches: feature selection (intelligent multiplexer), which estimates the most reliable frequencies based on the evaluation of statistical features (i.e., mean value, variance, kurtosis), and feature extraction (auto-associative neural network (ANN)), which combines the fundamental frequencies to extract new damage-sensitive features in a low-dimensional feature space. Finally, one-class classifier (OCC) algorithms perform anomaly detection, trained with standard-condition points and tested with normal and anomalous ones. In particular, a new anomaly detection strategy is proposed, namely one-class classifier neural network two (OCCNN2), which exploits the classification capability of standard classifiers in an anomaly detection problem, finding the standard class (the boundary of the feature space in normal operating conditions) through a two-step approach: coarse and fine boundary estimation. The coarse estimation uses classic OCC techniques, while the fine estimation is performed by a feedforward neural network (NN) trained on the boundaries estimated in the coarse step. The detection algorithms are then compared with known methods based on principal component analysis (PCA), kernel principal component analysis (KPCA), and the auto-associative neural network (ANN). In many cases, the proposed solution improves on the standard OCC algorithms in terms of F1 score and accuracy. In particular, by evaluating the correct features, anomalies can be detected with accuracy and an F1 score greater than 96% with the proposed method.
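The two-step boundary estimation idea can be sketched as follows. This is not the authors' OCCNN2 implementation: the coarse step here is a Mahalanobis-distance threshold standing in for a classic OCC technique, and the fine step is a logistic regression trained by gradient descent standing in for the feedforward NN; the data, threshold quantile, and learning-rate values are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: features from the "normal condition" class only.
X_train = rng.normal(0.0, 1.0, size=(500, 2))

# --- Coarse step: classic OCC via a Mahalanobis-distance threshold ---
mu = X_train.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X_train, rowvar=False))

def mahalanobis(X):
    d = X - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", d, cov_inv, d))

threshold = np.quantile(mahalanobis(X_train), 0.95)  # coarse boundary

# --- Fine step: refine the boundary with a trained classifier ---
# Pseudo-label the training points with the coarse boundary and fit a
# logistic regression on the distance feature by gradient descent
# (a small stand-in for the feedforward NN of the paper).
d_train = mahalanobis(X_train)
labels = (d_train <= threshold).astype(float)  # 1 = normal
w, b = 0.0, 0.0
for _ in range(4000):
    p = 1.0 / (1.0 + np.exp(-(w * d_train + b)))
    w -= 0.3 * np.mean((p - labels) * d_train)
    b -= 0.3 * np.mean(p - labels)

def is_anomaly(X):
    p_normal = 1.0 / (1.0 + np.exp(-(w * mahalanobis(X) + b)))
    return p_normal < 0.5

normal_point = np.array([[0.1, -0.2]])
far_point = np.array([[8.0, 8.0]])
```

A point near the center of the training cloud is classified as normal, while a point far outside the learned boundary is flagged as an anomaly.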

Keywords: anomaly detection, frequencies selection, modal analysis, neural network, sensor network, structural health monitoring, vibration measurement

Procedia PDF Downloads 114
5695 Acceleration-Based Motion Model for Visual Simultaneous Localization and Mapping

Authors: Daohong Yang, Xiang Zhang, Lei Li, Wanting Zhou

Abstract:

Visual Simultaneous Localization and Mapping (VSLAM) is a technology that gathers information from the environment for self-positioning and mapping. It is widely used in computer vision, robotics, and other fields. Many visual SLAM systems, such as ORB-SLAM3, employ a constant-velocity motion model that provides the initial pose of the current frame to improve the speed and accuracy of feature matching. In practice, however, the constant-velocity assumption is often violated, which can lead to a large deviation between the obtained initial pose and the true value, and in turn to errors in the nonlinear optimization results. Therefore, this paper proposes a motion model based on acceleration that can be applied to most SLAM systems. To better describe the acceleration of the camera pose, we decouple the pose transformation matrix and calculate the rotation matrix and the translation vector separately, where the rotation matrix is represented by a rotation vector. We assume that, over a short period of time, the changes in angular velocity and translation vector remain constant. Based on this assumption, the initial pose of the current frame is estimated. In addition, the error of the constant-velocity model is analyzed theoretically. Finally, we applied the proposed approach to the ORB-SLAM3 system and evaluated two sets of sequences from the TUM dataset. The results show that our method produces a more accurate initial pose estimate and that the accuracy of the ORB-SLAM3 system improves by 6.61% and 6.46% on the two test sequences, respectively.
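The constant-acceleration assumption can be illustrated for the translation component alone (the rotation vector is handled analogously in the paper). This minimal sketch extrapolates the next camera position from the last three, assuming the frame-to-frame change in velocity stays constant:

```python
import numpy as np

def predict_translation(p_prev2, p_prev1, p_curr):
    """Predict the next camera position under a constant-acceleration
    motion model: velocity changes by the same amount each frame."""
    v_prev = p_prev1 - p_prev2   # velocity over frames k-2 -> k-1
    v_curr = p_curr - p_prev1    # velocity over frames k-1 -> k
    a = v_curr - v_prev          # assumed-constant acceleration
    return p_curr + v_curr + a   # extrapolated position at frame k+1

# A camera accelerating uniformly along x: x(t) = 0.5 * t**2.
positions = [np.array([0.5 * t**2, 0.0, 0.0]) for t in range(4)]
pred = predict_translation(positions[0], positions[1], positions[2])
```

For a truly constant-acceleration trajectory the prediction is exact, whereas a constant-velocity model would lag by the acceleration term.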

Keywords: error estimation, constant acceleration motion model, pose estimation, visual SLAM

Procedia PDF Downloads 80
5694 Composition, Velocity, and Mass of Projectiles Generated from a Chain Shot Event

Authors: Eric Shannon, Mark J. McGuire, John P. Parmigiani

Abstract:

A hazard associated with the use of timber harvesters is chain shot. Harvester saw chain is subjected to large dynamic mechanical stresses, which can cause it to fracture. The resulting open loop of saw chain can fracture a second time and create a projectile consisting of several saw-chain links, referred to as a chain shot. Its high kinetic energy enables it to penetrate operator enclosures and makes it a significant hazard. Accurate data on projectile composition, mass, and speed are needed for the design both of operator enclosures resistant to projectile penetration and of saw chains resistant to fracture. The work presented here contributes to providing this data through the use of a test machine designed and built at Oregon State University. The machine’s enclosure is a standard shipping container. To safely contain any anticipated chain shot, the container was lined with both 9.5 mm AR500 steel plates and 50 mm high-density polyethylene (HDPE). During normal operation, projectiles are captured virtually undamaged in the HDPE, enabling subsequent analysis. Standard harvester components are used for bar mounting and chain tensioning, and standard guide bars and saw chains are used. An electric motor with a flywheel drives the system. Testing procedures follow ISO Standard 11837. Chain speed at break was approximately 45.5 m/s. Data were collected using both a 75 cm solid bar (Oregon 752HSFB149) and a 90 cm solid bar (Oregon 902HSFB149). The saw chains used were 89-drive-link .404”-18HX loops made from factory spools. Standard 16-tooth sprockets were used. Projectile speed was measured using both a high-speed camera and a chronograph. Both rotational and translational kinetic energies were calculated. For this study, 50 chain shot events were executed. Results showed that projectiles consisted of a variety of combinations of drive links, tie straps, and cutter links.
Most common (occurring in 60% of the events) was a drive-link / tie-strap / drive-link combination having a mass of approximately 10.33 g. Projectile mass varied from a minimum of 2.99 g corresponding to a drive link only to a maximum of 18.91 g corresponding to a drive-link / tie-strap / drive-link / cutter-link / drive-link combination. Projectile translational speed was measured to be approximately 270 m/s and rotational speed of approximately 14000 r/s. The calculated translational and rotational kinetic energy magnitudes each average over 600 J. This study provides useful information for both timber harvester manufacturers and saw chain manufacturers to design products that reduce the hazards associated with timber harvesting.
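The reported energies follow from the standard kinetic energy formulas. In the sketch below, the mass and translational speed come from the abstract, while the moment of inertia is a hypothetical value chosen only to illustrate the rotational term (the abstract does not report one):

```python
import math

def chain_shot_kinetic_energy(mass_kg, v_mps, inertia_kg_m2, omega_rad_s):
    """Translational and rotational kinetic energy of a spinning projectile."""
    ke_trans = 0.5 * mass_kg * v_mps**2
    ke_rot = 0.5 * inertia_kg_m2 * omega_rad_s**2
    return ke_trans, ke_rot

# 10.33 g projectile at 270 m/s, spinning at ~14000 rev/s (from the study);
# the moment of inertia 1.55e-7 kg*m^2 is hypothetical, for illustration.
omega = 14000 * 2 * math.pi   # rev/s -> rad/s
ke_t, ke_r = chain_shot_kinetic_energy(0.01033, 270.0, 1.55e-7, omega)
```

For this particular combination the translational term is about 377 J; averages over all 50 events, with heavier and faster projectiles included, can exceed 600 J as reported.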

Keywords: chain shot, timber harvesters, safety, testing

Procedia PDF Downloads 137
5693 The Impact of Introspective Models on Software Engineering

Authors: Rajneekant Bachan, Dhanush Vijay

Abstract:

The visualization of operating systems has refined the Turing machine, and current trends suggest that the emulation of 32 bit architectures will soon emerge. After years of technical research into Web services, we demonstrate the synthesis of gigabit switches, which embodies the robust principles of theory. Loam, our new algorithm for forward-error correction, is the solution to all of these challenges.

Keywords: software engineering, architectures, introspective models, operating systems

Procedia PDF Downloads 525
5692 The Effectiveness of Warm-Water Footbath on Fatigue in Cancer Patient Undergoing Chemotherapy

Authors: Yu-Wen Lin, Li-Ni Liu

Abstract:

Introduction: Fatigue is the most common symptom experienced by cancer patients undergoing chemotherapy. Patients receiving anticancer therapies develop fatigue in a higher proportion than patients who do not receive anticancer therapies. Fatigue has significant impacts on quality of life, daily activities, mood status, and social behaviors. A warm-water footbath (WWF) at 41℃ promotes circulation and removes metabolites, improving sleep and relieving fatigue. The aim of this study was to determine the effectiveness of WWF in relieving fatigue in cancer patients undergoing chemotherapy. Materials and Methods: This was a single-center, prospective, quasi-experimental study conducted in an oncology ward in Taiwan. Participants were assigned by purposive sampling to the WWF (experimental) group or the standard care (control) group. In the WWF group, participants soaked their feet in 42-43℃ water for 15 minutes on six consecutive days, starting one day before chemotherapy. Fatigue level was evaluated with the Taiwanese version of the Brief Fatigue Inventory (BFI-T), completed on eight consecutive days of the study. The primary outcome was the comparison of BFI-T scores between the WWF group and the standard care group. Results: Sixty participants were enrolled, thirty in each group, and the two groups had comparable characteristics. The BFI-T scores of both groups increased with the days of chemotherapy and peaked on day 4. Scores decreased from day 5 onward and were significantly lower in the WWF group on day 5 than in the standard care group (4.17 vs. 5.7, P < .05). At the end of the study, fatigue at its worst was significantly lower in the WWF group (2.33 vs. 4.37, P < .001). No adverse events were reported in this study.
Conclusion: WWF is an easy, safe, non-invasive, and relatively inexpensive nursing intervention for relieving fatigue in cancer patients undergoing chemotherapy. In summary, this study shows that WWF is a simple complementary care method and that it is effective in improving and relieving fatigue in a short time. Relieving fatigue is one way to enhance quality of life, which is important for cancer patients undergoing chemotherapy. Larger prospective randomized controlled trials examining the long-term effectiveness and outcomes of WWF should be performed to confirm these findings.

Keywords: chemotherapy, warm-water footbath, fatigue, Taiwanese version of the brief fatigue inventory

Procedia PDF Downloads 133
5691 The Per Capita Income, Energy Production and Environmental Degradation: A Comprehensive Assessment of the Existence of the Environmental Kuznets Curve Hypothesis in Bangladesh

Authors: Ashique Mahmud, MD. Ataul Gani Osmani, Shoria Sharmin

Abstract:

In the first quarter of the twenty-first century, the most substantial global concern is environmental contamination, and it has gained the prioritization of both the national and international community. Keeping this crucial fact in mind, this study applied different statistical and econometric methods to identify whether the gross national income of the country has a significant impact on electricity production from nonrenewable sources and on different air pollutants such as carbon dioxide, nitrous oxide, and methane emissions. Moreover, the primary objective of this research was to analyze whether the environmental Kuznets curve hypothesis holds for the examined variables. After analyzing different statistical properties of the variables, the study concludes that the environmental Kuznets curve hypothesis holds for gross national income and carbon dioxide emissions in Bangladesh in the short run as well as the long run. This conclusion is based on the findings of ordinary least squares estimations, ARDL bound tests, short-run causality analysis, the error correction model, and the other pre-diagnostic and post-diagnostic tests employed in the structural model. The findings further suggest that the trajectory of gross national income and carbon dioxide emissions is in its initial stage of development and that emissions will increase up to an optimal peak; the compositional effect will then force emissions to decrease, and environmental quality will be restored in the long run.
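The inverted-U relationship predicted by the EKC hypothesis is commonly tested with a quadratic regression of emissions on income. The sketch below fits one by ordinary least squares on synthetic data (the coefficients are hypothetical, not estimates for Bangladesh) and recovers the turning point as -b1/(2*b2):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic inverted-U data: emissions rise with income, peak, then fall.
income = np.linspace(1.0, 10.0, 200)
true_b0, true_b1, true_b2 = 1.0, 2.0, -0.2      # b2 < 0 gives the inverted U
emissions = true_b0 + true_b1 * income + true_b2 * income**2
emissions += rng.normal(0.0, 0.05, size=income.size)

# OLS fit of emissions on income and income^2.
X = np.column_stack([np.ones_like(income), income, income**2])
b0, b1, b2 = np.linalg.lstsq(X, emissions, rcond=None)[0]

# Income level at which emissions peak (the EKC turning point).
turning_point = -b1 / (2.0 * b2)
```

A significantly negative quadratic coefficient, with the turning point inside (or beyond) the observed income range, is the usual evidence for the EKC shape; the ARDL framework of the paper additionally separates short-run and long-run effects.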

Keywords: environmental Kuznets curve hypothesis, carbon dioxide emission in Bangladesh, gross national income in Bangladesh, autoregressive distributed lag model, granger causality, error correction model

Procedia PDF Downloads 138
5690 Lamb Waves Wireless Communication in Healthy Plates Using Coherent Demodulation

Authors: Rudy Bahouth, Farouk Benmeddour, Emmanuel Moulin, Jamal Assaad

Abstract:

Guided ultrasonic waves are used in Non-Destructive Testing (NDT) and Structural Health Monitoring (SHM) for inspection and damage detection. Recently, wireless data transmission using ultrasonic waves in solid metallic channels has gained popularity in industrial applications such as nuclear, aerospace, and smart vehicles. The idea is to find a good substitute for electromagnetic waves, since they are highly attenuated near metallic components due to Faraday shielding. The proposed solution is to use guided ultrasonic waves such as Lamb waves as an information carrier because of their ability to propagate over long distances. In addition, valuable information about the health of the structure can be extracted simultaneously. In this work, the reliable frequency bandwidth for communication is first extracted experimentally from dispersion curves. An experimental platform for wireless communication using Lamb waves is then described and built. Next, a coherent demodulation algorithm used in telecommunications is tested with Amplitude Shift Keying (ASK), On-Off Keying (OOK), and Binary Phase Shift Keying (BPSK) modulation. Signal-processing parameters such as the threshold choice, the number of cycles per bit, and the bit rate are optimized. Experimental results are compared on the basis of the average Bit Error Rate. The results show high sensitivity to threshold selection for the ASK and OOK techniques, resulting in a bit-rate decrease; the BPSK technique shows the highest stability and data rate among all tested modulation techniques.
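A minimal sketch of coherent BPSK demodulation of the kind described, with hypothetical carrier and sampling parameters: each bit flips the carrier phase, and the receiver correlates each bit interval against a phase-aligned local reference and thresholds the correlation at zero (which is why BPSK is less threshold-sensitive than ASK/OOK).

```python
import numpy as np

fs, fc = 48000, 4000              # sample rate and carrier frequency (Hz)
cycles_per_bit = 8                # one of the tuned signal-processing parameters
spb = int(fs * cycles_per_bit / fc)   # samples per bit

def bpsk_modulate(bits):
    t = np.arange(spb) / fs
    carrier = np.cos(2 * np.pi * fc * t)
    symbols = 2 * np.asarray(bits) - 1        # 0/1 -> -1/+1 phase
    return np.concatenate([s * carrier for s in symbols])

def bpsk_demodulate(signal):
    t = np.arange(spb) / fs
    ref = np.cos(2 * np.pi * fc * t)          # coherent local reference
    bits = []
    for k in range(len(signal) // spb):
        chunk = signal[k * spb:(k + 1) * spb]
        bits.append(1 if np.dot(chunk, ref) > 0 else 0)   # threshold at zero
    return bits

bits = [1, 0, 1, 1, 0, 0, 1, 0]
noise = 0.3 * np.random.default_rng(1).normal(size=len(bits) * spb)
decoded = bpsk_demodulate(bpsk_modulate(bits) + noise)
ber = sum(b != d for b, d in zip(bits, decoded)) / len(bits)
```

At this signal-to-noise ratio the correlation margin is large and the bit error rate is zero; increasing the noise or shortening the bit interval degrades it.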

Keywords: lamb waves communication, wireless communication, coherent demodulation, bit error rate

Procedia PDF Downloads 236
5689 The Determination of Operating Reserve in Small Power Systems Based on Reliability Criteria

Authors: H. Falsafi Falsafizadeh, R. Zeinali Zeinali

Abstract:

This paper focuses on determining the total Operating Reserve (OR) level, consisting of spinning and non-spinning reserves, in two small real power systems, in such a way that the system reliability indicator complies with typical industry standards. For this purpose, the standard used by the North American Electric Reliability Corporation (NERC), i.e., 1 day of outage in 10 years, or 0.1 days/year, is applied. System operation for these systems was simulated with PLEXOS, an industry-standard production simulation tool in this field, to determine the total operating reserve level. In this paper, the operating reserve that meets an annual Loss of Load Expectation (LOLE) of approximately 0.1 days per year is determined for the study year. This reserve is the minimum amount of reserve required in a power system and is generally defined as a percentage of the annual peak.
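The LOLE criterion can be illustrated with a toy capacity-outage enumeration (the unit capacities, forced outage rates, and load below are hypothetical; the study itself used PLEXOS): LOLE is the sum over the days of the year of the probability that available capacity falls short of the daily peak.

```python
from itertools import product

# Hypothetical small system: (capacity in MW, forced outage rate).
units = [(100, 0.05), (100, 0.05), (50, 0.02)]

# Daily peak loads for one year (flat 180 MW here, for simplicity).
daily_peaks = [180.0] * 365

def loss_of_load_probability(load):
    """P(available capacity < load), enumerating all unit up/down states."""
    p_total = 0.0
    for states in product([0, 1], repeat=len(units)):
        p, cap = 1.0, 0.0
        for (c, fo), up in zip(units, states):
            p *= (1 - fo) if up else fo
            cap += c if up else 0.0
        if cap < load:
            p_total += p
    return p_total

# LOLE in days/year: sum of the daily loss-of-load probabilities.
lole = sum(loss_of_load_probability(peak) for peak in daily_peaks)
```

Here a loss of load occurs whenever either 100 MW unit is down, so LOLE = 365 * (1 - 0.95^2) ≈ 35.6 days/year, far above the 0.1 days/year target; reserve (extra capacity) would be added until the criterion is met.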

Keywords: frequency control, LOLE, operating reserve, system reliability

Procedia PDF Downloads 329
5688 Reproductive Biology and Lipid Content of Albacore Tuna (Thunnus alalunga) in the Western Indian Ocean

Authors: Zahirah Dhurmeea, Iker Zudaire, Heidi Pethybridge, Emmanuel Chassot, Maria Cedras, Natacha Nikolic, Jerome Bourjea, Wendy West, Chandani Appadoo, Nathalie Bodin

Abstract:

Scientific advice on the status of fish stocks relies on indicators that are based on strong assumptions about biological parameters such as condition, maturity, and fecundity. Currently, information on the biology of albacore tuna, Thunnus alalunga, in the Indian Ocean is scarce. Consequently, many parameters used in stock assessment models for Indian Ocean albacore originate largely from other studied stocks or species of tuna. Inclusion of incorrect biological data in stock assessment models would lead to inappropriate estimates of the stock status used by fisheries managers to establish future catch allowances. The reproductive biology of albacore tuna in the western Indian Ocean was examined through analysis of the sex ratio, spawning season, length-at-maturity (L50), spawning frequency, fecundity, and fish condition. In addition, the total lipid content (TL) and lipid class composition in the gonads, liver, and muscle tissues of female albacore during the reproductive cycle were investigated. A total of 923 female and 867 male albacore were sampled from 2013 to 2015. A bias in the sex ratio was found in favour of females with fork length (LF) < 100 cm. Using histological analyses and the gonadosomatic index, spawning was found to occur between 10°S and 30°S, mainly to the east of Madagascar, from October to January. Large females contributed more to reproduction through their longer spawning period compared with small individuals. The L50 (mean ± standard error) of female albacore was estimated at 85.3 ± 0.7 cm LF at the vitellogenic-3 oocyte stage maturity threshold. Albacore spawn on average every 2.2 days within the spawning region during the spawning months from November to January. Batch fecundity varied between 0.26 and 2.09 million eggs, and the relative batch fecundity (mean ± standard deviation) was estimated at 53.4 ± 23.2 oocytes g-1 of somatic-gutted weight. Depending on the maturity stage, TL in the ovaries ranged from 7.5 to 577.8 mg g-1 of wet weight (ww), with different proportions of phospholipids (PL), wax esters (WE), triacylglycerols (TAG), and sterols (ST). The highest TL was observed in immature (mostly TAG and PL) and spawning-capable ovaries (mostly PL, WE, and TAG). Liver TL varied from 21.1 to 294.8 mg g-1 (ww), the liver acting as an energy store (mainly TAG and PL) prior to reproduction, when its lowest TL was observed. Muscle TL varied from 2.0 to 71.7 mg g-1 (ww) in mature females without a clear pattern between maturity stages, although higher values of up to 117.3 mg g-1 (ww) were found in immature females. The TL results suggest that albacore can be viewed predominantly as a capital breeder, relying mostly on lipids stored before the onset of reproduction, with little additional energy derived from feeding. This study is the first to provide new information on the reproductive development and classification of albacore in the western Indian Ocean. The reproductive parameters will reduce uncertainty in current stock assessment models, which will eventually promote the sustainability of the fishery.
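The length-at-maturity estimate can be read as the midpoint of a logistic maturity ogive. A minimal sketch using the reported L50 of 85.3 cm, with a hypothetical slope parameter (the abstract does not report one):

```python
import math

def proportion_mature(length_cm, l50=85.3, slope=0.25):
    """Logistic maturity ogive: the expected fraction of females mature
    at a given fork length.  l50 comes from the study; the slope is a
    hypothetical value for illustration."""
    return 1.0 / (1.0 + math.exp(-slope * (length_cm - l50)))
```

By construction, exactly half of the females are mature at L50, and the proportion rises toward one for larger fish.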

Keywords: condition, size-at-maturity, spawning behaviour, temperate tuna, total lipid content

Procedia PDF Downloads 248
5687 Correlation between Fetal Umbilical Cord pH and the Day, the Time and the Team Hand over Times: An Analysis of 6929 Deliveries of the Ulm University Hospital

Authors: Sabine Pau, Sophia Volz, Emanuel Bauer, Amelie De Gregorio, Frank Reister, Wolfgang Janni, Florian Ebner

Abstract:

Purpose: The umbilical cord pH is a well-evaluated predictor of neonatal outcome. This study correlates neonatal umbilical cord pH with the weekday of delivery, the time of birth, and the staff handover times (midwives and doctors). Material and Methods: This retrospective study included all deliveries over a 20-year period (1994-2014) at our primary obstetric center. All deliveries with a newborn cord pH under 7.20 were included in this analysis (6929 of 48974 deliveries, 14.4%). Further subgroups were formed according to pH (< 7.05; 7.05-7.09; 7.10-7.14; 7.15-7.19). The data were first separated into daytime and nighttime deliveries (8 am-8 pm / 8 pm-8 am). Finally, handover times were defined as 6-6.30 am, 2-2.30 pm, and 10-10.30 pm for midwives, and for doctors as 8-8.30 am and 4-4.30 pm (Monday-Thursday), 2-2.30 pm (Friday), and 9-9.30 am (weekend). Routinely, a shift consists of at least three doctors as well as three midwives. Results: During the last 20 years, 6929 neonates were born with an umbilical cord pH < 7.20 (< 7.05: 7.1%; 7.05-7.09: 10.9%; 7.10-7.14: 30.2%; 7.15-7.19: 51.8%). There was no significant difference between night and day delivery (p = 0.408), delivery on different weekdays (p = 0.253), delivery between Monday to Thursday, Friday, and the weekend (p = 0.496), or delivery during the handover times of the doctors or the midwives (p = 0.221). Even the standard deviations showed no differences between the groups. Conclusion: Despite an increased workload over the last 20 years, the standard of care remains high, even during handover times and night shifts. This applies to midwives and doctors alike. As the neonatal outcome depends on various factors, further studies are necessary to take more factors influencing fetal outcome into consideration. In order to maintain this high standard of care, adaptation to workload and changing conditions is necessary.

Keywords: delivery, fetal umbilical cord pH, day time, hand over times

Procedia PDF Downloads 301
5686 Human Vibrotactile Discrimination Thresholds for Simultaneous and Sequential Stimuli

Authors: Joanna Maj

Abstract:

Body-machine interfaces (BMIs) afford users a non-invasive way to coordinate movement. Vibrotactile stimulation has been incorporated into BMIs to provide real-time feedback and guide movement control, benefiting patients with cognitive deficits, such as stroke survivors. To advance research in this area, we examined vibration discrimination thresholds at four body locations to determine suitable application sites for future multi-channel BMIs that use vibration cues to guide movement planning and control. Twelve healthy adults had a pair of small vibrators (tactors) affixed to the skin at each location: forearm, shoulders, torso, and knee. A "standard" stimulus (186 Hz; 750 ms) and "probe" stimuli (11 levels ranging from 100 Hz to 235 Hz; 750 ms) were delivered. Probe and standard stimulus pairs could occur sequentially or simultaneously (timing). Participants verbally indicated which stimulus felt more intense. Stimulus order was counterbalanced across tactors and body locations. The probabilities that probe stimuli felt more intense than the standard stimulus were computed and fit with a cumulative Gaussian function; the discrimination threshold was defined as one standard deviation of the underlying distribution. Threshold magnitudes depended on stimulus timing and location. Discrimination thresholds were better for stimuli applied sequentially than simultaneously at the torso as well as the knee. Thresholds were small (better) and relatively insensitive to timing differences for vibrations applied at the shoulder. BMI applications requiring multiple channels of simultaneous vibrotactile stimulation should therefore consider the shoulder as a deployment site for a vibrotactile BMI interface.
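The threshold definition can be made concrete with the cumulative-Gaussian psychometric function: the probability of judging the probe more intense is the Gaussian CDF of the frequency difference, and the discrimination threshold is one standard deviation of that underlying distribution. The sigma below is a hypothetical value for illustration:

```python
import math

def p_probe_feels_stronger(f_probe, f_standard=186.0, sigma=15.0):
    """Cumulative-Gaussian psychometric function: probability that the
    probe is judged more intense than the 186 Hz standard.  sigma (one
    SD of the underlying distribution, i.e. the discrimination
    threshold) is hypothetical here."""
    z = (f_probe - f_standard) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

At the standard frequency the judgment is at chance (0.5), and one threshold (one sigma) above it the probe is judged stronger about 84% of the time; fitting mu and sigma to the observed proportions recovers the threshold.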

Keywords: electromyography, electromyogram, neuromuscular disorders, biomedical instrumentation, controls engineering

Procedia PDF Downloads 56
5685 Optimal Operation of Bakhtiari and Roudbar Dam Using Differential Evolution Algorithms

Authors: Ramin Mansouri

Abstract:

Because river discharge regimes contrast with water demands, one of the best ways to use water resources is to regulate the natural flow of rivers and supply water needs by constructing dams. In the optimal utilization of reservoirs, considering multiple important goals at the same time is of very high importance. To study this, statistical data for the Bakhtiari and Roudbar dams over 46 years (1955 until 2001) are used. Initially, an appropriate objective function was specified, and the rule curve was developed using the DE algorithm. The rule-curve operation policy was then compared with the standard operation policy. The proposed method distributed the deficit over the whole year, and the lowest damage was inflicted on the system. The standard deviation of the monthly shortfall in each year was smaller with the proposed algorithm than with the other two methods. The results show that median values of the F and Cr coefficients provide the optimum situation and keep the DE algorithm from being trapped in a local optimum; the most optimal values are 0.6 for F and 0.5 for Cr. After finding the best combination of F and CR values, the effect of population size was examined. For this purpose, populations of 4, 25, 50, 100, 500, and 1000 members were studied for two generation counts (G = 50 and 100). The results indicate that generation number 200 is suitable for optimization. Runtime increases almost linearly with population size, which shows the effect of the population on the runtime of the algorithm; hence, specifying a suitable population size to obtain optimal results is very important. The standard operation policy had a better reversibility percentage but inflicted severe vulnerability on the system. In years of low rainfall, the proposed method obtained very good results compared with the other methods.
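A minimal DE/rand/1/bin sketch using the coefficient values the study found optimal (F = 0.6, CR = 0.5); the sphere function below stands in for the actual reservoir-operation objective, and all other settings are illustrative:

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=25, gens=100,
                           F=0.6, CR=0.5, seed=0):
    """Minimal DE/rand/1/bin: mutate with a scaled difference of two
    random members, binomially cross with the parent, keep the better."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True   # keep at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= fit[i]:             # greedy selection
                pop[i], fit[i] = trial, f_trial
    best = int(np.argmin(fit))
    return pop[best], fit[best]

# Demo on the sphere function (a stand-in for the reservoir objective).
x_best, f_best = differential_evolution(lambda x: float(np.sum(x**2)),
                                        [(-5, 5), (-5, 5)])
```

With mid-range F and CR the population contracts steadily toward the optimum without premature convergence, which is consistent with the paper's observation that these values avoid local-optimum trapping.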

Keywords: reservoirs, differential evolution, dam, optimal operation

Procedia PDF Downloads 67
5684 Generalized Hyperbolic Functions: Exponential-Type Quantum Interactions

Authors: Jose Juan Peña, J. Morales, J. García-Ravelo

Abstract:

In the search for potential models applied in the theoretical treatment of diatomic molecules, some models have been constructed using standard hyperbolic functions as well as the so-called q-deformed hyperbolic functions (sc q-dhf), which displace and modify the shape of the potential under study. In order to transcend the scope of hyperbolic functions, this work presents a kind of generalized q-deformed hyperbolic function (g q-dhf). By a suitable transformation through the q-deformation parameter, it is shown that these g q-dhf can be expressed in terms of their corresponding standard functions and, moreover, can be reduced to the sc q-dhf. As a useful application of the proposed approach, and considering a class of exactly solvable multi-parameter exponential-type potentials, some new q-deformed quantum interaction models are presented that can serve as interesting alternatives in quantum physics. Furthermore, because quantum potential models are conditioned on the q-dependence of the parameters that characterize the exponential-type potentials, it is shown that many specific q-deformed potentials are obtained as particular cases of the proposal.
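For concreteness, the standard q-deformed hyperbolic functions usually meant by "sc q-dhf" in this literature are defined as follows; the translation identity in the last line is the kind of suitable transformation that expresses them in terms of the ordinary hyperbolic functions:

```latex
\sinh_q(x) = \frac{e^{x} - q\,e^{-x}}{2}, \qquad
\cosh_q(x) = \frac{e^{x} + q\,e^{-x}}{2}, \qquad
\tanh_q(x) = \frac{\sinh_q(x)}{\cosh_q(x)},
```
```latex
\cosh_q^{2}(x) - \sinh_q^{2}(x) = q, \qquad
\sinh_q(x) = \sqrt{q}\,\sinh\!\left(x - \tfrac{1}{2}\ln q\right) \quad (q > 0),
```

so the deformation acts as a shift plus rescaling, and the standard hyperbolic functions are recovered at q = 1.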

Keywords: diatomic molecules, exponential-type potentials, hyperbolic functions, q-deformed potentials

Procedia PDF Downloads 172
5683 A Pilot Study to Investigate the Use of Machine Translation Post-Editing Training for Foreign Language Learning

Authors: Hong Zhang

Abstract:

The main purpose of this study is to show that machine translation (MT) post-editing (PE) training can help Chinese students learn Spanish as a second language. Our hypothesis is that they might make better use of MT by learning PE skills specific to foreign language learning. We developed PE training materials based on data collected in a previous study; the materials covered the characteristic error types of MT output and the error types that our Chinese students of Spanish could not detect in last year's experiment. This year we performed a pilot study to evaluate the effectiveness of the PE training materials and the extent to which PE training helps Chinese students of Spanish. We used screen recording to capture the sessions and noted every action taken by the students. Participants were speakers of Chinese with intermediate knowledge of Spanish, divided into two groups: Group A received PE training and Group B did not. We prepared a Chinese text for both groups; participants first translated it themselves (human translation), then translated it with Google Translate and post-edited the raw MT output. Comparing the results of the PE test, Group A identified and corrected errors faster than Group B and did especially well on omission, word order, part of speech, terminology, mistranslation, official names, and formal register. From these results, we can see that PE training can help Chinese students learn Spanish as a second language. In the future, we plan to focus on the students' difficulties during their Spanish studies and to refine the PE training materials for teaching Spanish to Chinese students with machine translation.

Keywords: machine translation, post-editing, post-editing training, Chinese, Spanish, foreign language learning

Procedia PDF Downloads 137
5682 Modeling Search-And-Rescue Operations by Autonomous Mobile Robots at Sea

Authors: B. Kriheli, E. Levner, T. C. E. Cheng, C. T. Ng

Abstract:

During the last decades, research interest in the planning, scheduling, and control of emergency response operations, especially people rescue and evacuation from the dangerous zone of marine accidents, has increased dramatically. Until the survivors (called 'targets') are found and saved, the accident may cause loss or damage whose extent depends on the location of the targets and the search duration. The problem is to efficiently search for and detect/rescue the targets as soon as possible with the help of intelligent mobile robots, so as to maximize the number of saved people and/or minimize the search cost, under restrictions on the number of people who can be saved within the allowable response time. We consider a special situation in which the autonomous mobile robots (AMRs), e.g., unmanned aerial vehicles and remote-controlled robo-ships, have no operator on board and are guided and completely controlled by on-board sensors and computer programs. We construct a mathematical model for the search process in an uncertain environment and provide a new fast algorithm for scheduling the activities of the autonomous robots during search-and-rescue missions after an accident at sea. We presume that in unknown environments the AMR's search-and-rescue activity is subject to two types of error: (i) a 'false-negative' detection error, where a target object is not discovered ('overlooked') by the AMR's sensors even though the AMR is in a close neighborhood of the target, and (ii) a 'false-positive' detection error, also known as a 'false alarm', in which a clean place or area is wrongly classified by the AMR's sensors as a correct target. As the general resource-constrained discrete search problem is NP-hard, we restrict our study to finding local-optimal strategies. A specific feature of the considered operations research problem, in comparison with the traditional Kadane-De Groot-Stone search models, is that in our model the probability of a successful search outcome depends not only on the cost/time/probability parameters assigned to each individual location but also on parameters characterizing the entire history of (unsuccessful) search before selecting the next location. We provide a fast approximation algorithm for finding the AMR route adopting a greedy search strategy in which, at each step, the on-board computer computes a current search-effectiveness value for each location in the zone and searches the location with the highest search-effectiveness value. Extensive experiments with random and real-life data provide strong evidence in favor of the suggested operations research model and the corresponding algorithm.
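The greedy strategy can be sketched as follows (the location probabilities, detection probabilities, and costs are hypothetical): pick the location that maximizes the current effectiveness score, and after an unsuccessful look, update the target-location probabilities by Bayes' rule to account for the false-negative rate before choosing the next location; this is how the search history enters the model.

```python
priors = [0.5, 0.3, 0.2]      # P(target at location i), hypothetical
p_detect = [0.6, 0.9, 0.8]    # 1 - false-negative rate per location
costs = [2.0, 1.0, 1.5]       # cost of searching each location once

def next_location(p):
    """Greedy step: maximize detection probability per unit cost."""
    scores = [p[i] * p_detect[i] / costs[i] for i in range(len(p))]
    return max(range(len(p)), key=lambda i: scores[i])

def update_after_miss(p, i):
    """Bayes update after an unsuccessful look at location i: the miss
    may be a false negative, so P(target at i) shrinks but stays > 0."""
    p_miss = 1.0 - p[i] * p_detect[i]          # P(no detection at i)
    q = [pj / p_miss for pj in p]
    q[i] = p[i] * (1.0 - p_detect[i]) / p_miss
    return q

first = next_location(priors)
posterior = update_after_miss(priors, first)
```

After a miss, probability mass flows away from the searched location toward the others, so repeated unsuccessful looks at a promising location eventually redirect the robot elsewhere.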

Keywords: disaster management, intelligent robots, scheduling algorithm, search-and-rescue at sea

Procedia PDF Downloads 161
5681 Design of the Compliant Mechanism of a Biomechanical Assistive Device for the Knee

Authors: Kevin Giraldo, Juan A. Gallego, Uriel Zapata, Fanny L. Casado

Abstract:

Compliant mechanisms are designed to deform in a controlled manner in response to external forces, utilizing the flexibility of their components to store potential elastic energy during deformation and gradually releasing it upon returning to their original form. This article explores the design of a knee orthosis intended to assist users during stand-up motion. The orthosis makes use of a compliant mechanism to balance the user’s weight, thereby minimizing the strain on leg muscles during stand-up motion. The primary function of the compliant mechanism is to store and exchange potential energy, so when coupled with the gravitational potential of the user, the total potential energy variation is minimized. The design process for the semi-rigid knee orthosis involved material selection and the development of a numerical model for the compliant mechanism seen as a spring. Geometric properties are obtained through the numerical modeling of the spring once the desired stiffness and safety factor values have been attained. Subsequently, a 3D finite element analysis was conducted. The study demonstrates a strong correlation between the maximum stress in the mathematical model (250.22 MPa) and the simulation (239.8 MPa), with a 4.16% error. Both analyses yielded comparable safety factors, 1.02 for the mathematical approach and 1.1 for the simulation, a 7.84% difference. The spring’s stiffness, calculated at 90.82 Nm/rad analytically and 85.71 Nm/rad in the simulation, exhibits a 5.62% difference. These results suggest significant potential for the proposed device in assisting patients with knee orthopedic restrictions, contributing to ongoing efforts in advancing the understanding and treatment of knee osteoarthritis.
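The quoted discrepancies can be reproduced from the reported values. A small sketch, assuming (as the figures imply) that percent error is taken relative to the analytical value:

```python
# Recomputing the model-vs-simulation discrepancies quoted in the
# abstract; the convention (error relative to the analytical value)
# is inferred from the reported percentages.
def pct_error(analytical, simulated):
    return abs(analytical - simulated) / analytical * 100

stress    = pct_error(250.22, 239.8)   # max stress, MPa  -> ~4.16 %
stiffness = pct_error(90.82, 85.71)    # stiffness, Nm/rad -> ~5.6 %
safety    = pct_error(1.02, 1.1)       # safety factors    -> ~7.84 %
```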

Keywords: biomechanics, compliant mechanisms, gonarthrosis, orthoses

Procedia PDF Downloads 21
5680 Identifying Protein-Coding and Non-Coding Regions in Transcriptomes

Authors: Angela U. Makolo

Abstract:

Protein-coding and Non-coding regions determine the biology of a sequenced transcriptome. Research advances have shown that Non-coding regions are important in disease progression and clinical diagnosis. Existing bioinformatics tools have been targeted towards Protein-coding regions alone; therefore, there are challenges associated with gaining biological insights from transcriptome sequence data. These tools are also limited to computationally intensive sequence alignment, which is inadequate and less accurate for identifying both Protein-coding and Non-coding regions. Alignment-free techniques can overcome this limitation and identify both regions. Therefore, this study was designed to develop an efficient sequence alignment-free model for identifying both Protein-coding and Non-coding regions in sequenced transcriptomes. Feature grouping and randomization procedures were applied to the input transcriptomes (37,503 data points). Successive iterations were carried out to compute the gradient vector that converged the developed Protein-coding and Non-coding Region Identifier (PNRI) model to the approximate coefficient vector. The logistic regression algorithm was used with a sigmoid activation function. A parameter vector was estimated for every sample in the 37,503 data points in order to reduce the generalization error and cost. Maximum Likelihood Estimation (MLE) was used for parameter estimation by taking the log-likelihood of six features and combining them into a summation function. Dynamic thresholding was used to classify the Protein-coding and Non-coding regions, and the Receiver Operating Characteristic (ROC) curve was determined. The generalization performance of PNRI was determined in terms of F1 score, accuracy, sensitivity, and specificity. The average generalization performance of PNRI was determined using a benchmark of multi-species organisms.
The generalization error for identifying Protein-coding and Non-coding regions decreased from 0.514 to 0.508 and to 0.378, respectively, after three iterations. The cost (difference between the predicted and the actual outcome) also decreased from 1.446 to 0.842 and to 0.718, respectively, for the first, second and third iterations. The iterations terminated at the 390th epoch, having an error of 0.036 and a cost of 0.316. The computed elements of the parameter vector that maximized the objective function were 0.043, 0.519, 0.715, 0.878, 1.157, and 2.575. The PNRI gave an ROC of 0.97, indicating an improved predictive ability. The PNRI identified both Protein-coding and Non-coding regions with an F1 score of 0.970, accuracy (0.969), sensitivity (0.966), and specificity of 0.973. Using 13 non-human multi-species model organisms, the average generalization performance of the traditional method was 74.4%, while that of the developed model was 85.2%, thereby making the developed model better in the identification of Protein-coding and Non-coding regions in transcriptomes. The developed Protein-coding and Non-coding region identifier model efficiently identified the Protein-coding and Non-coding transcriptomic regions. It could be used in genome annotation and in the analysis of transcriptomes.
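The training loop described above, logistic regression with a sigmoid activation fitted by gradient descent on the log-likelihood, can be sketched in a few lines. This is a toy version under stated assumptions: one synthetic feature plus a bias term stands in for the study's six features, and the data are not transcriptomic.

```python
# Minimal sketch of the PNRI training idea: logistic regression with a
# sigmoid activation, fitted by full-batch gradient descent on the
# negative log-likelihood. Feature values and labels are synthetic.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, lr=0.5, epochs=2000):
    w = [0.0] * len(X[0])
    for _ in range(epochs):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            # prediction error drives the log-likelihood gradient
            err = sigmoid(sum(a * b for a, b in zip(w, xi))) - yi
            for j in range(len(w)):
                grad[j] += err * xi[j]
        w = [wj - lr * gj / len(X) for wj, gj in zip(w, grad)]
    return w

# toy data: bias term + one centered feature; positive feature values
# stand in for "coding region" examples (label 1)
X = [[1.0, -0.4], [1.0, -0.3], [1.0, 0.3], [1.0, 0.4]]
y = [0, 0, 1, 1]
w = train(X, y)
pred = [1 if sigmoid(sum(a * b for a, b in zip(w, x))) > 0.5 else 0
        for x in X]
```

In the study, a dynamic threshold replaces the fixed 0.5 cut-off used here.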

Keywords: sequence alignment-free model, dynamic thresholding classification, input randomization, genome annotation

Procedia PDF Downloads 54
5679 Consequences of Transformation of Modern Monetary Policy during the Global Financial Crisis

Authors: Aleksandra Szunke

Abstract:

Monetary policy is an important pillar of the economy, directly affecting the condition of the banking sector. Depending on the strategy, it may both support the functioning of banking institutions and limit their excessively risky activities. The literature includes a large number of publications characterizing the initiatives implemented by central banks during the global financial crisis and the potential effects of the use of non-standard monetary policy instruments. However, the empirical evidence about their effects and real consequences for the financial markets is still not conclusive. Even before the escalation of instability, Bernanke, Reinhart, and Sack (2004) analyzed the effectiveness of various unconventional monetary tools in lowering long-term interest rates in the United States and Japan. The results obtained largely confirmed the effectiveness of the zero-interest-rate policy and Quantitative Easing (QE) in achieving the goal of reducing long-term interest rates. Japan, considered the precursor of the QE policy, also conducted research on the consequences of the non-standard instruments implemented to restore the financial stability of the country. Although the literature about the effectiveness of Quantitative Easing in Japan is extensive, it does not uniquely establish whether it brought permanent effects. The main aim of the study is to identify the implications of the non-standard monetary policy implemented by selected central banks (the Federal Reserve System, the Bank of England and the European Central Bank), paying particular attention to the consequences in three areas: the size of the money supply, the financial markets, and the real economy.

Keywords: consequences of modern monetary policy, quantitative easing policy, banking sector instability, global financial crisis

Procedia PDF Downloads 463
5678 Meanings and Concepts of Standardization in Systems Medicine

Authors: Imme Petersen, Wiebke Sick, Regine Kollek

Abstract:

In systems medicine, high-throughput technologies produce large amounts of data on different biological and pathological processes, including (disturbed) gene expression, metabolic pathways and signaling. The large volume of data of different types, stored in separate databases and often located at different geographical sites, has posed new challenges regarding data handling and processing. Tools based on bioinformatics have been developed to resolve the emerging problems of systematizing, standardizing and integrating the various data. However, the heterogeneity of data gathered at different levels of biological complexity is still a major challenge in data analysis. To build multilayer disease modules, large and heterogeneous data of disease-related information (e.g., genotype, phenotype, environmental factors) are correlated. Therefore, a great deal of attention in systems medicine has been paid to data standardization, primarily to retrieve and combine large, heterogeneous datasets into standardized and incorporated forms and structures. However, this data-centred concept of standardization in systems medicine is contrary to the debate on standardization in science and technology studies (STS), which rather emphasizes the dynamics, contexts and negotiations of standard operating procedures. Based on empirical work on research consortia in Germany that explore the molecular profiles of diseases to establish systems medical approaches in the clinic, we trace how standardized data are processed and shaped by bioinformatics tools, how scientists using such data in research perceive such standard operating procedures, and what consequences for knowledge production (e.g., modeling) arise from them. Hence, different concepts and meanings of standardization are explored to get a deeper insight into standard operating procedures, not only in systems medicine but also beyond.

Keywords: data, science and technology studies (STS), standardization, systems medicine

Procedia PDF Downloads 329
5677 An Investigation of the Use of Visible Spectrophotometric Analysis of Lead in an Herbal Tea Supplement

Authors: Salve Alessandria Alcantara, John Armand E. Aquino, Ma. Veronica Aranda, Nikki Francine Balde, Angeli Therese F. Cruz, Elise Danielle Garcia, Antonie Kyna Lim, Divina Gracia Lucero, Nikolai Thadeus Mappatao, Maylan N. Ocat, Jamille Dyanne L. Pajarillo, Jane Mierial A. Pesigan, Grace Kristin Viva, Jasmine Arielle C. Yap, Kathleen Michelle T. Yu, Joanna J. Orejola, Joanna V. Toralba

Abstract:

Lead is a neurotoxic metallic element that slowly accumulates in bones and tissues, especially if present in products taken on a regular basis, such as herbal tea supplements. Although sensitive analytical instruments are already available, the USP limit test for lead is still widely used. However, because of its serious shortcomings, Lang Lang and his colleagues developed a spectrophotometric method for the determination of lead in all types of samples. This method was the one adapted in this study. The procedure performed was divided into three parts: digestion, extraction and analysis. For digestion, HNO3 and CH3COOH were used. Afterwards, masking agents and 0.003% and 0.001% dithizone in CHCl3 were added and used for the extraction. For the analysis, the standard addition method and colorimetry were performed. This was done in triplicate under two conditions. The first condition, using a 25 µg/mL standard, resulted in very low absorbances with an r2 of 0.551. This led to the use of a higher concentration, 1 mg/mL, for condition 2. Precipitation of lead cyanide was observed, and the absorbance readings were relatively higher but remained between 0.15 and 0.25, resulting in a very low r2 of 0.429. LOQ and LOD were not computed due to the limitations of the Milton-Roy spectrophotometer. The method performed has a shorter digestion time and uses fewer but more accessible reagents. However, the optimum ratio of the dithizone-lead complex must be observed in order to obtain reliable results while exploring other concentrations of standards.
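The standard addition method mentioned above quantifies the analyte by regressing the measured signal on the amount of standard added and extrapolating to the x-axis intercept. A minimal sketch with synthetic data points (not the study's readings):

```python
# Standard-addition calculation: fit absorbance vs. amount of standard
# added by least squares; the sample concentration (in the same units
# as the added standard) is the magnitude of the x-axis intercept,
# i.e. intercept/slope. Data below are synthetic.
def standard_addition(added, absorbance):
    n = len(added)
    mx = sum(added) / n
    my = sum(absorbance) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(added, absorbance))
             / sum((x - mx) ** 2 for x in added))
    intercept = my - slope * mx
    return intercept / slope  # sample concentration

# synthetic linear data with a true sample concentration of 2.0
conc = standard_addition([0, 1, 2, 3], [0.10, 0.15, 0.20, 0.25])
```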

Keywords: herbal tea supplement, lead-dithizone complex, standard addition, visible spectroscopy

Procedia PDF Downloads 374
5676 Markov Switching of Conditional Variance

Authors: Josip Arneric, Blanka Skrabic Peric

Abstract:

Forecasting of volatility, i.e. of return fluctuations, has been a topic of interest to portfolio managers, option traders and market makers aiming at higher profits or less risky positions. Based on the fact that volatility is time-varying in high-frequency data and that periods of high volatility tend to cluster, the most commonly used models are GARCH-type models. As standard GARCH models show high volatility persistence, i.e. integrated behaviour of the conditional variance, it is difficult to predict volatility using standard GARCH models. Due to the practical limitations of these models, different approaches based on Markov switching models have been proposed in the literature. In such situations, models in which the parameters are allowed to change over time are more appropriate because they allow some part of the model to depend on the state of the economy. The empirical analysis demonstrates that the Markov switching GARCH model resolves the problem of excessive persistence and outperforms uni-regime GARCH models in forecasting volatility for the selected emerging markets.
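The regime-switching idea can be illustrated with a toy simulation: the conditional variance takes one of two state-dependent values, and a Markov chain governs the transitions between the calm and turbulent regimes. The parameter values below are illustrative, not estimates from the markets in the study.

```python
# Toy Markov-switching variance process: returns are drawn from a
# zero-mean Gaussian whose variance depends on a two-state Markov
# chain (state 0 = calm, state 1 = turbulent). Parameters are
# illustrative only.
import random

def simulate(n, p_stay=0.95, seed=1):
    random.seed(seed)
    sigma2 = {0: 0.5, 1: 4.0}   # state-dependent conditional variance
    state, returns, states = 0, [], []
    for _ in range(n):
        if random.random() > p_stay:   # leave the state with prob 1 - p_stay
            state = 1 - state
        returns.append(random.gauss(0.0, sigma2[state] ** 0.5))
        states.append(state)
    return returns, states

r, s = simulate(1000)
```

A full Markov-switching GARCH model would additionally let the variance evolve within each regime; this sketch shows only the clustering effect of the regime switches.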

Keywords: emerging markets, Markov switching, GARCH model, transition probabilities

Procedia PDF Downloads 446
5675 Different Cognitive Processes in Selecting Spatial Demonstratives: A Cross-Linguistic Experimental Survey

Authors: Yusuke Sugaya

Abstract:

Our research conducts a cross-linguistic experimental investigation into the cognitive processes involved in the distance judgment necessary for selecting demonstratives in deictic usage. Speakers may take the addressee's judgment into account or apply certain criteria for distance judgment when they produce demonstratives. While it can be assumed that there are linguistic and cultural differences, it remains unclear how these differences manifest across languages. This research conducted online experiments involving speakers of six languages—Japanese, Spanish, Irish, English, Italian, and French—in which a wide variety of drawings were presented on a screen, varying conditions from three perspectives: addressee, comparisons, and standard. The results of the experiments revealed various distinct features associated with demonstratives in each language, highlighting differences from a comparative standpoint. For one thing, a specific reference point (i.e., the standard) influenced the selection in Japanese and Spanish, whereas in English and Italian the influence of competitors was relatively stronger.

Keywords: demonstratives, cross-linguistic experiment, distance judgment, social cognition

Procedia PDF Downloads 29
5674 Design of Data Management Software System Supporting Rendezvous and Docking with Various Spaceships

Authors: Zhan Panpan, Lu Lan, Sun Yong, He Xiongwen, Yan Dong, Gu Ming

Abstract:

In the space lab data management system, the function of the two-spacecraft docking network, i.e., communication with and control of docking targets for various spaceships, is realized. In order to solve the problem of the complex data communication modes between the space lab and various spaceships, and the problem of poor software reuse caused by non-standard protocols, a data management software system supporting rendezvous and docking with various spaceships has been designed. The software system is based on the CCSDS Spacecraft Onboard Interface Services (SOIS). It consists of the Software Driver Layer, the Middleware Layer and the Application Layer. The Software Driver Layer hides the various device interfaces behind a uniform device driver framework. The Middleware Layer is divided into three sublayers: the transfer layer, the application support layer and the system business layer. Communication over the space lab platform bus and the docking bus is realized in the transfer layer. The application support layer provides inter-task communication and unified time management for the software system. The data management software functions are realized in the system business layer, which contains the telemetry management service, the telecontrol management service, the flight status management service, the rendezvous and docking management service, and so on. The Application Layer accomplishes the tasks defined for the space lab data management system using the standard interfaces supplied by the Middleware Layer. On the basis of the layered architecture, the rendezvous and docking tasks and the rendezvous and docking management service are independent in the software system. The rendezvous and docking tasks are activated and executed according to the different spaceships. In this way, the communication management functions in the independent flight mode, the combination mode with the manned spaceship, and the combination mode with the cargo spaceship are achieved separately.
The software architecture defines standard application interfaces for the services in each layer. Different requirements of the space lab can be supported through the standard services of each layer, and the scalability and flexibility of the data management software are thereby effectively improved. The system can also dynamically expand the number of visiting spaceships and adapt to their protocols. The software system has been applied in the data management subsystem of the space lab and has been verified during the flight of the space lab. The research results of this paper can provide a basis for the design of the data management system of the future space station.
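The layering described above can be mirrored in a short sketch. The flight software is certainly not written in Python, and the class and method names below are our own; the point is only the structural idea that each layer talks exclusively to the standard interface of the layer beneath it.

```python
# Illustrative three-layer structure: driver hides devices, middleware
# builds services on the driver, the application uses only middleware
# services. Names and payloads are hypothetical.
class DriverLayer:
    """Hides concrete device interfaces behind one uniform API."""
    def read(self, device):
        return f"raw[{device}]"

class Middleware:
    """Stands in for the transfer / application-support / system-
    business sublayers; exposes standard services to the layer above."""
    def __init__(self, driver):
        self.driver = driver
    def telemetry(self, device):
        # a system-business-style service built on the driver layer
        return {"source": device, "frame": self.driver.read(device)}

class Application:
    """Uses only the standard middleware interface, so docking tasks
    for different spaceships can be swapped without touching the
    layers below."""
    def __init__(self, mw):
        self.mw = mw
    def docking_status(self):
        return self.mw.telemetry("docking-bus")

app = Application(Middleware(DriverLayer()))
status = app.docking_status()
```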

Keywords: space lab, rendezvous and docking, data management, software system

Procedia PDF Downloads 358
5673 Changes in Geospatial Structure of Households in the Czech Republic: Findings from Population and Housing Census

Authors: Jaroslav Kraus

Abstract:

Spatial information about demographic processes is a standard part of statistical outputs in the Czech Republic. That was also the case of the Population and Housing Census held in 2011, which is the starting point for a follow-up study devoted to two basic types of households: single-person households and households of one complete family. Together they account for more than 80 percent of all households, but their share and spatial structure have been changing over the long term. The increase in single-person households results from the long-term decrease in fertility and increase in divorce, but also from the possibility of living separately. There are regions in the Czech Republic with traditional demographic behavior, and regions, like the capital Prague and some others, with changing patterns. The population census is based, according to international standards, on the concept of the currently living population. Three types of geospatial approaches will be used for the analysis: (i) measures of geographic distribution; (ii) mapping clusters to identify the locations of statistically significant hot spots, cold spots, spatial outliers, and similar features; and (iii) an analyzing-pattern approach as a starting point for more in-depth analyses (geospatial regression) in the future. For analysis of this type of data, the numbers of households by type are treated as distinct objects. All events in a meaningfully delimited study region (e.g., municipalities) will be included in the analysis. Commonly produced measures of central tendency and spread will include identification of the location of the center of the point set (at NUTS3 level) and identification of the median center; standard distance, weighted standard distance and standard deviational ellipses will also be used.
Identifying that clustering exists in census household datasets does not by itself provide a detailed picture of the nature and pattern of clustering, but it is helpful to apply simple hot-spot (and cold-spot) identification techniques to such datasets. Once the spatial structure of households is determined, a particular measure of autocorrelation can be constructed by defining a way of measuring the difference between location attribute values. The most widely used measure is Moran's I, which will be applied to municipal units for which the numerical ratio is calculated. Local statistics arise naturally out of any of the methods for measuring spatial autocorrelation and can be used to develop localized variants of almost any standard summary statistic. Local Moran's I will give an indication of household data homogeneity and diversity at the municipal level.
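Global Moran's I, mentioned above, can be computed directly from a value vector and a spatial weight matrix. A minimal sketch with a toy four-node chain (real analyses would use contiguity or distance-based weights for the Czech municipalities):

```python
# Global Moran's I: (n / S0) * sum_ij w_ij (x_i - m)(x_j - m) /
# sum_i (x_i - m)^2, where S0 is the sum of all weights. Inputs here
# are toy values, not census data.
def morans_i(values, W):
    n = len(values)
    mean = sum(values) / n
    num = sum(W[i][j] * (values[i] - mean) * (values[j] - mean)
              for i in range(n) for j in range(n))
    den = sum((v - mean) ** 2 for v in values)
    S0 = sum(sum(row) for row in W)
    return (n / S0) * (num / den)

# clustered toy pattern on a 4-node chain: low-low-high-high values
# give a clearly positive I (positive spatial autocorrelation)
vals = [1.0, 1.0, 5.0, 5.0]
W = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
I = morans_i(vals, W)
```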

Keywords: census, geo-demography, households, the Czech Republic

Procedia PDF Downloads 90
5672 Appropriate Depth of Needle Insertion during Rhomboid Major Trigger Point Block

Authors: Seongho Jang

Abstract:

Objective: To investigate the appropriate depth of needle insertion during trigger point injection into the rhomboid major muscle. Methods: Sixty-two patients who visited our department with shoulder or upper back pain participated in this study. The distance between the skin and the rhomboid major muscle (SM) and the distance between the skin and the rib (SB) were measured using ultrasonography. The subjects were divided into 3 groups according to BMI: less than 23 kg/m2 (underweight or normal group); 23 kg/m2 or more to less than 25 kg/m2 (overweight group); and 25 kg/m2 or more (obese group). The mean ± standard deviation (SD) of SM and SB of each group were calculated. The range between the mean + 1 SD of SM and the mean - 1 SD of SB was defined as the safe margin. Results: The underweight or normal group’s SM, SB, and safe margin were 1.2±0.2, 2.1±0.4, and 1.4 to 1.7 cm, respectively. The overweight group’s SM and SB were 1.4±0.2 and 2.4±0.9 cm, respectively; the safe margin could not be calculated for this group. The obese group’s SM, SB, and safe margin were 1.8±0.3, 2.7±0.5, and 2.1 to 2.2 cm, respectively. Conclusion: This study will help to set a standard depth for safe needle insertion into the rhomboid major muscle in an effective manner without causing complications.
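The safe-margin rule above is a simple interval computation, restated here in code with the abstract's own values (in cm); note how the overweight group's interval is empty, which is why no margin could be reported for it.

```python
# Safe margin = [mean(SM) + 1 SD, mean(SB) - 1 SD]; it exists only
# when the lower bound is below the upper bound. Values from the
# abstract, in cm.
def safe_margin(sm_mean, sm_sd, sb_mean, sb_sd):
    lo, hi = sm_mean + sm_sd, sb_mean - sb_sd
    return (lo, hi) if lo < hi else None

normal = safe_margin(1.2, 0.2, 2.1, 0.4)   # (1.4, 1.7)
over   = safe_margin(1.4, 0.2, 2.4, 0.9)   # None: 1.6 cm > 1.5 cm
obese  = safe_margin(1.8, 0.3, 2.7, 0.5)   # (2.1, 2.2)
```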

Keywords: pneumothorax, rhomboid major muscle, trigger point injection, ultrasound

Procedia PDF Downloads 281
5671 ¹⁸F-FDG PET/CT Impact on Staging of Pancreatic Cancer

Authors: Jiri Kysucan, Dusan Klos, Katherine Vomackova, Pavel Koranda, Martin Lovecek, Cestmir Neoral, Roman Havlik

Abstract:

Aim: The prognosis of patients with pancreatic cancer is poor. The median of survival after establishing diagnosis is 3-11 months without surgical treatment, 13-20 months with surgical treatment depending on the disease stage, 5-year survival is less than 5%. Radical surgical resection remains the only hope of curing the disease. Early diagnosis with valid establishment of tumor resectability is, therefore, the most important aim for patients with pancreatic cancer. The aim of the work is to evaluate the contribution and define the role of 18F-FDG PET/CT in preoperative staging. Material and Methods: In 195 patients (103 males, 92 females; median age 66.7 years, range 32-88 years) with a suspect pancreatic lesion, as part of the standard preoperative staging, in addition to standard examination methods (ultrasonography, contrast spiral CT, endoscopic ultrasonography, endoscopic ultrasonographic biopsy), a hybrid 18F-FDG PET/CT was performed. All PET/CT findings were subsequently compared with standard staging (CT, EUS, EUS FNA), with peroperative findings and definitive histology in the operated patients as reference standards. Interpretation defined the extent of the tumor according to TNM classification. Limitations of resectability were local advancement (T4) and presence of distant metastases (M1). Results: PET/CT was performed in a total of 195 patients with a suspect pancreatic lesion. In 153 patients, pancreatic carcinoma was confirmed and of these patients, 72 were not indicated for radical surgical procedure due to local inoperability or generalization of the disease. The sensitivity of PET/CT in detecting the primary lesion was 92.2%, specificity was 90.5%. A false negative finding was seen in 12 patients, a false positive finding was seen in 4 cases, positive predictive value (PPV) 97.2%, negative predictive value (NPV) 76.0%. In evaluating regional lymph nodes, sensitivity was 51.9%, specificity 58.3%, PPV 58.3%, NPV 51.9%.
In detecting distant metastases, PET/CT reached a sensitivity of 82.8%, specificity was 97.8%, PPV 96.9%, NPV 87.0%. PET/CT found distant metastases in 12 patients, which were not detected by standard methods. In 15 patients (15.6%) with potentially radically resectable findings, the procedure was contraindicated based on PET/CT findings and the treatment strategy was changed. Conclusion: PET/CT is a highly sensitive and specific method useful in preoperative staging of pancreatic cancer. It improves the selection of patients for radical surgical procedures, who can benefit from it and decreases the number of incorrectly indicated operations.
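The primary-lesion figures above follow from a standard confusion matrix; the raw counts below are reconstructed from the abstract's numbers (153 confirmed carcinomas with 12 false negatives, 42 tumor-free patients with 4 false positives) and reproduce the reported percentages.

```python
# Sensitivity, specificity, PPV and NPV from confusion-matrix counts.
# Counts reconstructed from the abstract: 195 patients total,
# 153 carcinomas (12 missed), 42 without carcinoma (4 false alarms).
def diagnostics(tp, fn, tn, fp):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

d = diagnostics(tp=141, fn=12, tn=38, fp=4)
# -> sensitivity 92.2%, specificity 90.5%, PPV 97.2%, NPV 76.0%
```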

Keywords: cancer, PET/CT, staging, surgery

Procedia PDF Downloads 240
5670 Iterative Reconstruction Techniques as a Dose Reduction Tool in Pediatric Computed Tomography Imaging: A Phantom Study

Authors: Ajit Brindhaban

Abstract:

Background and Purpose: Computed Tomography (CT) scans have become the largest source of radiation in radiological imaging. The purpose of this study was to compare the quality of pediatric CT images reconstructed using Filtered Back Projection (FBP) with images reconstructed using different strengths of the Iterative Reconstruction (IR) technique, and to perform a feasibility study to assess the use of IR techniques as a dose reduction tool. Materials and Methods: An anthropomorphic phantom representing a 5-year-old child was scanned, in two stages, using a Siemens Somatom CT unit. In stage one, scans of the head, chest and abdomen were performed using standard protocols recommended by the scanner manufacturer. Images were reconstructed using FBP and 5 different strengths of IR. Contrast-to-Noise Ratios (CNR) were calculated from the average CT number and its standard deviation measured in regions of interest created in the lung, bone, and soft tissue regions of the phantom. The paired t-test and one-way ANOVA were used to compare the CNR of FBP images with that of IR images, at the p = 0.05 level. The lowest IR strength that produced the highest CNR was identified. In the second stage, scans of the head were performed with mA(s) values decreased in proportion to the CNR gain over the standard FBP protocol. CNR values were compared in this stage using the paired t-test at the p = 0.05 level. Results: Images reconstructed using the IR technique had higher CNR values (p < 0.01) in all regions compared to the FBP images, at all strengths of IR. The CNR increased with increasing IR strength up to strength 3 in the head and chest images; increases beyond this strength were insignificant. In the abdomen images, the CNR continued to increase up to strength 5. The results also indicated that IR techniques improve the CNR by up to a factor of 1.5. Based on the CNR values of IR images at strength 3 and the CNR values of FBP images, a reduction in mA(s) of about 20% was identified.
The images of the head acquired at 20% reduced mA(s) and reconstructed using IR at strength 3 had a similar CNR to the FBP images at standard mA(s). In the head scans of the phantom used in this study, it was demonstrated that a similar CNR can be achieved even when the mA(s) is reduced by about 20%, provided the IR technique with a strength of 3 is used for reconstruction. Conclusions: The IR technique produced better image quality at all strengths of IR in comparison to FBP. The IR technique can provide approximately 20% dose reduction in pediatric head CT while maintaining the same image quality as the FBP technique.
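The CNR metric used throughout can be computed directly from ROI statistics: the difference of mean CT numbers between two regions divided by the noise (standard deviation) of the reference region. A sketch with synthetic HU samples (not the phantom measurements):

```python
# Contrast-to-noise ratio from two regions of interest: contrast is
# the difference of mean CT numbers, noise is the sample SD of the
# background ROI. HU values below are synthetic.
import statistics

def cnr(roi, background):
    contrast = abs(statistics.mean(roi) - statistics.mean(background))
    return contrast / statistics.stdev(background)

tissue = [60, 62, 58, 61, 59]   # synthetic soft-tissue HU samples
bg     = [10, 12, 8, 11, 9]     # synthetic background HU samples
value = cnr(tissue, bg)         # contrast 50 HU over noise ~1.58 HU
```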

Keywords: filtered back projection, image quality, iterative reconstruction, pediatric computed tomography imaging

Procedia PDF Downloads 134
5669 Theory of the Optimum Signal Approximation Clarifying the Importance in the Recognition of Parallel World and Application to Secure Signal Communication with Feedback

Authors: Takuro Kida, Yuichi Kida

Abstract:

In this paper, we present the mathematical basis of a new class of algorithms that treats a historical cause of continuous discrimination in the world, as well as its solution, by introducing the new concept of a parallel world that includes an invisible set of errors as its companion. With respect to a matrix operator-filter bank in which the matrix operator-analysis-filter bank H and the matrix operator-sampling-filter bank S are given, we first introduce a detailed algorithm to derive the optimum matrix operator-synthesis-filter bank Z that simultaneously minimizes all the worst-case measures of the matrix operator-error-signals E(ω) = F(ω) − Y(ω) between the matrix operator-input-signals F(ω) and the matrix operator-output-signals Y(ω) of the matrix operator-filter bank. Further, feedback is introduced into the above approximation theory, and it is shown that introducing conversations with feedback is not automatically superior to the accumulation of existing knowledge of signal prediction. Secondly, the concept of category from the field of mathematics is applied to the above optimum signal approximation, and it is indicated that the category-based approximation theory applies to the set-theoretic consideration of human recognition. Based on this discussion, it is shown naturally why the narrow perception that tends to create isolation shows an apparent advantage in the short term and, often, why such narrow thinking becomes intimate with discriminatory action in a human group. Throughout these considerations, it is presented that, in order to abolish easy and intimate discriminatory behavior, it is important to create a parallel world of conception where we share the set of invisible error signals, including the words and the consciousness of both worlds.

Keywords: matrix filterbank, optimum signal approximation, category theory, simultaneous minimization

Procedia PDF Downloads 128
5668 Application of Particle Swarm Optimization to Thermal Sensor Placement for Smart Grid

Authors: Hung-Shuo Wu, Huan-Chieh Chiu, Xiang-Yao Zheng, Yu-Cheng Yang, Chien-Hao Wang, Jen-Cheng Wang, Chwan-Lu Tseng, Joe-Air Jiang

Abstract:

Dynamic Thermal Rating (DTR) provides crucial information by estimating the ampacity of transmission lines to improve power dispatching efficiency. To perform DTR, it is necessary to install on-line thermal sensors to monitor conductor temperature and weather variables. A simple and intuitive strategy is to allocate a thermal sensor to every span of the transmission lines, but the cost of the sensors might be too high to bear. To deal with the cost issue, a thermal sensor placement problem must be solved. This research proposes and implements a hybrid algorithm that combines proper orthogonal decomposition (POD) with particle swarm optimization (PSO). The proposed hybrid algorithm solves a multi-objective optimization problem that jointly minimizes the number of sensors and the error in conductor temperature, and the optimal sensor placement is determined simultaneously. The data of 345 kV transmission lines and the hourly weather data from the Taiwan Power Company and the Central Weather Bureau (CWB), respectively, are used by the proposed method. The simulated results indicate that the number of sensors could be reduced using the optimal placement method proposed by the study while an acceptable error in conductor temperature is achieved. This study provides power companies with a reliable reference for efficiently monitoring and managing their power grids.
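A compact PSO sketch illustrates the optimization engine behind the placement method. Particles encode candidate solutions and move toward their personal and global bests; here a simple quadratic stands in for the study's sensor-count / temperature-error objective, and the inertia and acceleration coefficients are common textbook defaults, not the study's settings.

```python
# Minimal particle swarm optimization: each particle keeps a velocity,
# a personal best, and is attracted to the swarm's global best.
# Objective and coefficients are illustrative.
import random

def pso(f, dim, n_particles=20, iters=100, seed=0):
    random.seed(seed)
    pos = [[random.uniform(-5, 5) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + cognitive + social terms
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

# minimize a 2-D sphere function as a stand-in objective
best = pso(lambda x: sum(v * v for v in x), dim=2)
```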

Keywords: dynamic thermal rating, proper orthogonal decomposition, particle swarm optimization, sensor placement, smart grid

Procedia PDF Downloads 423
5667 An Adaptive Oversampling Technique for Imbalanced Datasets

Authors: Shaukat Ali Shahee, Usha Ananthakumar

Abstract:

A data set exhibits the class imbalance problem when one class has very few examples compared to the other class; this is also referred to as between-class imbalance. Traditional classifiers fail to classify the minority class examples correctly due to their bias towards the majority class. Apart from between-class imbalance, within-class imbalance, where classes are composed of different numbers of sub-clusters and these sub-clusters contain different numbers of examples, also deteriorates the performance of the classifier. Previously, many methods have been proposed for handling the imbalanced dataset problem. These methods can be classified into four categories: data preprocessing, algorithm-based methods, cost-based methods and ensembles of classifiers. Data preprocessing techniques have shown great potential, as they attempt to improve the data distribution rather than the classifier. A data preprocessing technique handles class imbalance either by increasing the minority class examples or by decreasing the majority class examples. Decreasing the majority class examples leads to loss of information, and when the minority class has an absolute rarity, removing majority class examples is generally not recommended. Existing methods for handling class imbalance do not address both between-class imbalance and within-class imbalance simultaneously. In this paper, we propose a method that handles between-class imbalance and within-class imbalance simultaneously for binary classification problems. Removing between-class imbalance and within-class imbalance simultaneously eliminates the bias of the classifier towards bigger sub-clusters by minimizing the error domination of bigger sub-clusters in the total error. The proposed method uses model-based clustering to find the presence of sub-clusters or sub-concepts in the dataset. The number of examples oversampled among the sub-clusters is determined based on the complexity of the sub-clusters.
The method also takes into consideration the scatter of the data in the feature space and adaptively copes with unseen test data using the Lowner-John ellipsoid, increasing the accuracy of the classifier. In this study, a neural network is used, as it is a classifier in which the total error is minimized, and removing the between-class imbalance and within-class imbalance simultaneously helps the classifier give equal weight to all the sub-clusters irrespective of the classes. The proposed method is validated on 9 publicly available data sets and compared with three existing oversampling techniques that rely on the spatial location of minority class examples in the Euclidean feature space. The experimental results show the proposed method to be statistically significantly superior to the other methods in terms of various accuracy measures. Thus the proposed method can serve as a good alternative for handling various problem domains, such as credit scoring, customer churn prediction, and financial distress, that typically involve imbalanced data sets.
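The core idea of sub-cluster-aware oversampling can be sketched in miniature. This is not the authors' full method: the sub-clusters are given directly instead of being found by model-based clustering, a common target size replaces the complexity-based allocation, and plain random duplication stands in for the synthetic-example generation.

```python
# Sketch of sub-cluster-aware oversampling: grow every minority
# sub-cluster to a common size by random duplication, so that neither
# the classes nor the sub-clusters dominate the total error.
import random

def oversample(subclusters, target, seed=0):
    """subclusters: list of example lists; each list is grown by
    random duplication until it holds `target` examples."""
    random.seed(seed)
    out = []
    for cluster in subclusters:
        grown = list(cluster)
        while len(grown) < target:
            grown.append(random.choice(cluster))
        out.extend(grown)
    return out

minority = oversample(
    [[(0.1, 0.2), (0.2, 0.1)],                 # small sub-cluster
     [(0.8, 0.9), (0.9, 0.8), (0.85, 0.9)]],   # larger sub-cluster
    target=5)
```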

Keywords: classification, imbalanced dataset, Lowner-John ellipsoid, model based clustering, oversampling

Procedia PDF Downloads 405