Search results for: continuous data
26412 Continuous Wave Interference Effects on Global Positioning System Signal Quality
Authors: Fang Ye, Han Yu, Yibing Li
Abstract:
Radio interference is one of the major concerns in using the global positioning system (GPS) for civilian and military applications. Interference signals are produced not only by electronic systems but also by illegal jammers. Among the different types of interference, continuous wave (CW) interference has a particularly strong adverse impact on the quality of the received signal. In this paper, we present a more detailed analysis of CW interference effects on GPS signal quality. Based on the C/A code spectrum lines, the influence of CW interference on the acquisition performance of GPS receivers is further analysed. This influence is supported by simulation results using a GPS software receiver. Since bit error probability is the most important user parameter of GPS receivers, its mathematical expression in the presence of CW interference is also derived, and the expression is consistent with Monte Carlo simulation results. The research on CW interference provides a theoretical basis and new insights for monitoring the radio noise environment and improving the anti-jamming ability of GPS receivers.
Keywords: GPS, CW interference, acquisition performance, bit error probability, Monte Carlo
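A minimal Monte Carlo sketch of the kind of bit error probability experiment described above, assuming BPSK data bits, additive white Gaussian noise, and a single CW interferer at a fixed frequency offset (all parameter values are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n_bits = 100_000
ebn0_db, isr_db = 6.0, 3.0            # illustrative Eb/N0 and nominal interference-to-signal ratio
f_cw, fs = 0.12, 1.0                  # normalized CW frequency offset and sample rate

bits = rng.integers(0, 2, n_bits)
symbols = 2 * bits - 1.0              # BPSK mapping: 0 -> -1, 1 -> +1

noise_std = np.sqrt(1.0 / (2 * 10 ** (ebn0_db / 10)))
noise = noise_std * rng.standard_normal(n_bits)

t = np.arange(n_bits) / fs
cw = np.sqrt(10 ** (isr_db / 10)) * np.cos(2 * np.pi * f_cw * t + rng.uniform(0, 2 * np.pi))

received = symbols + noise + cw       # one sample per bit, matched-filter output model
decisions = (received > 0).astype(int)
ber = np.mean(decisions != bits)
print(f"Simulated bit error probability: {ber:.4e}")
```

Increasing the interference level or moving the CW tone closer to a strong C/A code spectral line would be the natural next experiments in this toy setup.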
Procedia PDF Downloads 259
26411 Durrmeyer Type Modification of q-Generalized Bernstein Operators
Authors: Ruchi, A. M. Acu, Purshottam N. Agrawal
Abstract:
The purpose of this paper is to introduce the Durrmeyer type modification of q-generalized Bernstein operators, which include the Bernstein polynomials in the particular case α = 0. We investigate the rate of convergence by means of the Lipschitz class and Peetre’s K-functional. Also, we define the bivariate case of the Durrmeyer type modification of q-generalized Bernstein operators and study the degree of approximation with the aid of the partial modulus of continuity and Peetre’s K-functional. Finally, we introduce the GBS (Generalized Boolean Sum) of the Durrmeyer type modification of q-generalized Bernstein operators and investigate the approximation of Bögel continuous and Bögel differentiable functions with the aid of the Lipschitz class and the mixed modulus of smoothness.
Keywords: Bögel continuous, Bögel differentiable, generalized Boolean sum, Peetre’s K-functional, Lipschitz class, mixed modulus of smoothness
Procedia PDF Downloads 213
26410 Graph-Oriented Summary for Optimized Resource Description Framework Graphs Streams Processing
Authors: Amadou Fall Dia, Maurras Ulbricht Togbe, Aliou Boly, Zakia Kazi Aoul, Elisabeth Metais
Abstract:
Existing RDF (Resource Description Framework) Stream Processing (RSP) systems allow continuous processing of RDF data issued from different application domains such as weather stations measuring phenomena, geolocation, IoT applications, drinking water distribution management, and so on. However, the processing window often expires before the entire session is finished, and RSP systems immediately delete data streams after each processed window. Such a mechanism does not allow optimized exploitation of the RDF data streams, as the most relevant and pertinent information is often not used in due time and is almost impossible to exploit for further analyses. It would be better to keep the most informative part of the data within the streams while minimizing the memory storage space. In this work, we propose an RDF graph summarization system based on explicitly and implicitly expressed needs through three main approaches: (1) an approach for user queries (SPARQL) in order to extract their needs and group them into a more global query, (2) an extension of the closeness centrality measure issued from Social Network Analysis (SNA) to determine the most informative parts of the graph, and (3) an RDF graph summarization technique combining the extracted user query needs and the extended centrality measure. Experiments and evaluations show efficient results in terms of memory storage space and the most expected approximate query results on summarized graphs compared to the source ones.
Keywords: centrality measures, RDF graphs summary, RDF graphs stream, SPARQL query
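A minimal sketch of the centrality step under simple assumptions (a tiny hand-written RDF graph, unweighted undirected edges, and the standard closeness centrality rather than the paper's extended measure):

```python
import networkx as nx
from rdflib import Graph

# Toy RDF data standing in for one stream window; predicates become untyped edges.
turtle = """
@prefix ex: <http://example.org/> .
ex:sensor1 ex:locatedIn ex:station1 .
ex:sensor1 ex:measures  ex:temperature .
ex:sensor2 ex:locatedIn ex:station1 .
ex:station1 ex:partOf   ex:network1 .
"""
rdf = Graph()
rdf.parse(data=turtle, format="turtle")

g = nx.Graph()
for s, p, o in rdf:
    g.add_edge(str(s), str(o))            # collapse each triple into an undirected edge

# Rank nodes by closeness; the most central ones are candidates to keep in the summary.
centrality = nx.closeness_centrality(g)
for node, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{score:.3f}  {node}")
```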
Procedia PDF Downloads 201
26409 Investigation of Chip Formation Characteristics during Surface Finishing of HDPE Samples
Authors: M. S. Kaiser, S. Reaz Ahmed
Abstract:
Chip formation characteristics are investigated during surface finishing of high density polyethylene (HDPE) samples using a shaper machine. Both the cutting speed and the depth of cut are varied to enable observations under various machining conditions. The generated chips are analyzed in terms of their shape, size, and deformation. Their physical appearance is also observed using a digital camera and an optical microscope. The investigation shows that continuous chips are obtained for all the cutting conditions. It is observed that cutting speed is more influential than depth of cut in causing dimensional changes of the chips. The chip curl radius is also found to increase gradually with increasing cutting speed. The length of the continuous chips always remains smaller than the job length, and the corresponding discrepancies are more prominent at lower cutting speeds. Microstructures of the chips reveal that cracks form at higher cutting speeds and depths of cut, an effect that is less significant at low depths of cut.
Keywords: HDPE, surface finishing, chip formation, deformation, roughness
Procedia PDF Downloads 145
26408 Establishing Control Chart Limits for Rounded Measurements
Authors: Ran Etgar
Abstract:
The process of rounding off measurements of continuous variables is commonly encountered. Although it usually has minor effects, it can sometimes lead to poor outcomes in statistical process control using the X̄ chart. The traditional control limits can lead to incorrect conclusions if applied carelessly. This study looks into the limitations of the classical control limits, particularly the impact of asymmetry. An approach to determining the distribution function of the measured parameter ȳ is presented, resulting in a more precise method to establish the upper and lower control limits. The proposed method, while slightly more complex than Shewhart's original idea, is still user-friendly and accurate and only requires the use of two straightforward tables.
Keywords: SPC, round-off data, control limit, rounding error
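For context, a small sketch of the classical Shewhart X̄ chart limits that the paper refines, applied to rounded measurements (synthetic data, conventional 3-sigma limits, and a crude pooled sigma estimate for illustration only, not the corrected limits proposed above):

```python
import numpy as np

rng = np.random.default_rng(1)
subgroup_size, n_subgroups = 5, 40

# Simulated continuous measurements rounded to one decimal, i.e., round-off data.
raw = rng.normal(loc=10.0, scale=0.05, size=(n_subgroups, subgroup_size))
rounded = np.round(raw, 1)

xbar = rounded.mean(axis=1)               # subgroup means plotted on the chart
grand_mean = xbar.mean()
sigma_hat = rounded.std(ddof=1)           # crude pooled estimate, for illustration

ucl = grand_mean + 3 * sigma_hat / np.sqrt(subgroup_size)
lcl = grand_mean - 3 * sigma_hat / np.sqrt(subgroup_size)
print(f"UCL={ucl:.3f}  CL={grand_mean:.3f}  LCL={lcl:.3f}")
print("out-of-control subgroups:", np.where((xbar > ucl) | (xbar < lcl))[0])
```

With coarse rounding relative to the process spread, these symmetric limits are exactly where the asymmetry problem discussed in the abstract appears.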
Procedia PDF Downloads 75
26407 Time Series Simulation by Conditional Generative Adversarial Net
Authors: Rao Fu, Jie Chen, Shutian Zeng, Yiping Zhuang, Agus Sudjianto
Abstract:
Generative Adversarial Net (GAN) has proved to be a powerful machine learning tool in image data analysis and generation. In this paper, we propose to use the Conditional Generative Adversarial Net (CGAN) to learn and simulate time series data. The conditions include both categorical and continuous variables with different auxiliary information. Our simulation studies show that CGAN has the capability to learn different types of normal and heavy-tailed distributions, as well as the dependence structures of different time series. It also has the capability to generate conditional predictive distributions consistent with the training data distributions. We also provide an in-depth discussion of the rationale behind GAN and of neural networks as hierarchical splines, to establish a clear connection with existing statistical methods of distribution generation. In practice, CGAN has a wide range of applications in market risk and counterparty risk analysis: it can be applied to learn historical data and generate scenarios for the calculation of Value-at-Risk (VaR) and Expected Shortfall (ES), and it can also predict the movement of market risk factors. We present a real data analysis, including a backtest, to demonstrate that CGAN can outperform Historical Simulation (HS), a popular method in market risk analysis for calculating VaR. CGAN can also be applied to economic time series modeling and forecasting. In this regard, we include an example of hypothetical shock analysis for economic models and the generation of potential CCAR scenarios by CGAN at the end of the paper.
Keywords: conditional generative adversarial net, market and credit risk management, neural network, time series
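A small sketch of the risk-measure step that generated scenarios would feed into, assuming a vector of simulated one-day portfolio returns (drawn here from a heavy-tailed distribution purely for illustration) and a 99% confidence level:

```python
import numpy as np

rng = np.random.default_rng(7)
# Stand-in for scenarios produced by a generator (or by historical simulation).
simulated_returns = rng.standard_t(df=4, size=50_000) * 0.01

alpha = 0.99
var = -np.quantile(simulated_returns, 1 - alpha)          # 99% Value-at-Risk (loss, positive)
es = -simulated_returns[simulated_returns <= -var].mean() # Expected Shortfall beyond VaR
print(f"VaR(99%) = {var:.4f}, ES(99%) = {es:.4f}")
```

In a backtest, the same calculation would be repeated on each day's scenario set and the realized losses compared against the reported VaR.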
Procedia PDF Downloads 143
26406 Continuous Dyeing of Graphene and Polyaniline on Textiles for Electromagnetic Interference Shielding: An Application of Intelligent Fabrics
Authors: Mourad Makhlouf, Meriem Boutamine, Hachemi Hichem, Zoubir Benmaamar, Didier Villemin
Abstract:
This study explores the use of intelligent textiles for electromagnetic shielding through the continuous dyeing of graphene and polyaniline onto cotton fabric. Graphene was obtained by recycling graphite from spent batteries, and polyaniline was obtained in situ using H2O2. Graphene and polyaniline were bottom-modified on the fiber surface to improve adhesion and achieve a uniform distribution. The study evaluated the effect of the specific gravity percentage on sheet performance and active shielding against electromagnetic interference (EMI). Results showed that fabrics dyed with graphene, polyaniline, and graphene/polyaniline demonstrated higher conductivity and EMI SE values of 9 to 16 dB in the 8 to 9 GHz range of the X-band, with potential applications in electromagnetic shielding. The use of intelligent textiles offers a sustainable and effective approach to achieving EMI shielding, with the added benefits of recycling waste materials and improving the properties of cotton fabrics.
Keywords: intelligent textiles, graphene, polyaniline, electromagnetic shielding, conductivity, recycling
Procedia PDF Downloads 38
26405 Evaluation of the Effectiveness of a HAWK Signal on Compliance in Las Vegas, Nevada
Authors: A. Paz, M. Khadka, N. Veeramisti, B. Morris
Abstract:
There continues to be a large number of crashes involving pedestrians in Nevada despite the numerous safety mechanisms currently used at roadway crossings. Hence, additional and more effective mechanisms are required to reduce crashes in Las Vegas in particular and Nevada in general. A potential mechanism to reduce conflicts between pedestrians and vehicles is a High-intensity Activated crossWalK (HAWK) signal. This study evaluates the effects of such a signal at a particular site in Las Vegas. Video data were collected using two cameras, facing the eastbound and westbound traffic. One week of video data before and after the deployment of the signal was collected to capture the behavior of both pedestrians and drivers. T-test analyses of pedestrian waiting time at the curb, curb-to-curb crossing time, total crossing time, jaywalking events, and near-crash events show that the HAWK system provides significant benefits.
Keywords: pedestrian crashes, HAWK signal, traffic safety, pedestrian danger index
Procedia PDF Downloads 341
26404 Discrete Estimation of Spectral Density for Alpha Stable Signals Observed with an Additive Error
Authors: R. Sabre, W. Horrigue, J. C. Simon
Abstract:
This paper addresses two difficulties encountered in practice when observing a continuous-time process. The first is that we cannot observe the process continuously over a time interval; we only take discrete observations. The second is that the process is frequently observed with a constant additive error. It is important to give an estimator of the spectral density of such a process that takes into account the additive observation error and the choice of the discrete observation times. In this work, we propose an estimator based on spectral smoothing of the periodogram by the polynomial Jackson kernel, which reduces the additive error. In order to overcome the aliasing phenomenon, this estimator is constructed from observations taken at well-chosen times so as to restrict the estimator to the domain where the spectral density is not zero. We show that the proposed estimator is asymptotically unbiased and consistent. Thus we obtain an estimate that resolves the two difficulties concerning the choice of the observation instants of a continuous-time process and observations affected by a constant error.
Keywords: spectral density, stable processes, aliasing, periodogram
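A rough sketch of the general smoothed-periodogram idea on discretely observed data with a constant offset, using a generic spectral window as a stand-in for the polynomial Jackson kernel used in the paper (all signal parameters are illustrative):

```python
import numpy as np
from scipy.signal import periodogram, windows

rng = np.random.default_rng(3)
fs = 10.0
t = np.arange(0, 200, 1 / fs)
# Illustrative discretely observed signal plus a constant additive error.
x = np.sin(2 * np.pi * 0.7 * t) + 0.5 * rng.standard_normal(t.size) + 2.0

f, pxx = periodogram(x - x.mean(), fs=fs)   # removing the mean absorbs the constant error

# Smooth the raw periodogram with a short spectral window (stand-in for the Jackson kernel).
kernel = windows.hann(15)
kernel /= kernel.sum()
pxx_smooth = np.convolve(pxx, kernel, mode="same")
print("dominant frequency:", f[np.argmax(pxx_smooth)])   # close to 0.7 Hz
```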
Procedia PDF Downloads 137
26403 Comparison of Adsorbents for Ammonia Removal from Mining Wastewater
Authors: F. Al-Sheikh, C. Moralejo, M. Pritzker, W. A. Anderson, A. Elkamel
Abstract:
Ammonia in mining wastewater is a significant problem, and treatment can be especially difficult in cold climates where biological treatment is not feasible. An adsorption process is one of the alternative processes that can be used to reduce ammonia concentrations to acceptable limits, and therefore a strongly acidic H+ form LEWATIT ion exchange resin and a Bowie Chabazite Na form AZLB-Na zeolite were tested to assess their effectiveness. For these adsorption tests, two packed bed columns (a mini-column constructed from a 32-cm long x 1-cm diameter piece of glass tubing, and a 60-cm long x 2.5-cm diameter Ace Glass chromatography column) were used, containing varying quantities of the adsorbents. A mining wastewater with an ammonia concentration of 22.7 mg/L was fed through the columns at controlled flowrates. In the experimental work, the maximum capacities of the LEWATIT ion exchange resin were 0.438, 0.448, and 1.472 mg/g for 3, 6, and 9 g, respectively, in the mini-column and 1.739 mg/g for 141.5 g in the larger Ace column, while the capacities for the AZLB-Na zeolite were 0.424 and 0.784 mg/g for 3 and 6 g, respectively, in the mini-column and 1.1636 mg/g for 38.5 g in the Ace column. In the theoretical work, Thomas, Adams-Bohart, and Yoon-Nelson models were constructed to describe the breakthrough curve of the adsorption process and to find the constants of these models. In the regeneration tests, 5% hydrochloric acid, HCl (v/v), and 10% sodium hydroxide, NaOH (w/v), were used to regenerate the LEWATIT resin and AZLB-Na zeolite with 44 and 63.8% recovery, respectively. In conclusion, continuous flow adsorption using a LEWATIT ion exchange resin and an AZLB-Na zeolite is efficient when using a co-flow technique for removal of ammonia from wastewater. The Thomas, Adams-Bohart, and Yoon-Nelson models satisfactorily fit the data, with R² close to 1 in all cases.
Keywords: AZLB-Na zeolite, continuous adsorption, Lewatit resin, models, regeneration
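A brief sketch of fitting one of the breakthrough models named above, assuming the common two-parameter form of the Yoon-Nelson model, C/C0 = 1 / (1 + exp(k(τ − t))), and illustrative breakthrough data rather than the measured values from this study:

```python
import numpy as np
from scipy.optimize import curve_fit

def yoon_nelson(t, k, tau):
    """Yoon-Nelson breakthrough curve: C/C0 as a function of time t (min)."""
    return 1.0 / (1.0 + np.exp(k * (tau - t)))

# Illustrative breakthrough data: time (min) and outlet/inlet concentration ratio.
t_obs = np.array([0, 30, 60, 90, 120, 150, 180, 210, 240], dtype=float)
c_ratio = np.array([0.01, 0.03, 0.08, 0.22, 0.48, 0.74, 0.90, 0.96, 0.99])

(k_fit, tau_fit), _ = curve_fit(yoon_nelson, t_obs, c_ratio, p0=[0.05, 120.0])
ss_res = np.sum((c_ratio - yoon_nelson(t_obs, k_fit, tau_fit)) ** 2)
ss_tot = np.sum((c_ratio - c_ratio.mean()) ** 2)
print(f"k = {k_fit:.4f} 1/min, tau = {tau_fit:.1f} min, R^2 = {1 - ss_res / ss_tot:.4f}")
```

The Thomas and Adams-Bohart models would be fitted in the same way by swapping the model function.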
Procedia PDF Downloads 388
26402 Emerging Threats and Adaptive Defenses: Navigating the Future of Cybersecurity in a Hyperconnected World
Authors: Olasunkanmi Jame Ayodeji, Adebayo Adeyinka Victor
Abstract:
In a hyperconnected world, cybersecurity faces a continuous evolution of threats that challenge traditional defence mechanisms. This paper explores emerging cybersecurity threats such as malware, ransomware, phishing, social engineering, and Internet of Things (IoT) vulnerabilities. It delves into the inadequacies of existing cybersecurity defences in addressing these evolving risks and advocates for adaptive defence mechanisms that leverage AI, machine learning, and zero-trust architectures. The paper proposes collaborative approaches, including public-private partnerships and information sharing, as essential to building a robust defence strategy against future cyber threats. The need for continuous monitoring, real-time incident response, and adaptive resilience strategies is highlighted to fortify digital infrastructures in the face of escalating global cyber risks.
Keywords: cybersecurity, hyperconnectivity, malware, adaptive defences, zero-trust architecture, internet of things vulnerabilities
Procedia PDF Downloads 19
26401 Towards Long-Range Pixels Connection for Context-Aware Semantic Segmentation
Authors: Muhammad Zubair Khan, Yugyung Lee
Abstract:
Deep learning has recently achieved a remarkable response in semantic image segmentation. Previously developed U-Net-inspired architectures operate with successive stride and pooling operations, leading to spatial information loss. These methods also fail to establish long-range pixel connections that preserve context knowledge and reduce spatial loss in prediction. This article develops an encoder-decoder architecture with a bi-directional LSTM embedded in long skip connections and densely connected convolution blocks. The network non-linearly combines the feature maps across the encoder-decoder paths to find dependency and correlation between image pixels. Additionally, the densely connected convolutional blocks are kept in the final encoding layer to reuse features and prevent redundant data sharing. The method applies batch normalization to reduce internal covariate shift in the data distributions. The empirical evidence shows a promising response for our method compared with other semantic segmentation techniques.
Keywords: deep learning, semantic segmentation, image analysis, pixels connection, convolution neural network
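A minimal PyTorch sketch of the general idea (not the authors' exact architecture): a small convolutional encoder, a bi-directional LSTM that treats the encoded pixels as a sequence to link distant locations, and a long skip connection that adds the LSTM context back before decoding. All layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class ConvBiLSTMSkip(nn.Module):
    def __init__(self, in_ch=1, feat=16, classes=2):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.BatchNorm2d(feat),
            nn.ReLU(), nn.MaxPool2d(2))
        self.lstm = nn.LSTM(feat, feat // 2, bidirectional=True, batch_first=True)
        self.dec = nn.Sequential(
            nn.Upsample(scale_factor=2), nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(feat, classes, 1)

    def forward(self, x):
        e = self.enc(x)                       # (B, F, H/2, W/2)
        b, f, h, w = e.shape
        seq = e.flatten(2).transpose(1, 2)    # (B, H*W/4, F): encoded pixels as a sequence
        ctx, _ = self.lstm(seq)               # bi-LSTM connects far-apart pixels
        skip = ctx.transpose(1, 2).reshape(b, f, h, w)
        return self.head(self.dec(e + skip))  # long skip carries LSTM context to the decoder

model = ConvBiLSTMSkip()
out = model(torch.randn(2, 1, 64, 64))
print(out.shape)                              # torch.Size([2, 2, 64, 64])
```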
Procedia PDF Downloads 102
26400 Linear Dynamic Stability Analysis of a Continuous Rotor-Disk-Blades System
Authors: F. Rahimi Dehgolan, S. E. Khadem, S. Bab, M. Najafee
Abstract:
Nowadays, the use of rotating systems such as shafts and disks in industrial machines has been increasing constantly. Dynamic stability is one of the most important factors in designing rotating systems. In this study, the linear frequencies and stability of a coupled continuous flexible rotor-disk-blades system are studied. The Euler-Bernoulli beam theory is utilized to model the blades and the shaft. The equations of motion are extracted using the extended Hamilton principle and are then simplified using the Coleman and complex transformations. The natural frequencies of the linear part of the system are extracted, and the effects of various system parameters on the natural frequencies and decay rates (stability condition) are clarified. It can be seen that the centrifugal stiffening effect applied to the blades is the most important parameter for the stability of the considered rotating system. This result highlights the importance of considering this stiffening effect in the blade equations.
Keywords: rotating shaft, flexible blades, centrifugal stiffness, stability
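For reference, a commonly used textbook form of the Euler-Bernoulli equation for the flapwise bending w(x, t) of a blade rotating at speed Ω, showing where the centrifugal stiffening term enters (a generic single-blade form, not the coupled rotor-disk-blades equations derived in the paper):

```latex
\rho A \,\frac{\partial^2 w}{\partial t^2}
+ \frac{\partial^2}{\partial x^2}\!\left( EI \,\frac{\partial^2 w}{\partial x^2} \right)
- \frac{\partial}{\partial x}\!\left( T(x)\,\frac{\partial w}{\partial x} \right) = 0,
\qquad
T(x) = \int_{x}^{L} \rho A \,\Omega^{2} (R + \xi)\, d\xi ,
```

where R is the hub radius and L the blade length; the centrifugal tension T(x) grows with Ω², which is what raises the blade natural frequencies and stabilizes the system.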
Procedia PDF Downloads 264
26399 Importance of Continuous Professional Development for Teacher Educators in Myanmar Education College
Authors: Moet Moet Myint Lay
Abstract:
Continuing professional development involves acquiring new knowledge and skills for current work and improving career opportunities in the field through continuing education (OECD, 2000). This article examines the effectiveness of CPD in improving teacher quality and the resulting need for CPD for teacher educators in Myanmar. The purpose of this study is to explore a deeper understanding of teacher-to-teacher continuing professional development in improving teacher education programs. Research questions: (1) How do teachers in Myanmar understand the idea of continuous professional development? (2) What CPD activities are required for all teachers in teachers' colleges? (3) What are the main challenges of CPD implementation in Myanmar Education College? A qualitative method using semi-structured interviews was used in this study. Seven teacher educators from Mandalay Education College participated: three male and four female. All participants who responded to the semi-structured interviews were between 29 and 45 years old. The interviews revealed that professional development involves acquiring the necessary pedagogical knowledge and skills to encourage students to think creatively and critically. Teachers must participate in a variety of activities, including professional interviews, lesson study, training programs, workshops, and seminars. All results showed that teachers need English and ICT skills for teaching and learning, including extended ICT courses for those who have completed a foundation course, access to e-libraries, inclusive education (including language teaching and learning), skills to facilitate assessment (formative and summative), practicum, and mentoring and coaching skills. The study concludes with practical findings that suggest an urgent need for CPD activities for teachers.
Keywords: continuous professional development, teacher educator, teacher training program, mentoring
Procedia PDF Downloads 57
26398 IoT and Advanced Analytics Integration in Biogas Modelling
Authors: Rakesh Choudhary, Ajay Kumar, Deepak Sharma
Abstract:
The main goal of this paper is to investigate the challenges and benefits of IoT integration in biogas production. This overview explains how the inclusion of IoT can enhance biogas production efficiency. The collected data can then be explored with advanced analytics, including artificial intelligence (AI) and machine learning (ML) algorithms, consequently improving bio-energy processes. To boost biogas generation efficiency, this report examines the use of IoT devices for real-time data collection on key parameters, e.g., pH, temperature, gas composition, and microbial growth. Real-time monitoring through big data has made it possible to detect diverse, complex trends in the biogas production process. Insights informed by advanced analytics can also help improve bio-energy production and optimize operational conditions. Moreover, IoT allows remote observation, control, and management, which decreases the manual intervention needed while increasing process effectiveness. Such a paradigm shift in the incorporation of IoT technologies into biogas production systems helps to achieve higher productivity and better biomethane quality through proactive decision-making based on real-time monitoring, thus driving continuous performance improvement.
Keywords: internet of things, biogas, renewable energy, sustainability, anaerobic digestion, real-time monitoring, optimization
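A toy sketch of the real-time monitoring idea, assuming a stream of digester sensor readings and simple operating ranges for anaerobic digestion (the thresholds and readings below are illustrative only, not values from the paper):

```python
from dataclasses import dataclass

@dataclass
class Reading:
    timestamp: str
    ph: float
    temperature_c: float
    ch4_fraction: float   # methane share of the biogas

# Illustrative operating windows for a mesophilic digester.
LIMITS = {"ph": (6.8, 7.4), "temperature_c": (35.0, 40.0), "ch4_fraction": (0.50, 0.75)}

def check(reading: Reading) -> list[str]:
    """Return alert messages for any parameter outside its operating window."""
    alerts = []
    for name, (lo, hi) in LIMITS.items():
        value = getattr(reading, name)
        if not lo <= value <= hi:
            alerts.append(f"{reading.timestamp}: {name}={value} outside [{lo}, {hi}]")
    return alerts

stream = [
    Reading("2024-01-01T10:00", 7.1, 37.2, 0.62),
    Reading("2024-01-01T10:05", 6.5, 37.0, 0.61),  # pH drop -> possible acidification
]
for r in stream:
    for alert in check(r):
        print(alert)
```

In a deployed system the readings would arrive from networked sensors and the alerts would trigger remote control actions rather than console output.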
Procedia PDF Downloads 19
26397 Analysis of Big Data
Authors: Sandeep Sharma, Sarabjit Singh
Abstract:
Given user demand and the growth trend of large volumes of freely generated data, storage solutions are facing ever greater challenges in protecting, storing, and retrieving data. The day is not far off when storage companies and organizations will start saying 'no' to storing our valuable data, or will start charging a huge amount for its storage and protection. On the other hand, in view of environmental conditions, it is becoming challenging to establish and maintain new data warehouses and data centers in the face of global warming threats. The challenge of small data is over; the big challenge now is how to manage the exponential growth of data. In this paper, we analyze the growth trend of big data and its future implications. We also focus on the impact of unstructured data on various concerns and suggest some possible remedies to streamline big data.
Keywords: big data, unstructured data, volume, variety, velocity
Procedia PDF Downloads 547
26396 Multi-Criteria Inventory Classification Process Based on Logical Analysis of Data
Authors: Diana López-Soto, Soumaya Yacout, Francisco Ángel-Bello
Abstract:
Although inventories are often regarded as stocks of money sitting on shelves, they are needed in order to secure constant and continuous production. Therefore, companies need to control the amount of inventory in order to find the balance between excess and shortage. The classification of items according to certain criteria, such as price, usage rate, and lead time before arrival, allows a company to concentrate its inventory investment according to a ranking or priority of items. This makes the decision-making process for inventory management easier and more justifiable. The purpose of this paper is to present a new approach for the classification of new items based on already existing criteria. This approach is called Logical Analysis of Data (LAD). It is used in this paper to assist the process of ABC item classification based on multiple criteria. LAD is a data mining technique based on Boolean theory that is used for pattern recognition. This technique has been tested in medicine, industry, credit risk analysis, and engineering with remarkable results. An application to ABC inventory classification is presented for the first time, and the results are compared with those obtained using the well-known AHP technique and the ANN technique. The results show that LAD presents very good classification accuracy.
Keywords: ABC multi-criteria inventory classification, inventory management, multi-class LAD model, multi-criteria classification
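For orientation, a minimal sketch of the conventional single-criterion ABC ranking that multi-criteria approaches such as LAD, AHP, and ANN refine, assuming illustrative item data and the usual 80%/95% cumulative-value cut-offs:

```python
import pandas as pd

# Illustrative item data: unit price and annual usage rate.
items = pd.DataFrame({
    "item": ["P1", "P2", "P3", "P4", "P5", "P6", "P7", "P8"],
    "unit_price": [120.0, 3.5, 45.0, 0.8, 15.0, 220.0, 7.2, 60.0],
    "annual_usage": [500, 20000, 800, 150000, 3000, 90, 12000, 400],
})

items["usage_value"] = items["unit_price"] * items["annual_usage"]
items = items.sort_values("usage_value", ascending=False).reset_index(drop=True)
items["cum_share"] = items["usage_value"].cumsum() / items["usage_value"].sum()

# Conventional cut-offs: A up to 80% of cumulative value, B up to 95%, C the rest.
items["class"] = pd.cut(items["cum_share"], bins=[0, 0.80, 0.95, 1.0], labels=list("ABC"))
print(items[["item", "usage_value", "cum_share", "class"]])
```

A multi-criteria method replaces the single usage-value ranking with a classifier trained on several attributes (price, usage rate, lead time, criticality), which is where LAD's pattern recognition comes in.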
Procedia PDF Downloads 880
26395 Termite Brick Temperature and Relative Humidity by Continuous Monitoring Technique
Authors: Khalid Abdullah Alshuhail, Syrif Junidi, Ideisan Abu-Abdoum, Abdulsalam Aldawoud
Abstract:
With the intention of reducing energy consumption, a construction brick made of imitation termite mound soil, referred to here as the termite brick (TB), was proposed. To evaluate its thermal performance, a real test model was constructed using this biomimetic brick. This paper aims at investigating the thermal performance of this brick during different climatic months. Its thermal behaviour was thoroughly studied over the course of four months using the continuous monitoring method (CMm). The main parameters of interest were temperature and relative humidity. It was found that the TB does not perform similarly in all four months or in all orientations. Each of the four monthly studies of the model was analyzed in depth using the CMm. The measuring period generally shows that internal temperature and internal humidity are highest at the roof, by about 2 degrees, and lowest at the north wall orientation. The relative humidity was also investigated systematically. The paper reveals further interesting findings.
Keywords: building material, continuous monitoring, orientation, wall, temperature
Procedia PDF Downloads 123
26394 Disaggregation of Coarser Resolution Radiometer Derived Soil Moisture to Finer Scales
Authors: Gurjeet Singh, Rabindra K. Panda
Abstract:
Soil moisture is a key hydrologic state variable and is intrinsically linked to the Earth's water, climate, and carbon cycles. From an ecological point of view, soil moisture is a fundamental natural resource providing the transpirable water for plants. Soil moisture varies both temporally and spatially due to spatiotemporal variation in rainfall, vegetation cover, soil properties, and topography. Satellite-derived soil moisture provides spatially and temporally extensive data. However, the spatial resolution of a typical satellite (L-band radiometry) is of the order of tens of kilometers, which is not good enough for developing efficient agricultural water management schemes at the field scale. In the present study, soil moisture from radiometer data has been disaggregated using a blending approach to achieve higher-resolution soil moisture data. The radiometer estimates of soil moisture at a 40 km resolution have been disaggregated to 10 km, 5 km, and 1 km resolutions. The disaggregated soil moisture was compared with observed data, consisting of continuous sensor-based soil moisture profile measurements at three monitoring sites and extensive spatial near-surface soil moisture measurements concurrent with satellite monitoring, in the 500 km² study watershed in eastern India. The estimated soil moisture status at different spatial scales can help in developing efficient agricultural water management schemes to increase crop production and water use efficiency.
Keywords: disaggregation, eastern India, radiometers, soil moisture, water use efficiency
Procedia PDF Downloads 275
26393 Assessing Renewal Needs of Urban Water Infrastructure Systems: Case Study of Linköping in Sweden
Authors: Eman Hegazy, Stefan Anderberg, Joakim Krook
Abstract:
Urban water infrastructure systems are central to functioning cities. Securing a continuous and efficient supply of these systems' services requires continuous investment, maintenance, and renewal. Neglecting maintenance and renewal can lead to recurrent breakdown problems as systems age, which makes it more and more difficult to secure an efficient long-term supply. Globally, many cities struggle with aging water infrastructure, often due to competing funding priorities, and investment in maintenance and renewal is not prioritized. The problem primarily stems from the difficulty of reaping the benefits of such investments promptly: the benefits of investing in the renewal of water infrastructure are achievable only in the long run, which results in such investments being overlooked. This leads to a build-up of "renewal debt" for future generations to inherit. Addressing this issue is difficult due to various contributing factors and the complex nature of the systems. The study aims to contribute to an increased understanding of the long-term management challenges of urban water infrastructure, the development of improved maintenance and renewal strategies through the examination of water infrastructure management, and the assessment of the adequacy of maintenance and renewal in a case study, the city of Linköping, Sweden. Employing a multi-method approach, this study utilized both qualitative and quantitative methods, including interviews, workshops, and data analysis. The findings provide insights into the current status of the water and sewerage networks in Linköping, highlighting the risks to ensuring a reliable and sustainable water supply and discussing strategies for improving maintenance and renewal.
Keywords: case study, infrastructure management, renewal needs, Sweden, urban water infrastructure
Procedia PDF Downloads 67
26392 Linear Frequency Modulation-Frequency Shift Keying Radar with Compressive Sensing
Authors: Ho Jeong Jin, Chang Won Seo, Choon Sik Cho, Bong Yong Choi, Kwang Kyun Na, Sang Rok Lee
Abstract:
In this paper, a radar signal processing technique using LFM-FSK (Linear Frequency Modulation-Frequency Shift Keying) is proposed for reducing the false alarm rate, based on compressive sensing. The LFM-FSK method combines an FMCW (Frequency-Modulated Continuous Wave) signal with FSK (Frequency Shift Keying). This has the advantage of suppressing the ghost-target phenomenon without a complicated CFAR (Constant False Alarm Rate) algorithm. Moreover, a parametric sparse algorithm applying compressive sensing, which restores signals efficiently from incomplete data samples, is also integrated, reducing the burden on the ADC in the radar receiver. A 24 GHz FMCW signal with FSK-modulated data is applied and tested in a real environment to verify the proposed algorithm together with compressive sensing.
Keywords: compressive sensing, LFM-FSK radar, radar signal processing, sparse algorithm
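A toy sketch of the FMCW building block that LFM-FSK extends: a single up-chirp, a delayed echo, and the beat-frequency range estimate. The parameters are illustrative (not the 24 GHz system described above), and the FSK keying and compressive-sensing reconstruction are omitted.

```python
import numpy as np

c = 3e8
bandwidth, t_chirp = 200e6, 1e-3          # sweep bandwidth (Hz) and chirp duration (s)
slope = bandwidth / t_chirp               # chirp slope (Hz/s)
fs = 2e6
t = np.arange(0, t_chirp, 1 / fs)

r_true = 45.0                             # target range (m)
tau = 2 * r_true / c                      # round-trip delay (s)

tx = np.exp(1j * np.pi * slope * t ** 2)
rx = np.exp(1j * np.pi * slope * (t - tau) ** 2)
beat = tx * np.conj(rx)                   # mixing leaves a tone at f_b = slope * tau

spectrum = np.abs(np.fft.rfft(beat))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
f_beat = freqs[np.argmax(spectrum)]
print(f"estimated range: {c * f_beat / (2 * slope):.1f} m")   # ~45 m
```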
Procedia PDF Downloads 479
26391 Remaining Useful Life Estimation of Bearings Based on Nonlinear Dimensional Reduction Combined with Timing Signals
Authors: Zhongmin Wang, Wudong Fan, Hengshan Zhang, Yimin Zhou
Abstract:
In data-driven prognostic methods, the accuracy of remaining useful life estimation for bearings mainly depends on the performance of health indicators, which are usually fused from statistical features extracted from vibration signals. However, the existing health indicators have the following two drawbacks: (1) statistical features with different ranges contribute differently to the construction of the health indicators, so expert knowledge is required to extract the features; (2) when convolutional neural networks are utilized to tackle the time-frequency features of signals, the time-series nature of the signals is not considered. To overcome these drawbacks, this study proposes a method combining a convolutional neural network with a gated recurrent unit to extract time-frequency image features. The extracted features are utilized to construct a health indicator and predict the remaining useful life of bearings. First, the original signals are converted into time-frequency images using the continuous wavelet transform so as to form the original feature sets. Second, with the convolutional and pooling layers of convolutional neural networks, the most sensitive features of the time-frequency images are selected from the original feature sets. Finally, these selected features are fed into the gated recurrent unit to construct the health indicator. The results show that the proposed method outperforms related studies that used the same bearing dataset provided by PRONOSTIA.
Keywords: continuous wavelet transform, convolutional neural network, gated recurrent unit, health indicators, remaining useful life
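A minimal sketch of the pipeline shape described above, assuming a single vibration window, a Morlet continuous wavelet transform for the time-frequency image, and an illustrative (not the authors') CNN-GRU network producing a scalar health-indicator value:

```python
import numpy as np
import pywt
import torch
import torch.nn as nn

# Vibration snippet -> time-frequency image via the continuous wavelet transform.
sig = np.random.randn(1024)                     # stand-in for one bearing vibration window
scales = np.arange(1, 65)
coeffs, _ = pywt.cwt(sig, scales, 'morl')       # (64, 1024) scalogram
img = torch.tensor(np.abs(coeffs), dtype=torch.float32)[None, None]  # (1, 1, 64, 1024)

class CNNGRUIndicator(nn.Module):
    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d((4, 8)),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d((1, 32)))
        self.gru = nn.GRU(16, 8, batch_first=True)
        self.out = nn.Linear(8, 1)              # scalar health indicator / RUL proxy

    def forward(self, x):
        f = self.cnn(x)                         # (B, 16, 1, 32)
        seq = f.squeeze(2).transpose(1, 2)      # (B, 32, 16): 32 time steps, 16 features
        _, h = self.gru(seq)
        return self.out(h[-1])

print(CNNGRUIndicator()(img).shape)             # torch.Size([1, 1])
```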
Procedia PDF Downloads 132
26390 Study of the Effect of Bulk Traps on a Solar-Blind Photodetector Based on an IZTO/β-Ga2O3/ITO Schottky Diode
Authors: Laboratory of Semiconducting, Metallic Materials (LMSM), Biskra, Algeria
Abstract:
An InZnSnO2 (IZTO)/β-Ga2O3 solar-blind Schottky barrier photodetector (PhD) exposed to 255 nm illumination was simulated and compared with measurements. Numerical simulations successfully reproduced the photocurrent at reverse bias and the response by taking into account several factors, such as conduction mechanisms and material parameters, and by adopting a reduction of the trap density as an improvement. The effect of reducing the bulk trap densities on the photocurrent, the response, and the time-dependent behaviour (continuous photoconductivity) was studied. As the trap density decreased, the photocurrent increased. The response was 0.04 A/W for the low Ga2O3 trap density. The estimated decay time for the lowest density of the traps at ET (0.74, 1.04 eV) is 0.05 s and is shorter, at ∼0.015 s, for ET (0.55 eV). This indicates that the shallow traps (ET = 0.55 eV) had the dominant effect on the continuous photoconductivity phenomenon. Furthermore, with decreasing trap densities, this PhD can be considered a self-powered solar-blind photodiode (SBPhD).
Keywords: IZTO/β-Ga2O3, self-powered solar-blind photodetector, numerical simulation, bulk traps
Procedia PDF Downloads 85
26389 Risk Assessment Tools Applied to Deep Vein Thrombosis Patients Treated with Warfarin
Authors: Kylie Mueller, Nijole Bernaitis, Shailendra Anoopkumar-Dukie
Abstract:
Background: Vitamin K antagonists, particularly warfarin, are the most frequently used oral medication for deep vein thrombosis (DVT) treatment and prophylaxis. Time in therapeutic range (TITR) of the international normalised ratio (INR) is widely accepted as a measure of the quality of warfarin therapy. Multiple factors can affect warfarin control and the subsequent adverse outcomes, including thromboembolic and bleeding events. Predictor models have been developed to assess potential contributing factors and measure the individual risk of these adverse events. These predictive models have been validated in atrial fibrillation (AF) patients; however, there is a lack of literature on whether they can be successfully applied to other warfarin users, including DVT patients. Therefore, the aim of the study was to assess the ability of these risk models (HAS-BLED and CHADS2) to predict haemorrhagic and ischaemic incidences in DVT patients treated with warfarin. Methods: A retrospective analysis of DVT patients receiving warfarin management by a private pathology clinic was conducted. Data were collected from November 2007 to September 2014 and included demographics, medical and drug history, INR targets, and test results. Patients receiving continuous warfarin therapy with an INR reference range between 2.0 and 3.0 were included in the study, with mean TITR calculated using the Rosendaal method. Bleeding and thromboembolic events were recorded and reported as incidences per patient. The haemorrhagic risk model HAS-BLED and the ischaemic risk model CHADS2 were applied to the data. Patients were then stratified into the low, moderate, or high-risk categories. The analysis was conducted to determine whether a correlation existed between risk assessment tool and patient outcomes. Data were analysed using GraphPad InStat Version 3, with a p value of <0.05 considered statistically significant. Patient characteristics were reported as mean and standard deviation for continuous data, and categorical data were reported as number and percentage. Results: Of the 533 patients included in the study, there were 268 (50.2%) female and 265 (49.8%) male patients with a mean age of 62.5 years (±16.4). The overall mean TITR was 78.3% (±12.7), with an overall haemorrhagic incidence of 0.41 events per patient. For the HAS-BLED model, there was a haemorrhagic incidence of 0.08, 0.53, and 0.54 per patient in the low, moderate, and high-risk categories, respectively, showing a statistically significant increase in incidence with increasing risk category. The CHADS2 model showed an increase in ischaemic events with risk category, with no ischaemic events in the low category, an ischaemic incidence of 0.03 in the moderate category, and 0.47 in the high-risk category. Conclusion: An increasing haemorrhagic incidence correlated with an increase in the HAS-BLED risk score in DVT patients treated with warfarin. Furthermore, a greater incidence of ischaemic events occurred in patients with a higher CHADS2 category. In an Australian population of DVT patients, HAS-BLED and CHADS2 accurately predict incidences of haemorrhage and ischaemic events, respectively.
Keywords: anticoagulant agent, deep vein thrombosis, risk assessment, warfarin
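A small sketch of the Rosendaal linear-interpolation calculation of time in therapeutic range mentioned above, assuming a list of dated INR tests and a target range of 2.0-3.0 (the test values below are made up for illustration):

```python
from datetime import date

def rosendaal_ttr(tests, low=2.0, high=3.0):
    """Percent of days with linearly interpolated INR inside [low, high] (Rosendaal method)."""
    in_range_days = total_days = 0.0
    for (d0, inr0), (d1, inr1) in zip(tests, tests[1:]):
        days = (d1 - d0).days
        for step in range(days):
            inr = inr0 + (inr1 - inr0) * step / days   # daily linear interpolation
            in_range_days += low <= inr <= high
            total_days += 1
    return 100.0 * in_range_days / total_days

tests = [
    (date(2014, 1, 1), 1.8),
    (date(2014, 1, 15), 2.4),
    (date(2014, 2, 12), 3.3),
    (date(2014, 3, 12), 2.6),
]
print(f"TITR = {rosendaal_ttr(tests):.1f}%")
```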
Procedia PDF Downloads 262
26388 Characterization of Surface Suction Grippers for Continuous-Discontinuous Fiber Reinforced Semi-Finished Parts of an Automated Handling and Preforming Operation
Authors: Jürgen Fleischer, Woramon Pangboonyanon, Dominic Lesage
Abstract:
Non-metallic lightweight materials such as fiber reinforced plastics (FRP) are becoming very significant at present. Prepregs, e.g., SMC and unidirectional tape (UD-tape), are among the raw materials used to produce FRP. This study concerns the manufacturing steps of handling and preforming this UD-SMC and focuses on the investigation of gripper characteristics regarding gripping forces in the normal and lateral directions, in order to identify suitable operating pressures for a secure gripping operation. A reliable handling and preforming operation results in higher added value for the overall process chain. As a result, the suitable operating pressures, depending on the travelling direction, could be identified for each material type. Moreover, system boundary conditions regarding the allowable pulling force in the normal and lateral directions during preforming could be measured.
Keywords: continuous-discontinuous fiber reinforced plastics, UD-SMC-prepreg, handling, preforming, prepregs, sheet moulding compounds, surface suction gripper
Procedia PDF Downloads 222
26387 Teacher Trainers’ Motivation in Transformation of Teaching and Learning: The Fun Way Approach
Authors: Malathi Balakrishnan, Gananthan M. Nadarajah, Noraini Abd Rahim, Amy Wong On Mei
Abstract:
The purpose of the study is to investigate the level of intrinsic motivation of trainers after attending a Continuous Professional Development (CPD) course organized by the Institute of Teacher Training Malaysia titled 'Transformation of Teaching and Learning the Fun Way'. This study employed a survey whereby 96 teacher trainers were given the Situational Intrinsic Motivational Scale (SIMS) instrument. Confirmatory factor analysis was carried out to establish the validity of this instrument in the local setting. Data were analyzed with SPSS for descriptive statistics. Semi-structured interviews were also administered to collect qualitative data on participants’ experiences after participating in the two-day fun-filled program. The findings showed that the participants’ level of intrinsic motivation had a higher mean than amotivation. The results revealed an intrinsic motivation mean of 19.0, followed by identified regulation with a mean of 17.4, external regulation at 9.7, and amotivation at 6.9. The interview data also revealed that the participants were motivated after attending this training program. It can be concluded that this program, organized by the Institute of Teacher Training Malaysia, was able to enhance participants’ level of motivation. Self-Determination Theory (SDT), as a multidimensional approach to motivation, was utilized. Therefore, teacher trainers may have more success using the 'fun way' approach in conducting training programs in future.
Keywords: teaching and learning, motivation, teacher trainer, SDT
Procedia PDF Downloads 460
26386 Exploring the Correlation between Population Distribution and Urban Heat Island under Urban Data: Taking Shenzhen Urban Heat Island as an Example
Authors: Wang Yang
Abstract:
Shenzhen is a modern city shaped by China's reform and opening-up policy, and the development of its urban morphology has been established under the administration of the Chinese government. The city's planning paradigm is primarily affected by spatial structure and human behavior. The urban agglomeration center is divided into several groups and sub-centers, and in comparison, the city's own development laws tend to be neglected. With the continuous development of the internet, big data technology has been introduced in China, and data mining and data analysis have become important tools in municipal research. Data mining has been utilized to improve data cleaning for sources such as business data, traffic data, and population data. Prior to data mining, government data were collected by traditional means and then analyzed using city-relationship research, delaying the timeliness of urban studies, especially for the contemporary city. Internet-based data, by contrast, are updated very quickly. Mined city points of interest (POIs) serve as a data source reflecting city design, while satellite remote sensing is used as a reference object; city analysis is conducted in both directions, breaking the administrative paradigm of government and restoring urban research. Therefore, the use of data mining in urban analysis is very important. The satellite remote sensing data of Shenzhen in July 2018 were measured by the MODIS sensor and can be utilized to perform land surface temperature inversion and analyze the heat island distribution of Shenzhen. This article acquired and classified data on Shenzhen by using data crawler technology. Data on the Shenzhen heat island and points of interest were simulated and analyzed in a GIS platform to discover the main features of the functional-area distribution and its influence. Shenzhen extends in an east-west direction, and the city's main streets are also laid out according to the direction of city development; therefore, the functional areas of the city are also distributed in the east-west direction. The urban heat island can be mapped according to the functional urban areas, and regional POIs correspond to it. The research results clearly show that the distribution of the urban heat island and the distribution of urban POIs are in one-to-one correspondence. The urban heat island is primarily influenced by the properties of the underlying surface, apart from the impact of the urban climate. Using urban POIs as the object of analysis, the distribution of municipal POIs and population aggregation are closely connected, so that the distribution of the population corresponds with the distribution of the urban heat island.
Keywords: POI, satellite remote sensing, population distribution, urban heat island thermal map
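A toy sketch of the kind of correspondence test described above, assuming the city has been rasterized into grid cells with a POI count and a mean land surface temperature per cell (the values below are synthetic, not Shenzhen data):

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
n_cells = 400                                   # hypothetical 20 x 20 grid over the city

poi_count = rng.poisson(lam=30, size=n_cells)   # POIs per grid cell
# Synthetic LST that rises with POI density plus noise, mimicking a heat-island signal.
lst = 28.0 + 0.05 * poi_count + rng.normal(0, 0.8, n_cells)

cells = pd.DataFrame({"poi_count": poi_count, "lst_c": lst})
r, p = pearsonr(cells["poi_count"], cells["lst_c"])
print(f"Pearson r = {r:.2f} (p = {p:.1e}) between POI density and land surface temperature")
```

In the actual workflow the LST column would come from MODIS-based inversion and the POI counts from crawled point-of-interest data aggregated to the same grid.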
Procedia PDF Downloads 103
26385 Bridge Health Monitoring: A Review
Authors: Mohammad Bakhshandeh
Abstract:
Structural Health Monitoring (SHM) is a crucial and necessary practice that plays a vital role in ensuring the safety and integrity of critical structures, and in particular, bridges. The continuous monitoring of bridges for signs of damage or degradation through Bridge Health Monitoring (BHM) enables early detection of potential problems, allowing for prompt corrective action to be taken before significant damage occurs. Although all monitoring techniques aim to provide accurate and decisive information regarding the remaining useful life, safety, integrity, and serviceability of bridges, understanding the development and propagation of damage is vital for maintaining uninterrupted bridge operation. Over the years, extensive research has been conducted on BHM methods, and experts in the field have increasingly adopted new methodologies. In this article, we provide a comprehensive exploration of the various BHM approaches, including sensor-based, non-destructive testing (NDT), model-based, and artificial intelligence (AI)-based methods. We also discuss the challenges associated with BHM, including sensor placement and data acquisition, data analysis and interpretation, cost and complexity, and environmental effects, through an extensive review of relevant literature and research studies. Additionally, we examine potential solutions to these challenges and propose future research ideas to address critical gaps in BHM.
Keywords: structural health monitoring (SHM), bridge health monitoring (BHM), sensor-based methods, machine-learning algorithms, model-based techniques, sensor placement, data acquisition, data analysis
Procedia PDF Downloads 88
26384 Research of Data Cleaning Methods Based on Dependency Rules
Authors: Yang Bao, Shi Wei Deng, WangQun Lin
Abstract:
This paper introduces the concept and principles of data cleaning, analyzes the types and causes of dirty data, and proposes several key steps of a typical cleaning process. It puts forward a data cleaning framework with good scalability and versatility and, for data with attribute dependency relations, designs several violation-data discovery algorithms expressed as formal formulas, which can identify data inconsistent with conditional attribute dependencies across all target columns, regardless of whether the data are structured (SQL) or unstructured (NoSQL). Finally, six data cleaning methods based on these algorithms are given.
Keywords: data cleaning, dependency rules, violation data discovery, data repair
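A minimal pandas sketch of what a violation-discovery step for one dependency rule might look like, assuming tabular data and the functional dependency zip_code → city (both the rule and the records are illustrative, not the paper's algorithms):

```python
import pandas as pd

records = pd.DataFrame({
    "zip_code": ["10115", "10115", "80331", "80331", "80331"],
    "city":     ["Berlin", "Berlin", "Munich", "Muenchen", "Munich"],
    "amount":   [10.0, 12.5, 7.0, 9.9, 3.2],
})

def fd_violations(df: pd.DataFrame, lhs: str, rhs: str) -> pd.DataFrame:
    """Return rows violating the functional dependency lhs -> rhs."""
    rhs_counts = df.groupby(lhs)[rhs].nunique()
    bad_keys = rhs_counts[rhs_counts > 1].index       # lhs values mapping to >1 rhs value
    return df[df[lhs].isin(bad_keys)]

print(fd_violations(records, "zip_code", "city"))     # the three 80331 rows are flagged
```

The flagged rows would then be handed to one of the repair methods (e.g., choosing the majority value or consulting a master source).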
Procedia PDF Downloads 563
26383 Physics-Informed Convolutional Neural Networks for Reservoir Simulation
Authors: Jiangxia Han, Liang Xue, Keda Chen
Abstract:
Despite the significant progress made over the last decades in reservoir simulation using numerical discretization, meshing remains complex. Moreover, the high number of degrees of freedom of the space-time flow field makes the solution process very time-consuming. Therefore, we present Physics-Informed Convolutional Neural Networks (PICNN) as a hybrid scientific-theory and data method for reservoir modeling. Besides labeled data, the model is driven by the scientific theories of the underlying problem, such as governing equations, boundary conditions, and initial conditions. PICNN integrates the governing equations and boundary conditions into the network architecture in the form of a customized convolution kernel. The loss function is composed of data matching, initial conditions, and other measurable prior knowledge. By customizing the convolution kernel and minimizing the loss function, the neural network parameters not only fit the data but also honor the governing equation. PICNN provides a methodology to model and history-match flow and transport problems in porous media. Numerical results demonstrate that the proposed PICNN can provide an accurate physical solution from a limited dataset. We show how this method can be applied in the context of a forward simulation for continuous problems. Furthermore, several complex scenarios are tested, including the existence of data noise, different work schedules, and different well patterns.
Keywords: convolutional neural networks, deep learning, flow and transport in porous media, physics-informed neural networks, reservoir simulation
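A toy sketch of the physics-informed loss idea described above, not the authors' PICNN: a small convolutional network predicts a 2D pressure field, a fixed Laplacian stencil acts as the "physics" convolution kernel for a constant-permeability steady-state diffusion residual, and the loss combines that residual with sparse data matching at assumed well locations (all shapes, weights, and data are illustrative).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1))

lap = torch.tensor([[[[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]]]])  # 5-point Laplacian stencil

perm = torch.rand(4, 1, 32, 32)                  # hypothetical permeability maps (inputs)
obs_mask = torch.rand(4, 1, 32, 32) < 0.05       # sparse "well" observation locations
obs_p = torch.rand(4, 1, 32, 32)                 # hypothetical observed pressures

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(200):
    p = net(perm)
    residual = F.conv2d(p, lap, padding=1)             # PDE residual via the fixed stencil
    loss_pde = residual.pow(2).mean()                  # honor the governing equation
    loss_data = ((p - obs_p)[obs_mask]).pow(2).mean()  # match observed pressures only
    loss = loss_data + 0.1 * loss_pde
    opt.zero_grad(); loss.backward(); opt.step()
```

A full PICNN would additionally encode boundary and initial conditions and use physically meaningful kernels, but the structure of the composite loss is the same.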
Procedia PDF Downloads 142