Search results for: time domain analysis

39170 Evaluating the Feasibility of Magnetic Induction to Cross an Air-Water Boundary

Authors: Mark Watson, J.-F. Bousquet, Adam Forget

Abstract:

A magnetic induction based underwater communication link is evaluated using an analytical model and a custom Finite-Difference Time-Domain (FDTD) simulation tool. The analytical model is based on the Sommerfeld integral, and the full-wave simulation tool evaluates Maxwell's equations using the FDTD method in cylindrical coordinates. The analytical model and the FDTD simulation tool are then compared and used to predict system performance for various transmitter depths and optimum frequencies of operation. To this end, the system bandwidth, signal-to-noise ratio, and the magnitude of the induced voltage are used to estimate the expected channel capacity. The models show that in seawater, relatively low-power, small coils may be capable of achieving a throughput of 40 to 300 kbps for a transmitter at depths of 1 to 3 m and a receiver at a height of 1 m.
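
A channel capacity estimate of the kind described combines bandwidth and signal-to-noise ratio via the Shannon-Hartley bound; a minimal sketch follows, where the bandwidth and SNR values are illustrative assumptions, not figures from the paper.

```python
import numpy as np

def shannon_capacity(bandwidth_hz, snr_db):
    """Shannon-Hartley channel capacity in bits per second."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * np.log2(1 + snr_linear)

# Illustrative values only: a few tens of kHz of bandwidth at moderate SNR
# yields throughputs in the tens-to-hundreds of kbps range reported above.
for bw, snr in [(20e3, 3), (50e3, 10), (100e3, 10)]:
    print(f"B = {bw/1e3:.0f} kHz, SNR = {snr} dB -> C = {shannon_capacity(bw, snr)/1e3:.0f} kbps")
```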

Keywords: magnetic induction, FDTD, underwater communication, Sommerfeld

Procedia PDF Downloads 112
39169 Development of an EEG-Based Real-Time Emotion Recognition System on Edge AI

Authors: James Rigor Camacho, Wansu Lim

Abstract:

Over the last few years, the development of new wearable and processing technologies has accelerated in order to harness physiological data such as electroencephalograms (EEGs) for EEG-based applications. EEG has been demonstrated to be a source of emotion recognition signals with the highest classification accuracy among physiological signals. However, when emotion recognition systems are used for real-time classification, the training unit is frequently left to run offline or in the cloud rather than working locally on the edge. That strategy has hampered research, and the full potential of using an edge AI device has yet to be realized. Edge AI devices are high-performance computers that can process complex algorithms: they can collect, process, and store data on their own, and they can run complicated algorithms such as localization, detection, and recognition in real-time applications, making them powerful embedded devices. The NVIDIA Jetson series, specifically the Jetson Nano device, was used in the implementation. The cEEGrid, which is integrated with the open-source brain-computer interface platform (OpenBCI), is used to collect EEG signals. An EEG-based real-time emotion recognition system on edge AI is proposed in this paper. Machine learning-based classifiers are used to perform graphical spectrogram categorization of the EEG signals and to predict emotional states from the input data properties. In the EEG signal processing, after each EEG signal is received in real time, it is translated from the time domain to the frequency domain using the Fast Fourier Transform (FFT), which makes the frequency bands in each EEG signal observable. To appropriately capture the variance of each EEG frequency band, the power density, standard deviation, and mean are calculated and employed as features. These features are then used to predict emotion in the EEG data with the K-Nearest Neighbors (KNN) technique, a supervised learning algorithm; arousal and valence datasets are used to train the parameters defined by the KNN technique. Because classification, recognition of specific classes, and emotion prediction are conducted both online and locally on the edge, the KNN technique increased the performance of the emotion recognition system on the NVIDIA Jetson Nano. Finally, this implementation aims to bridge the research gap on cost-effective and efficient real-time emotion recognition using a resource-constrained hardware device such as the NVIDIA Jetson Nano, and EEG-based emotion identification on edge AI can be employed in applications that could rapidly expand its research and industrial use.
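
A minimal sketch of the FFT band-power plus KNN pipeline described above is given below. The 250 Hz sampling rate, the band edges, the 2 s window length, and the random training data are all assumptions standing in for the cEEGrid/OpenBCI recordings.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

FS = 250  # assumed sampling rate in Hz (common for OpenBCI boards)

def band_power_features(eeg_window, fs=FS):
    """FFT-based power in the classic EEG bands, plus mean and std."""
    freqs = np.fft.rfftfreq(len(eeg_window), d=1 / fs)
    psd = np.abs(np.fft.rfft(eeg_window)) ** 2
    bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
             "beta": (13, 30), "gamma": (30, 45)}
    powers = [psd[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in bands.values()]
    return np.array(powers + [eeg_window.mean(), eeg_window.std()])

# Hypothetical training data: one feature row per 2 s EEG window,
# labelled by arousal/valence quadrant (0-3).
X_train = np.vstack([band_power_features(np.random.randn(FS * 2)) for _ in range(100)])
y_train = np.random.randint(0, 4, size=100)

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
print(knn.predict(band_power_features(np.random.randn(FS * 2)).reshape(1, -1)))
```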

Keywords: edge AI device, EEG, emotion recognition system, supervised learning algorithm, sensors

Procedia PDF Downloads 92
39168 Understanding Regional Circulations That Modulate Heavy Precipitation in the Kulfo Watershed

Authors: Tesfay Mekonnen Weldegerima

Abstract:

Analysis of precipitation time series is a fundamental undertaking in meteorology and hydrology. The extreme precipitation scenario of the Kulfo River watershed is studied using wavelet analysis and an atmospheric transport (Lagrangian trajectory) model. Daily rainfall data for the 1991-2020 study period were collected from the Ethiopian Meteorology Institute. Meteorological fields on a three-dimensional grid at 0.5° × 0.5° spatial resolution and daily temporal resolution were also obtained from the Global Data Assimilation System (GDAS). Wavelet analysis of the daily precipitation, processed with the lag-1 coefficient, reveals high power recurring once every 38 to 60 days, significant at greater than 95% confidence against red noise. The analysis also identified inter-annual periodicity in the periods 2002-2005 and 2017-2019. Back trajectory analysis for 3-day periods up to 19 May 2011 indicates an Indian Ocean source; trajectories crossed the eastern African escarpment to arrive at the Kulfo watershed. Atmospheric flows associated with the western Indian monsoon, redirected by the low-level Somali winds and the Arabian ridge, are responsible for the moisture supply. The time-localization of the wavelet power spectrum yields valuable hydrological information, and the back trajectory approach provides a useful characterization of air mass sources.
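
The red-noise significance test mentioned above is commonly done against a lag-1 autoregressive background spectrum, following Torrence & Compo (1998); a compact sketch follows, with a synthetic series standing in for the daily rainfall.

```python
import numpy as np
from scipy.stats import chi2

def rednoise_background(x):
    """Theoretical AR(1) ('red noise') spectrum from the lag-1 coefficient."""
    x = (x - x.mean()) / x.std()
    alpha = np.corrcoef(x[:-1], x[1:])[0, 1]   # lag-1 autocorrelation
    freqs = np.fft.rfftfreq(len(x))            # cycles per time step
    background = (1 - alpha**2) / (1 + alpha**2 - 2 * alpha * np.cos(2 * np.pi * freqs))
    return freqs, background, alpha

# A spectral peak is significant at the 95% level against red noise when the
# normalized power exceeds background * chi2.ppf(0.95, df=2) / 2.
rain = np.random.randn(365 * 30)               # placeholder for daily rainfall
freqs, bg, alpha = rednoise_background(rain)
threshold = bg * chi2.ppf(0.95, df=2) / 2
```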

Keywords: extreme precipitation events, power spectrum, back trajectory, Kulfo watershed

Procedia PDF Downloads 53
39167 Behavior of Steel Moment Frames Subjected to Impact Load

Authors: Hyungoo Kang, Minsung Kim, Jinkoo Kim

Abstract:

This study investigates the performance of 2D and 3D steel moment frames subjected to vehicle collision at a first-story column using LS-DYNA. The finite element models of vehicles provided by the National Crash Analysis Center (NCAC) are used for the numerical analysis. Nonlinear dynamic time history analyses of the 2D and 3D model structures are carried out based on the arbitrary column removal scenario, and the vertical displacement of the damaged structures is compared with that obtained from the collision analysis. The analysis results show that the model structure remains stable when the speed of the vehicle is 40 km/h. However, at speeds of 80 and 120 km/h, both the 2D and 3D structures fail by progressive collapse. The vertical displacement of the damaged joint obtained from the collision analysis is significantly larger than the displacement computed based on the arbitrary column removal scenario.

Keywords: vehicle collision, progressive collapse, FEM, LS-DYNA

Procedia PDF Downloads 327
39166 Perceptual Image Coding by Exploiting Internal Generative Mechanism

Authors: Kuo-Cheng Liu

Abstract:

In perceptual image coding, the objective is to shape the coding distortion such that its amplitude does not exceed the error visibility threshold, or to remove perceptually redundant signals from the image. Although much research focuses on color image coding, perceptual quantizers developed for luminance signals are often applied directly to chrominance signals, making such color image compression methods inefficient. In this paper, the internal generative mechanism (IGM) is integrated into the design of a color image compression method. The IGM working model, based on structure-based spatial masking, is used to assess subjective distortion visibility thresholds that are more consistent with human visual perception. An estimation method for structure-based distortion visibility thresholds for the color components is further presented in a locally adaptive way to design the quantization process in a wavelet color image compression scheme. Since the lowest subband coefficient matrix of an image in the wavelet domain preserves the local properties of the image in the spatial domain, the error visibility threshold inherent in each coefficient of the lowest subband of each color component is estimated using the proposed spatial error visibility threshold assessment. The threshold inherent in each coefficient of the other subbands of each color component is then estimated in a locally adaptive fashion based on the distortion energy allocation. Because the error visibility thresholds are estimated using predicted and reconstructed signals of the color image, the coding scheme incorporating the locally adaptive perceptual color quantizer does not require side information. Experimental results show that the entropies of the three color components obtained using the proposed IGM-based color image compression scheme are lower than those obtained using an existing color image compression method at perceptually lossless visual quality.
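
A simplified sketch of wavelet-domain quantization driven by visibility thresholds follows, using PyWavelets. It assumes one uniform threshold per decomposition level; the paper's thresholds are locally adaptive and structure-based, which is not reproduced here.

```python
import numpy as np
import pywt

def perceptual_quantize(channel, thresholds, wavelet="db4", level=3):
    """Quantize wavelet coefficients of one color channel with a per-subband
    step derived from visibility thresholds: distortion below the threshold is
    assumed invisible, so the step is set to twice the threshold."""
    coeffs = pywt.wavedec2(channel, wavelet, level=level)
    q0 = 2 * thresholds[0]
    out = [np.round(coeffs[0] / q0) * q0]                 # lowest subband
    for i, details in enumerate(coeffs[1:], start=1):     # (cH, cV, cD) per level
        q = 2 * thresholds[i]
        out.append(tuple(np.round(c / q) * q for c in details))
    return pywt.waverec2(out, wavelet)

# Hypothetical thresholds, one per subband level (coarse to fine).
channel = np.random.rand(256, 256) * 255
reconstructed = perceptual_quantize(channel, thresholds=[4.0, 8.0, 6.0, 4.0])
```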

Keywords: internal generative mechanism, structure-based spatial masking, visibility threshold, wavelet domain

Procedia PDF Downloads 233
39165 Evaluation of Elemental Impurities in Drugs According to Pharmacopoeia Using the FESEM-EDS Technique

Authors: Rafid Doulab

Abstract:

Elemental impurity analysis in the pharmaceutical industry is indispensable to ensure pharmaceutical safety for 24 elements. Although atomic absorption and inductively coupled plasma methods are used in the U.S. Pharmacopeia and the European Pharmacopoeia, FESEM with an energy-dispersive spectrometer (EDS) can be applied as an alternative analysis method giving quantitative and qualitative results for a variety of elements without chemical pretreatment, unlike other techniques. This technique is characterized by short analysis time, less contamination, no reagent consumption, minimal residue or waste, limited sample preparation time, and minimal analysis error. Using simple dilution for powders or direct analysis for liquids, we analyzed the usefulness of the EDS method with field emission scanning electron microscopy (FESEM, SUPRA 55, Carl Zeiss, Germany) equipped with an X-ray energy-dispersive detector (XFlash 6|10, Bruker, Germany). The samples were analyzed directly, without coating, by applying 5 µL of a diluted sample of known concentration on a carbon stub, with the accelerating voltage set according to the sample thickness. The result for each spot is given in atomic percentage and is converted, using Avogadro's number, into micrograms. Conclusion and recommendation: the conclusion of this study is that applying FESEM-EDS within the U.S. Pharmacopeia and the ICH Q3D guideline provides a precise and accurate method for elemental impurity analysis of drugs or bulk materials, to determine the permitted daily exposure (PDE) in liquid or solid specimens, and can obtain better results than other techniques, since it does not require complex digestion methods or chemicals, which can interfere with the final results, while allowing the sample to be kept for re-analysis at any time. The recommendation is to adopt this technique in pharmacopoeias as a standard method alongside the inductively coupled plasma techniques (ICP-AES/ICP-OES and ICP-MS).
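
The conversion from an EDS atomic-percentage result to a mass in micrograms can be illustrated as below. The element list, molar masses, and residue mass are hypothetical, and the stoichiometric conversion (atomic percent weighted by molar mass) stands in for the authors' Avogadro-based factor.

```python
# Hypothetical EDS spot result: atomic percentages for a spiked sample.
atomic_percent = {"Pb": 0.002, "Cd": 0.001, "C": 60.0, "O": 39.997}
molar_mass = {"Pb": 207.2, "Cd": 112.41, "C": 12.011, "O": 15.999}  # g/mol

# Atomic % -> mass fraction: weight each element by its molar mass.
total = sum(atomic_percent[e] * molar_mass[e] for e in atomic_percent)
mass_fraction = {e: atomic_percent[e] * molar_mass[e] / total for e in atomic_percent}

sample_mass_ug = 5.0  # assumed dry-residue mass of the 5 uL aliquot, in micrograms
for element in ("Pb", "Cd"):
    print(f"{element}: {mass_fraction[element] * sample_mass_ug:.2e} ug")
```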

Keywords: pharmacopoeia, FESEM-EDS, elemental impurities, atomic concentration

Procedia PDF Downloads 100
39164 Actual Fracture Length Determination Using a Technique for Shale Fracturing Data Analysis in Real Time

Authors: M. Wigwe, M. Y Soloman, E. Pirayesh, R. Eghorieta, N. Stegent

Abstract:

The moving reference point (MRP) technique has been used in the analysis of the first three stages of two fracturing jobs. The results obtained verify the proposition that a hydraulic fracture in shale grows in spurts rather than in a continuous pattern, as originally interpreted by the Nolte-Smith technique. Rather than a continuous Mode I fracture that is followed by Mode II, III or IV fractures, these fracture modes can alternate throughout the pumping period. It is also shown that the Nolte-Smith time parameter plot can be very helpful in identifying the presence of natural fractures that have been intersected by the hydraulic fracture. In addition, with the aid of a fracture length-time plot generated from any fracture simulation that matches the data, the distance from the wellbore to the natural fractures, which also translates to the actual fracture length for the stage, can be determined. An algorithm for this technique is developed. The procedure was applied to the first 9 minutes of the simulated frac job data. It was observed that after 7 minutes, the actual fracture length is about 150 ft, instead of the 250 ft predicted by the simulator output; this difference grows larger as the analysis proceeds.
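
The Nolte-Smith interpretation rests on the log-log slope of net pressure versus time; a sketch of that slope computation follows, with the window size and the synthetic pressure record as illustrative assumptions.

```python
import numpy as np

def nolte_smith_slopes(time_min, net_pressure, window=10):
    """Rolling log-log slope of net pressure vs. time. In the classic
    Nolte-Smith interpretation, a small positive slope indicates normal
    Mode I extension, a near-zero slope Mode II (height growth or fissure
    opening), a unit slope Mode III (restricted tip), and a negative slope
    Mode IV (uncontrolled height growth)."""
    lt, lp = np.log(time_min), np.log(net_pressure)
    return np.array([np.polyfit(lt[i:i + window], lp[i:i + window], 1)[0]
                     for i in range(len(lt) - window + 1)])

# Synthetic stand-in for treating-pressure data sampled every 6 seconds.
t = np.arange(1, 91) / 10.0                        # minutes
p_net = 80 * t**0.2 + np.random.normal(0, 1, t.size)
slopes = nolte_smith_slopes(t, p_net)
```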

Keywords: shale, fracturing, reservoir, simulation, frac-length, moving-reference-point

Procedia PDF Downloads 735
39163 Evaluation of Quick Covering Machine for Grain Drying Pavement

Authors: Fatima S. Rodriguez, Victorino T. Taylan, Manolito C. Bulaong, Helen F. Gavino, Vitaliana U. Malamug

Abstract:

In sun drying, grain quality is greatly reduced when paddy grains are caught by the rain unsacked and unstored, resulting in reduced profit. The objectives of this study were to design and fabricate a quick covering machine for a grain drying pavement; to test and evaluate the operating characteristics of the machine in terms of deployment speed, recovery speed, deployment time, recovery time, power consumption, and aesthetics of the laminated sack; and to conduct a partial budget and cost curve analysis. The machine was able to cover the grains on a 12.8 m x 22.5 m grain drying pavement in an average time of 17.13 s. It consumed 0.53 W-h for the deployment and recovery of the cover. The machine entailed an investment cost of $1,344.40 and an annual cost charge of $647.32. Moreover, the savings per year using the quick covering machine were $101.83.

Keywords: quick covering machine, grain drying pavement, laminated polypropylene, recovery time

Procedia PDF Downloads 304
39162 Use of Telephone Counselling in Employee Assistance Program

Authors: Andy S.K. Cheng, Samuel Leung, Cindy Kwok, Hector Tsang

Abstract:

Background: Telephone counselling is one of the essential interventions found in most Employee Assistance Programs (EAPs). The purposes of this study were to (1) explore the trend of telephone counselling from 2003-2016 in Hong Kong; (2) explore which EAP issues require more follow-up; and (3) examine the relationship between EAP issues and demographic data such as gender and job ranking. Method: Data on EAP service usage were collected from EAP providers in Hong Kong during 2003-2016. EAP issues were categorized into two domains, workplace issues and personal issues, each with 12 sub-categories. Two hypotheses were formulated: (1) there is a gender difference in EAP issues and follow-up hours; and (2) there is a significant difference between job ranking, EAP issues, and follow-up hours. Results: A total of 893 valid cases were identified for analysis; of these, 343 cases sought follow-up, and the duration of follow-up in hours was calculated for each. The results show that the three workplace issues requiring the longest follow-up were (1) workload, (2) supervisor-subordinate relationship, and (3) team member relationships, while the three personal issues requiring the longest follow-up were (1) parenting/parent-child relationship, (2) family care, and (3) marital relationship. A two-way ANOVA was performed to compare the total follow-up hours (excluding the first intake) between gender and EAP issues. There was no statistically significant effect of gender (p = .891), but a statistically significant main effect of EAP issues (p < .001) was found. Post-hoc analysis (Tukey's test) showed that total follow-up hours for personal issues were significantly higher than those for workplace issues (p < .001). However, there was no statistically significant interaction effect between gender and EAP issues (p = .879) or between job ranking and EAP issues (p = .843). Conclusion: Telephone counselling is a very common intervention for addressing EAP issues arising at the workplace and personal levels in Hong Kong. It was frequently used to handle interpersonal relationships, and service usage was independent of gender and job ranking.
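
A sketch of the reported two-way ANOVA with Tukey post-hoc test, using statsmodels on synthetic stand-in data (the column names and distributions are hypothetical, not the study's records):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Synthetic stand-in for the 343 follow-up cases.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "gender": rng.choice(["F", "M"], 343),
    "domain": rng.choice(["workplace", "personal"], 343),
    "follow_up_hours": rng.gamma(2.0, 1.5, 343),
})

model = smf.ols("follow_up_hours ~ C(gender) * C(domain)", data=df).fit()
print(anova_lm(model, typ=2))                                   # main + interaction effects
print(pairwise_tukeyhsd(df["follow_up_hours"], df["domain"]))   # post-hoc comparison
```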

Keywords: employee assistance program, follow-up time, interpersonal relationships, telephone counselling

Procedia PDF Downloads 202
39161 Effects of Peakedness of Bimodal Waves on Overtopping of Sloping Seawalls

Authors: Stephen Orimoloye, Jose Horrillo-Caraballo, Harshinie Karunarathna, Dominic E. Reeve

Abstract:

Prediction of wave overtopping is an essential component of coastal seawall design and management. Not only has excessive overtopping been reported for impermeable seawalls under bimodal waves, but overtopping also shows a high sensitivity to the peakedness of the random wave propagation patterns. In the present study, we present a comprehensive analysis of the effects of the peakedness of bimodal wave patterns on the overtopping of sloping seawalls. An energy-conserved bimodal spectrum with four different spectral peak periods and swell percentages was applied to estimate wave overtopping in both numerical and experimental flumes. Incident surface elevations and bimodal spectra were accurately captured across the flume domain using sets of well-positioned resistance-type wave gauges. Peakedness characteristics of the wave patterns were extracted to derive a relationship between the non-dimensional overtopping and the peakedness across the wave groups in the wave series. The full paper will briefly describe the development of the spectrum and present a comprehensive analysis of the results, leading to the derivation of the relationship between dimensionless overtopping and the peakedness of bimodal waves.
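
One way to build an energy-conserved bimodal spectrum is to rescale two JONSWAP components so that the total variance stays fixed while the swell share varies; the sketch below works under that assumption, with all peak frequencies and fractions illustrative rather than the authors' spectral form.

```python
import numpy as np

def jonswap(f, fp, gamma=3.3, g=9.81, alpha=0.0081):
    """Standard JONSWAP spectral shape (unscaled)."""
    sigma = np.where(f <= fp, 0.07, 0.09)
    r = np.exp(-((f - fp) ** 2) / (2 * sigma**2 * fp**2))
    return alpha * g**2 * (2 * np.pi) ** -4 * f**-5 * np.exp(-1.25 * (fp / f) ** 4) * gamma**r

def bimodal_spectrum(f, fp_swell, fp_sea, swell_fraction, total_m0):
    """Energy-conserving bimodal spectrum: each component is rescaled so the
    swell carries swell_fraction of the fixed total variance m0."""
    s_swell, s_sea = jonswap(f, fp_swell), jonswap(f, fp_sea)
    s_swell *= swell_fraction * total_m0 / np.trapz(s_swell, f)
    s_sea *= (1 - swell_fraction) * total_m0 / np.trapz(s_sea, f)
    return s_swell + s_sea

f = np.linspace(0.01, 1.0, 2000)   # Hz
S = bimodal_spectrum(f, fp_swell=0.07, fp_sea=0.15, swell_fraction=0.3, total_m0=1.0)
```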

Keywords: wave overtopping, peakedness, bimodal waves, swell percentages

Procedia PDF Downloads 172
39160 Dynamic Analysis of the Heat Transfer in the Magnetically Assisted Reactor

Authors: Tomasz Borowski, Dawid Sołoducha, Rafał Rakoczy, Marian Kordas

Abstract:

The application of magnetic fields is essential for a wide range of technologies and processes (e.g., magnetic hyperthermia, bioprocessing). From a practical point of view, bioprocess control is often limited to the regulation of temperature at constant values favourable to microbial growth. The main aim of this study is to determine the effect of various types of electromagnetic fields (i.e., static or alternating) on heat transfer in a self-designed magnetically assisted reactor. The experimental set-up is equipped with a measuring instrument that controls the temperature of the liquid inside the container and supervises the real-time acquisition of all the experimental data coming from the sensors. Temperature signals are also sampled from the magnetic field generator. The obtained temperature profiles were mathematically described and analyzed, and the parameters characterizing the response of a first-order dynamic system to a step input were obtained and discussed. For example, a higher value of the time constant means a slower signal (in this case, temperature) rise; after a period equal to about five time constants, the sample temperature nearly reaches its asymptotic value. This dynamical analysis allowed us to understand the heating effect under the action of various types of electromagnetic fields. Moreover, the proposed mathematical description can be used to compare the influence of different types of magnetic fields on heat transfer operations.
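
A sketch of fitting the first-order step response and reading off the time constant follows; synthetic data stand in for the reactor's temperature record, and the parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def first_order_step(t, T0, dT, tau):
    """Step response of a first-order system: T(t) = T0 + dT * (1 - exp(-t/tau))."""
    return T0 + dT * (1 - np.exp(-t / tau))

# t and T would be the sampled time/temperature signals from the reactor;
# synthetic data stand in here.
t = np.linspace(0, 600, 300)
T = first_order_step(t, 20.0, 15.0, 120.0) + np.random.normal(0, 0.1, t.size)

(T0, dT, tau), _ = curve_fit(first_order_step, t, T, p0=(T[0], T[-1] - T[0], 100.0))
print(f"time constant tau = {tau:.1f} s; ~99% of the rise is reached after {5 * tau:.0f} s")
```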

Keywords: heat transfer, magnetically assisted reactor, dynamical analysis, transient function

Procedia PDF Downloads 159
39159 Prevention of Road Accidents by Computerized Drowsiness Detection System

Authors: Ujjal Chattaraj, P. C. Dasbebartta, S. Bhuyan

Abstract:

This paper aims to propose a method to detect the state of the driver's eyes using the concept of face detection. Three key methods rapidly process the facial image and produce results that can trigger pre-programmed vehicle reactions for traffic safety. This paper compares and analyses these methods on the basis of their reaction time and their ability to deal with fluctuating images of the driver. The program used in this study is simple and efficient, built using the AdaBoost learning algorithm; through this program, the system is able to discard background regions and focus on face-like regions. The results are analyzed on a common computer, which makes the approach feasible for end users. The application domain of this experiment is quite wide, covering the detection of drowsiness, the detection of the influence of alcohol on drivers, and driver identification.
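
OpenCV's Haar cascades are a standard AdaBoost-trained face/eye detector of the kind the abstract describes; a sketch follows, not the authors' exact program, with the input frame as a placeholder.

```python
import cv2

# Haar cascades are trained with AdaBoost; OpenCV ships pretrained ones.
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

frame = cv2.imread("driver.jpg")        # placeholder frame from an in-cabin camera
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
    roi = gray[y:y + h, x:x + w]        # restrict the eye search to the face region
    eyes = eye_cascade.detectMultiScale(roi)
    # If eyes stay undetected over several consecutive frames,
    # a drowsiness flag could be raised here.
    print(f"face at ({x},{y}), {len(eyes)} eye(s) detected")
```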

Keywords: AdaBoost learning algorithm, face detection, framework, traffic safety

Procedia PDF Downloads 145
39158 Driver Take-Over Time When Resuming Control from Highly Automated Driving in Truck Platooning Scenarios

Authors: Bo Zhang, Ellen S. Wilschut, Dehlia M. C. Willemsen, Marieke H. Martens

Abstract:

With the rapid development of intelligent transportation systems, automated platooning of trucks is drawing increasing interest for its beneficial effects on safety, energy consumption, and traffic flow efficiency. Nevertheless, one major challenge lies in the safe transition of control from the automated system back to the human driver, especially after drivers have been inattentive during a long period of highly automated driving. In this study, we investigated driver take-over time after a system-initiated request to leave the platooning system (Virtual Tow Bar) in a non-critical scenario. 22 professional truck drivers participated in a truck driving simulator experiment; each was instructed to drive under three experimental conditions before the presentation of the take-over request (TOR): driver ready (drivers were instructed to monitor the road constantly), driver not ready (drivers were provided with a tablet), and eyes shut. The results showed significantly longer take-over times in both the driver not-ready and eyes-shut conditions compared with the driver-ready condition. Further analysis revealed hand movement time as the main factor causing the long response time in the driver not-ready condition, while in the eyes-shut condition gaze reaction time also contributed substantially to the total take-over time. In addition to the differences between the means, large individual differences were found, especially in the two driver-inattentive conditions. The results underline the importance of a personalized driver readiness predictor for a safe transition.

Keywords: driving simulation, highly automated driving, take-over time, transition of control, truck platooning

Procedia PDF Downloads 237
39157 Thermal-Mechanical Analysis of a Bridge Deck to Determine Residual Weld Stresses

Authors: Evy Van Puymbroeck, Wim Nagy, Ken Schotte, Heng Fang, Hans De Backer

Abstract:

The knowledge of residual stresses in welded bridge components is essential to determine their effect on fatigue life behavior. The residual stresses of an orthotropic bridge deck are determined by simulating the welding process with finite element modelling. The stiffener is placed on top of the deck plate before welding. A chained thermal-mechanical analysis is set up to determine the distribution of residual stresses in the bridge deck. First, a thermal analysis is used to determine the temperatures of the orthotropic deck at different time steps during the welding process. Twin-wire submerged arc welding is used to construct the orthotropic plate. A double-ellipsoidal volumetric heat source model is used to describe the heat flow through the material for a moving heat source. The heat input is used to determine the heat flux, which is applied as a thermal load during the thermal analysis. The heat flux for each element is calculated at different time steps to simulate the passage of the welding torch at the considered welding speed; this results in a time-dependent heat flux that is applied as a thermal loading. Thermal material behavior is specified by assigning the properties of the material as a function of the high temperatures reached during welding, and isotropic hardening behavior is included in the model. The thermal analysis simulates the heat introduced into the two plates of the orthotropic deck and calculates the temperatures during the welding process. After the calculation of these temperatures, a subsequent mechanical analysis is performed. For the boundary conditions of the mechanical analysis, the actual welding conditions are considered. Before welding, the stiffener is connected to the deck plate by tack welds, which are implemented in the model. The deck plate is allowed to expand freely in the upward direction while it rests on a firm and flat surface; this behavior is modelled using grounded springs. Furthermore, symmetry points and lines are used to prevent the model from moving freely in other directions. In the mechanical analysis, a mechanical material model is used, and the temperatures calculated in the thermal analysis are introduced as a time-dependent load. The connection of the elements of the two plates in the fusion zone is realized with a glued connection that is activated when the welding temperature is reached. The mechanical analysis results in a distribution of the residual stresses. The distribution of the residual stresses of the orthotropic bridge deck is compared with results from the literature. The literature proposes uniform tensile yield stresses in the weld, while the finite element modelling showed tensile yield stresses at a short distance from the weld root or the weld toe. The chained thermal-mechanical analysis thus yields a distribution of residual weld stresses for an orthotropic bridge deck. In future research, the effect of these residual stresses on the fatigue life behavior of welded bridge components can be studied.
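
The double-ellipsoidal volumetric heat source is commonly formulated as Goldak's model; a sketch under that assumption follows, with all welding parameters (efficiency, voltage, current, ellipsoid dimensions) illustrative rather than taken from the paper.

```python
import numpy as np

def goldak_flux(x, y, xi, Q, a, b, c_f, c_r, f_f=0.6, f_r=1.4):
    """Goldak double-ellipsoidal volumetric heat flux (W/m^3).
    x: transverse, y: depth, xi: moving coordinate along the weld
    (xi = z - v*t); a, b: ellipsoid half-width and depth; c_f, c_r:
    front/rear lengths; f_f + f_r = 2 apportions the heat front/rear."""
    c, f = (c_f, f_f) if xi >= 0 else (c_r, f_r)
    coeff = 6 * np.sqrt(3) * f * Q / (a * b * c * np.pi * np.sqrt(np.pi))
    return coeff * np.exp(-3 * (x**2 / a**2 + y**2 / b**2 + xi**2 / c**2))

# Illustrative heat input: Q = eta * U * I for submerged arc welding.
Q = 0.9 * 30 * 700   # W, assumed efficiency, voltage, current
print(goldak_flux(0.0, 0.0, 0.0, Q, a=0.01, b=0.008, c_f=0.01, c_r=0.02))
```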

Keywords: finite element modelling, residual stresses, thermal-mechanical analysis, welding simulation

Procedia PDF Downloads 162
39156 A Non-Destructive Estimation Method for Internal Time in Perilla Leaf Using Hyperspectral Data

Authors: Shogo Nagano, Yusuke Tanigaki, Hirokazu Fukuda

Abstract:

Vegetables harvested early in the morning or late in the afternoon are valued in plant production, so the time of harvest is important. The biological functions known as circadian clocks have a significant effect on this harvest timing. The purpose of this study was to estimate the circadian clock non-destructively and so construct a method for determining a suitable harvest time. We took eight samples of green perilla (Perilla frutescens var. crispa) every 4 hours, six times over 1 day, and analyzed all samples at the same time. A hyperspectral camera was used to collect spectral intensities at 141 different wavelengths (350-1050 nm). Calculation of the correlations between the spectral intensity at each wavelength and the harvest time suggested the suitability of the hyperspectral camera for non-destructive estimation. However, even the most highly correlated wavelength had only a weak correlation, so we used machine learning to raise the estimation accuracy and constructed a machine learning model to estimate the internal time of the circadian clock. Artificial neural networks (ANNs) were used because they are an effective analysis method for large amounts of data. Using the estimation model resulted in an error between estimated and real times of 3 min, and the estimations were completed in less than 2 hours. Thus, we successfully demonstrated this method of non-destructively estimating internal time.
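
A sketch of an ANN regression from spectra to internal time follows. Encoding the 24 h cycle as sine/cosine components is our assumption for handling periodicity, and the spectra and sampling schedule below are synthetic stand-ins.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# X: hyperspectral intensities (48 samples x 141 wavelengths);
# t: internal time in hours (6 time points x 8 replicates). Synthetic here.
X = np.random.rand(48, 141)
t = np.repeat(np.arange(0, 24, 4), 8).astype(float)

# Encode the 24 h cycle as (sin, cos) so 23:59 and 00:01 are neighbours.
y = np.column_stack([np.sin(2 * np.pi * t / 24), np.cos(2 * np.pi * t / 24)])

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000))
model.fit(X, y)

pred = model.predict(X[:1])[0]
hour = (np.arctan2(pred[0], pred[1]) % (2 * np.pi)) * 24 / (2 * np.pi)
```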

Keywords: artificial neural network (ANN), circadian clock, green perilla, hyperspectral camera, non-destructive evaluation

Procedia PDF Downloads 284
39155 Modal FDTD Method for Wave Propagation Modeling Customized for Parallel Computing

Authors: H. Samadiyeh, R. Khajavi

Abstract:

A new FD-based procedure, the modal finite difference method (MFDM), is proposed for seismic wave propagation modeling, in which the simulation is carried out in the modal space. The method employs the eigenvalues of a characteristic matrix formed by appropriate time-space FD stencils. Since the MFD runs for different modes are totally independent of each other, MFDM can easily be parallelized, and considerable simplicity in the parallel algorithm is achieved: no domain-decomposition procedure or inter-core data exchange is required. More importantly, it is possible to skip the processing of less-significant modes, which enables one to tune the procedure to the level of accuracy needed. Thus, in addition to the considerable ease of parallel programming, computation and storage costs are significantly reduced. The efficiency of the method is demonstrated by numerical examples.
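
A toy illustration of the modal idea on a 1D second-order FD operator: eigendecompose once, evolve each mode independently (so modes could be distributed across cores with no data exchange), and skip weak modes. The paper's actual stencils, dimensionality, and parallel machinery are not reproduced here.

```python
import numpy as np

n, c, dx = 200, 1.0, 1.0
# Second-order FD Laplacian with fixed ends.
L = (np.diag(-2 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / dx**2
lam, V = np.linalg.eigh(L)                 # modal space: L = V diag(lam) V^T

u0 = np.exp(-0.01 * (np.arange(n) - n / 2) ** 2)   # initial displacement, zero velocity
a0 = V.T @ u0                                       # modal amplitudes

# For u_tt = c^2 L u, each mode evolves independently as
# a_k(t) = a0_k * cos(omega_k t) with omega_k = c * sqrt(-lam_k),
# so less-significant modes can simply be skipped.
keep = np.abs(a0) > 1e-6 * np.abs(a0).max()
omega = c * np.sqrt(-lam[keep])
u_t = V[:, keep] @ (a0[keep] * np.cos(omega * 50.0))  # solution at t = 50
```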

Keywords: finite difference method, graphics processing unit (GPU), message passing interface (MPI), modal, wave propagation

Procedia PDF Downloads 278
39154 X-Ray Diffraction and Mössbauer Studies of Nanostructured Ni45Al45Fe10 Powders Elaborated by Mechanical Alloying

Authors: N. Ammouchi

Abstract:

We have studied the effect of milling time on the structural and hyperfine properties of the Ni45Al45Fe10 compound elaborated by mechanical alloying. The elaboration was performed using a planetary ball mill with different milling times. The as-milled powders were characterized by X-ray diffraction (XRD) and Mössbauer spectroscopy. From the XRD spectra, we show that the β-NiAl(Fe) phase was completely formed after 24 h of milling. As the milling time increases, the lattice parameter increases, whereas the grain size decreases to a few nanometres and the mean level of microstrain increases. The analysis of the Mössbauer spectra indicates that, in addition to a ferromagnetic α-Fe phase, a paramagnetic disordered NiAl(Fe) solid solution is observed after 2 h, and only this phase is present after 12 h.
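
Grain size from XRD peak broadening is commonly estimated with the Scherrer equation; a sketch follows, with the peak position and width illustrative rather than the paper's measured values.

```python
import numpy as np

def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, K=0.9):
    """Scherrer estimate of crystallite size from XRD peak broadening:
    D = K * lambda / (beta * cos(theta)), with beta in radians and the
    instrumental broadening assumed already subtracted. Default wavelength
    is Cu K-alpha."""
    theta = np.radians(two_theta_deg / 2)
    beta = np.radians(fwhm_deg)
    return K * wavelength_nm / (beta * np.cos(theta))

# Illustrative: a broadened B2 NiAl reflection gives a size of a few nm.
print(f"D = {scherrer_size(44.5, 0.9):.1f} nm")
```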

Keywords: NiAlFe, nanostructured powders, X-ray diffraction, Mössbauer spectroscopy

Procedia PDF Downloads 364
39153 Helping the Development of Public Policies with Knowledge of Criminal Data

Authors: Diego De Castro Rodrigues, Marcelo B. Nery, Sergio Adorno

Abstract:

The project aims to develop a framework for social data analysis, particularly by mobilizing criminal records and applying descriptive computational techniques, such as associative algorithms and the extraction of decision-tree rules, among others. The methods and instruments discussed in this work enable the discovery of patterns, providing a guided means to identify similarities between recurring situations in the social sphere using descriptive techniques and data visualization. The study area is the city of São Paulo, with the structuring of social data as the central idea and a particular focus on the quality of the information. Given this, a set of tools will be validated, including the use of a database and tools for visualizing the results. Among the main deliverables, related to products and the development of articles, are the discoveries made during the research phase. The effectiveness and utility of the results will depend on studies involving real data, validated both by domain experts and by identifying and comparing the patterns found in this study with other phenomena described in the literature. The intention is to contribute to evidence-based understanding and decision-making in the social field.
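 
A sketch of descriptive decision-tree rule extraction with scikit-learn follows; the record schema and the synthetic data are hypothetical stand-ins for the criminal records.

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for record-level criminal occurrence data.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "district": rng.choice(["centre", "east", "south"], 5000),
    "hour_band": rng.choice(["day", "evening", "night"], 5000),
    "crime_type": rng.choice(["theft", "robbery", "fraud"], 5000),
})

X = pd.get_dummies(df[["district", "hour_band"]])
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=100).fit(X, df["crime_type"])

# export_text renders the fitted tree as readable IF-THEN rules that
# analysts can inspect directly -- a descriptive, not predictive, output.
print(export_text(tree, feature_names=list(X.columns)))
```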

Keywords: social data analysis, criminal records, computational techniques, data mining, big data

Procedia PDF Downloads 68
39152 Vortex Generation to Model the Airflow Downstream of a Piezoelectric Fan Array

Authors: Alastair Hales, Xi Jiang, Siming Zhang

Abstract:

Numerical methods are used to generate vortices in a domain. Through considered design, two counter-rotating vortices may interact and effectively drive one another downstream. This phenomenon is comparable to the vortex interaction that occurs in the region immediately downstream of two counter-oscillating piezoelectric (PE) fan blades. PE fans are small blades clamped at one end and driven to oscillate at their first natural frequency by an extremely low-powered actuator. In operation, the high oscillation amplitude and frequency generate sufficient blade tip speed through the surrounding air to create downstream airflow. PE fans are considered an ideal solution for low-power hot-spot cooling in a range of small electronic devices, but a single blade does not typically induce enough airflow to be considered a direct alternative to conventional air movers, such as axial fans. The development of face-to-face PE fan arrays containing multiple blades oscillating in counter-phase to one another is essential for expanding the range of potential PE fan applications to the cooling of power electronics. Even in an unoptimised state, these arrays can move air volumes comparable to axial fans with less than 50% of the power demand. Replicating the airflow generated by face-to-face PE fan arrays without including the actual blades in the model reduces the computational demands of the process and enhances the rate of innovation and development in the field. Vortices are generated at a defined inlet using a time-dependent velocity profile function that pulsates the inlet air velocity magnitude. This induces vortex generation in the considered domain, and these vortices are shown to separate and propagate downstream in a regular manner. The generation and propagation of a single vortex are compared to an equivalent vortex generated by a PE fan blade in a previous experimental investigation. Vortex separation is found to be accurately replicated in the present numerical model. Additionally, the downstream trajectories of the vortices' centres vary by just 10.5%, and the size and strength of the vortices differ by a maximum of 10.6%. Through non-dimensionalisation, the numerical method is shown to be valid for PE fan blades with parameters differing from the specific case investigated. The thorough validation methods presented verify that the numerical model may be used to replicate vortex formation from an oscillating PE fan blade. An investigation is carried out to evaluate the effect of varying the distance between the two PE fan blades, the pitch. At small pitch, the vorticity in the domain is maximised, along with turbulence in the near vicinity of the inlet zones; it is proposed that face-to-face PE fan arrays oscillating in counter-phase should have a minimal pitch to optimally cool nearby heat sources. On the other hand, downstream airflow is maximised at a larger pitch, where the vortices can fully form and effectively drive one another downstream; as such, this should be implemented when bulk airflow generation is the desired result.
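
A sketch of a pulsating inlet velocity function of the kind described follows; the sinusoidal form and all values are assumptions, with the frequency standing in for the blade's first natural frequency.

```python
import numpy as np

def inlet_velocity(t, u_mean=1.0, amplitude=0.8, freq_hz=60.0):
    """Time-dependent inlet velocity magnitude used to shed discrete
    vortices into the domain; illustrative values only."""
    return u_mean * (1.0 + amplitude * np.sin(2.0 * np.pi * freq_hz * t))

# Sampled once per time step of the CFD run (hypothetical step size).
u_bc = inlet_velocity(np.arange(0, 0.1, 1e-4))
```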

Keywords: piezoelectric fans, low energy cooling, vortex formation, computational fluid dynamics

Procedia PDF Downloads 163
39151 Customer Acquisition through Time-Aware Marketing Campaign Analysis in Banking Industry

Authors: Harneet Walia, Morteza Zihayat

Abstract:

Customer acquisition has become one of the critical issues for any business in the 21st century; a healthy customer base is an essential asset of the banking business. Term deposits act as a major source of cheap funds for banks to invest and benefit from interest rate arbitrage. To attract customers, the marketing campaigns at most financial institutions consist of multiple outbound telephone calls, with more than one contact per customer, which is a very time-consuming process. Therefore, customized direct marketing has become more critical than ever for attracting new clients. As customer acquisition becomes more difficult to achieve, an intelligent and refined contact list is necessary to sell a product smartly. The aim of this research is to increase the effectiveness of campaigns by predicting which customers are most likely to subscribe to a fixed deposit and suggesting the most suitable month to reach out to them. We design a Time Aware Upsell Prediction Framework (TAUPF) using two different approaches, with the aim of finding the best approach and technique for building the prediction model: the Upsell Prediction Approach (UPA) and the Clustered Upsell Prediction Approach (CUPA). We also address the data imbalance problem by examining and comparing different methods of sampling (up-sampling and down-sampling). Our results have shown that building such a model is quite feasible and profitable for financial institutions. The TAUPF (with either CUPA or UPA) can easily be used in any industry, such as telecom, automobile, or tourism, where the framework holds valid; in our case, CUPA appears more reliable. As shown in our research, one of the most important challenges is to define measures with enough predictive power, as the subscription to a fixed deposit depends on highly ambiguous situations and cannot be easily isolated. While we have shown the practicality of the time-aware upsell prediction model, with which financial institutions can benefit from contacting customers in the specified month, further research is needed to understand the specific time of day. In addition, a further empirical/pilot study on real, live customers needs to be conducted to prove the effectiveness of the model in the real world.
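
A sketch of the up-/down-sampling comparison described above, using scikit-learn's resample on a synthetic imbalanced label column (names and proportions are assumptions):

```python
import pandas as pd
from sklearn.utils import resample

# Synthetic imbalanced data: ~8% subscribers, as is typical for such campaigns.
df = pd.DataFrame({"subscribed": [1] * 80 + [0] * 920, "age": range(1000)})
minority, majority = df[df.subscribed == 1], df[df.subscribed == 0]

# Up-sampling replicates minority cases until the classes balance.
up = resample(minority, replace=True, n_samples=len(majority), random_state=42)
balanced_up = pd.concat([majority, up])

# Down-sampling discards majority cases instead.
down = resample(majority, replace=False, n_samples=len(minority), random_state=42)
balanced_down = pd.concat([down, minority])
```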

Keywords: customer acquisition, predictive analysis, targeted marketing, time-aware analysis

Procedia PDF Downloads 107
39150 Impacts of Urban Morphologies on Air Pollutants Dispersion in Porto's Urban Area

Authors: Sandra Rafael, Bruno Vicente, Vera Rodrigues, Carlos Borrego, Myriam Lopes

Abstract:

Air pollution is an environmental and social issue at different spatial scales, especially in a climate change context, in which a decrease in air quality is expected. Air pollution episodes result from a combination of high emissions and unfavourable weather conditions, in which wind speed and wind direction play a key role. The urban design (location and structure of buildings and trees) can either promote the dispersion of air pollutants or promote their retention within the urban area. Today, most urban areas are applying measures to adapt to future extreme climatic events, and most of these measures are grounded in nature-based solutions, namely green roofs and green areas. In this sense, studies are required to evaluate how the implementation of these actions will influence the wind flow within the urban area and, consequently, the dispersion of air pollutants. The main goal of this study was to evaluate the influence of a set of urban morphologies on the wind conditions and on the dispersion of air pollutants in a built-up area of Porto, Portugal. Two pollutants were analysed (NOx and PM10) and four scenarios were developed: (i) a baseline scenario, which characterizes the current status of the study area; (ii) an urban green scenario, which implies the implementation of a green area inside the domain; (iii) a green roof scenario, which consists of the implementation of green roofs in a specific area of the domain; and (iv) a 'grey' scenario, with an absence of vegetation. Two models were used: the Weather Research and Forecasting (WRF) model and the CFD model VADIS (pollutant dispersion in the atmosphere under variable wind conditions). The WRF model was used to initialize the CFD model, while the latter was used to perform the set of numerical simulations on an hourly basis. The implementation of the green urban area promoted a reduction of air pollutant concentrations, 16% on average, related to the increase in wind flow, which promotes air pollutant dispersion, while the application of green roofs showed an increase in concentrations (reaching 60% during specific time periods). Overall, the results showed that the strategic placement of vegetation in cities has the potential to make an important contribution to increasing air pollutant dispersion and thus to promoting improved air quality and the sustainability of urban environments.

Keywords: air pollutants dispersion, wind conditions, urban morphologies, road traffic emissions

Procedia PDF Downloads 329
39149 A Ground Observation Based Climatology of Winter Fog: Study over the Indo-Gangetic Plains, India

Authors: Sanjay Kumar Srivastava, Anu Rani Sharma, Kamna Sachdeva

Abstract:

Every year, fog formation over the Indo-Gangetic Plains (IGPs) of India during the winter months of December and January creates numerous hazards, inconvenience, and economic loss for the inhabitants of this densely populated region of the Indian subcontinent. The aim of this paper is to analyze the spatial and temporal variability of winter fog over the IGPs. Long-term ground observations of visibility and other meteorological parameters (1971-2010) have been analyzed to understand the fog phenomenon and its relevance during the peak winter months of January and December over the IGP of India. To examine the temporal variability, time series and trend analyses were carried out using the Mann-Kendall statistical test. The trend analysis accepts the alternative hypothesis at the 95% confidence level, indicating that a trend exists. Kendall's tau statistic showed a positive correlation between time and fog frequency, and the Theil-Sen median slope estimate showed that the magnitude of the trend is positive. The magnitude is higher in January than in December over the entire IGP, except over the western IGP, where it is higher in December. Decade-wise time series analysis revealed a continuous increase in fog days, with a net overall increase of 99% observed over the IGP in the last four decades. Diurnal variability and average daily persistence were computed using descriptive statistical techniques. Geo-statistical analysis of fog was carried out to understand its spatial variability and revealed that the IGP is a highly fog-prone zone, with fog occurring on more than 66% of days during the study period. Diurnal variability indicates that the peak occurrence of fog is between 06:00 and 10:00 local time, and average daily fog persistence extends to 5 to 7 hours during the peak winter season. The results offer a new perspective for taking proactive measures to reduce the irreparable damage that could be caused by the changing trends of fog.
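
A compact implementation of the Mann-Kendall test (no-ties variance) and the Theil-Sen slope follows; the input series is a synthetic upward-trending stand-in for the annual fog-day counts.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Mann-Kendall trend test and Theil-Sen median slope for an annual series."""
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18                     # no-ties variance
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0   # continuity-corrected
    p = 2 * (1 - norm.cdf(abs(z)))
    tau = s / (0.5 * n * (n - 1))
    slope = np.median([(x[j] - x[i]) / (j - i)
                       for i in range(n - 1) for j in range(i + 1, n)])
    return z, p, tau, slope

# Synthetic stand-in for 40 years (1971-2010) of annual fog-day counts.
fog_days = np.arange(40) + np.random.randint(0, 5, 40)
z, p, tau, slope = mann_kendall(fog_days)
```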

Keywords: fog, climatology, Mann-Kendall test, trend analysis, spatial variability, temporal variability, visibility

Procedia PDF Downloads 229
39148 Modelling a Hospital as a Queueing Network: Analysis for Improving Performance

Authors: Emad Alenany, M. Adel El-Baz

Abstract:

In this paper, the flow of different classes of patients into a hospital is modelled and analyzed using the queueing network analyzer (QNA) algorithm and discrete-event simulation. Input data for QNA are the rate and variability parameters of the arrival and service times, in addition to the number of servers in each facility. The modelled patient flows closely match the real flows of a hospital in Egypt. Based on the analysis of the waiting times, two approaches are suggested for improving performance: separating patients into service groups, and adopting different service policies for sequencing patients through hospital units. Separating a specific group of patients with a higher performance target, to be served apart from the rest of the patients with a lower performance target, requires the same capacity while improving performance for the selected group. Moreover, it is shown that adopting the shortest processing time and shortest remaining processing time service policies, among the other tested policies, results in 11.47% and 13.75% reductions in average waiting time, respectively, relative to the first-come-first-served policy.
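
QNA-style analyses rest on two-moment approximations of waiting times from the arrival and service rate/variability parameters; a sketch of the Allen-Cunneen approximation, a standard member of this family (not necessarily the paper's exact formulas), follows.

```python
import math

def erlang_c(c, a):
    """Probability of waiting in an M/M/c queue (Erlang C), a = lambda/mu < c."""
    tail = a**c / (math.factorial(c) * (1 - a / c))
    return tail / (sum(a**k / math.factorial(k) for k in range(c)) + tail)

def gg_c_waiting_time(lam, mu, c, ca2, cs2):
    """Allen-Cunneen two-moment approximation:
    Wq(G/G/c) ~ ((ca2 + cs2) / 2) * Wq(M/M/c), where ca2 and cs2 are the
    squared coefficients of variation of inter-arrival and service times."""
    a = lam / mu
    wq_mmc = erlang_c(c, a) / (c * mu - lam)
    return (ca2 + cs2) / 2 * wq_mmc

# Illustrative: 10 patients/h, 4 servers at 3 patients/h each, variable arrivals.
print(gg_c_waiting_time(lam=10, mu=3, c=4, ca2=1.5, cs2=0.8))
```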

Keywords: queueing network, discrete-event simulation, health applications, SPT

Procedia PDF Downloads 174
39147 Adolescent Sleep Hygiene Scale and Adolescent Sleep Wake Scale: Factorial Analysis and Validation for Indian Population

Authors: Sataroopa Mishra, Mona Basker, Sneha Varkki, Ram Kumar Pandian, Grace Rebekah

Abstract:

Background: Sleep deprivation is a matter of public health importance among adolescents. We used the Adolescent Sleep Wake Scale and the Adolescent Sleep Hygiene Scale to determine the sleep quality and sleep hygiene, respectively, of school-going adolescents in Vellore city, India. The objective of the study was to perform factorial analysis of the scales and validate them for use in the local population. Methods: Observational, questionnaire-based, cross-sectional study. Setting: Community-based school survey in a semi-urban setting in three schools in Vellore city. Data collection: A non-probability sample was collected from students in standards 9 and 11. Students filled in the Adolescent Sleep Wake Scale (ASWS) and the Adolescent Sleep Hygiene Scale (ASHS) translated into the vernacular language. Data analysis: Exploratory factorial analysis was used to examine the factor loadings of the components of the two scales; confirmatory factorial analysis is subsequently planned to assess the internal validity of the scales. Results: 557 adolescents aged 12-17 years were included in the study. Exploratory factorial analysis of the ASHS indicated significant factor loadings for 18 of the 28 items originally devised by the authors, and the scale was reconstructed into four domains instead of the original nine: sleep stability; cognitive-emotional; physiological-bedtime routine-behavioural arousal (activities before and at bedtime); and sleep environment (lighting and bed sharing). Factorial analysis of the ASWS showed factor loadings for 18 of the 28 items in the original scale, reconstructed into five aspects of sleep quality. Conclusions: The factorial analysis gives reconstructed scales useful for the local population. A confirmatory factorial analysis is subsequently planned to determine the internal consistency of the scales for the local population.
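
A sketch of exploratory factor analysis with varimax rotation using scikit-learn follows (rotation support assumes scikit-learn >= 0.24). The item responses are synthetic, and the 0.4 loading cutoff is a common convention, not necessarily the authors' criterion.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

# Synthetic stand-in for the 557 x 28 matrix of ASHS item responses (1-6 scale).
rng = np.random.default_rng(2)
responses = pd.DataFrame(rng.integers(1, 7, size=(557, 28)),
                         columns=[f"item_{i+1}" for i in range(28)])

fa = FactorAnalysis(n_components=4, rotation="varimax")   # four reconstructed domains
fa.fit(responses)

loadings = pd.DataFrame(fa.components_.T, index=responses.columns)
# Items are typically retained when |loading| on some factor exceeds ~0.4;
# the remainder are dropped from the reconstructed scale.
print(loadings[loadings.abs().max(axis=1) > 0.4].round(2))
```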

Keywords: factorial analysis, sleep hygiene, sleep quality, adolescent sleep scale

Procedia PDF Downloads 267
39146 Interaction between Cognitive Control and Language Processing in Non-Fluent Aphasia

Authors: Izabella Szollosi, Klara Marton

Abstract:

Aphasia can be defined as a weakness in accessing linguistic information. Accessing linguistic information is strongly related to information processing, which in turn is associated with the cognitive control system. According to the literature, a deficit in the cognitive control system interferes with language processing and contributes to non-fluent speech performance. The aim of our study was to explore this hypothesis by investigating how cognitive control interacts with language performance in participants with non-fluent aphasia. Cognitive control is a complex construct that includes working memory (WM) and the ability to resist proactive interference (PI). Based on previous research, we hypothesized that impairments in domain-general (DG) cognitive control abilities have negative effects on language processing, whereas better DG cognitive control functioning supports goal-directed behavior in language-related processes as well. Since a stroke itself might slow down information processing, it is important to examine its negative effects on both cognitive control and language processing. Participants (N=52) in our study were individuals with non-fluent Broca's aphasia (N=13), individuals with transcortical motor aphasia (N=13), individuals with stroke damage without aphasia (N=13), and unimpaired speakers (N=13). All participants performed various computer-based tasks targeting cognitive control functions such as WM and resistance to PI in both linguistic and non-linguistic domains. Non-linguistic tasks targeted primarily DG functions, while linguistic tasks targeted more domain-specific (DS) processes. The results showed that participants with Broca's aphasia differed from the other three groups in the non-linguistic tasks: they performed significantly worse even in the baseline conditions. In contrast, we found a different performance profile in the linguistic domain, where the control group differed from all three stroke-related groups; the three impaired groups performed more poorly than the controls but similarly to each other in the verbal baseline condition. In the more complex verbal PI condition, however, participants with Broca's aphasia performed significantly worse than all the other groups. Participants with Broca's aphasia demonstrated the most severe language impairment and the highest vulnerability in tasks measuring DG cognitive control functions. The results support the notion that the more severe the cognitive control impairment, the more severe the aphasia; thus, our findings suggest a strong interaction between cognitive control and language. Individuals with the most severe and most general cognitive control deficit, the participants with Broca's aphasia, showed the most severe language impairment, while individuals with better DG cognitive control functions demonstrated better language performance. While all participants with stroke damage showed impaired cognitive control functions in the linguistic domain, participants with better language skills also performed better in tasks that measured non-linguistic cognitive control functions. The overall results indicate that the level of cognitive control deficit interacts with language functions in individuals along the language spectrum (from severe to no impairment). However, future research is needed to determine any directionality.

Keywords: cognitive control, information processing, language performance, non-fluent aphasia

Procedia PDF Downloads 105
39145 A Study of the Energy Consumption-Performance-Schedulability Trade-off for DVFS Multicore Systems

Authors: Jalil Boudjadar

Abstract:

Dynamic Voltage and Frequency Scaling (DVFS) multicore platforms are promising execution platforms that enable high computational performance, lower energy consumption, and flexibility in scheduling system processes. However, the resulting interleaving and memory interference, together with per-core frequency tuning, make real-time guarantees hard to deliver. Besides, energy consumption represents a strong constraint for the deployment of such systems in energy-limited settings. Identifying the system configurations that achieve high performance and low energy consumption while guaranteeing system schedulability is a complex task in the design of modern embedded systems. This work studies the trade-off between energy consumption, core utilization and memory bottlenecks, and their impact on the schedulability of DVFS multicore time-critical systems with a hierarchy of shared memories. We build a model-based framework using the Parametrized Timed Automata of UPPAAL to analyze the mutual impact of performance, energy consumption and schedulability of DVFS multicore systems, and we demonstrate the trade-off on an actual case study.
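
A back-of-envelope sketch of the energy/performance trade-off underlying DVFS tuning, using the standard dynamic power model P ≈ C_eff · V² · f; the operating points and capacitance are assumptions, not values from the paper. Note that for a fixed cycle count the energy reduces to C_eff · V² · cycles, so higher frequencies shorten response times (helping schedulability) but cost quadratically more energy through the required voltage.

```python
# Hypothetical DVFS operating points (voltage V, frequency GHz) for a task
# of fixed cycle count: dynamic power ~ C * V^2 * f, run time ~ cycles / f.
C_EFF = 1.0e-9          # assumed effective switched capacitance (F)
CYCLES = 2.0e9          # task length in cycles

for v, f_ghz in [(0.8, 1.0), (1.0, 1.5), (1.2, 2.0)]:
    f = f_ghz * 1e9
    power = C_EFF * v**2 * f          # dynamic power (W)
    runtime = CYCLES / f              # seconds
    energy = power * runtime          # joules; equals C * V^2 * cycles
    print(f"{f_ghz} GHz @ {v} V: t = {runtime:.2f} s, E = {energy:.2f} J")
```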

Keywords: time-critical systems, multicore systems, schedulability analysis, energy consumption, performance analysis

Procedia PDF Downloads 93
39144 Application Difference between Cox and Logistic Regression Models

Authors: Idrissa Kayijuka

Abstract:

The logistic regression and Cox regression (proportional hazards) models are currently employed in the analysis of prospective epidemiologic research into risk factors for chronic diseases, and a theoretical relationship between the two models has been studied. By definition, the Cox regression model, also called the Cox proportional hazards model, is a procedure used to model data on the time leading up to an event when censored cases exist, whereas the logistic regression model is mostly applicable where the independent variables consist of numerical as well as nominal values and the outcome variable is binary (dichotomous). Many researchers have focused on the overview of the Cox and logistic regression models and their applications in different areas. In this work, the analysis is done on secondary data from an SPSS exercise dataset on breast cancer with a sample size of 1,121 women; the main objective is to show the difference in application between the Cox regression model and the logistic regression model based on factors that cause women to die of breast cancer. Some analysis was done manually (on lymph node status), and SPSS software was used to analyze the data. This study found that there is a difference in application between the Cox and logistic regression models: the Cox regression model is used if one wishes to analyze data that also include the follow-up time, whereas the logistic regression model analyzes data without follow-up time. They also have different measures of association: the hazard ratio and the odds ratio for the Cox and logistic regression models, respectively. A similarity between the two models is that both are applicable to the prediction of the outcome of a categorical variable, i.e., a variable that can accommodate only a restricted number of categories. In conclusion, the Cox regression model differs from logistic regression by assessing a rate instead of a proportion. Both models are suitable methods for analyzing data and can be applied in many other studies, but the Cox regression model is the more recommended.
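
A sketch contrasting the two fits in Python, assuming the lifelines and statsmodels packages; the data below are synthetic stand-ins for the SPSS breast cancer set, with hypothetical column names.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
import statsmodels.api as sm

# Synthetic stand-in: follow-up time (months), death indicator,
# and a lymph-node-status covariate.
rng = np.random.default_rng(3)
df = pd.DataFrame({"time": rng.exponential(60, 1121).round(1),
                   "died": rng.integers(0, 2, 1121),
                   "positive_nodes": rng.poisson(2, 1121)})

# Cox model: uses follow-up time and censoring; association = hazard ratio.
cph = CoxPHFitter().fit(df, duration_col="time", event_col="died")
print(cph.summary["exp(coef)"])

# Logistic model: ignores follow-up time; association = odds ratio.
logit = sm.Logit(df["died"], sm.add_constant(df[["positive_nodes"]])).fit()
print(np.exp(logit.params))
```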

Keywords: logistic regression model, Cox regression model, survival analysis, hazard ratio

Procedia PDF Downloads 438
39143 Prefabricated Integral Design of Building Services

Authors: Mina Mortazavi

Abstract:

The common approach in the construction industry, for restraint requirements in existing structures or new constructions, is to have non-structural components (NSCs) assembled and installed on-site by different MEP subcontractors. This leads to a lack of coordination, higher costs, longer construction times, and complications due to inaccurate building information modelling (BIM). Introducing NSCs into a consistent BIM system from the beginning of the design process, and considering their seismic loads in the analysis and design, can improve coordination and reduce costs and time. One solution is to use prefabricated mounts with the MEP services attached, delivered as an integral module. This eliminates the majority of coordination complications and reduces design and installation costs and time. A more advanced approach is to have as many NSCs as possible installed in the same prefabricated module, which gives the structural engineer the opportunity to consider the component weights and locations in the analysis and design of the prefabricated support. This efficient approach eliminates coordination and access issues, leading to enhanced quality control. This research focuses on the existing literature on modular sub-assemblies that are integrated with architectural and structural components. Modular MEP systems take advantage of the precision provided by BIM tools to meet exact requirements and achieve a buildable design every time. Modular installations that include MEP systems provide efficient solutions for the installation of MEP services and components.

Keywords: building services, modularisation, prefabrication, integral building design

Procedia PDF Downloads 60
39142 Estimation of Relative Subsidence of Collapsible Soils Using Electromagnetic Measurements

Authors: Henok Hailemariam, Frank Wuttke

Abstract:

Collapsible soils are weak soils that appear to be stable in their natural state, normally a dry condition, but deform rapidly under saturation (wetting), generating large and unexpected settlements that often have disastrous consequences for structures unwittingly built on such deposits. In this study, a prediction model for the relative subsidence of stressed collapsible soils based on dielectric permittivity measurement is presented. Unlike most existing methods for predicting soil subsidence, this model does not require moisture content as an input parameter, thus providing the opportunity to obtain an accurate estimate of the relative subsidence of collapsible soils from dielectric measurements only. The prediction model is developed from an existing relative subsidence prediction model (which depends on soil moisture condition) and an advanced theoretical frequency- and temperature-dependent electromagnetic mixing equation (which effectively removes the moisture content dependence of the original model). For large-scale sub-surface soil exploration, spatial sub-surface dielectric data over wide areas and large depths of weak (collapsible) soil deposits can be obtained using non-destructive high-frequency electromagnetic (HF-EM) measurement techniques such as ground penetrating radar (GPR). For laboratory or small-scale in-situ measurements, techniques such as an open-ended coaxial line with the widely applied time domain reflectometry (TDR) or vector network analysers (VNAs) are usually employed to obtain the soil dielectric data. By using soil dielectric data obtained from small- or large-scale non-destructive HF-EM investigations, the new model can effectively predict the relative subsidence of weak soils without the need to extract samples for moisture content measurement. The resulting benefits include the preservation of the undisturbed state of the soil as well as a reduction in investigation costs and analysis time in the identification of weak (problematic) soils. The accuracy of the model is assessed by conducting relative subsidence tests on a collapsible soil at various initial soil conditions, and a good match between the model predictions and the experimental results is obtained.
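
The paper's specific mixing equation is not reproduced here; as a generic illustration of how a mixing rule links bulk permittivity to moisture, the sketch below inverts the widely used CRIM rule for volumetric water content, the quantity whose dependence the proposed model removes. The porosity and solid-phase permittivity are assumed values.

```python
import numpy as np

def water_content_from_permittivity(eps_eff, porosity, eps_solid=4.7, eps_water=80.0):
    """Volumetric water content from bulk dielectric permittivity via the CRIM
    mixing rule (a generic stand-in for the paper's frequency- and
    temperature-dependent mixing equation):
      sqrt(eps_eff) = (1-n)*sqrt(eps_s) + theta*sqrt(eps_w) + (n-theta)*1
    solved for theta, with air permittivity taken as 1."""
    theta = (np.sqrt(eps_eff) - (1 - porosity) * np.sqrt(eps_solid) - porosity) \
            / (np.sqrt(eps_water) - 1.0)
    return theta

print(water_content_from_permittivity(eps_eff=12.0, porosity=0.45))
```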

Keywords: collapsible soil, dielectric permittivity, moisture content, relative subsidence

Procedia PDF Downloads 344
39141 Composite Approach to Extremism and Terrorism Web Content Classification

Authors: Kolade Olawande Owoeye, George Weir

Abstract:

Terrorist and extremist activity on the internet is becoming one of the most significant threats to national security because of its potential dangers. In response to this challenge, law enforcement and security authorities are actively implementing comprehensive measures to counter the use of the internet for terrorism. Achieving this requires intelligence gathering via the internet, including real-time monitoring of potential websites used by extremist groups for recruitment and information dissemination, among other operations. However, with billions of active webpages, real-time monitoring of all webpages is almost impossible. To narrow down the search domain, efficient webpage classification techniques are needed. This research proposes a new approach, the SentiPosit-based method, which combines features of the Posit-based method and the SentiStrength-based method for the classification of terrorism and extremism webpages. The experiment was carried out on 7,500 webpages obtained through the TENE web crawler by the International Cyber Crime Research Centre (ICCRC). The webpages were manually grouped into three classes, 'pro-extremist', 'anti-extremist' and 'neutral', with 2,500 webpages in each category. A supervised learning algorithm was then applied to the labelled dataset in order to build the model. The results obtained were compared with existing classification methods in terms of prediction accuracy and runtime. Our proposed hybrid approach produced better classification accuracy than existing approaches within a reasonable runtime.
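
A generic supervised baseline for the three-way webpage classification task is sketched below; this is a TF-IDF plus logistic regression pipeline, not the SentiPosit method itself, and the tiny corpus shown is a placeholder for the ICCRC dataset.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny placeholder corpus; the study used 7,500 manually labelled webpages.
pages = ["call to join our struggle", "we condemn all violent acts",
         "weather is mild this week", "support the fighters abroad",
         "community rejects extremist message", "local football results"]
labels = ["pro", "anti", "neutral", "pro", "anti", "neutral"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(pages, labels)
print(clf.predict(["new recruitment video posted"]))
```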

Keywords: sentiposit, classification, extremism, terrorism

Procedia PDF Downloads 260