Search results for: time domain analysis
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 40017


36537 Deep-Learning Coupled with Pragmatic Categorization Method to Classify the Urban Environment of the Developing World

Authors: Qianwei Cheng, A. K. M. Mahbubur Rahman, Anis Sarker, Abu Bakar Siddik Nayem, Ovi Paul, Amin Ahsan Ali, M. Ashraful Amin, Ryosuke Shibasaki, Moinul Zaber

Abstract:

Thomas Friedman, in his famous book, argued that the world in this 21st century is flat and will continue to become flatter. This is attributed to rapid globalization and the interdependence of humanity, which have engendered a tremendous inflow of migration toward urban spaces. In order to keep the urban environment sustainable, policy makers need to plan based on extensive analysis of the urban environment. With the advent of high-definition satellite images, high-resolution data, computational methods such as deep neural network analysis, and hardware capable of high-speed analysis, urban planning is seeing a paradigm shift. Legacy data on urban environments are now being complemented with high-volume, high-frequency data. However, the first step in understanding urban space lies in a useful categorization of the space that is usable for data collection, analysis, and visualization. In this paper, we propose a pragmatic categorization method that is readily usable for machine analysis and show the applicability of the methodology in a developing-world setting. Categorization to plan sustainable urban spaces should encompass the buildings and their surroundings. However, the state of the art is mostly dominated by the classification of building structures, building types, etc., and largely represents the developed world. Hence, these methods and models are not sufficient for developing countries such as Bangladesh, where the surrounding environment is crucial for the categorization. Moreover, these categorizations propose small-scale classifications, which give limited information, have poor scalability, and are slow to compute in real time. Our proposed method is divided into two steps: categorization and automation. We categorize the urban area in terms of informal and formal spaces and take the surrounding environment into account. A 50 km × 50 km Google Earth image of Dhaka, Bangladesh was visually annotated and categorized by an expert, and consequently a map was drawn.
The categorization is based broadly on two dimensions: the state of urbanization and the architectural form of the urban environment. Consequently, the urban space is divided into four categories: 1) highly informal area; 2) moderately informal area; 3) moderately formal area; and 4) highly formal area. In total, sixteen sub-categories were identified. For semantic segmentation and automatic categorization, Google's DeepLabv3+ model was used. The model uses the atrous convolution operation to analyze different layers of texture and shape, which enlarges the field of view of the filters to incorporate larger context. Imagery encompassing 70% of the urban space was used to train the model, and the remaining 30% was used for testing and validation. The model is able to segment with 75% accuracy and 60% mean Intersection over Union (mIoU). The proposed categorization method is readily applicable for automatic use in both developing- and developed-world contexts. It can be augmented for real-time socio-economic comparative analysis among cities and can be an essential tool for policy makers planning future sustainable urban spaces.
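As an illustration of the reported segmentation metric, mean Intersection over Union is computed per class and then averaged; the following is a minimal sketch using toy 2×2 label maps (not the paper's data or model):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean Intersection over Union for integer-labelled segmentation maps."""
    ious = []
    for c in range(num_classes):
        p = (pred == c)
        t = (target == c)
        union = np.logical_or(p, t).sum()
        if union == 0:            # class absent from both maps: skip it
            continue
        inter = np.logical_and(p, t).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x2 example with two classes
pred   = np.array([[0, 0], [1, 1]])
target = np.array([[0, 1], [1, 1]])
score = mean_iou(pred, target, num_classes=2)
```

Here class 0 has IoU 1/2 and class 1 has IoU 2/3, so the mean is about 0.583.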

Keywords: semantic segmentation, urban environment, deep learning, urban building, classification

Procedia PDF Downloads 169
36536 Sensitivity of Credit Default Swaps Premium to Global Risk Factor: Evidence from Emerging Markets

Authors: Oguzhan Cepni, Doruk Kucuksarac, M. Hasan Yilmaz

Abstract:

Risk premia of emerging markets move together, depending on the momentum and shifts in global risk appetite. However, the magnitudes of these changes in the risk premia of emerging market economies may vary. In this paper, we focus on how the global risk factor affects credit default swap (CDS) premiums of emerging markets using principal component analysis (PCA) and rolling regressions. PCA results indicate that the first common component accounts for almost 76% of the common variation in CDS premiums of emerging markets. Additionally, the explanatory power of the first factor remains high over the sample period. However, the sensitivity to the global risk factor tends to change over time and across countries. In this regard, fixed-effects panel regressions are employed to identify the macroeconomic factors driving the heterogeneity across emerging markets. Two main macroeconomic variables affect the sensitivity: government debt to GDP and international reserves to GDP. Countries with lower government debt and higher reserves tend to be less subject to variations in global risk appetite.
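The share of common variation captured by the first principal component can be sketched on synthetic CDS-change data with an assumed one-factor structure (hypothetical data, not the paper's sample):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic panel: 200 observations of CDS premium changes for 8 countries,
# driven mostly by one common global factor plus idiosyncratic noise.
T, N = 200, 8
global_factor = rng.normal(size=T)
loadings = rng.uniform(0.8, 1.2, size=N)
cds = np.outer(global_factor, loadings) + 0.4 * rng.normal(size=(T, N))

# PCA via eigen-decomposition of the cross-country correlation matrix
corr = np.corrcoef(cds, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
share_first = eigvals[0] / eigvals.sum()  # variance explained by the 1st PC
```

With a dominant common factor, the first component's share is large, mirroring the ~76% figure reported in the abstract (the exact value here depends on the synthetic noise level).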

Keywords: emerging markets, principal component analysis, credit default swaps, sovereign risk

Procedia PDF Downloads 367
36535 Simulation and Experimental Research on Pocketing Operation for Toolpath Optimization in CNC Milling

Authors: Rakesh Prajapati, Purvik Patel, Avadhoot Rajurkar

Abstract:

Nowadays, manufacturing industries augment their production lines with modern machining centers backed by CAM software. Several attempts are being made to cut down the programming time for machining complex geometries. Special programs/software have been developed to generate the digital numerical data and to prepare NC programs by using suitable post-processors for different machines. After the tools and the manufacturing process are selected, toolpaths are applied and the NC program is generated. More and more complex mechanical parts that were earlier cast and assembled/manufactured by other processes are now being machined. The majority of these parts require many pocketing operations and find their applications in die and mold making, turbomachinery, aircraft, nuclear, defense, etc. Pocketing operations involve the removal of a large quantity of material from the metal surface. In this work, a warm-cast food-processing part and its clamping were modeled using Pro/E and MasterCAM® software. The pocketing operation was specifically chosen for toolpath optimization. After applying the pocketing toolpath, the Multi Tool Selection and Reduce Air Time options were applied, and the resulting software simulation times were compared with the experimental machining times.

Keywords: toolpath, part program, optimization, pocket

Procedia PDF Downloads 278
36534 Application of Neutron-Gamma Technologies for Soil Elemental Content Determination and Mapping

Authors: G. Yakubova, A. Kavetskiy, S. A. Prior, H. A. Torbert

Abstract:

In-situ soil carbon determination over large soil surface areas (several hectares) is required with regard to carbon sequestration and carbon credit issues. This capability is important for optimizing modern agricultural practices and enhancing soil science knowledge. Collecting and processing representative field soil cores for traditional laboratory chemical analysis is labor-intensive and time-consuming. The neutron-stimulated gamma analysis method can be used for in-situ measurements of primary elements in agricultural soils (e.g., Si, Al, O, C, Fe, and H). This non-destructive method can assess several elements in large soil volumes with no need for sample preparation. Neutron-gamma soil elemental analysis utilizes gamma rays emitted from different neutron-nuclei interactions. This has become possible due to the availability of commercial portable pulsed neutron generators, high-efficiency gamma detectors, reliable electronics, and measurement/data-processing software, complemented by advances in state-of-the-art nuclear physics methods. In Pulsed Fast Thermal Neutron Analysis (PFTNA), soil irradiation is accomplished using a pulsed neutron flux, and gamma spectra acquisition occurs both during and between pulses. This method allows the inelastic neutron scattering (INS) gamma spectrum to be separated from the thermal neutron capture (TNC) spectrum. Based on PFTNA, a mobile system for field-scale soil elemental determinations (primarily carbon) was developed and constructed. Our scanning methodology acquires data that can be directly used for creating soil elemental distribution maps (based on ArcGIS software) in a reasonable timeframe (~20-30 hectares per working day). The resulting maps are suitable for both agricultural purposes and carbon sequestration estimates. The measurement system design, spectra acquisition process, strategy for acquiring field-scale carbon content data, and mapping of agricultural fields will be discussed.
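The INS/TNC separation described above amounts to a spectrum subtraction: the during-pulse spectrum mixes both interaction types, while the between-pulse spectrum is dominated by capture gammas. A minimal sketch with simulated Poisson count spectra (not measured data; the relative live-time scale factor is taken as 1 for simplicity):

```python
import numpy as np

channels = 1024
rng = np.random.default_rng(1)
# Hypothetical gamma spectra (counts per channel)
tnc_true = rng.poisson(50, channels).astype(float)  # thermal neutron capture
ins_true = rng.poisson(30, channels).astype(float)  # inelastic scattering

during_pulse  = ins_true + tnc_true   # INS and TNC gammas observed together
between_pulse = tnc_true.copy()       # TNC gammas observed alone

# Separate the INS spectrum by channel-wise subtraction
ins_estimate = during_pulse - between_pulse
```

In a real PFTNA measurement the between-pulse spectrum would be rescaled by the ratio of acquisition live times before subtracting.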

Keywords: neutron gamma analysis, soil elemental content, carbon sequestration, carbon credit, soil gamma spectroscopy, portable neutron generators, ArcMap mapping

Procedia PDF Downloads 81
36533 Seismic Response and Sensitivity Analysis of Circular Shallow Tunnels

Authors: Siti Khadijah Che Osmi, Mohammed Ahmad Syed

Abstract:

Underground tunnels are among the most popular public facilities for various applications such as transportation, water transfer, and network utilities. Experience from past earthquakes reveals that underground tunnels are also vulnerable components and may sustain damage to varying degrees depending on the level of ground shaking and induced phenomena. In this paper, a numerical analysis is conducted to evaluate the sensitivity of two types of circular shallow tunnel lining models to wide-ranging changes in the geotechnical design parameters. A critical analysis is presented of current methods of analysis, structural typology, ground-motion characteristics, the effect of soil conditions, and associated uncertainties on tunnel integrity. The response of the tunnel is evaluated through 2D non-linear finite element analysis, which critically assesses the impact of increasing levels of seismic loads. The findings from this study offer significant information for improving methods to assess the vulnerability of underground structures.

Keywords: geotechnical design parameter, seismic response, sensitivity analysis, shallow tunnel

Procedia PDF Downloads 430
36532 Energy Content and Spectral Energy Representation of Wave Propagation in a Granular Chain

Authors: Rohit Shrivastava, Stefan Luding

Abstract:

A mechanical wave is the propagation of vibration with transfer of energy and momentum. Studying the energy as well as the spectral energy characteristics of a wave propagating through disordered granular media can assist in understanding the overall properties of wave propagation through inhomogeneous materials like soil. The study of these properties is aimed at modeling wave propagation for oil, mineral, or gas exploration (seismic prospecting) or non-destructive testing of the internal structure of solids. Studying the energy content (kinetic, potential, and total energy) of a pulse propagating through an idealized one-dimensional discrete particle system, such as a mass-disordered granular chain, can assist in understanding energy attenuation due to disorder as a function of propagation distance. Spectral analysis of the energy signal can assist in understanding dispersion as well as attenuation due to scattering at different frequencies (scattering attenuation). The choice of a one-dimensional granular chain also restricts the study to the P-wave attributes of the wave, removing the influence of shear or rotational waves. Granular chains with different mass distributions have been studied by randomly selecting masses from normal, binary, and uniform distributions; the standard deviation of the distribution is taken as the disorder parameter, with a higher standard deviation meaning higher disorder. For obtaining macroscopic/continuum properties, ensemble averaging has been used. Interpreting information from a total-energy signal turned out to be much easier than from displacement, velocity, or acceleration signals of the wave, indicating a better analysis method for wave propagation through granular materials.
Increasing disorder leads to faster attenuation of the signal and decreases the energy of the higher-frequency signals transmitted, but at the same time the energy of spatially localized high frequencies also increases. An ordered granular chain exhibits ballistic propagation of energy, whereas a disordered granular chain exhibits diffusive-like propagation, which eventually becomes localized at long times.
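The energy bookkeeping for a mass-disordered chain can be sketched as follows, using linear springs as a simplified stand-in for the actual grain contacts (all parameter values are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(2)
N, k, dt, steps = 64, 1.0, 0.01, 4000
disorder = 0.2                          # std of the normal mass distribution
m = np.abs(rng.normal(1.0, disorder, N))

x = np.zeros(N)                         # displacements from equilibrium
v = np.zeros(N)
v[0] = 1.0                              # initial pulse on the first particle
total_energy = np.empty(steps)

for s in range(steps):
    dx = x[1:] - x[:-1]                 # spring elongations
    f = np.zeros(N)
    f[:-1] += k * dx
    f[1:]  -= k * dx
    v += dt * f / m                     # symplectic (semi-implicit) Euler
    x += dt * v
    kin = 0.5 * np.sum(m * v ** 2)
    pot = 0.5 * k * np.sum((x[1:] - x[:-1]) ** 2)
    total_energy[s] = kin + pot

# Spectral energy: power spectrum of the (mean-removed) total-energy signal
spectrum = np.abs(np.fft.rfft(total_energy - total_energy.mean())) ** 2
```

The symplectic integrator keeps the total energy bounded, so the recorded energy signal reflects propagation and scattering rather than numerical drift; ensemble averaging over many mass realizations would be added on top of this to obtain continuum-like properties.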

Keywords: discrete elements, energy attenuation, mass disorder, granular chain, spectral energy, wave propagation

Procedia PDF Downloads 273
36531 Stability of the Wellhead in the Seabed in One of the Marine Reservoirs of Iran

Authors: Mahdi Aghaei, Saeid Jamshidi, Mastaneh Hajipour

Abstract:

Factors affecting mechanical wellbore stability fall into two categories: 1) controllable factors and 2) uncontrollable factors. The purpose of geo-mechanical modeling of wells is to determine the limits within which the controllable parameters may change, based on the stress regime at each point, by solving the governing equations of the poro-elastic environment around the well. In this research, the mechanical analysis of wellbore stability was carried out for the Soroush oilfield. For this purpose, the geo-mechanical model of the field was built using the available data. This model provides the parameters necessary for obtaining the stress distribution around the wellbore. Initially, a basic model was designed in Abaqus software based on the obtained data, and all subsequent sensitivity analyses, such as those on porosity and permeability, were performed on this same basic model. The results of these analyses show that, with the geo-mechanical parameters held constant, sensitivity analysis on porosity and permeability is ineffective; the most important parameters affecting wellbore stability and instability are the geo-mechanical parameters.

Keywords: wellbore stability, movement, stress, instability

Procedia PDF Downloads 190
36530 On Pooling Different Levels of Data in Estimating Parameters of Continuous Meta-Analysis

Authors: N. R. N. Idris, S. Baharom

Abstract:

A meta-analysis may be performed using aggregate data (AD) or individual patient data (IPD). In practice, studies may be available at both the IPD and AD levels. In this situation, both the IPD and AD should be utilised in order to maximize the available information. The statistical advantages of combining studies from different levels have not been fully explored. This study aims to quantify the statistical benefits of including available IPD when conducting a conventional summary-level meta-analysis. Simulated meta-analyses were used to assess the influence of the levels of data on the overall meta-analysis estimates based on IPD only, AD only, and the combination of IPD and AD (mixed data, MD), under different study scenarios. The percentage relative bias (PRB), root mean-square error (RMSE), and coverage probability were used to assess the efficiency of the overall estimates. The results demonstrate that available IPD should always be included in a conventional meta-analysis using summary-level data, as doing so significantly increases the accuracy of the estimates. On the other hand, if more than 80% of the available data are at the IPD level, including the AD does not provide significant differences in the accuracy of the estimates. Additionally, combining the IPD and AD moderates the bias of the estimated treatment effects, as the IPD tends to overestimate the treatment effects, while the AD tends to produce underestimated effect estimates. These results may provide some guidance in deciding whether significant benefit is gained by pooling the two levels of data when conducting a meta-analysis.
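The three performance measures named above can be computed from a set of simulated estimates as follows; this is a minimal sketch on toy data with an assumed true effect of 0.5, not the paper's simulation design:

```python
import numpy as np

def simulation_metrics(estimates, std_errors, true_effect):
    """Percentage relative bias, RMSE, and 95% coverage probability
    of simulated meta-analysis estimates of a known true effect."""
    estimates = np.asarray(estimates, dtype=float)
    std_errors = np.asarray(std_errors, dtype=float)
    prb = 100.0 * (estimates.mean() - true_effect) / true_effect
    rmse = np.sqrt(np.mean((estimates - true_effect) ** 2))
    lo = estimates - 1.96 * std_errors
    hi = estimates + 1.96 * std_errors
    coverage = np.mean((lo <= true_effect) & (true_effect <= hi))
    return prb, rmse, float(coverage)

# Toy check: unbiased estimates scattered around a true effect of 0.5
rng = np.random.default_rng(3)
est = rng.normal(0.5, 0.1, 5000)
se = np.full(5000, 0.1)
prb, rmse, coverage = simulation_metrics(est, se, true_effect=0.5)
```

For unbiased estimates with correctly specified standard errors, PRB should be near zero, RMSE near the sampling standard deviation, and coverage near the nominal 95%.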

Keywords: aggregate data, combined-level data, individual patient data, meta-analysis

Procedia PDF Downloads 359
36529 Impact of Air Flow Structure on Distinct Shape of Differential Pressure Devices

Authors: A. Bertašienė

Abstract:

Energy harvesting from any structure poses a challenge. The varied structure of air/wind flows in industrial, environmental, and residential applications calls for detailed investigation of the real flow. Many application fields can hardly be described in detail owing to the lack of up-to-date statistical data analysis. In situ measurements require substantial investment, so simulation methods are used to carry out structural analysis of the flows. Different configurations of the testing environment show how strongly the flow-field structure in a limited area affects the efficiency of system operation and the energy output. Several configurations of modeled working sections of an air flow test facility were implemented in the ANSYS CFD environment to compare, experimentally and numerically, the air-flow development stages and forms that affect the efficiency of devices and processes. The effective form and magnitude of these flows under different geometries determine the choice of instruments/devices that measure fluid-flow parameters for effective operation of any system and for quantifying emission flows. Different fluid flow regimes were examined to show the impact of fluctuations on the development of the whole volume of the flow in a specific environment. The results raise the question of how similar these simulated flow fields are to those in real applications. Experimental results show some discrepancies from the simulations, owing to the models being applied to the initial, not fully developed, stage of the flow, and to the difficulty the models have in covering transitional regimes. Recommendations are essential for energy harvesting systems in both indoor and outdoor cases. Further investigations will shift to experimental analysis of the flow under laboratory conditions using state-of-the-art techniques such as flow visualization, and later to in situ situations, which are complicated, costly, and time-consuming to study.

Keywords: fluid flow, initial region, tube coefficient, distinct shape

Procedia PDF Downloads 327
36528 Exploring the Design of Prospective Human Immunodeficiency Virus Type 1 Reverse Transcriptase Inhibitors through a Comprehensive Approach of Quantitative Structure Activity Relationship Study, Molecular Docking, and Molecular Dynamics Simulations

Authors: Mouna Baassi, Mohamed Moussaoui, Sanchaita Rajkhowa, Hatim Soufi, Said Belaaouad

Abstract:

The objective of this paper is to address the challenging task of targeting Human Immunodeficiency Virus type 1 Reverse Transcriptase (HIV-1 RT) in the treatment of AIDS. Reverse transcriptase inhibitors (RTIs) have limitations due to the development of reverse transcriptase mutations that lead to treatment resistance. In this study, a combination of statistical analysis and bioinformatics tools was adopted to develop a mathematical model that relates the structure of compounds to their inhibitory activities against HIV-1 reverse transcriptase. Our approach was based on a series of compounds recognized for their HIV-1 RT enzymatic inhibitory activities. These compounds were designed via software, with their descriptors computed using multiple tools. The most statistically promising model was chosen, and its domain of applicability was ascertained. Furthermore, compounds exhibiting comparable biological activity to existing drugs were identified as potential inhibitors of HIV-1 RT. The compounds were evaluated on their absorption, distribution, metabolism, excretion, and toxicity (ADMET) properties and their adherence to Lipinski's rule. Molecular docking techniques were employed to examine the interaction between the reverse transcriptase (wild type and mutant type) and the ligands, including a known drug available on the market. Molecular dynamics simulations were also conducted to assess the stability of the RT-ligand complexes. Our results reveal some of the new compounds as promising candidates for effectively inhibiting HIV-1 reverse transcriptase, matching the potency of the established drug; this necessitates further experimental validation. Beyond its immediate results, this study provides a methodological foundation for future endeavors aiming to discover and design new inhibitors targeting HIV-1 reverse transcriptase.
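Adherence to Lipinski's rule of five, mentioned above, reduces to counting violations of four descriptor thresholds, with at most one violation commonly allowed. A minimal sketch with hypothetical descriptor values (not the paper's compounds):

```python
def lipinski_pass(mol_weight, logp, h_donors, h_acceptors):
    """Lipinski's rule of five: a candidate is flagged as drug-like when it
    violates at most one of the four thresholds."""
    violations = sum([
        mol_weight > 500,   # molecular weight in Da
        logp > 5,           # octanol-water partition coefficient
        h_donors > 5,       # hydrogen-bond donors
        h_acceptors > 10,   # hydrogen-bond acceptors
    ])
    return violations <= 1

# Hypothetical descriptor values for two candidate inhibitors
ok  = lipinski_pass(mol_weight=430.2, logp=3.1, h_donors=2, h_acceptors=6)
bad = lipinski_pass(mol_weight=620.0, logp=6.4, h_donors=3, h_acceptors=9)
```

In practice the descriptors themselves would come from a cheminformatics toolkit rather than being entered by hand.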

Keywords: QSAR, ADMET properties, molecular docking, molecular dynamics simulation, reverse transcriptase inhibitors, HIV type 1

Procedia PDF Downloads 74
36527 Prioritizing Roads Safety Based on the Quasi-Induced Exposure Method and Utilization of the Analytical Hierarchy Process

Authors: Hamed Nafar, Sajad Rezaei, Hamid Behbahani

Abstract:

Safety analysis of roads through accident rates, one of the most widely used tools, stems from the direct exposure method, which is based on vehicle-kilometers traveled and vehicle travel time. However, due to some fundamental flaws in its theory, difficulties in gaining access to the required data such as traffic volume and the distance and duration of trips, and various problems in determining exposure for a specific time, place, and category of individuals, there is a need for an algorithm for prioritizing road safety such that a new exposure method resolves the problems of the previous approaches. In this way, an efficient application may lead to more realistic comparisons, and the new method would be applicable to a wider range of times, places, and categories of individuals. Therefore, an algorithm was introduced to prioritize the safety of roads using the quasi-induced exposure method and the analytical hierarchy process. For this research, 11 provinces of Iran were chosen as case-study locations. A rural accidents database was created for these provinces, the validity of the quasi-induced exposure method for Iran's accident database was explored, and the involvement ratio for different characteristics of the drivers and the vehicles was measured. Results showed that the quasi-induced exposure method was valid in determining the real exposure in the provinces under study. Results also showed a significant difference between the prioritizations based on the new and traditional approaches. This difference mostly stems from the perspective of the quasi-induced exposure method in determining the exposure, the opinions of experts, and the quantity of accident data.
Overall, the results of this research showed that prioritization based on the new approach is more comprehensive and reliable compared to prioritization in the traditional approach, which depends on various parameters including driver-vehicle characteristics.
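Under quasi-induced exposure, the not-at-fault drivers in two-vehicle crashes serve as a proxy for exposure, and a group's involvement ratio is its share of at-fault involvements divided by its share of not-at-fault involvements. A minimal sketch with hypothetical crash records (the group labels and counts are illustrative, not the paper's data):

```python
from collections import Counter

# Hypothetical two-vehicle crash records:
# (at_fault_driver_group, not_at_fault_driver_group)
crashes = [
    ("young", "middle"), ("young", "old"), ("middle", "young"),
    ("old", "middle"), ("young", "middle"), ("middle", "old"),
]

at_fault = Counter(a for a, _ in crashes)
not_at_fault = Counter(b for _, b in crashes)  # induced-exposure proxy
n = len(crashes)

def involvement_ratio(group):
    """Share of at-fault involvements divided by share of
    not-at-fault involvements for a driver group."""
    return (at_fault[group] / n) / (not_at_fault[group] / n)

r_young = involvement_ratio("young")  # ratio > 1 suggests over-involvement
```

In this toy sample the "young" group is at fault in 3 of 6 crashes but not at fault in only 1, giving an involvement ratio of 3.0.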

Keywords: road safety, prioritizing, Quasi-induced exposure, Analytical Hierarchy Process

Procedia PDF Downloads 322
36526 Cessna Citation X Business Aircraft Stability Analysis Using Linear Fractional Representation LFRs Model

Authors: Yamina Boughari, Ruxandra Mihaela Botez, Florian Theel, Georges Ghazi

Abstract:

Clearance of the flight control laws of a civil aircraft is a long and expensive process in the aerospace industry. Thousands of flight combinations in terms of speeds, altitudes, gross weights, centers of gravity, and angles of attack have to be investigated and proved to be safe. Nonetheless, with this method a worst-case flight condition can easily be missed, and missing it could lead to a critical situation. Analyzing a model exhaustively is impossible because of the infinite number of cases contained within its flight envelope, and attempting to do so would require more time and therefore more design cost. Therefore, in industry, the technique of meshing the flight envelope is commonly used: for each point of the flight envelope, simulation of the associated model checks whether the specifications are satisfied. In order to perform fast, comprehensive, and effective analysis, models with varying parameters were developed by incorporating variations, or uncertainties, into the nominal models; these are known as Linear Fractional Representation (LFR) models, and they describe the aircraft dynamics by taking uncertainties over the flight envelope into account. In this paper, the LFR models are developed using speed and altitude as the varying parameters and are built from several flight conditions expressed in terms of speeds and altitudes. Such methods have gained great interest among aeronautical companies, which see a promising future for them in modeling, and particularly in the design and certification of control laws. In this research paper, we focus on the Cessna Citation X open-loop stability analysis. The data are provided by a Level D Research Aircraft Flight Simulator, which corresponds to the highest level of flight dynamics certification; this simulator was developed by CAE Inc., and its development was based on the research requirements of the LARCASE laboratory.
These data were used to develop a linear model of the airplane in its longitudinal and lateral motions, and further to create the LFR models for 12 XCG/weight conditions, and thus the whole flight envelope, using a friendly Graphical User Interface developed during this study. The LFR models are then analyzed using the interval analysis method based upon Lyapunov functions, as well as the 'stability and robustness analysis' toolbox. The results are presented as graphs, which offer good readability and are easily exploitable. The weakness of this method lies in its relatively long computation time, about four hours for the entire flight envelope.

Keywords: flight control clearance, LFR, stability analysis, robustness analysis

Procedia PDF Downloads 340
36525 Use of the SWEAT Analysis Approach to Determine the Effectiveness of a School's Implementation of Its Curriculum

Authors: Prakash Singh

Abstract:

The focus of this study is on the use of the SWEAT analysis approach to determine how effectively a school, as an organization, has implemented its curriculum. To gauge the views of the teaching staff, unstructured interviews were employed, asking the participants for their ideas and opinions on each of three identified aspects of the school: instructional materials, media, and technology; teachers' professional competencies; and the curriculum. This investigation was based on the five key components of the SWEAT model: strengths, weaknesses, expectations, abilities, and tensions. The findings of this exploratory study underscore the significance of the SWEAT achievement model as a tool for strategic analysis in any organization. The findings further affirm the usefulness of this analytical tool for human resource development. Employees have expectations, but competency gaps in their professional abilities may hinder them from fulfilling the tasks in their job description. Also, tensions in the working environment can contribute to their experiences of tobephobia (fear of failure). The SWEAT analysis approach detects such shortcomings in any organization and can therefore culminate in the development of programmes to address such concerns. The strategic SWEAT analysis process can provide a clear distinction between success and failure, and between mediocrity and excellence, in organizations. However, more research needs to be done on the effectiveness of the SWEAT analysis approach as a strategic analytical tool.

Keywords: SWEAT analysis, strategic analysis, tobephobia, competency gaps

Procedia PDF Downloads 490
36524 Effect of Infill’s in Influencing the Dynamic Responses of Multistoried Structures

Authors: Rahmathulla Noufal E.

Abstract:

Investigating the dynamic responses of high-rise structures under seismic ground motion is extremely important for the proper analysis and design of multistoried structures. Since the presence of infill walls strongly influences the behaviour of frame systems in multistoried buildings, there is an increased need for guidelines for the analysis and design of infilled frames under dynamic loads for the safe and proper design of buildings. In this manuscript, we evaluate the natural frequencies and natural periods of single-bay, single-storey frames considering the effect of infill walls, using eigenvalue analysis and validating with SAP 2000 (free vibration analysis). Various parameters obtained from the diagonal strut model followed for the free vibration analysis are then compared with the finite element model, where the infill is modeled with four-noded shell elements. We also evaluated the effect of various parameters on the natural periods of vibration obtained by free vibration analysis in SAP 2000, comparing them with those obtained from the empirical expressions presented in IS 1893 (Part 1)-2002.
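The eigenvalue analysis for natural frequencies and periods can be sketched for a two-storey shear-frame idealization, solving K·φ = ω²·M·φ (the stiffness and mass values below are hypothetical, not the paper's frames):

```python
import numpy as np

# Two-DOF shear-frame idealization: storey stiffnesses (N/m) and masses (kg)
k1, k2 = 2.0e7, 1.5e7
m1, m2 = 4.0e4, 3.0e4

K = np.array([[k1 + k2, -k2],
              [-k2,      k2]])
M = np.diag([m1, m2])

# Generalized eigenvalue problem K phi = omega^2 M phi,
# solved here via the standard eigenproblem of M^{-1} K
lam = np.linalg.eigvals(np.linalg.solve(M, K)).real
omegas = np.sqrt(np.sort(lam))       # natural circular frequencies, rad/s
periods = 2.0 * np.pi / omegas       # natural periods, s (fundamental first)
```

Adding the infill's diagonal-strut stiffness to K would raise the frequencies and shorten the periods, which is the effect the abstract examines.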

Keywords: infilled frame, eigen value analysis, free vibration analysis, diagonal strut model, finite element model, SAP 2000, natural period

Procedia PDF Downloads 313
36523 Kalman Filter Gain Elimination in Linear Estimation

Authors: Nicholas D. Assimakis

Abstract:

In linear estimation, the traditional Kalman filter uses the Kalman filter gain to produce estimates and predictions of the n-dimensional state vector from the m-dimensional measurement vector. The computation of the Kalman filter gain requires the inversion of an m x m matrix in every iteration. In this paper, a variation of the Kalman filter that eliminates the Kalman filter gain is proposed. In the time-varying case, the elimination of the Kalman filter gain requires the inversion of an n x n matrix and the inversion of an m x m matrix in every iteration. In the time-invariant case, the elimination of the Kalman filter gain requires the inversion of an n x n matrix in every iteration. The proposed Kalman filter gain elimination algorithm may be faster than the conventional Kalman filter, depending on the model dimensions.
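For reference, the conventional filter that the proposed algorithm modifies performs the following predict/update step, in which the gain computation inverts the m x m innovation covariance S each iteration. This is the standard textbook formulation, not the authors' gain-free variant:

```python
import numpy as np

def kf_step(x, P, z, F, Q, H, R):
    """One predict+update step of the conventional Kalman filter.
    The gain K requires inverting the m x m innovation covariance S."""
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update via the Kalman gain
    S = H @ P_pred @ H.T + R                 # m x m innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # the m x m inversion
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy 1-D constant-position model with noisy measurements near 1.0
F = np.array([[1.0]]); Q = np.array([[1e-4]])
H = np.array([[1.0]]); R = np.array([[0.25]])
x = np.array([0.0]);  P = np.array([[1.0]])
for z in [1.1, 0.9, 1.05, 0.95]:
    x, P = kf_step(x, P, np.array([z]), F, Q, H, R)
```

After a few measurements the state estimate converges toward the measurement mean and the covariance shrinks; the paper's contribution is to restructure this update so the per-iteration inversions fall on n x n matrices instead.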

Keywords: discrete time, estimation, Kalman filter, Kalman filter gain

Procedia PDF Downloads 179
36522 Acausal and Causal Model Construction with FEM Approach Using Modelica

Authors: Oke Oktavianty, Tadayuki Kyoutani, Shigeyuki Haruyama, Junji Kaneko, Ken Kaminishi

Abstract:

Modelica has many advantages and is very useful in modeling and simulation, especially for multi-domain, complex technical systems. However, the big obstacle for a beginner is to understand the basic concept and to build a new system model for a real system. In order to show how to solve a simple circuit model by hand translation and to give a better understanding of how Modelica works, we provide a detailed explanation of the solver ordering system with horizontal and vertical sorting and make some proposals for improvement. In this study, some difficulties in using the Modelica software with the original concept are discussed, along with a comparison with the Finite Element Method (FEM) approach. We also present our textual modeling approach using the FEM concept for acausal and causal model construction. Furthermore, simulation results are provided that compare textual modeling with original coding in Modelica against the FEM concept.

Keywords: FEM, acausal model, Modelica, horizontal and vertical sorting

Procedia PDF Downloads 295
36521 Experimental Study on the Heat Transfer Characteristics of the 200W Class Woofer Speaker

Authors: Hyung-Jin Kim, Dae-Wan Kim, Moo-Yeon Lee

Abstract:

The objective of this study is to experimentally investigate the heat transfer characteristics of 200 W class woofer speaker units with respect to the input voice signals. The temperature and heat transfer characteristics of the 200 W class woofer speaker unit were experimentally tested with several input voice signals, namely 1500 Hz, 2500 Hz, and 5000 Hz. From the experiments, it can be observed that the temperature of the woofer speaker unit, including the voice-coil part, increases with a decrease in the input voice signal frequency. Also, the temperature difference at the measured points of the voice coil increases with decreasing input voice signal frequency. In addition, the heat transfer of the woofer speaker with an input voice signal of 1500 Hz is 40% higher than that with an input voice signal of 5000 Hz at a measuring time of 200 seconds. It can be concluded from the experiments that the temperature of the voice coil initially increases rapidly with time and, after a certain period, increases more gradually along an exponential trend. During this time-dependent temperature change, it can also be observed that the high-frequency voice signal is more stable than the low-frequency one.
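The rapid-then-gradual temperature rise described above is consistent with a first-order lumped-capacitance model, in which the coil temperature approaches a steady value T_amb + P·R_th exponentially. A minimal sketch with illustrative (not measured) parameter values:

```python
import math

def coil_temperature(t, t_amb=25.0, power=20.0, r_th=2.0, tau=60.0):
    """First-order lumped-capacitance estimate of voice-coil temperature (C):
    exponential rise toward the steady state t_amb + power * r_th.
    power = dissipated power (W), r_th = thermal resistance (K/W),
    tau = thermal time constant (s). Illustrative values, not measurements."""
    return t_amb + power * r_th * (1.0 - math.exp(-t / tau))

t0    = coil_temperature(0.0)      # ambient at switch-on
t200  = coil_temperature(200.0)    # near the 200 s mark used in the paper
t_inf = coil_temperature(1e9)      # steady-state limit
```

With these parameters the steady state is 25 + 20 × 2 = 65 °C, and by 200 s the coil is already within a few degrees of it, matching the saturating shape reported in the abstract.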

Keywords: heat transfer, temperature, voice coil, woofer speaker

Procedia PDF Downloads 345
36520 Experimental Analysis of the Performance of a System for Freezing Fish Products Equipped with a Modulating Vapour Injection Scroll Compressor

Authors: Domenico Panno, Antonino D’amico, Hamed Jafargholi

Abstract:

This paper presents an experimental analysis of the performance of a system for freezing fish products equipped with a modulating vapour injection scroll compressor operating with R448A refrigerant. Freezing is a critical process for the preservation of seafood products, as it influences quality, food safety, and environmental sustainability. The use of a modulating scroll compressor with vapour injection, combined with the R448A refrigerant, is proposed as a solution to optimize the performance of the system, reducing energy consumption and mitigating the environmental impact. The vapour injection modulating scroll compressor is an advanced technology that allows the compressor capacity to be adjusted to the actual cooling needs of the system. Vapour injection allows the optimization of the refrigeration cycle, reducing the evaporation temperature and improving the overall efficiency of the system. The use of R448A refrigerant, with a low global warming potential (GWP), supports environmental sustainability, helping to reduce the climate impact of the system. The aim of this research was to evaluate the performance of the system through a series of experiments conducted on a pilot plant for the freezing of fish products. Several operational variables were monitored and recorded, including evaporation temperature, condensation temperature, energy consumption, and freezing time of seafood products. The results of the experimental analysis highlighted the benefits deriving from the use of the modulating vapour injection scroll compressor with the R448A refrigerant. In particular, a significant reduction in energy consumption was recorded compared to conventional systems. The modulating capacity of the compressor made it possible to adapt the cold production to variations in the thermal load, ensuring optimal operation of the system and reducing energy waste.
Furthermore, the use of an electronic expansion valve demonstrated greater precision in the control of the evaporation temperature, with minimal deviation from the desired set point. This helped ensure better quality of the final product, reducing the risk of damage due to temperature changes and ensuring uniform freezing of the fish products. The freezing time of the seafood was significantly reduced thanks to the configuration of the entire system, allowing for faster production and greater production capacity of the plant. In conclusion, the use of a modulating vapour injection scroll compressor operating with R448A refrigerant has proven effective in improving the performance of a system for freezing fish products. This technology offers an optimal balance between energy efficiency, temperature control, and environmental sustainability, making it an advantageous choice for food industries.

Keywords: freezing, scroll compressor, energy efficiency, vapour injection

Procedia PDF Downloads 25
36519 Infrastructure Change Monitoring Using Multitemporal Multispectral Satellite Images

Authors: U. Datta

Abstract:

The main objective of this study is to find a suitable approach to monitor land infrastructure growth over a period of time using multispectral satellite images. A bi-temporal change detection method is unable to indicate the continuous change occurring over a long period of time. To achieve the objective, the approach used here estimates a statistical model from a series of multispectral image data over a long period of time, assuming there is no considerable change during that period, and then compares it with multispectral image data obtained at a later time. The change is estimated pixel-wise. A statistical composite hypothesis technique is used for pixel-based change detection in a defined region. The generalized likelihood ratio test (GLRT) is used to detect a changed pixel from the probabilistically estimated model of the corresponding pixel. The changed pixel is detected assuming that the images have been co-registered prior to estimation. To minimize error due to co-registration, the 8-neighborhood pixels around the pixel under test are also considered. Multispectral images from Sentinel-2 and Landsat-8 from 2015 to 2018 are used for this purpose. There are several challenges in this method. The first and foremost is to obtain a sufficiently large number of datasets for multivariate distribution modelling, as a large number of images are always discarded due to cloud coverage. In addition, imperfect modelling leads to a high probability of false alarms. The overall conclusion that can be drawn from this work is that the probabilistic method described in this paper has given some promising results, which need to be pursued further.
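The pixel-wise GLRT step can be sketched numerically as follows, a toy numpy illustration under an assumed per-pixel Gaussian model with known variance; the study's multivariate formulation and 8-neighborhood handling are more elaborate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated per-pixel reflectance time series over a "no-change" period:
# a 4x4 image patch observed 50 times (e.g. one spectral band).
n_obs, H, W = 50, 4, 4
background = rng.normal(loc=0.2, scale=0.02, size=(n_obs, H, W))

# Later acquisition: one pixel has genuinely changed (new construction).
later = rng.normal(loc=0.2, scale=0.02, size=(H, W))
later[2, 3] = 0.45  # changed pixel

# Estimate the per-pixel Gaussian background model.
mu = background.mean(axis=0)
sigma = background.std(axis=0, ddof=1)

# The GLRT for a mean shift in a known-variance Gaussian reduces to a
# squared standardised residual, compared against a chi-square threshold.
glrt = ((later - mu) / sigma) ** 2
threshold = 10.83  # chi2(1) quantile for a 0.1% false-alarm rate
changed = glrt > threshold
print(np.argwhere(changed))
```

Raising the threshold trades missed detections against the false-alarm rate that imperfect modelling inflates.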

Keywords: co-registration, GLRT, infrastructure growth, multispectral, multitemporal, pixel-based change detection

Procedia PDF Downloads 121
36518 Impact of Chimerism on Y-STR DNA Determination: Sex Mismatch Analysis

Authors: Anupuma Raina, Ajay P. Balayan, Prateek Pandya, Pankaj Shrivastava, Uma Kanga, Tulika Seth

Abstract:

DNA fingerprinting analysis aids in personal identification for forensic purposes and has been a driving motivation for law enforcement agencies in almost all countries since its inception. The introduction of Y-STR DNA markers has allowed for greater precision and higher discriminatory power in forensic testing. A person committing a crime after a bone marrow transplantation is a rare situation but not an impossible one. Keeping such a situation in mind, a study was carried out to find the best biological sample to be used for personal identification, especially in a forensic context. We chose a female patient (recipient) and a male donor. A pre-transplant sample (blood) and post-transplant samples (blood, buccal swab, hair roots) were collected from the recipient. These were compared with the blood sample of the donor using DNA fingerprinting. Post-transplant samples were collected at different intervals of time (15, 30, 60, and 90 days). The study was carried out using a Y-STR kit at 23 loci. The results are discussed in terms of the phenomenon of chimerism and its impact on Y-STR determination. The hair-root sample was found to be the most suitable, showing no donor DNA profile up to 90 days.

Keywords: bone marrow transplantation, chimerism, DNA profiling, Y-STR

Procedia PDF Downloads 131
36517 A Heart Arrhythmia Prediction Using Machine Learning’s Classification Approach and the Concept of Data Mining

Authors: Roshani S. Golhar, Neerajkumar S. Sathawane, Snehal Dongre

Abstract:

Background and objectives: Cardiovascular illnesses are increasing and becoming a leading cause of mortality worldwide, killing a large number of people each year. Arrhythmia is a type of cardiac illness characterized by a change in the regularity of the heartbeat. The goal of this study is to develop novel deep learning algorithms for successfully interpreting arrhythmia from a single one-second segment. Because the ECG signal reflects unique electrical heart activity across time, considerable changes between time intervals are detected. Such variances, as well as the limited amount of learning data available for each arrhythmia, make standard learning methods difficult and hinder their generalization. Conclusions: The proposed method was able to outperform several state-of-the-art methods. The proposed technique is also an effective and convenient deep learning approach to heartbeat interpretation that could probably be used in real-time healthcare monitoring systems.
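The one-second segmentation the method relies on can be sketched as follows (numpy; the 360 Hz sampling rate and per-window z-normalisation are assumptions for illustration, and the network itself is omitted):

```python
import numpy as np

FS = 360  # assumed sampling rate in Hz (MIT-BIH-style recordings)

def segment_ecg(signal, fs=FS):
    """Split an ECG trace into non-overlapping 1-second windows,
    z-normalising each window so amplitude scale does not dominate."""
    n_seg = len(signal) // fs
    segments = signal[: n_seg * fs].reshape(n_seg, fs)
    mean = segments.mean(axis=1, keepdims=True)
    std = segments.std(axis=1, keepdims=True) + 1e-8
    return (segments - mean) / std

# A synthetic 10-second trace: a 1.2 Hz "heartbeat" plus noise.
t = np.arange(10 * FS) / FS
ecg = np.sin(2 * np.pi * 1.2 * t) + 0.05 * np.random.default_rng(1).normal(size=t.size)
windows = segment_ecg(ecg)
print(windows.shape)  # → (10, 360)
```

Each normalised window would then be fed to the classifier as one training or inference example.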

Keywords: electrocardiogram, ECG classification, neural networks, convolutional neural networks

Procedia PDF Downloads 57
36516 Seismic Response of Structure Using a Three Degree of Freedom Shake Table

Authors: Ketan N. Bajad, Manisha V. Waghmare

Abstract:

Earthquakes are among the biggest threats to civil engineering structures, as every year they cause thousands of deaths and billions of dollars in losses around the world. Various experimental techniques, such as pseudo-dynamic tests (a nonlinear structural dynamics technique), real-time pseudo-dynamic tests, and shaking table tests, can be employed to verify the seismic performance of structures. A shake table is a device used for shaking structural models or building components mounted on it. It simulates a seismic event using existing seismic data, closely reproducing earthquake inputs. This paper deals with the use of the shaking table test method to check the response of a structure subjected to an earthquake. The types of shake table include vertical, horizontal, servo-hydraulic, and servo-electric shake tables. The goal of this experiment is to perform seismic analysis of a civil engineering structure with the help of a three-degree-of-freedom (i.e., in the X, Y, and Z directions) shake table. A three-DOF shaking table is a useful experimental apparatus, as it reproduces a desired real-time acceleration signal for evaluating and assessing the seismic performance of a structure. This study proceeds with the design and erection of a 3-DOF shake table by a trial-and-error method. The table is designed to have a capacity of up to 981 N. Further, to study the seismic response of a steel industrial building, a proportionately scaled-down model is fabricated and tested on the shake table. An accelerometer mounted on the model is used for recording the data. The experimental results are further validated against results obtained from software. It is found that the model can be used to determine how the structure behaves in response to an applied earthquake motion, but it cannot be used for direct numerical conclusions (such as stiffness or deflection), as many uncertainties are involved in scaling a small-scale model. The model shows modal forms and gives rough deflection values. The experimental results demonstrate that the shake table is among the most effective methods available for the seismic assessment of structures.
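The proportional scaling of such a model can be illustrated with common similitude relations (a sketch assuming the frequently used same-material, unit-acceleration-scale laws, which may differ from the authors' exact choices):

```python
import math

def similitude_factors(length_scale):
    """Scale factors (prototype/model) assuming the same material and
    a unit acceleration scale, a common choice for shake-table models."""
    s = length_scale
    return {
        "length": s,
        "time": math.sqrt(s),           # T ~ sqrt(L) when a_scale = 1
        "frequency": 1 / math.sqrt(s),  # the model vibrates faster
        "velocity": math.sqrt(s),
        "acceleration": 1.0,
        "force": s ** 2,                # stress scale 1 => F ~ L^2
    }

# Hypothetical 1:10 model of a steel industrial building:
f = similitude_factors(10)
# A 0.5 s prototype period appears as 0.5 / sqrt(10) on the model.
print(round(0.5 / f["time"], 3))  # → 0.158
```

The table's input motion and the recorded responses are converted back to prototype scale with the same factors.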

Keywords: accelerometer, three degree of freedom shake table, seismic analysis, steel industrial shed

Procedia PDF Downloads 126
36515 Evaluation of a Remanufacturing for Lithium Ion Batteries from Electric Cars

Authors: Achim Kampker, Heiner H. Heimes, Mathias Ordung, Christoph Lienemann, Ansgar Hollah, Nemanja Sarovic

Abstract:

Electric cars, with their fast innovation cycles and their disruptive character, offer a high degree of freedom regarding innovative design for remanufacturing. Remanufacturing increases not only the resource efficiency but also the economic efficiency through a prolonged product lifetime. The reduced power train wear of electric cars, combined with high manufacturing costs for batteries, allows new business models and even second-life applications. Modular and interchangeable battery pack designs enable the replacement of defective or outdated battery cells, allow additional cost savings, and prolong the lifetime. This paper discusses opportunities for future remanufacturing value chains of electric cars and their battery components, and how to address their potentials with elaborate designs. Based on a brief overview of implemented remanufacturing structures in different industries, opportunities for transferability are evaluated. In addition to an analysis of current and upcoming challenges, promising perspectives for a sustainable electric car circular economy enabled by design for remanufacturing are deduced. Two mathematical models describe the feasibility of pursuing a circular economy of lithium ion batteries and evaluate remanufacturing in terms of sustainability and economic efficiency. Taking into consideration not only labor and material costs but also capital costs for equipment and factory facilities to support the remanufacturing process, the cost-benefit analysis indicates that a remanufactured battery can be produced more cost-efficiently. The ecological benefits were calculated on a broad database from different research projects focusing on the recycling, second use, and assembly of lithium ion batteries. The results of these calculations show a significant improvement through remanufacturing in all relevant factors, especially in the consumption of resources and global warming potential.
Suitable design guidelines for future remanufacturing of lithium ion batteries, covering modularity, interfaces, and disassembly, are used as examples to illustrate the findings. For one guideline, potential cost improvements were calculated and upcoming challenges are pointed out.
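The cost-benefit comparison can be sketched with a toy unit-cost model (all figures are hypothetical placeholders, not the study's data):

```python
# Toy per-pack cost comparison: new manufacture vs. remanufacture.
# All numbers are hypothetical placeholders, not the study's data.

def pack_cost(material, labor, capital_invest, annual_volume, years):
    """Unit cost including an amortised share of equipment/factory capital."""
    capital_per_unit = capital_invest / (annual_volume * years)
    return material + labor + capital_per_unit

new_cost = pack_cost(material=8000, labor=500,
                     capital_invest=50e6, annual_volume=20000, years=10)
reman_cost = pack_cost(material=1500,  # only defective cells replaced
                       labor=1200,     # disassembly/testing is labor-heavy
                       capital_invest=10e6, annual_volume=5000, years=10)

print(round(new_cost), round(reman_cost))  # → 8750 2900
```

With placeholder inputs of this shape, the remanufactured pack stays cheaper even though its labor share is higher, mirroring the study's qualitative conclusion.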

Keywords: circular economy, electric mobility, lithium ion batteries, remanufacturing

Procedia PDF Downloads 342
36514 Stable Time Reversed Integration of the Navier-Stokes Equation Using an Adjoint Gradient Method

Authors: Jurriaan Gillissen

Abstract:

This work is concerned with stabilizing the numerical integration of the Navier-Stokes equation (NSE) backwards in time. Applications involve the detection of sources of, e.g., sound, heat, and pollutants. Stable reverse numerical integration of parabolic differential equations is also relevant for image de-blurring. While the literature addresses the reverse integration problem of the advection-diffusion equation, the problem of numerical reverse integration of the NSE has, to our knowledge, not yet been addressed. Owing to the presence of viscosity, the NSE is irreversible, i.e., when going backwards in time, the fluid behaves as if it had a negative viscosity. As an effect, perturbations from the perfect solution, due to round-off errors or discretization errors, grow exponentially in time, and reverse integration of the NSE is inherently unstable, regardless of using an implicit time integration scheme. Consequently, some sort of filtering is required in order to achieve a stable numerical reversed integration. The challenge is to find a filter with a minimal adverse effect on the accuracy of the reversed integration. In the present work, we explore an adjoint gradient method (AGM) to achieve this goal, and we apply this technique to two-dimensional (2D) decaying turbulence. The AGM solves for the initial velocity field u0 at t = 0 that, when integrated forward in time, produces a final velocity field u1 at t = 1 that is as close as possible to a specified target field v1. The initial field u0 defines a minimum of a cost functional J that measures the distance between u1 and v1. In the minimization procedure, u0 is updated iteratively along the gradient of J w.r.t. u0, where the gradient is obtained by transporting J backwards in time from t = 1 to t = 0 using the adjoint NSE. The AGM thus effectively replaces the backward integration by multiple forward and backward adjoint integrations.
Since the viscosity is negative in the adjoint NSE, each step of the AGM is numerically stable. Nevertheless, when applied to turbulence, the AGM develops instabilities, which limit the backward integration to small times. This is due to the exponential divergence of phase space trajectories in turbulent flow, which produces a multitude of local minima in J when the integration time is large. As an effect, the AGM may select unphysical, noisy initial conditions. In order to improve this situation, we propose two remedies. First, we replace the integration by a sequence of smaller integrations, i.e., we divide the integration time into segments, where in each segment the target field v1 is taken as the initial field u0 from the previous segment. Second, we add an additional term (regularizer) to J, which is proportional to a high-order Laplacian of u0 and which dampens the gradients of u0. We show that suitable values for the segment size and for the regularizer allow a stable reverse integration of 2D decaying turbulence, with accurate results for more than O(10) turbulent integral time scales.
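The AGM loop can be illustrated on a much simpler parabolic problem, the 1D heat equation, where the forward operator is self-adjoint and the adjoint gradient of J is exact (numpy sketch; the NSE adjoint, the segmentation, and the regularizer are omitted):

```python
import numpy as np

N, dt, nu, steps = 64, 0.1, 0.05, 40
k = 2 * np.pi * np.fft.rfftfreq(N, d=2 * np.pi / N)  # integer wavenumbers
decay = np.exp(-nu * k**2 * dt)  # exact per-step heat-equation propagator

def forward(u0):
    """Integrate u_t = nu * u_xx forward over all steps (spectral)."""
    return np.fft.irfft(np.fft.rfft(u0) * decay**steps, n=N)

def gradient(u0, v1):
    """dJ/du0 for J = 0.5||forward(u0) - v1||^2, via the adjoint pass
    (the operator is self-adjoint here, so it reuses the propagator)."""
    residual = forward(u0) - v1
    return np.fft.irfft(np.fft.rfft(residual) * decay**steps, n=N)

x = np.linspace(0, 2 * np.pi, N, endpoint=False)
true_u0 = np.sin(x) + 0.5 * np.sin(2 * x)
v1 = forward(true_u0)  # the "observed" final field

u0 = np.zeros(N)       # initial guess for the initial condition
for _ in range(200):   # steepest descent along the adjoint gradient;
    u0 -= gradient(u0, v1)  # unit step is stable since every mode is damped

print(np.max(np.abs(forward(u0) - v1)))  # misfit shrinks toward 0
```

Each descent iteration is a stable forward solve plus an equally stable adjoint solve, exactly the trade the AGM makes to avoid integrating with negative viscosity.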

Keywords: time reversed integration, parabolic differential equations, adjoint gradient method, two dimensional turbulence

Procedia PDF Downloads 209
36513 Students Perception of a Gamified Student Engagement Platform as Supportive Technology in Learning

Authors: Pinn Tsin Isabel Yee

Abstract:

Students are increasingly turning towards online learning materials to supplement their education. One such approach is the use of gamified student engagement platforms (GSEPs) to instill a new learning culture. Data were collected from closed-ended questions via content analysis techniques. About 81.8% of college students from the Monash University Foundation Year agreed that the GSEP (Quizizz) was an effective tool for learning. Approximately 85.5% of students disagreed that games were a waste of time. GSEPs were highly effective among students in facilitating the learning process.

Keywords: engagement, gamified, Quizizz, technology

Procedia PDF Downloads 90
36512 Intellectual Property Rights and Health Rights: A Feasible Reform Proposal to Facilitate Access to Drugs in Developing Countries

Authors: M. G. Cattaneo

Abstract:

The non-effectiveness of certain codified human rights is particularly apparent with reference to the lack of access to essential drugs in developing countries, which represents a breach of the human right to receive adequate health assistance. This paper underlines the conflict and the legal contradictions between human rights, namely health rights, and international Intellectual Property Rights, in particular patent law, as well as international trade law. The paper discusses the crucial links between R&D costs for innovation, patents, and new medical drugs, with the goal of reformulating the hierarchies of priorities and of interests at stake in the international intellectual property (IP) law system. Differently from what happens today, international patent law should be a legal instrument capable of rebalancing an axiological asymmetry between the conflicting needs at stake. The core argument of the paper is the proposal of an alternative pathway, namely a feasible proposal for a patent law reform. IP laws tend to balance the benefits deriving from innovation with the costs of the provided monopoly, but since developing countries and industrialized countries are in completely different political and economic situations, it is necessary to (re)modulate such an exchange according to the different needs. Based on this critical analysis, the paper puts forward a proposal, called Trading Time for Space (TTS), whereby a longer patent exclusivity period in western countries (Time) is offered to the patent-holding company in exchange for selling the medical drug at cost price in developing countries (Space). Accordingly, pharmaceutical companies should sell drugs in developing countries at cost price or, alternatively, grant a free license for the sale in such countries, without any royalties or fees. However, such a social service shall be duly compensated.
Therefore, the consideration for such a service shall be an extension of the temporal duration of the patent's exclusivity in the country of origin, which will compensate for the reduced profits caused by supplying at cost price in developing countries.
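The TTS compensation can be sketched as a toy present-value balance (all figures and the discount rate are hypothetical placeholders, not values from the paper):

```python
# Toy Trading-Time-for-Space balance: how many extra years of western
# exclusivity offset profits foregone by selling at cost price in
# developing countries. All figures are hypothetical placeholders.

def pv(annual_profit, start_year, years, rate=0.05):
    """Present value of a constant profit stream starting at `start_year`."""
    return sum(annual_profit / (1 + rate) ** (start_year + t)
               for t in range(years))

# Profit foregone over a hypothetical 20-year developing-world market.
foregone = pv(annual_profit=30e6, start_year=0, years=20)

# Extend western exclusivity (hypothetical 100e6/year profit) beyond the
# nominal 20-year term until the firm breaks even in present value.
extension_years = 0
while pv(100e6, start_year=20, years=extension_years) < foregone:
    extension_years += 1

print(extension_years)
```

Because the extension profits arrive late and are discounted, the break-even extension is longer than a naive profit ratio would suggest, which is the quantitative crux of calibrating Time against Space.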

Keywords: global health, global justice, patent law reform, access to drugs

Procedia PDF Downloads 238
36511 Statistical Assessment of Models for Determination of Soil–Water Characteristic Curves of Sand Soils

Authors: S. J. Matlan, M. Mukhlisin, M. R. Taha

Abstract:

Characterization of the engineering behavior of unsaturated soil is dependent on the soil-water characteristic curve (SWCC), a graphical representation of the relationship between water content or degree of saturation and soil suction. A reasonable description of the SWCC is thus important for the accurate prediction of unsaturated soil parameters. The measurement procedures for determining the SWCC, however, are difficult, expensive, and time-consuming. During the past few decades, researchers have laid a major focus on developing empirical equations for predicting the SWCC, with a large number of empirical models suggested. One of the most crucial questions is how precisely existing equations can represent the SWCC. As different models have different ranges of capability, it is essential to evaluate the precision of the SWCC models used for each particular soil type for better SWCC estimation. It is expected that better estimation of the SWCC would be achieved via a thorough statistical analysis of its distribution within a particular soil class. With this in view, a statistical analysis was conducted in order to evaluate the reliability of the SWCC prediction models against laboratory measurement. Optimization techniques were used to obtain the best fit of the model parameters in four forms of SWCC equation, using laboratory data for relatively coarse-textured (i.e., sandy) soil. The four most prominent SWCCs were evaluated and computed for each sample. The result shows that the Brooks and Corey model is the most consistent in describing the SWCC for the sand soil type. The Brooks and Corey model predictions were also compatible with the samples evaluated in this study, from low to high soil water contents.
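The Brooks and Corey best-fit step can be sketched via log-linearisation on synthetic data (numpy; the study's own optimisation procedure may differ, and the parameter values below are invented):

```python
import numpy as np

def brooks_corey(psi, psi_b, lam):
    """Effective saturation Se for the Brooks-Corey SWCC."""
    return np.where(psi > psi_b, (psi_b / psi) ** lam, 1.0)

# Synthetic "laboratory" points for a sandy soil (hypothetical values):
psi = np.array([3.0, 5.0, 10.0, 20.0, 50.0, 100.0])  # suction, kPa
se = brooks_corey(psi, psi_b=2.0, lam=0.7)           # drying branch

# On the drying branch, log Se = lam*log(psi_b) - lam*log(psi), so both
# parameters follow from a straight-line fit in log-log space.
slope, intercept = np.polyfit(np.log(psi), np.log(se), 1)
lam_fit = -slope
psi_b_fit = np.exp(intercept / lam_fit)
print(round(lam_fit, 3), round(psi_b_fit, 3))  # → 0.7 2.0
```

With noisy measurements the same fit gives least-squares estimates, and comparing residuals across model forms is what drives the model-ranking exercise.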

Keywords: soil-water characteristic curve (SWCC), statistical analysis, unsaturated soil, geotechnical engineering

Procedia PDF Downloads 325
36510 Localization of Radioactive Sources with a Mobile Radiation Detection System using Profit Functions

Authors: Luís Miguel Cabeça Marques, Alberto Manuel Martinho Vale, José Pedro Miragaia Trancoso Vaz, Ana Sofia Baptista Fernandes, Rui Alexandre de Barros Coito, Tiago Miguel Prates da Costa

Abstract:

The detection and localization of hidden radioactive sources are of significant importance in countering the illicit traffic of Special Nuclear Materials (SNM) and other radioactive sources and materials. Radiation portal monitors are commonly used at airports, seaports, and international land borders for inspecting cargo and vehicles. However, this equipment can be expensive and is not available at all checkpoints. Consequently, the localization of SNM and other radioactive sources often relies on handheld equipment, which can be time-consuming. The current study presents the advantages of real-time analysis of gamma-ray count rate data from a mobile radiation detection system based on simulated data and field tests. The incorporation of profit functions and decision criteria to optimize the detection system's path significantly enhances the radiation field information and reduces survey time during cargo inspection. For source position estimation, a maximum likelihood estimation algorithm is employed, and confidence intervals are derived using the Fisher information. The study also explores the impact of uncertainties, baselines, and thresholds on the performance of the profit function. The proposed detection system, utilizing a plastic scintillator with silicon photomultiplier sensors, boasts several benefits, including cost-effectiveness, high geometric efficiency, compactness, and lightweight design. This versatility allows for seamless integration into any mobile platform, be it air, land, maritime, or hybrid, and it can also serve as a handheld device. Furthermore, integration of the detection system into drones, particularly multirotors, and its affordability enable the automation of source search and a substantial reduction in survey time, particularly when deploying a fleet of drones.
While the primary focus is on inspecting maritime container cargo, the methodologies explored in this research can be applied to the inspection of other infrastructures, such as nuclear facilities or vehicles.
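The maximum-likelihood position estimate can be sketched for a 1D survey transect (an assumed Poisson count model with an inverse-square source term and constant background; not the authors' exact formulation, which also involves profit functions and Fisher-information confidence intervals):

```python
import numpy as np

rng = np.random.default_rng(7)

# Detector count-rate model along a 1D survey path: inverse-square
# source term plus a constant background (a simplifying assumption).
def expected_counts(x_det, x_src, strength=400.0, height=2.0, bg=5.0):
    return strength / ((x_det - x_src) ** 2 + height**2) + bg

x_det = np.linspace(0.0, 20.0, 41)  # measurement positions, m
true_src = 12.3
counts = rng.poisson(expected_counts(x_det, true_src))

# Grid-search MLE over candidate source positions, maximising the
# Poisson log-likelihood (constant factorial terms dropped).
grid = np.linspace(0.0, 20.0, 2001)
loglik = [np.sum(counts * np.log(expected_counts(x_det, s))
                 - expected_counts(x_det, s)) for s in grid]
x_hat = grid[int(np.argmax(loglik))]
print(x_hat)
```

The curvature of the log-likelihood around `x_hat` (the observed Fisher information) is what would supply the confidence interval in the full method.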

Keywords: plastic scintillators, profit functions, path planning, gamma-ray detection, source localization, mobile radiation detection system, security scenario

Procedia PDF Downloads 90
36509 Analysis of the Role of Population Ageing on Crosstown Roads' Traffic Accidents Using Latent Class Clustering

Authors: N. Casado-Sanz, B. Guirao

Abstract:

The population aged 65 and over is projected to double in the coming decades. Due to this increase, the driver population is expected to grow, and in the near future all countries will be faced with population aging of varying intensity and in unique time frames. This is one of the greatest challenges facing industrialized nations, and because of it, the study of the dependency relationships between population aging and road safety is becoming increasingly relevant. Although the deterioration of driving skills in the elderly has been analyzed in depth, to our knowledge few research studies have focused on the road infrastructure and the mobility of this particular group of users. In Spain, crosstown roads have one of the highest fatality rates. These rural routes have a higher percentage of elderly people, who are more dependent on driving due to the absence or limitations of urban public transportation. Analysing road safety on these routes is very complex because of the variety of features, the dispersion of the data, and the complete lack of related literature. The objective of this paper is to identify key factors that cause traffic accidents. The units of analysis were accidents with persons killed or seriously injured on Spanish crosstown roads during the period 2006-2015. Latent class cluster analysis was applied as a preliminary tool for the segmentation of accidents, considering population aging as the main input among other socioeconomic indicators. Subsequently, a linear regression analysis was carried out to estimate the degree of dependence between the accident rate and the variables that define each group. The results show that segmenting the data is very informative and provides further insight. Additionally, the results revealed the clear influence of the aging variable in the clusters obtained.
Other variables related to infrastructure and mobility levels, such as the crosstown road layout and the traffic intensity, also appeared to be among the key factors in the causality of road accidents.
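Latent class clustering shares its machinery with mixture-model EM; a minimal two-class sketch with independent binary accident indicators (numpy; the indicator names, probabilities, and two-class setup are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic binary accident indicators (e.g. "elderly driver involved",
# "night-time", "pedestrian involved") drawn from two latent classes.
true_p = np.array([[0.9, 0.2, 0.7],   # class 0 item profile
                   [0.1, 0.8, 0.3]])  # class 1 item profile
z = rng.integers(0, 2, size=600)      # hidden class labels
X = (rng.random((600, 3)) < true_p[z]).astype(float)

# EM for a two-class latent class model with independent binary items.
pi = np.array([0.5, 0.5])                         # class weights
p = np.array([[0.6, 0.4, 0.6], [0.4, 0.6, 0.4]])  # item probabilities
for _ in range(100):
    # E-step: responsibilities via per-class Bernoulli likelihoods
    like = np.prod(p[None] ** X[:, None] * (1 - p[None]) ** (1 - X[:, None]),
                   axis=2)
    resp = like * pi
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: re-estimate class weights and item probabilities
    pi = resp.mean(axis=0)
    p = (resp.T @ X) / resp.sum(axis=0)[:, None]

print(np.round(p, 2))
```

The recovered item profiles approximate the true ones up to label switching; in the study's setting, each recovered class is then characterised by regression against the accident rate.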

Keywords: cluster analysis, population ageing, rural roads, road safety

Procedia PDF Downloads 97
36508 The Impact of Natural Resources on Financial Development: The Global Perspective

Authors: Remy Jonkam Oben

Abstract:

Using a time series approach, this study investigates how natural resources impact financial development from a global perspective over the 1980-2019 period. Some important determinants of financial development (economic growth, trade openness, population growth, and investment) have been added to the model as control variables. Unit root tests have revealed that all the variables are integrated of order one. Johansen's cointegration test has shown that the variables are in a long-run equilibrium relationship. The vector error correction model (VECM) has estimated the coefficient of the error correction term (ECT), which suggests that the short-run values of natural resources, economic growth, trade openness, population growth, and investment contribute to financial development converging to its long-run equilibrium level at a 23.63% annual speed of adjustment. The estimated coefficients suggest that global natural resource rent has a statistically significant negative impact on global financial development in the long run (thereby validating the financial resource curse) but not in the short run. Causality test results imply that neither global natural resource rent nor global financial development Granger-causes the other.
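The error-correction mechanism behind such an adjustment speed can be illustrated with a two-variable toy system (numpy OLS with the cointegrating slope assumed known; Johansen/VECM estimation proper would estimate it jointly, and all numbers here are invented):

```python
import numpy as np

rng = np.random.default_rng(11)
T, beta, alpha = 2000, 1.5, -0.25  # true cointegration slope and ECT speed

x = np.cumsum(rng.normal(size=T))  # I(1) driver series
y = np.empty(T)
y[0] = beta * x[0]
for t in range(1, T):
    ect = y[t - 1] - beta * x[t - 1]  # deviation from equilibrium
    y[t] = (y[t - 1] + alpha * ect + beta * (x[t] - x[t - 1])
            + rng.normal(scale=0.2))

# Estimate the ECT coefficient by OLS of the adjusted difference on the
# lagged disequilibrium (no-intercept regression).
adj_dy = np.diff(y) - beta * np.diff(x)
ect_lag = (y - beta * x)[:-1]
alpha_hat = np.sum(ect_lag * adj_dy) / np.sum(ect_lag**2)
print(round(alpha_hat, 2))
```

A negative, significant `alpha_hat` is the signature of convergence to the long-run equilibrium; its magnitude is the annual speed of adjustment the abstract reports as 23.63%.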

Keywords: financial development, natural resources, resource curse hypothesis, time series analysis, Granger causality, global perspective

Procedia PDF Downloads 148