Search results for: insurance estimation
1090 Automatic Adult Age Estimation Using Deep Learning of the ResNeXt Model Based on CT Reconstruction Images of the Costal Cartilage
Authors: Ting Lu, Ya-Ru Diao, Fei Fan, Ye Xue, Lei Shi, Xian-e Tang, Meng-jun Zhan, Zhen-hua Deng
Abstract:
Accurate adult age estimation (AAE) is a significant and challenging task in the forensic and archaeology fields. Attempts have been made to explore optimal adult age metrics, and the rib is considered a potential age marker. The traditional approach extracts expert-designed, age-related features from macroscopic or radiological images, followed by classification or regression analysis. Those results still have not met the high-level requirements of practice, and a limitation of manual feature design and extraction is loss of information, since the features are not necessarily designed to capture age-relevant information. Deep learning (DL) has recently garnered much interest in image learning and computer vision. It enables learning important features without a prior bias or hypothesis and could be supportive of AAE. This study aimed to develop DL models for AAE based on CT images and to compare their performance with a manual visual scoring method. Chest CT data were reconstructed using volume rendering (VR). Retrospective data of 2500 patients aged 20.00-69.99 years, obtained between December 2019 and September 2021, were used. Five-fold cross-validation was performed, and datasets were randomly split into training and validation sets in a 4:1 ratio for each fold. Before being fed into the networks, all images were augmented with random rotation and vertical flip, normalized, and resized to 224×224 pixels. ResNeXt was chosen as the DL baseline due to its higher efficiency and accuracy in image classification. Mean absolute error (MAE) was the primary metric. Independent data from 100 patients acquired between March and April 2022 were used as a test set. The manual method followed the prior study exactly, which reported the lowest MAEs (5.31 in males and 6.72 in females) among similar studies. CT data and VR images were used.
The radiation density of the first costal cartilage was recorded using CT data on the workstation. The osseous and calcified projections of the first to seventh costal cartilages were scored based on VR images using an eight-stage staging technique. According to the results of the prior study, the optimal models were a decision tree regression model in males and a stepwise multiple linear regression equation in females. Predicted ages for the test set were calculated separately by sex using the respective models. A total of 2600 patients (training and validation sets, mean age = 45.19 years ± 14.20 [SD]; test set, mean age = 46.57 ± 9.66) were evaluated in this study. During ResNeXt model training, MAEs of 3.95 in males and 3.65 in females were obtained. On the test set, DL achieved MAEs of 4.05 in males and 4.54 in females, far better than the MAEs of 8.90 and 6.42, respectively, for the manual method. These results show that the ResNeXt DL model outperformed the manual method in AAE based on CT reconstruction of the costal cartilage, and the developed system may be a supportive tool for AAE.
Keywords: forensic anthropology, age determination by the skeleton, costal cartilage, CT, deep learning
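The study's primary metric, mean absolute error, is straightforward to reproduce. A minimal sketch in Python; the ages below are entirely hypothetical, since the study's data are not reproduced here:

```python
def mean_absolute_error(true_ages, predicted_ages):
    """MAE: average absolute difference between true and predicted ages."""
    assert len(true_ages) == len(predicted_ages)
    return sum(abs(t - p) for t, p in zip(true_ages, predicted_ages)) / len(true_ages)

# Hypothetical test subjects (years)
true_ages = [25.0, 34.0, 47.0, 52.0, 61.0]
predicted = [28.0, 30.0, 50.0, 49.0, 66.0]
print(mean_absolute_error(true_ages, predicted))  # 3.6
```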
Procedia PDF Downloads 72
1089 Fault Diagnosis and Fault-Tolerant Control of Bilinear-Systems: Application to Heating, Ventilation, and Air Conditioning Systems in Multi-Zone Buildings
Authors: Abderrhamane Jarou, Dominique Sauter, Christophe Aubrun
Abstract:
Over the past decade, the growing demand for energy efficiency in buildings has attracted the attention of the control community. Failures in HVAC (heating, ventilation and air conditioning) systems in buildings can have a significant impact on the desired and expected energy performance of buildings, as well as on user comfort. Fault-Tolerant Control (FTC) is a recent technology area that studies the adaptation of control algorithms to faulty operating conditions of a system, and its application to HVAC systems has gained attention in the last two decades. The objective is to keep the variations in system performance due to faults within an acceptable range with respect to the desired nominal behavior. This paper considers the so-called active approach, which is based on a fault detection and identification scheme combined with a control reconfiguration algorithm that determines a new set of control parameters so that the reconfigured performance is "as close as possible," in some sense, to the nominal performance. Thermal models of buildings and their HVAC systems are described by non-linear (usually bilinear) equations. Most of the work carried out so far in FDI (fault detection and isolation) or FTC considers a linearized model of the studied system. However, such a model is only valid in a reduced range of variation. This study presents a new fault diagnosis (FD) algorithm based on a bilinear observer for the detection and accurate estimation of the magnitude of HVAC system failures. The main contribution of the proposed FD algorithm is that, instead of using specific linearized models, the algorithm inherits the structure of the actual bilinear model of the building thermal dynamics. As an immediate consequence, the algorithm is applicable to a wide range of unpredictable operating conditions, e.g., weather dynamics, outdoor air temperature, and zone occupancy profile.
A bilinear fault detection observer is proposed for a bilinear system with unknown inputs. The residual vector in the observer design is decoupled from the unknown inputs and, under certain conditions, is made sensitive to all faults. Sufficient conditions are given for the existence of the observer, and results are given for the explicit computation of the observer design matrices. Dedicated observer schemes (DOS) are considered for sensor FDI, while unknown-input bilinear observers are considered for actuator or system-component FDI. The proposed FTC strategy works as follows: at the first level, FDI algorithms are implemented, which also make it possible to estimate the magnitude of the fault. Once a fault is detected, the fault estimate is used to feed the second level and reconfigure the control law so that the expected performance is recovered. This paper is organized as follows. A general structure for fault-tolerant control of buildings is first presented, and the building model under consideration is introduced. Then, the observer-based design for fault diagnosis of bilinear systems is studied. The FTC approach is developed in Section IV. Finally, a simulation example is given in Section V to illustrate the proposed method.
Keywords: bilinear systems, fault diagnosis, fault-tolerant control, multi-zone buildings
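The core idea above, an observer that copies the bilinear plant model and whose residual flags and sizes a fault, can be illustrated on a toy scalar system. All coefficients below are invented for illustration, and the paper's actual design (decoupling from unknown inputs, multi-zone dynamics) is not reproduced:

```python
# Scalar bilinear plant: x+ = a*x + n*x*u + b*u, output y = x.
a, n, b, l = 0.9, 0.05, 1.0, 0.5   # illustrative plant parameters and observer gain

def simulate(steps, fault_at, fault_size):
    """Observer inherits the bilinear structure; residual r = y - x_hat."""
    x, x_hat = 0.0, 0.0
    residuals = []
    for k in range(steps):
        u = 1.0                                    # constant input for simplicity
        f = fault_size if k >= fault_at else 0.0   # additive actuator fault
        r = x - x_hat                              # residual (y = x)
        residuals.append(r)
        x, x_hat = (a * x + n * x * u + b * (u + f),            # faulty plant
                    a * x_hat + n * x_hat * u + b * u + l * r)  # bilinear observer
    return residuals

res = simulate(steps=40, fault_at=20, fault_size=0.5)
print(max(abs(r) for r in res[:21]))  # 0.0 -- residual silent before the fault
print(round(res[25], 3))              # clearly nonzero once the fault acts
```

The residual stays exactly zero while plant and observer agree, then settles toward a value proportional to the fault size, which is what makes fault magnitude estimation possible at the second (reconfiguration) level.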
Procedia PDF Downloads 171
1088 Quantification of Magnetic Resonance Elastography for Tissue Shear Modulus using U-Net Trained with Finite-Differential Time-Domain Simulation
Authors: Jiaying Zhang, Xin Mu, Chang Ni, Jeff L. Zhang
Abstract:
Magnetic resonance elastography (MRE) non-invasively assesses tissue elastic properties, such as shear modulus, by measuring tissue displacement in response to mechanical waves. The estimated metrics of tissue elasticity or stiffness have been shown to be valuable for monitoring the physiologic or pathophysiologic status of tissue, such as a tumor or fatty liver. To quantify tissue shear modulus from MRE-acquired displacements (essentially an inverse problem), multiple approaches have been proposed, including Local Frequency Estimation (LFE) and Direct Inversion (DI). However, one common problem with these methods is that the estimates are severely noise-sensitive, due either to the inverse-problem nature or to noise propagation in the pixel-by-pixel process. With the advent of deep learning (DL) and its promise in solving inverse problems, a few groups in the field of MRE have explored the feasibility of using DL methods for quantifying shear modulus from MRE data. Most of the groups chose to use real MRE data for DL model training and to cut training images into smaller patches, which enriches the feature characteristics of the training data but inevitably increases computation time and produces outputs with patch-pattern artifacts. In this study, simulated wave images generated by Finite-Difference Time-Domain (FDTD) simulation are used for network training, and a U-Net is used to extract features from each training image without cutting it into patches. The use of simulated data for model training offers the flexibility of customizing training datasets to match specific applications. The proposed method aims to estimate tissue shear modulus from MRE data with high robustness to noise and high model-training efficiency. Specifically, a set of 3000 maps of shear modulus (with a range of 1 kPa to 15 kPa) containing randomly positioned objects was simulated, and their corresponding wave images were generated.
The two types of data were fed into the training of a U-Net model as its output and input, respectively. For an independently simulated set of 1000 images, the performance of the proposed method against DI and LFE was compared by the relative error (root mean square error, or RMSE, divided by the averaged shear modulus) between the true shear modulus map and the estimated ones. The results showed that the shear modulus estimated by the proposed method achieved a relative error of 4.91% ± 0.66%, substantially lower than the 78.20% ± 1.11% achieved by LFE. Using simulated data, the proposed method significantly outperformed LFE and DI in resilience to increasing noise levels and in resolving fine changes of shear modulus. The feasibility of the proposed method was also tested on MRE data acquired from phantoms and from human calf muscles, resulting in maps of shear modulus with low noise. In future work, the method's performance on phantoms and its repeatability on human data will be tested in a more quantitative manner. In conclusion, the proposed method shows much promise in quantifying tissue shear modulus from MRE with high robustness and efficiency.
Keywords: deep learning, magnetic resonance elastography, magnetic resonance imaging, shear modulus estimation
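The comparison metric above, RMSE divided by the averaged true shear modulus, can be sketched as follows. The modulus maps are flattened to lists, and all values are hypothetical:

```python
import math

def relative_error(true_map, est_map):
    """RMSE between true and estimated shear moduli, divided by the mean true modulus."""
    n = len(true_map)
    rmse = math.sqrt(sum((t - e) ** 2 for t, e in zip(true_map, est_map)) / n)
    return rmse / (sum(true_map) / n)

true_mod = [5.0, 10.0, 15.0, 10.0]   # kPa, hypothetical ground-truth pixels
est_mod = [5.5, 9.0, 15.5, 10.0]     # hypothetical estimates
print(relative_error(true_mod, est_mod))  # ~0.0612, i.e. about 6.1%
```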
Procedia PDF Downloads 66
1087 Density Determination of Liquid Niobium by Means of Ohmic Pulse-Heating for Critical Point Estimation
Authors: Matthias Leitner, Gernot Pottlacher
Abstract:
Experimental determination of critical point data, such as critical temperature, critical pressure, critical volume, and critical compressibility, of high-melting metals such as niobium is very rare due to the outstanding experimental difficulties in reaching the necessary extreme temperature and pressure regimes. Experimental techniques to achieve such extreme conditions include diamond anvil devices, two-stage gas guns, or metal samples hit by explosively accelerated flyers. Electrical pulse-heating under increased pressure is another choice. This technique heats thin wire samples of 0.5 mm diameter and 40 mm length from room temperature to melting, and then further to the end of the stable phase, the spinodal line, within several microseconds. When crossing the spinodal line, the sample explodes and reaches the gaseous phase. In our laboratory, pulse-heating experiments can be performed under variation of the ambient pressure from 1 to 5000 bar and allow a direct determination of critical point data for low-melting, but not for high-melting, metals. However, the critical point can also be estimated by extrapolating the liquid-phase density according to theoretical models. A reasonable prerequisite for the extrapolation is the existence of data that cover as much of the liquid phase as possible and at the same time exhibit small uncertainties. Ohmic pulse-heating was therefore applied to determine the thermal volume expansion, and from that the density, of niobium over the entire liquid phase. As a first step, experiments under ambient pressure were performed. The second step will be to perform experiments under high-pressure conditions. During the heating process, shadow images of the expanding sample wire were captured at a frame rate of 4 × 10⁵ fps to monitor the radial expansion as a function of time. Simultaneously, the sample radiance was measured with a pyrometer operating at a mean effective wavelength of 652 nm.
To increase the accuracy of the temperature deduction, the spectral emittance in the liquid phase is also taken into account. Due to the high heating rates of about 2 × 10⁸ K/s, longitudinal expansion of the wire is inhibited, which implies an increased radial expansion. As a consequence, measuring the temperature-dependent radial expansion is sufficient to deduce density as a function of temperature. This is accomplished by evaluating the full widths at half maximum of the cup-shaped intensity profiles calculated from each shadow image of the expanding wire. Relating these diameters to the diameter obtained before the start of pulse-heating, the temperature-dependent volume expansion is calculated. With the help of the known room-temperature density, the volume expansion is then converted into density data. The liquid density behavior obtained in this way is compared to existing literature data and provides another independent source of experimental data. In this work, the newly determined off-critical liquid-phase density was, in a second step, utilized as input data for the estimation of niobium's critical point. The approach used heuristically takes into account the crossover from mean-field to Ising behavior, as well as the non-linearity of the phase diagram's diameter.
Keywords: critical point data, density, liquid metals, niobium, ohmic pulse-heating, volume expansion
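Because longitudinal expansion is inhibited, volume scales with the square of the wire diameter only, so the density conversion reduces to a ratio of squared diameters. A sketch, taking the room-temperature density of niobium as about 8570 kg/m³; the heated diameter is an illustrative value, not a measured one:

```python
def density_from_radial_expansion(rho_0, d_0, d_T):
    """rho(T) = rho_0 * (d_0 / d(T))^2, since V/V_0 = (d(T)/d_0)^2 (radial expansion only)."""
    return rho_0 * (d_0 / d_T) ** 2

rho_room = 8570.0   # kg/m^3, approximate room-temperature density of niobium
d_initial = 0.5e-3  # m, wire diameter before pulse-heating
d_heated = 0.55e-3  # m, illustrative diameter from a shadow image
print(density_from_radial_expansion(rho_room, d_initial, d_heated))  # ~7083 kg/m^3
```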
Procedia PDF Downloads 217
1086 Determining the Effectiveness of Radiation Shielding and Safe Time for Radiation Worker by Employing Monitoring of Accumulation Dose in the Operator Room of CT Scan
Authors: Risalatul Latifah, Bunawas Bunawas, Lailatul Muqmiroh, Anggraini D. Sensusiati
Abstract:
Along with the increasing frequency of the use of CT scans for radiodiagnostic purposes, it is necessary to study radiation protection. This study examined the radiation protection of workers, using thermoluminescent dosimeters (TLDs) to evaluate radiation shielding and estimate the safe working time during CT scan examinations. Six TLDs were placed on the door, wall, and window inside and outside the CT scan room for one month. TLD monitoring showed how much radiation reached the operator room. The results showed that the effective dose at the door, window, and wall was 0.04 mSv, 0.05 mSv, and 0.04 mSv, respectively. With these values, the effectiveness of the radiation shielding of the door, window, and wall could be evaluated as 90.6%, 95.5%, and 92.2%, respectively. By applying the dose constraint and the estimate of the accumulated dose for one month, radiation workers were still safe performing irradiation for 180 patients.
Keywords: CT scan room, TLD, radiation worker, dose constraint
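Shielding effectiveness as quoted above is the fraction of dose removed by the barrier, i.e. (1 − D_outside/D_inside) × 100. A sketch of the arithmetic; the inside-barrier dose of 0.425 mSv below is a hypothetical value chosen only to illustrate the calculation, not a figure from the study:

```python
def shielding_effectiveness(dose_inside, dose_outside):
    """Percentage of the incident dose attenuated by the barrier."""
    return (1.0 - dose_outside / dose_inside) * 100.0

# Hypothetical monthly TLD readings (mSv): in front of and behind the door
print(shielding_effectiveness(0.425, 0.04))  # ~90.6 %
```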
Procedia PDF Downloads 286
1085 Fluctuations of Transfer Factor of the Mixer Based on Schottky Diode
Authors: Alexey V. Klyuev, Arkady V. Yakimov, Mikhail I. Ryzhkin, Andrey V. Klyuev
Abstract:
Fluctuations of Schottky diode parameters in a mixer structure are investigated. These fluctuations manifest in two ways. First, they lead to fluctuations in the transfer factor, which cause amplitude fluctuations in the intermediate-frequency signal. On the basis of measurement data of the 1/f noise of the diode at forward current, the spectrum of relative fluctuations in the transfer factor of the mixer is estimated, and the dependence of this spectrum on the current and on the amplitude of the heterodyne signal is investigated. Second, fluctuations in the diode parameters lead to the occurrence of 1/f noise in the output signal of the mixer. This noise limits the sensitivity of the mixer to the received signal.
Keywords: current-voltage characteristic, fluctuations, mixer, Schottky diode, 1/f noise
Procedia PDF Downloads 584
1084 Estimation of Tensile Strength for Granitic Rocks by Using Discrete Element Approach
Authors: Aliakbar Golshani, Armin Ramezanzad
Abstract:
Tensile strength, an important rock parameter for engineering applications, is difficult to measure directly through physical experiment (i.e., the uniaxial tensile test). Therefore, indirect experimental methods such as the Brazilian test have been taken into consideration, and some relations have been proposed in order to obtain the tensile strength of rocks indirectly. In this research, the three-dimensional Particle Flow Code (PFC3D) software was used to calculate the tensile strength of granitic rocks numerically. First, uniaxial compression tests were simulated and the tensile strength was determined for Inada granite (from a quarry in Kasama, Ibaraki, Japan). Then, by simulating the Brazilian test conditions for Inada granite, the tensile strength was indirectly calculated again. Results show that the numerically calculated tensile strength agrees well with the experimental results obtained from uniaxial tensile tests on Inada granite samples.
Keywords: numerical simulation, particle flow code, PFC, tensile strength, Brazilian test
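In the physical Brazilian test, the indirect tensile strength is conventionally obtained from the failure load P on a disc of diameter D and thickness t as σt = 2P/(πDt). A sketch with an invented specimen (not one of the Inada granite samples):

```python
import math

def brazilian_tensile_strength(load_n, diameter_m, thickness_m):
    """Indirect tensile strength (Pa) from the Brazilian diametral-compression test."""
    return 2.0 * load_n / (math.pi * diameter_m * thickness_m)

# Hypothetical disc: 50 mm diameter, 25 mm thick, failing at 15 kN
sigma_t = brazilian_tensile_strength(15e3, 0.05, 0.025)
print(sigma_t / 1e6)  # ~7.64 MPa
```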
Procedia PDF Downloads 190
1083 Sub-Pixel Level Classification Using Remote Sensing For Arecanut Crop
Authors: S. Athiralakshmi, B.E. Bhojaraja, U. Pruthviraj
Abstract:
In agriculture, remote sensing is applied for monitoring plant development and evaluating physiological processes and growth conditions. Especially valuable are the spatio-temporal aspects of remotely sensed data in detecting crop-state differences and stress situations. In this study, Hyperion imagery is used to classify arecanut crops based on their age, so that the resulting maps can be used for crop yield estimation, irrigation purposes, applying fertilizers, etc. Traditional hard classifiers assign mixed pixels to the dominant classes. The proposed method uses a sub-pixel-level classifier called linear spectral unmixing, available in the ENVI software. It provides the relative abundance of surface materials and the context within a pixel, which may be a potential solution for effectively identifying the land-cover distribution. Validation is done with reference to field spectra collected using a spectroradiometer and ground control points obtained from GPS.
Keywords: FLAASH, hyperspectral remote sensing, linear spectral unmixing, spectral angle mapper classifier
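Linear spectral unmixing models each pixel spectrum as a convex combination of endmember spectra. For two endmembers, the sum-to-one-constrained least-squares solution has a closed form, sketched below with made-up three-band reflectances; ENVI's implementation handles many endmembers and hundreds of Hyperion bands:

```python
def unmix_two_endmembers(pixel, e1, e2):
    """Closed-form sum-to-one unmixing: pixel ~ a1*e1 + (1 - a1)*e2."""
    d = [a - b for a, b in zip(e1, e2)]
    num = sum((p - b) * di for p, b, di in zip(pixel, e2, d))
    den = sum(di * di for di in d)
    a1 = max(0.0, min(1.0, num / den))   # clip to the physically meaningful range
    return a1, 1.0 - a1

# Hypothetical reflectance spectra for young vs. mature arecanut stands
young = [0.10, 0.40, 0.35]
mature = [0.20, 0.25, 0.20]
mixed = [0.14, 0.34, 0.29]   # constructed as 60% young + 40% mature
print(unmix_two_endmembers(mixed, young, mature))  # ~ (0.6, 0.4)
```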
Procedia PDF Downloads 518
1082 Research and Application of the Three-Dimensional Visualization Geological Modeling of Mine
Authors: Bin Wang, Yong Xu, Honggang Qu, Rongmei Liu, Zhenji Gao
Abstract:
Today's mining industry is advancing gradually in a digital and visual direction. Three-dimensional visualization geological modeling of a mine is the digital characterization of a mineral deposit and one of the key technologies of the digital mine. Three-dimensional geological modeling combines geological spatial information management, geological interpretation, geological spatial analysis and prediction, geostatistical analysis, entity content analysis, and graphic visualization in a three-dimensional computer environment, and is used in geological analysis. In this paper, a three-dimensional geological model of an iron mine is constructed using Surpac, the weighting differences between the inverse distance weighting method and ordinary kriging are studied, and the ore body volume and reserves are simulated and calculated using these two methods. Compared with the actual mine reserves, the results are relatively accurate, providing a scientific basis for mine resource assessment, reserve calculation, mining design, and so on.
Keywords: three-dimensional geological modeling, geological database, geostatistics, block model
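Of the two estimators compared, inverse distance weighting is the simpler; a minimal sketch with invented drill-hole grades follows (ordinary kriging additionally requires a fitted variogram model and is not shown):

```python
def idw_estimate(samples, target, power=2.0):
    """samples: list of ((x, y, z), grade); returns the IDW grade at target."""
    num = den = 0.0
    for (x, y, z), grade in samples:
        d2 = (x - target[0]) ** 2 + (y - target[1]) ** 2 + (z - target[2]) ** 2
        if d2 == 0.0:
            return grade               # target coincides with a sample
        w = 1.0 / d2 ** (power / 2.0)  # weight = 1 / distance^power
        num += w * grade
        den += w
    return num / den

# Hypothetical drill-hole composites: (coordinates in m, Fe grade in %)
holes = [((0, 0, 0), 30.0), ((10, 0, 0), 34.0), ((0, 10, 0), 28.0)]
print(idw_estimate(holes, (2.0, 2.0, 0.0)))  # ~30.19, dominated by the nearest hole
```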
Procedia PDF Downloads 67
1081 New Segmentation of Piecewise Linear Regression Models Using Reversible Jump MCMC Algorithm
Authors: Suparman
Abstract:
Piecewise linear regression models are very flexible models for modeling data. When piecewise linear regression models are fitted to data, the parameters are generally unknown. This paper studies the problem of parameter estimation for piecewise linear regression models using the Bayesian method. However, the Bayes estimator cannot be found analytically. To overcome this problem, the reversible jump MCMC algorithm is proposed. The reversible jump MCMC algorithm generates a Markov chain that converges to the posterior distribution of the parameters of the piecewise linear regression models. The resulting Markov chain is used to calculate the Bayes estimator for those parameters.
Keywords: regression, piecewise, Bayesian, reversible jump MCMC
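Reversible jump MCMC is needed because the number of segments is itself unknown. For a fixed single breakpoint, the underlying segmentation problem can be illustrated deterministically by exhaustive search, a much simpler stand-in for the sampler, not the paper's method:

```python
def ols_line(xs, ys):
    """Ordinary least squares fit y = a + b*x; returns (a, b, sum of squared errors)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    sse = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    return a, b, sse

def best_breakpoint(xs, ys):
    """Try every admissible split; return (index, total SSE) minimising the fit error."""
    best = None
    for k in range(2, len(xs) - 1):           # each segment needs at least 2 points
        sse = ols_line(xs[:k], ys[:k])[2] + ols_line(xs[k:], ys[k:])[2]
        if best is None or sse < best[1]:
            best = (k, sse)
    return best

xs = [0, 1, 2, 3, 4, 5, 6, 7]
ys = [0.0, 1.0, 2.0, 3.0, 3.0, 3.0, 3.0, 3.0]   # slope 1, then flat
print(best_breakpoint(xs, ys))  # (3, 0.0) -- split found at the change in slope
```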
Procedia PDF Downloads 519
1080 Digitalization in Aggregate Quarries
Authors: José Eugenio Ortiz, Pierre Plaza, Josefa Herrero, Iván Cabria, José Luis Blanco, Javier Gavilanes, José Ignacio Escavy, Ignacio López-Cilla, Virginia Yagüe, César Pérez, Silvia Rodríguez, Jorge Rico, Cecilia Serrano, Jesús Bernat
Abstract:
The development of Artificial Intelligence services in mining processes, specifically in aggregate quarries, is facilitating automation and improving numerous aspects of operations. Ultimately, AI is transforming the mining industry by improving efficiency, safety, and sustainability. With the ability to analyze large amounts of data and make autonomous decisions, AI offers great opportunities to optimize mining operations and maximize the economic and social benefits of this vital industry. Within the framework of the European DIGIECOQUARRY project, various services were developed for automatic material-quality identification, production estimation, anomaly detection, and prediction of consumption and production, with good results.
Keywords: aggregates, artificial intelligence, automatization, mining operations
Procedia PDF Downloads 86
1079 Energy Analysis of Seasonal Air Conditioning Demand of All Income Classes Using Bottom up Model in Pakistan
Authors: Saba Arif, Anam Nadeem, Roman Kalvin, Tanzeel Rashid, Burhan Ali, Juntakan Taweekun
Abstract:
Currently, the energy crisis is receiving serious attention. Globally, industry and buildings are the major consumers of energy: 72% of total global energy is consumed by residential houses, markets, and commercial buildings. Among appliances, air conditioners are a major consumer of electricity; about 60% of household energy is used for cooling by HVAC units. Estimating energy demand helps determine what changes are needed, whether for estimating the energy required by households or for instituting conservation measures. The bottom-up model is one of the best-known forecasting methods. In the current research, a bottom-up model of air conditioners' energy consumption in all income classes is calculated against seasonal variation and hourly consumption. By comparing the energy consumption of air conditioners across all income classes, the total actual demand and current availability can be seen.
Keywords: air conditioning, bottom-up model, income classes, energy demand
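A bottom-up model aggregates appliance-level consumption upward, per class: number of units × rated power × usage hours. The sketch below uses entirely invented ownership figures and usage assumptions, only to show the structure of such a calculation:

```python
# Bottom-up estimate: units x rated power x daily hours x season length, per class.
classes = {
    # income class: (households, AC ownership rate, AC units per owning household)
    "low":    (1_000_000, 0.05, 1.0),
    "middle": (1_500_000, 0.40, 1.2),
    "high":   (500_000, 0.90, 2.0),
}
ac_kw = 1.5          # assumed average rated power per unit (kW)
hours_per_day = 6    # assumed seasonal-average daily usage
days = 120           # assumed cooling-season length

def seasonal_demand_gwh():
    total_kwh = 0.0
    for households, ownership, units in classes.values():
        n_units = households * ownership * units
        total_kwh += n_units * ac_kw * hours_per_day * days
    return total_kwh / 1e6   # GWh

print(round(seasonal_demand_gwh()))  # 1804 GWh under these invented assumptions
```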
Procedia PDF Downloads 246
1078 “Towards Creating a Safe Future”: An Assessment of the Causes of Flooding in Nsanje District, Lower Shire Malawi
Authors: Davie Hope Moyo
Abstract:
The environment is a combination of two things: resources and hazards. One of the hazards resulting from environmental change is flooding. Floods are among the disasters most feared by people because they have a huge impact on the human population and their environment. In recent years, flooding disasters in the Nsanje district have been increasing in both frequency and magnitude. This study aims to understand the root causes of this phenomenon, focusing on the case of TA Ndamera in the Nsanje district, southern Malawi. People in the Nsanje district face disruption to their day-to-day life because of floods that affect their communities. When floods happen, people lose their property, land, livestock, and even lives. The findings of this study may help the government and other development agencies to put in place mitigation measures that will make Nsanje District resilient to future flood hazards. Data was collected from the area of TA Ndamera in order to assess the causes of flooding in the district. Interviews, transect walks, and researcher observation were conducted to appreciate the topography of the district and to evaluate other factors that make people vulnerable to the impacts of flooding. It was found that flooding in the district is mainly caused by heavy rainfall in the upper Shire, settlements along river banks, deforestation, and the topography of the district in general. The study ends by providing recommended strategies to increase the resilience of the communities to future flood hazards.
The research recommends the development of indigenous knowledge systems to alert people to incoming floods, construction of evacuation centers to ease pressure on schools, savings and insurance schemes, construction of dykes, desilting of rivers, and afforestation.
Keywords: disaster causes, mitigation, safety measures, Nsanje Malawi
Procedia PDF Downloads 80
1077 Perspectives of Renewable Energy in 21st Century in India: Statistics and Estimation
Authors: Manoj Kumar, Rajesh Kumar
Abstract:
With its favourable geographical conditions, the Indian subcontinent is well suited to flourishing renewable energy. Increasing dependence on coal and other conventional sources is driving the world into pollution and depletion of resources. This paper presents statistics on energy consumption and energy generation in the Indian subcontinent, which show energy demand increasingly surpassing energy generation. With the growth in demand for energy, the usage of coal has increased, since the major portion of energy production in India comes from thermal power plants. The increased use of thermal power plants causes pollution and depletion of reserves; hence, a paradigm shift to renewable sources is inevitable. In this work, the capacity and potential of renewable sources in India are analyzed. Based on this analysis, the future potential of these sources is estimated.
Keywords: depletion of reserves, energy consumption and generation, emissions, global warming, renewable sources
Procedia PDF Downloads 429
1076 Electromagnetic Simulation Based on Drift and Diffusion Currents for Real-Time Systems
Authors: Alexander Norbach
Abstract:
This paper describes the use of an advanced simulation environment for electronic systems (microcontrollers, operational amplifiers, and FPGAs). The simulation may be applied to any dynamic system that exhibits diffusion and ionisation behaviour. With an additionally required observer structure, the system performs parallel real-time simulation based on a diffusion model and a state-space representation of the other dynamics. The proposed model may be used for electrodynamic effects, including ionisation effects and eddy current distributions. With the script and proposed method, it is possible to calculate the spatial distribution of the electromagnetic fields in real time. The spatial temperature distribution may also be obtained for further purposes. With this system, uncertainties, unknown initial states, and disturbances may be determined. This provides a more precise estimation of the system states for the required system and, additionally, an estimation of the ionising disturbances that occur due to radiation effects. The results have shown that a system can also be developed and adapted specifically for space systems, with real-time calculation of the radiation effects only. Electronic systems can be damaged by impacts with charged particle flux in space or in a radiation environment. In order to be able to react to these processes, the presence of ionising radiation and the dose must be calculated within a short time. All available sensors shall be used to observe the spatial distributions. From the measured values and the known locations of the sensors, the entire distribution can be calculated retroactively, or more accurately. From this, the type of ionisation and its direct effect on the system can be determined, and preventive measures, up to shutdown, can be activated.
The results show the possibility of performing faster, higher-quality simulations independent of the kind of system, including space systems and radiation environments. The paper additionally gives an overview of the diffusion effects and their mechanisms. For the modelling and derivation of the equations, the extended current equation is used. The quantity K represents the proposed charge-density drift vector. The extended diffusion equation was derived; it shows a quantising character and follows a law similar to the Klein-Gordon equation. These kinds of PDEs (partial differential equations) are analytically solvable given initial distribution conditions (Cauchy problem) and boundary conditions (Dirichlet boundary condition). For a simpler structure, a transfer function for the B- and E-fields was analytically calculated. With known discretised responses g₁(k·Ts) and g₂(k·Ts), the electric current or voltage may be calculated using a convolution; g₁ is the direct function and g₂ is a recursive function. The analytical results are good enough for the calculation of fields with diffusion effects. Within the scope of this work, a model for the consideration of the electromagnetic diffusion effects of arbitrary current waveforms has been developed. The advantage of the proposed calculation of diffusion is its real-time capability, which is not really possible with the FEM programs available today. It makes sense in the further course of research to use these methods and to investigate them thoroughly.
Keywords: advanced observer, electrodynamics, systems, diffusion, partial differential equations, solver
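The response calculation with a direct kernel g₁ and a recursive kernel g₂ can be sketched as a discrete convolution plus feedback. The first-order kernels below are invented for illustration and are not the paper's derived responses:

```python
def response(u, g1, g2):
    """y[k] = sum_m g1[m]*u[k-m] + sum_{m>=1} g2[m]*y[k-m]
    (direct convolution with g1, recursion through past outputs via g2)."""
    y = []
    for k in range(len(u)):
        direct = sum(g1[m] * u[k - m] for m in range(min(k + 1, len(g1))))
        recur = sum(g2[m] * y[k - m] for m in range(1, min(k + 1, len(g2))))
        y.append(direct + recur)
    return y

# Hypothetical first-order system: y[k] = u[k] + 0.5*y[k-1]
u = [1.0, 0.0, 0.0, 0.0]   # impulse input
print(response(u, g1=[1.0], g2=[0.0, 0.5]))  # [1.0, 0.5, 0.25, 0.125]
```

The impulse response decays geometrically, as expected for the assumed first-order recursion; in the paper's setting, g₁ and g₂ would instead come from the analytically calculated transfer functions for the B- and E-fields.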
Procedia PDF Downloads 129
1075 Estimation of Seismic Deformation Demands of Tall Buildings with Symmetric Setbacks
Authors: Amir Alirezaei, Shahram Vahdani
Abstract:
This study estimates the seismic demands of tall buildings with centrally symmetric setbacks using nonlinear time history analysis. Three setback structures, all 60 stories high with setbacks at three levels, are used for evaluation. The effects of the irregularities caused by the setbacks are evaluated by determining the global drift, story displacement, and story drift. Story displacement is normalized by roof displacement and first-story displacement, and story drift is normalized by global drift. All results are calculated at the center of mass in the x and y directions, and the absolute values of these quantities are determined. The results show that increasing vertical irregularity increases the global drift of the structure and enlarges the deformations along the height of the structure. It is also observed that the effects of geometric irregularity on the seismic deformations of setback structures are greater than those of mass irregularity.
Keywords: deformation demand, drift, setback, tall building
Procedia PDF Downloads 422
1074 Prediction of Mechanical Strength of Multiscale Hybrid Reinforced Cementitious Composite
Authors: Salam Alrekabi, A. B. Cundy, Mohammed Haloob Al-Majidi
Abstract:
Novel multiscale hybrid reinforced cementitious composites based on carbon nanotubes (MHRCC-CNT) and carbon nanofibers (MHRCC-CNF) are new types of cement-based material fabricated with micro steel fibers and nanofilaments, featuring superior strain hardening, ductility, and energy absorption. This study focused on establishing models to predict the compressive strength and the direct and splitting tensile strengths of the produced cementitious composites. The analysis was carried out based on the experimental data presented in the authors' previous study, regression analysis, and models available in the literature. The obtained models showed small differences between predictions and target values; experimental verification indicated that the mechanical properties can be estimated with good accuracy.
Keywords: multiscale hybrid reinforced cementitious composites, carbon nanotubes, carbon nanofibers, mechanical strength prediction
Procedia PDF Downloads 160
1073 Ex-Post Export Data for Differentiated Products Revealing the Existence of Product Cycles
Authors: Ranajoy Bhattcharyya
Abstract:
We estimate international product cycles as shifting product spaces by using 1976 to 2010 UN Comtrade data on all differentiated tradable products in all countries. We use a product-space approach to identify the representative product baskets of high-, middle-, and low-income countries and then use these baskets to identify the patterns of change in countries' comparative advantage over time. We find evidence of a product cycle in two senses: first, high-, middle-, and low-income countries differ in comparative advantage, and high-income products migrate to the middle-income basket; a similar pattern is observed between middle- and low-income countries. Our estimation of the lag shows that middle-income countries tend to take up the products of high-income countries quickly, but low-income countries take longer to absorb these products. Thus, the gap between low- and middle-income countries is considerably larger than that between middle- and high-income nations.
Keywords: product cycle, comparative advantage, representative product basket, ex-post data
Procedia PDF Downloads 419
1072 Digital Transformation, Financing Microstructures, and Impact on Well-Being and Income Inequality
Authors: Koffi Sodokin
Abstract:
Financing microstructures are increasingly seen as a means of financial inclusion and of improving overall well-being in developing countries. In practice, digital transformation in finance can accelerate the optimal functioning of financing microstructures, such as household access to microfinance and microinsurance. Widespread household access to finance can reduce income inequality and improve overall well-being. This paper explores the impact of access to digital finance and financing microstructures on household well-being and the reduction of income inequality. To this end, we use propensity score matching, difference-in-differences, and smooth instrumental quantile regression as estimation methods with two periods of survey data. The paper uses the FinScope consumer data (2016) and the Harmonized Living Standards Measurement Study (2018) from Togo in a comparative perspective. The results indicate that access to digital finance, as a cultural game changer, and to financing microstructures improves overall household well-being and contributes significantly to reducing income inequality.
Keywords: financing microstructure, microinsurance, microfinance, digital finance, well-being, income inequality
Procedia PDF Downloads 88
1071 Urea Amperometric Biosensor Based on Entrapment Immobilization of Urease onto a Nanostructured Polypyrrole and Multi-Walled Carbon Nanotube
Authors: Hamide Amani, Afshin FarahBakhsh, Iman Farahbakhsh
Abstract:
In this paper, an amperometric biosensor based on surface-modified polypyrrole (PPy) has been developed for the quantitative estimation of urea in aqueous solutions. Urease (Urs) was incorporated into a bipolymeric substrate consisting of PPy by entrapment in the polymeric matrix; PPy acts as the amperometric transducer in these biosensors. To increase the membrane conductivity, multi-walled carbon nanotubes (MWCNT) were added to the PPy solution. The entrapped MWCNT in the PPy film and the bipolymer layers were prepared for the construction of Pt/PPy/MWCNT/Urs. Two different working-electrode configurations were evaluated to investigate the potential use of the modified membranes in biosensors. The evaluation suggested that the second configuration, composed of an electrode-mediator-(pyrrole and multi-walled carbon nanotube) structure and enzyme, is the best candidate for biosensor applications.
Keywords: urea biosensor, polypyrrole, multi-walled carbon nanotube, urease
Procedia PDF Downloads 327
1070 Kinetic Parameter Estimation from Thermogravimetry and Microscale Combustion Calorimetry
Authors: Rhoda Afriyie Mensah, Lin Jiang, Solomon Asante-Okyere, Xu Qiang, Cong Jin
Abstract:
Flammability analysis of extruded polystyrene (XPS) has become crucial due to its use as an insulation material in energy-efficient buildings. Using the Kissinger-Akahira-Sunose and Flynn-Wall-Ozawa methods, the degradation kinetics of two pure XPS samples from the local market, a red and a grey one, were obtained from thermogravimetric analysis (TG) and microscale combustion calorimetry (MCC) experiments performed under the same heating rates. The experiments showed that the red XPS released more heat than the grey XPS, and both materials exhibited two mass-loss stages. Consequently, the kinetic parameters for the red XPS were higher than those for the grey XPS. A comparative evaluation of the activation energies from MCC and TG showed an insignificant degree of deviation, signifying an equivalent apparent activation energy from both methods. However, different activation-energy profiles, resulting from different chemical pathways, appeared when the dependencies of the activation energies on the extent of conversion for TG and MCC were compared.
Keywords: flammability, microscale combustion calorimetry, thermogravimetric analysis, thermal degradation, kinetic analysis
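The Kissinger-Akahira-Sunose step can be sketched as follows: at a fixed extent of conversion, ln(β/T²) plotted against 1/T is linear with slope −Ea/R, so the apparent activation energy falls out of a straight-line fit. This is a generic sketch; no XPS values from the study are used.

```python
import numpy as np

R = 8.314  # universal gas constant, J/(mol*K)

def kas_activation_energy(heating_rates, temperatures):
    """Kissinger-Akahira-Sunose estimate: fit ln(beta/T^2) vs. 1/T at a
    fixed extent of conversion; the slope of the line is -Ea/R."""
    T = np.asarray(temperatures, dtype=float)
    beta = np.asarray(heating_rates, dtype=float)
    x = 1.0 / T
    y = np.log(beta / T ** 2)
    slope, _intercept = np.polyfit(x, y, 1)
    return -slope * R  # apparent activation energy in J/mol
```

Repeating the fit at each conversion level yields the activation-energy profile whose TG/MCC comparison the abstract describes.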
Procedia PDF Downloads 176
1069 Sparse Unmixing of Hyperspectral Data by Exploiting Joint-Sparsity and Rank-Deficiency
Authors: Fanqiang Kong, Chending Bian
Abstract:
In this work, we exploit two assumed properties of the abundances of the observed signatures (endmembers) in order to reconstruct the abundances from hyperspectral data. The first property is joint sparsity: adjacent pixels can be expressed as different linear combinations of the same materials. The second is rank deficiency: the number of endmembers present in the hyperspectral data is very small compared with the dimensionality of the spectral library, so the abundance matrix of the endmembers is a low-rank matrix. These assumptions lead to an optimization problem for the sparse unmixing model that requires minimizing a combined l2,p-norm and nuclear norm. We propose a variable-splitting and augmented-Lagrangian algorithm to solve this problem. Experimental evaluation on synthetic and real hyperspectral data shows that the proposed method outperforms state-of-the-art algorithms in spectral unmixing accuracy.
Keywords: hyperspectral unmixing, joint-sparse, low-rank representation, abundance estimation
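One way the combined l2,p-plus-nuclear-norm penalty (taking p = 1) could be handled inside a variable-splitting / augmented-Lagrangian scheme is through the two proximal operators below. This is a generic sketch of those subproblems, not the authors' exact algorithm.

```python
import numpy as np

def prox_l21(X, tau):
    """Proximal operator of the l2,1 norm: row-wise soft-thresholding.
    Rows whose l2 norm falls below tau are zeroed, enforcing joint
    sparsity (each row = one library endmember across all pixels)."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return X * scale

def prox_nuclear(X, tau):
    """Proximal operator of the nuclear norm: singular-value
    thresholding, which pushes the abundance matrix toward low rank."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt
```

An ADMM-style loop would alternate these proximal updates with a data-fidelity solve and a dual-variable update.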
Procedia PDF Downloads 259
1068 Dynamic Measurement System Modeling with Machine Learning Algorithms
Authors: Changqiao Wu, Guoqing Ding, Xin Chen
Abstract:
In this paper, ways of modeling dynamic measurement systems are discussed. Specifically, a linear single-input single-output system can be modeled with a shallow neural network, with gradient-based optimization algorithms used to search for the proper coefficients. Methods based on the normal equation and on second-order gradient descent are proposed to accelerate the modeling process, and ways of obtaining better gradient estimates are discussed. It is shown that the mathematical essence of the learning objective is maximum likelihood under Gaussian noise. For conventional gradient descent, mini-batch learning and gradient momentum contribute to faster convergence and enhance model capability. Lastly, experimental results demonstrated the effectiveness of the second-order gradient-descent algorithm and indicated that optimization with the normal equation was the most suitable for linear dynamic models.
Keywords: dynamic system modeling, neural network, normal equation, second order gradient descent
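For the linear case, the normal-equation fit mentioned above amounts to a closed-form least-squares solve, which under Gaussian noise coincides with the maximum-likelihood estimate; a minimal sketch:

```python
import numpy as np

def fit_normal_equation(X, y):
    """Closed-form least-squares coefficients via the normal equation,
    theta = (X^T X)^{-1} X^T y; solve() avoids an explicit inverse.
    Under Gaussian measurement noise this is the maximum-likelihood fit."""
    return np.linalg.solve(X.T @ X, X.T @ y)
```

For a SISO dynamic model, each row of X would collect current and past input (and possibly output) samples; that regressor layout is an assumption here, since the paper does not spell it out.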
Procedia PDF Downloads 124
1067 Spatio-Temporal Analysis and Mapping of Malaria in Thailand
Authors: Krisada Lekdee, Sunee Sammatat, Nittaya Boonsit
Abstract:
This paper proposes a GLMM with spatial and temporal effects for malaria data in Thailand. A Bayesian method is used for parameter estimation via Gibbs-sampling MCMC. A conditional autoregressive (CAR) model is assumed to represent the spatial effects, and the temporal correlation is represented through the covariance matrix of the random effects. The quarterly malaria data were extracted from the Bureau of Epidemiology, Ministry of Public Health of Thailand. The factors considered are rainfall and temperature. The results show that rainfall and temperature are positively related to the malaria morbidity rate. The posterior means of the estimated morbidity rates are used to construct the malaria maps. The five highest morbidity rates (per 100,000 population) are in Trat (Q3, 111.70), Chiang Mai (Q3, 104.70), Narathiwat (Q4, 97.69), Chiang Mai (Q2, 88.51), and Chanthaburi (Q3, 86.82). According to the DIC criterion, the proposed model performs better than a GLMM with spatial effects but without temporal terms.
Keywords: Bayesian method, generalized linear mixed model (GLMM), malaria, spatial effects, temporal correlation
Procedia PDF Downloads 454
1066 A Case Study on the Estimation of Design Discharge for Flood Management in Lower Damodar Region, India
Authors: Susmita Ghosh
Abstract:
The catchment area of the Damodar River, India, experiences seasonal rains due to the south-west monsoon every year, and floods occur depending upon the intensity of the storms. During the monsoon season, the rainfall in the area is mainly due to active monsoon conditions. The upstream reach of the Damodar river system has five dams that store water for various purposes, viz., irrigation, hydro-power generation, municipal supplies, and, last but not least, flood moderation. The downstream reach of the Damodar River, known as the Lower Damodar region, however, suffers severe and frequent flooding due to heavy monsoon rainfall and releases from the upstream reservoirs. Therefore, an effective flood management study, based on mathematical modelling, is required to understand in depth the nature and extent of the flood, waterlogging, and erosion-related problems, the affected area, and the damages in the Lower Damodar region. The design flood discharge must be decided before the respective model can be run to generate several scenarios, the ultimate aim being a sustainable flood management scheme chosen from several alternatives. There are various methods for estimating the flood discharges to be carried through the rivers and their tributaries for quick drainage from areas inundated by drainage congestion and excess rainfall. In the present study, flood frequency analysis is performed to decide the design flood discharge of the study area. This approach is limited by the availability of a long record of peak flood data for correctly identifying the type of probability density function; if sufficient past records are available, the maximum flood on a river with a given frequency can safely be determined. The floods of different frequencies for the Damodar have been calculated with five candidate distributions: generalized extreme value, extreme value type I, Pearson type III, log-Pearson type III, and normal.
The annual peak discharge series is available at Durgapur barrage for the period 1979 to 2013 (35 years) and is subjected to frequency analysis. The primary objective of the flood frequency analysis is to relate the magnitude of extreme events to their frequencies of occurrence through the use of probability distributions. The design floods for return periods of 10, 15, and 25 years at Durgapur barrage are estimated by the flood frequency method. It is necessary to develop flood hydrographs for these floods to facilitate the mathematical model studies of the depth and extent of inundation. The null hypothesis that the distributions fit the data at 95% confidence is checked with a goodness-of-fit test, i.e., the chi-square test. The test reveals that all five distributions show a good fit to the sample population and are therefore accepted. However, there is considerable variation among the flood estimates of the five distributions, so it is considered prudent to average their results for the required frequencies. The inundated area known from past data is well matched using this flood.
Keywords: design discharge, flood frequency, goodness of fit, sustainable flood management
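As an illustration of one of the five candidate distributions, Chow's frequency-factor form of the extreme value type I (Gumbel) fit can be sketched as below; the peak values used in testing the sketch are invented, since the Durgapur series itself is not reproduced in the abstract.

```python
import numpy as np

def gumbel_design_flood(annual_peaks, T):
    """Design discharge for return period T (years) using Chow's
    frequency factor for the extreme value type I (Gumbel)
    distribution: Q_T = mean + K_T * std of the annual peak series."""
    mean = np.mean(annual_peaks)
    std = np.std(annual_peaks, ddof=1)  # sample standard deviation
    # Gumbel frequency factor for return period T
    K = -(np.sqrt(6.0) / np.pi) * (0.5772 + np.log(np.log(T / (T - 1.0))))
    return mean + K * std
```

The other four candidates would each supply their own frequency factor or fitted quantile, and the abstract's approach averages the resulting estimates for each return period.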
Procedia PDF Downloads 200
1065 Vibration Based Structural Health Monitoring of Connections in Offshore Wind Turbines
Authors: Cristobal García
Abstract:
The visual inspection of bolted joints in wind turbines is dangerous, expensive, and impractical: the platform cannot be accessed by workboat in certain sea-state conditions, and transporting maintenance technicians to offshore platforms located far from the coast carries high costs, especially if helicopters are involved. Consequently, wind turbine operators need simpler and less demanding techniques for analysing bolt tightening. Vibration-based structural health monitoring is one of the oldest and most widely used means of monitoring the health of onshore and offshore wind turbines. The core of this work is to find out whether the modal parameters can be used efficiently as key performance indicators (KPIs) for the assessment of joint bolts in a 1:50 scale tower of a floating offshore wind turbine (12 MW). A non-destructive vibration test is used to extract the vibration signals of the towers in different damage states. The procedure can be summarized in three consecutive steps. First, an artificial excitation is introduced by means of a commercial shaker mounted on the top of the tower. Second, the vibration signals of the towers are recorded for 8 s at a sampling rate of 20 kHz using an array of commercial accelerometers (Endevco, 44A16-1032). Third, the natural frequencies, damping, and overall vibration mode shapes are calculated using the software Siemens LMS 16A. Experiments show that the natural frequencies, damping, and mode shapes of the tower depend directly on the fixing conditions of the towers; the variations of these parameters are therefore a good indicator for estimating the static axial force acting in the bolt.
Thus, the proposed vibration-based structural method can potentially be used as a diagnostic tool to evaluate the tightening torques of bolted joints, with the advantages of being an economical, straightforward, and multidisciplinary approach that operation and maintenance technicians can apply to different typologies of connections. In conclusion, TSI, in collaboration with the consortium of the FIBREGY project, is conducting innovative research in which vibrations are utilized to estimate the tightening torque of a 1:50 scale steel-based tower prototype. The findings of this research, carried out in the context of FIBREGY, have multiple implications for assessing bolted-joint integrity in many types of connections, such as tower-to-nacelle, modular, tower-to-column, and tube-to-tube. The EU-funded FIBREGY project (H2020, grant number 952966) will evaluate the feasibility of designing and constructing a new generation of marine renewable energy platforms using lightweight FRP materials in certain structural elements (e.g., tower, floating platform). The FIBREGY consortium is composed of 11 partners specialized in the offshore renewable energy sector and is funded partially by the H2020 program of the European Commission with an overall budget of 8 million euros.
Keywords: SHM, vibrations, connections, floating offshore platform
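As a rough illustration of the second and third steps (recording and extracting natural frequencies), a dominant frequency can be picked from the amplitude spectrum of an accelerometer record. This simple FFT peak-pick stands in for the full modal analysis that the Siemens LMS software performs.

```python
import numpy as np

def dominant_frequency(accel, fs):
    """Estimate the dominant natural frequency of a vibration record as
    the peak of its windowed one-sided amplitude spectrum (DC excluded)."""
    n = len(accel)
    spectrum = np.abs(np.fft.rfft(accel * np.hanning(n)))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs[1 + np.argmax(spectrum[1:])]
```

An 8 s record sampled at 20 kHz, as in the test campaign, gives a frequency resolution of 1/8 s = 0.125 Hz, ample for tracking shifts in the tower's lowest modes as bolt tightening changes.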
Procedia PDF Downloads 124
1064 The Role of Previous Cytomegalovirus Infection in Subsequent Lymphoma Development
Authors: Amalia Ardeljan, Lexi Frankel, Divesh Manjani, Gabriela Santizo, Maximillian Guerra, Omar Rashid
Abstract:
Introduction: Cytomegalovirus (CMV) infection is widespread, affecting 60-70% of people in industrialized countries. CMV has previously been correlated with a higher incidence of Hodgkin lymphoma compared to non-infected persons, but research on prior CMV infection and subsequent lymphoma development remains controversial. With limited evidence, further research is needed to understand the relationship between previous CMV infection and subsequent lymphoma development. This study assessed the association between CMV infection and the subsequent incidence of lymphoma. Methods: A retrospective cohort study (2010-2019) was conducted through a Health Insurance Portability and Accountability Act (HIPAA) compliant national database, using International Classification of Disease (ICD) 9th and 10th revision codes and Current Procedural Terminology (CPT) codes to identify lymphoma diagnoses in a previously CMV-infected population. Patients were matched for age range and Charlson Comorbidity Index (CCI). A chi-squared test was used to assess statistical significance. Results: A total of 14,303 patients were obtained in the CMV-infected group and in the control population (matched by age range and CCI score). Subsequent lymphoma developed at a rate of 11.44% (1,637) in the CMV group and 5.74% (822) in the control group, respectively. The difference was statistically significant, with p = 2.2×10⁻¹⁶ and an odds ratio of 2.696 (95% CI 2.483-2.927). An attempt to stratify the population by antiviral medication exposure was limited by the small number of members exposed to antiviral medication in the control population. Conclusion: This study shows a statistically significant correlation between prior CMV infection and an increased subsequent incidence of lymphoma.
Further exploration is needed to identify the potential carcinogenic mechanism of CMV and whether the results are attributable to confounding bias.
Keywords: cytomegalovirus, lymphoma, cancer, microbiology
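A crude odds ratio with a Wald confidence interval, the kind of summary reported in such cohort comparisons, can be computed from 2x2 counts as below. This is a generic illustration; the study's reported OR of 2.696 reflects its matched design and is not reproduced by a crude 2x2 calculation.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio with a Wald 95% confidence interval from a 2x2
    table: a/b = exposed cases/non-cases, c/d = unexposed cases/non-cases."""
    or_ = (a * d) / (b * c)
    # standard error of log(OR): sqrt of summed reciprocal cell counts
    se = math.sqrt(1.0 / a + 1.0 / b + 1.0 / c + 1.0 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

A chi-squared test on the same table, as used in the study, would then supply the p-value for the association.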
Procedia PDF Downloads 218
1063 Influence of Major Axis on the Aerodynamic Characteristics of Elliptical Section
Authors: K. B. Rajasekarababu, J. Karthik, G. Vinayagamurthy
Abstract:
This paper explains the influence of the major axis on the aerodynamic characteristics of an elliptical section. Many engineering applications, such as offshore structures, bridge piers, civil structures, and pipelines, can be modelled as a circular cylinder, but for flow over complex bodies like submarines, elliptical wings, fuselages, missiles, and rotor blades, parameters such as the axis ratio can influence the wake characteristics and the nature of separation. The influence of the major axis on the flow characteristics of elliptical sections is examined both experimentally and computationally in this study. For this research, four elliptical models with varying major axes (axis ratio AR = 1, 4, 6, 10) are analysed. Experimental work was conducted in a subsonic wind tunnel. Furthermore, the flow characteristics of the elliptical models are predicted with the k-ε turbulence model using a commercial CFD package with a pressure-based transient solver and standard wall conditions. The analysis can be extended to the estimation and comparison of drag coefficients and to fatigue analysis of elliptical sections.
Keywords: elliptical section, major axis, aerodynamic characteristics, k-ε turbulence model
Procedia PDF Downloads 434
1062 SARS-CoV-2 Transmission Risk Factors among Patients from a Metropolitan Community Health Center, Puerto Rico, July 2020 to March 2022
Authors: Juan C. Reyes, Linnette Rodríguez, Héctor Villanueva, Jorge Vázquez, Ivonne Rivera
Abstract:
In July 2020, a private non-profit community health center (HealthProMed) that serves people without a medical insurance plan or with limited resources in one of the most populated areas of San Juan, Puerto Rico, implemented a COVID-19 case investigation and contact-tracing surveillance system. Nursing personnel at the health center completed a computerized case investigation form that was translated, adapted, and modified from CDC's Patient Under Investigation (PUI) form. Between July 13, 2020, and March 17, 2022, a total of 9,233 SARS-CoV-2 tests were conducted at the health center, 16.9% of which were classified as confirmed cases (positive molecular test) and 27.7% as probable cases (positive serologic test). Most of the confirmed cases were females (60.0%), under 20 years old (29.1%), and living in their homes (59.1%). In the 14 days before the onset of symptoms, 26.3% of confirmed cases reported going to the supermarket, 22.4% had contact with a known COVID-19 case, and 20.7% went to work. The most commonly reported symptoms were sore throat (33.4%), runny nose (33.3%), cough (24.9%), and headache (23.2%). The most common preexisting medical conditions among confirmed cases were hypertension (19.3%), chronic lung disease including asthma, emphysema, and COPD (13.3%), and diabetes mellitus (12.8%). Multiple logistic regression analysis revealed that patients who used alcohol frequently during the last two weeks (OR=1.43; 95%CI: 1.15-1.77), those who were in contact with a positive case (OR=1.58; 95%CI: 1.33-1.88), and those who were obese (OR=1.82; 95%CI: 1.24-2.69) were significantly more likely to be a confirmed case after controlling for sociodemographic variables. Implementing a case investigation and contact-tracing component at community health centers can be of great value in the prevention and control of COVID-19 at the community level and could be used in future outbreaks.
Keywords: community health center, Puerto Rico, risk factors, SARS-CoV-2
Procedia PDF Downloads 113
1061 Using Classifiers to Predict Student Outcome at Higher Institute of Telecommunication
Authors: Fuad M. Alkoot
Abstract:
We aim to highlight the benefits of classifier systems, especially in supporting educational management decisions. The paper uses classifiers in an educational application where an outcome is predicted from input parameters that represent various conditions at the institute. We present a classifier system designed using a limited training set, with data for only one semester. The resulting system reproduces previously known outcomes accurately. It is also tested on new input parameters, representing variations of the input conditions, to see what outcome it predicts; given the known expected outcomes for these new inputs, we find that the system predicts correctly. Experiments were conducted on one semester of data from two departments only, Switching and Mathematics. Future work on other departments, with larger training sets and wider input variations, will show additional benefits of classifier systems in supporting management decisions at an educational institute.
Keywords: machine learning, pattern recognition, classifier design, educational management, outcome estimation
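The abstract does not specify the classifier type; a minimal k-nearest-neighbour sketch shows the general predict-from-input-conditions idea. All feature values and labels below are invented for illustration.

```python
from collections import Counter

def knn_predict(train, labels, x, k=3):
    """Minimal k-nearest-neighbour classifier: the predicted outcome is
    the majority label among the k training records whose input
    conditions are closest (squared Euclidean distance) to x."""
    sq_dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    ranked = sorted(range(len(train)), key=lambda i: sq_dist(train[i], x))
    votes = Counter(labels[i] for i in ranked[:k])
    return votes.most_common(1)[0][0]
```

With a small training set such as one semester of records, a simple instance-based method like this is often a reasonable baseline before trying heavier classifier designs.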
Procedia PDF Downloads 275