Search results for: noise estimation
451 Modelling Causal Effects from Complex Longitudinal Data via Point Effects of Treatments
Authors: Xiaoqin Wang, Li Yin
Abstract:
Background and purpose: In many practical settings, one estimates causal effects arising from a complex stochastic process, where a sequence of treatments is assigned to influence a certain outcome of interest, and time-dependent covariates exist between treatments. When covariates are plentiful and/or continuous, statistical modelling is needed to reduce the huge dimensionality of the problem and allow for the estimation of causal effects. Recently, Wang and Yin (Annals of Statistics, 2020) derived a new general formula, which expresses these causal effects in terms of the point effects of treatments in single-point causal inference. As a result, it is possible to conduct the modelling via point effects. The purpose of this work is to study the modelling of these causal effects via point effects. Challenges and solutions: The time-dependent covariates are often influenced by earlier treatments and in turn influence subsequent treatments. Consequently, the standard parameters, i.e., the means of the outcome given all treatments and covariates, are essentially all different (the null paradox). Furthermore, the dimension of the parameters is huge (the curse of dimensionality). Therefore, it can be difficult to conduct the modelling in terms of standard parameters. Instead of standard parameters, we use point effects of treatments to develop a likelihood-based parametric approach to the modelling of these causal effects, and we are able to model the causal effects of a sequence of treatments by modelling a small number of point effects of individual treatments. Achievements: We are able to conduct the modelling of the causal effects of a sequence of treatments in the familiar framework of single-point causal inference. The simulation shows that our method achieves not only an unbiased estimate of the causal effect but also the nominal level of type I error and a low level of type II error in hypothesis testing.
We have applied this method to a longitudinal study of COVID-19 mortality among the Scandinavian countries and found that the Swedish approach performed far worse than the other countries' approaches, with the poor performance largely due to Sweden's early measures during the initial period of the pandemic.
Keywords: causal effect, point effect, statistical modelling, sequential causal inference
Procedia PDF Downloads 205
450 In-Flight Radiometric Performances Analysis of an Airborne Optical Payload
Authors: Caixia Gao, Chuanrong Li, Lingli Tang, Lingling Ma, Yaokai Liu, Xinhong Wang, Yongsheng Zhou
Abstract:
Performance analysis of a remote sensing sensor is required to pursue a range of scientific research and application objectives. Laboratory analysis of any remote sensing instrument is essential, but not sufficient to establish valid in-flight performance. In this study, with the aid of in situ measurements and the corresponding image of a three-gray-scale permanent artificial target, the in-flight radiometric performance analyses (in-flight radiometric calibration, dynamic range and response linearity, signal-to-noise ratio (SNR), and radiometric resolution) of a self-developed short-wave infrared (SWIR) camera are performed. To acquire the in-flight calibration coefficients of the SWIR camera, the at-sensor radiances (Li) for the artificial targets are first simulated with in situ measurements (atmospheric parameters and spectral reflectance of the target) and viewing geometries using the MODTRAN model. With these radiances and the corresponding digital numbers (DN) in the image, a straight line with the formulation L = G × DN + B is fitted by least-squares regression, and the fitted coefficients, G and B, are the in-flight calibration coefficients. The high point (LH) and the low point (LL) of the dynamic range can then be described as LH = G × DNH + B and LL = B, respectively, where DNH is equal to 2n − 1 (n is the quantization number of the payload). Meanwhile, the sensor’s response linearity (δ) is described by the correlation coefficient of the regressed line. The results show that the calibration coefficients G and B are 0.0083 W·sr−1m−2µm−1 and −3.5 W·sr−1m−2µm−1; the low point of the dynamic range is −3.5 W·sr−1m−2µm−1 and the high point is 30.5 W·sr−1m−2µm−1; the response linearity is approximately 99%. Furthermore, an SNR normalization method is used to assess the sensor’s SNR, and the normalized SNR is about 59.6 when the mean radiance is 11.0 W·sr−1m−2µm−1; the radiometric resolution is then calculated to be about 0.1845 W·sr−1m−2µm−1.
Moreover, in order to validate these results, a comparison of the measured radiance with the radiance predicted by a radiative transfer code is performed over four portable artificial targets with reflectances of 20%, 30%, 40%, and 50%, respectively. It is noted that the relative error of the calibration is within 6.6%.
Keywords: calibration and validation site, SWIR camera, in-flight radiometric calibration, dynamic range, response linearity
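The calibration and dynamic-range arithmetic in this abstract can be sketched in a few lines. The DN/radiance pairs below are illustrative stand-ins (the paper's in situ data are not reproduced), chosen to be consistent with the reported coefficients, and a 12-bit quantizer is assumed because it reproduces the reported high point of 30.5 W·sr−1m−2µm−1:

```python
import numpy as np

# Illustrative DN/radiance pairs for a three-gray-scale target, consistent
# with the reported coefficients (G = 0.0083, B = -3.5); the actual
# in situ measurements are not reproduced here.
dn = np.array([500.0, 1800.0, 3300.0])
radiance = np.array([0.65, 11.44, 23.89])  # W·sr^-1·m^-2·µm^-1

# Least-squares fit of L = G*DN + B gives the in-flight calibration coefficients.
G, B = np.polyfit(dn, radiance, 1)

# Dynamic range: low point L_L = B, high point L_H = G*DN_H + B,
# with DN_H = 2^n - 1 for an n-bit quantizer (n = 12 assumed).
n = 12
L_low = B
L_high = G * (2 ** n - 1) + B
```

With these numbers the fit recovers G ≈ 0.0083 and B ≈ −3.5, and the high point evaluates to roughly 30.5, matching the values quoted above.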
Procedia PDF Downloads 270
449 Kernel-Based Double Nearest Proportion Feature Extraction for Hyperspectral Image Classification
Authors: Hung-Sheng Lin, Cheng-Hsuan Li
Abstract:
Over the past few years, kernel-based algorithms have been widely used to extend linear feature extraction methods such as principal component analysis (PCA), linear discriminant analysis (LDA), and nonparametric weighted feature extraction (NWFE) to their nonlinear versions: kernel principal component analysis (KPCA), generalized discriminant analysis (GDA), and kernel nonparametric weighted feature extraction (KNWFE), respectively. These nonlinear feature extraction methods can detect nonlinear directions with the largest nonlinear variance or the largest class separability based on the given kernel function, and they have been applied to improve target detection and image classification for hyperspectral images. Double nearest proportion feature extraction (DNP) can effectively reduce the overlap effect and has good performance in hyperspectral image classification. The DNP structure is an extension of the k-nearest-neighbor technique. For each sample, there are two corresponding nearest proportions of samples: the self-class nearest proportion and the other-class nearest proportion. The term “nearest proportion” as used here considers both local information and more global information. With these settings, the effect of the overlap between the sample distributions can be reduced. Usually, the maximum likelihood estimator and the related unbiased estimator are not ideal estimators in high-dimensional inference problems, particularly in small-sample situations; hence, an improved estimator obtained by shrinkage estimation (regularization) is proposed. Based on the DNP structure, LDA is included as a special case. In this paper, the kernel method is applied to extend DNP to kernel-based DNP (KDNP). In addition to the advantages of DNP, KDNP surpasses DNP in the experimental results.
According to the experiments on real hyperspectral image data sets, the classification performance of KDNP is better than that of PCA, LDA, NWFE, and their kernel versions, KPCA, GDA, and KNWFE.
Keywords: feature extraction, kernel method, double nearest proportion feature extraction, kernel double nearest proportion feature extraction
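The DNP/KDNP formulation itself is specific to this paper, but the kernel substitution it relies on is the same one that turns PCA into KPCA. As a generic sketch of that substitution (synthetic data standing in for hyperspectral pixels; scikit-learn assumed):

```python
import numpy as np
from sklearn.decomposition import PCA, KernelPCA

rng = np.random.default_rng(0)
# Toy "hyperspectral" samples: 100 pixels x 20 spectral bands (synthetic).
X = rng.normal(size=(100, 20))

# Linear PCA finds directions of largest variance; its kernelized
# counterpart (KPCA with an RBF kernel) finds nonlinear directions in the
# kernel-induced feature space. The same substitution underlies GDA,
# KNWFE, and the proposed KDNP.
pca = PCA(n_components=5).fit(X)
Z_lin = pca.transform(X)

kpca = KernelPCA(n_components=5, kernel="rbf", gamma=0.05).fit(X)
Z_ker = kpca.transform(X)
```

Both transforms map each pixel to a 5-dimensional feature vector; only the kernelized version can capture nonlinear class structure.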
Procedia PDF Downloads 344
448 Experimental Evaluation of Foundation Settlement Mitigations in Liquefiable Soils using Press-in Sheet Piling Technique: 1-g Shake Table Tests
Authors: Md. Kausar Alam, Ramin Motamed
Abstract:
The damaging effects of liquefaction-induced ground movements have been frequently observed in past earthquakes, such as the 2010-2011 Canterbury Earthquake Sequence (CES) in New Zealand and the 2011 Tohoku earthquake in Japan. To reduce the consequences of soil liquefaction at shallow depths, various ground improvement techniques have been utilized in engineering practice, among which this research focuses on experimentally evaluating the press-in sheet piling technique. The press-in sheet pile technique eliminates the vibration, hammering, and noise pollution associated with dynamic sheet pile installation methods. Unfortunately, there are limited experimental studies on the press-in sheet piling technique for liquefaction mitigation using 1-g shake table tests in which all the controlling mechanisms of liquefaction-induced foundation settlement, including sand ejecta, can be realistically reproduced. In this study, a series of moderate-scale 1-g shake table experiments was conducted at the University of Nevada, Reno, to evaluate the performance of this technique in liquefiable soil layers. First, a 1/5-scale model was developed based on a recent UC San Diego shake table experiment. The scaled model has a relative density of 50% for the top crust, 40% for the intermediate liquefiable layer, and 85% for the bottom dense layer. Second, a shallow foundation was seated atop the unsaturated sandy soil crust. Third, in a series of tests, sheet piles with variable embedment depths were inserted into the liquefiable soil surrounding the shallow foundation using the press-in technique. The scaled models were subjected to harmonic input motions with amplitude and dominant frequency properly scaled based on the large-scale shake table test. This study assesses the performance of the press-in sheet piling technique in terms of reductions in foundation movements (settlement and tilt) and generated excess pore water pressures.
In addition, this paper discusses the cost-effectiveness and carbon footprint of the studied mitigation measures.
Keywords: excess pore water pressure, foundation settlement, press-in sheet pile, soil liquefaction
Procedia PDF Downloads 97
447 Geomorphometric Analysis of the Hydrologic and Topographic Parameters of the Katsina-Ala Drainage Basin, Benue State, Nigeria
Authors: Oyatayo Kehinde Taofik, Ndabula Christopher
Abstract:
Drainage basins are a central theme in the green economy. The rising challenges of flooding, erosion or sediment transport, and sedimentation threaten the green economy. This has led to increasing emphasis on quantitative analysis of drainage basin parameters for better understanding, estimation, and prediction of fluvial responses and, thus, of associated hazards or disasters. This can be achieved through direct measurement, characterization, parameterization, or modeling. This study applied a Remote Sensing and Geographic Information System approach to the parameterization and characterization of the morphometric variables of the Katsina-Ala basin using a 30 m resolution Shuttle Radar Topography Mission (SRTM) Digital Elevation Model (DEM). This was complemented with topographic and hydrological maps of Katsina-Ala at a scale of 1:50,000. Linear, areal, and relief parameters were characterized. The results of the study show that the Ala and Udene sub-watersheds are 4th and 5th order basins, respectively. The stream network shows a dendritic pattern, indicating homogeneity in texture and a lack of structural control in the study area. The Ala and Udene sub-watersheds have the following values for elongation ratio, circularity ratio, form factor, and relief ratio: 0.48 / 0.39 / 0.35 / 9.97 and 0.40 / 0.35 / 0.32 / 6.0, respectively. They also have drainage texture and ruggedness index values of 0.86 / 0.011 and 1.57 / 0.016, respectively. The study concludes that the two sub-watersheds are elongated, suggesting that they are susceptible to erosion and thus to higher sediment loads in the river channels, which will predispose the watersheds to higher flood peaks. The study also concludes that the sub-watersheds have a very coarse texture, with good permeability of subsurface materials and infiltration capacity, which significantly recharges the groundwater.
The study recommends that efforts be put in place by the Local and State Governments to reduce the size of paved surfaces in these sub-watersheds by implementing a robust agroforestry program at the grassroots level.
Keywords: erosion, flood, mitigation, morphometry, watershed
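The shape indices reported above follow standard geomorphometric definitions (Horton's form factor, Schumm's elongation ratio, Miller's circularity ratio). A minimal sketch with illustrative basin dimensions, not the Katsina-Ala survey values:

```python
import math

def form_factor(area_km2, basin_length_km):
    """Horton's form factor: Rf = A / Lb^2."""
    return area_km2 / basin_length_km ** 2

def elongation_ratio(area_km2, basin_length_km):
    """Schumm's elongation ratio: Re = (2 / Lb) * sqrt(A / pi)."""
    return (2.0 / basin_length_km) * math.sqrt(area_km2 / math.pi)

def circularity_ratio(area_km2, perimeter_km):
    """Miller's circularity ratio: Rc = 4*pi*A / P^2 (equals 1.0 for a circle)."""
    return 4.0 * math.pi * area_km2 / perimeter_km ** 2

# Illustrative basin dimensions (area, basin length, perimeter), not the
# surveyed sub-watershed values:
A, Lb, P = 1200.0, 80.0, 260.0
Re = elongation_ratio(A, Lb)   # values below ~0.5 indicate an elongated basin
Rc = circularity_ratio(A, P)
Rf = form_factor(A, Lb)
```

For the illustrative numbers, Re comes out near 0.49, in the same elongated range as the 0.48 and 0.40 reported for the two sub-watersheds.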
Procedia PDF Downloads 86
446 Paraplegic Dimensions of Asymmetric Warfare: A Strategic Analysis for Resilience Policy Plan
Authors: Sehrish Qayyum
Abstract:
In this age of constant technology, asymmetric warfare cannot be won. An attuned psychometric study confirms that screaming is sometimes more productive than active retaliation against strong adversaries. Asymmetric warfare is a game of nerves and thoughts, with the least vigorous participation risking large anticipated losses. It creates a condition of paraplegia, with partial but permanent immobility, which affects core warfare operations, the response being screams rather than active retaliation. When one's own power is doubted, it gives power to one's own doubt to ruin all planning, even planning done with superlative cost-benefit analysis. A strategically calculated estimation of asymmetric warfare across three chronological periods, from early WWI to WWII, from WWII to the Cold War, and then to the current era, shows that courage makes nations win battles of warriors and battles of comrades. Asymmetric warfare has been the most difficult to fight and survive because it is unexpected and lethal despite preparations. Thought before action may be the best strategy: mixing Regional Security Complex Theory and the OODA loop to develop a Paraplegic Resilience Policy Plan (PRPP) to win asymmetric warfare. PRPP may serve to control and halt the ongoing wave of terrorism, guerrilla warfare, insurgencies, etc. PRPP, along with a strategic work plan, is based on psychometric analysis to deal with any possible war condition and tactic, and to save millions of innocent lives such as those lost in Christchurch, New Zealand in 2019, the November 2015 Paris attacks, and the Berlin market attack in 2016. Getting tangled in self-imposed epistemic dilemmas results in regret becoming the only mode of performance. This is a descriptive psychometric analysis of war conditions, with a generic application of probability tests to find the best possible options and conditions for developing PRPP for any adverse condition possible so far.
Innovation in technology begets innovation in planning, and the action plan serves as a rheostat approach to dealing with asymmetric warfare.
Keywords: asymmetric warfare, psychometric analysis, PRPP, security
Procedia PDF Downloads 136
445 Technical Efficiency in Organic and Conventional Wheat Farms: Evidence from a Primary Survey from Two Districts of Ganga River Basin, India
Authors: S. P. Singh, Priya, Komal Sajwan
Abstract:
With the increasing spread of organic farming in India, the costs, returns, efficiency, and social and environmental sustainability of organic vis-a-vis conventional farming systems have become topics of interest among agricultural scientists, economists, and policy analysts. A study on technical efficiency estimation under these farming systems, particularly in the Ganga River Basin, where the promotion of organic farming is incentivized, can help us understand whether inputs are utilized to their maximum possible level and what measures can be taken to improve efficiency. This paper, therefore, analyses the technical efficiency of wheat farms operating under organic and conventional farming systems. The study is based on a primary survey of 600 farms (300 organic and 300 conventional) conducted in 2021 in two districts located in the Middle Ganga River Basin, India. Technical, managerial, and scale efficiencies of individual farms are estimated by applying the data envelopment analysis (DEA) methodology. The per-hectare value of wheat production is taken as the output variable, and the values of seeds, human labour, machine cost, plant nutrients, farmyard manure (FYM), plant protection, and irrigation charges are considered input variables for estimating the farm-level efficiencies. A post-DEA analysis is conducted using the Tobit regression model to identify the efficiency-determining factors. The results show that technical efficiency is significantly higher in conventional than in organic farming systems, due to a larger gap in scale efficiency than in managerial efficiency. Further, 9.8% of conventional and only 1.0% of organic farms are found to operate at the most productive scale size (MPSS), while 99% of organic and 81% of conventional farms operate under increasing returns to scale (IRS). Organic farms perform well in managerial efficiency, but their technical efficiency is lower than that of conventional farms, mainly due to their relatively smaller scale size.
The paper suggests that technical efficiency in organic wheat farming can be increased by upscaling farm size through incentivizing group/collective farming in clusters.
Keywords: organic, conventional, technical efficiency, determinants, DEA, Tobit regression
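DEA efficiency scores of the kind estimated here are solutions of linear programs. A minimal input-oriented CCR (constant-returns) sketch with SciPy and toy farm data is shown below; the survey's actual input/output variables and the Tobit second stage are not reproduced:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR (constant returns to scale) efficiency of DMU o.

    X: (n, m) input matrix, Y: (n, s) output matrix for n farms (DMUs).
    Solves: min theta  s.t.  sum_j lam_j x_j <= theta * x_o,
                             sum_j lam_j y_j >= y_o,  lam >= 0.
    Returns theta in (0, 1]; theta = 1 means farm o lies on the frontier.
    """
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(n + 1)
    c[0] = 1.0                                  # minimise theta
    A_ub, b_ub = [], []
    for i in range(m):                          # input constraints
        A_ub.append(np.r_[-X[o, i], X[:, i]])
        b_ub.append(0.0)
    for r in range(s):                          # output constraints
        A_ub.append(np.r_[0.0, -Y[:, r]])
        b_ub.append(-Y[o, r])
    bounds = [(None, None)] + [(0, None)] * n   # theta free, lambdas >= 0
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
                  bounds=bounds, method="highs")
    return res.fun

# Toy data: 4 farms, 2 inputs (e.g. labour, nutrients), 1 output (wheat value).
X = np.array([[2.0, 3.0], [4.0, 2.0], [4.0, 4.0], [6.0, 6.0]])
Y = np.array([[1.0], [1.0], [1.0], [1.0]])
scores = [ccr_efficiency(X, Y, o) for o in range(len(X))]
```

In this toy example, the first two farms define the frontier (score 1), while the other two are dominated by convex combinations of them.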
Procedia PDF Downloads 99
444 Case Study: The Analysis of Maturity of West Buru Basin and the Potential Development of Geothermal in West Buru Island
Authors: Kefi Rahmadio, Filipus Armando Ginting, Richard Nainggolan
Abstract:
This research describes the formation of the West Buru Basin and its potential as a geothermal resource. The research area is West Buru Island, which is part of the West Buru Basin. The island is located in Maluku Province, with its capital at Namlea, and is divided into 10 districts, namely Kepala Madan, Airbuaya, Wapelau, Namlea, Waeapo, Batabual, Namrole, Waesama, Leksula, and Ambalau. The formations composing this basin range from the Permian to the Quaternary: the Ghegan Formation, Dalan Formation, Mefa Formation, Kuma Formation, Waeken Formation, Wakatin Formation, Ftau Formation, and Leko Formation. Prospect areas in the geothermal field were determined at the preliminary investigation stage through observation of the manifestations, topography, and structures found around the prospect areas; this was done because no subsurface data are available to delineate the prospect areas more accurately. In the Waeapo area, based on field observation and structural analysis, the geothermal area is approximately 6 km²; with reference to the SNI 'Classification of Geothermal Potential' (No. 03-5012-1999), an area of 1 km² is assumed to yield 12.5 MWe, so the speculative potential of this area is Q = 6 × 12.5 MWe = 75 MWe. In the Bata Bual area, the geothermal prospect covers about 4 km², so its speculative potential is Q = 4 × 12.5 MWe = 50 MWe. In the Kepala Madan area, based on the estimated extent of the manifestations, the prospect area is about 4 km², giving a speculative potential of Q = 4 × 12.5 MWe = 50 MWe. These three areas hold the largest geothermal potential on West Buru Island.
From the above research, it can be concluded that there is geothermal potential on West Buru Island, and further exploration is needed to establish its full extent. The researchers therefore describe the geothermal potential contained in the West Buru Basin, within the scope of West Buru Island; this potential can be utilized for the community of West Buru Island.
Keywords: West Buru basin, West Buru island, potential, Waepo, Bata Bual, Kepala Madan
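The speculative resource arithmetic above is a flat areal yield, and can be sketched directly:

```python
# Speculative geothermal potential per SNI 03-5012-1999 as applied in the
# study: each km^2 of prospect area is assumed to yield 12.5 MWe.
MWE_PER_KM2 = 12.5

def speculative_potential_mwe(area_km2):
    return area_km2 * MWE_PER_KM2

waeapo = speculative_potential_mwe(6)        # 6 km^2 -> 75 MWe
bata_bual = speculative_potential_mwe(4)     # 4 km^2 -> 50 MWe
kepala_madan = speculative_potential_mwe(4)  # 4 km^2 -> 50 MWe
total = waeapo + bata_bual + kepala_madan    # 175 MWe combined
```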
Procedia PDF Downloads 225
443 A Support Vector Machine Learning Prediction Model of Evapotranspiration Using Real-Time Sensor Node Data
Authors: Waqas Ahmed Khan Afridi, Subhas Chandra Mukhopadhyay, Bandita Mainali
Abstract:
The research paper presents a unique approach to evapotranspiration (ET) prediction using a Support Vector Machine (SVM) learning algorithm. The study leverages real-time sensor node data to develop an accurate and adaptable prediction model, addressing the inherent challenges of traditional ET estimation methods. The integration of the SVM algorithm with real-time sensor node data offers great potential to improve the spatial and temporal resolution of ET predictions. In the model development, key input features are measured and computed using mathematical equations such as Penman-Monteith (FAO56) and the soil water balance (SWB), which include soil-environmental parameters such as solar radiation (Rs), air temperature (T), atmospheric pressure (P), relative humidity (RH), wind speed (u2), rain (R), deep percolation (DP), soil temperature (ST), and change in soil moisture (∆SM). The one-year field data are split into three proportions, i.e., train, test, and validation sets, while kernel functions with tuned hyperparameters are used to train and improve the accuracy of the prediction model over multiple iterations. This paper also outlines the existing methods and machine learning techniques for determining evapotranspiration, along with data collection and preprocessing, model construction, and evaluation metrics, highlighting the significance of SVM in advancing the field of ET prediction. The results demonstrate the robustness and high predictive power of the developed model on the basis of performance evaluation metrics (R², RMSE, MAE). The effectiveness of the proposed model in capturing complex relationships within soil and environmental parameters provides insights into its potential applications for water resource management and hydrological ecosystems.
Keywords: evapotranspiration, FAO56, KNIME, machine learning, RStudio, SVM, sensors
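The SVM regression workflow described above can be sketched with scikit-learn. The data below are synthetic stand-ins for the sensor-node features [Rs, T, P, RH, u2] and their FAO56/SWB-derived targets, and the C/epsilon hyperparameters are illustrative rather than the paper's tuned values:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(42)
# Synthetic daily records: [Rs, T, P, RH, u2] -> ET (mm/day), one year.
X = rng.uniform([5, 5, 95, 20, 0.5], [30, 40, 105, 95, 6.0], size=(365, 5))
y = 0.05 * X[:, 0] + 0.04 * X[:, 1] - 0.002 * X[:, 3] + rng.normal(0, 0.1, 365)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# RBF-kernel SVR on standardized inputs; C and epsilon are the kind of
# hyperparameters one would tune over the train/validation splits
# described in the paper.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
model.fit(X_tr, y_tr)
r2 = model.score(X_te, y_te)  # R^2 on the held-out test set
```

The held-out R² plays the role of the paper's performance metrics (R², RMSE, MAE) for model evaluation.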
Procedia PDF Downloads 69
442 Multi-Stage Optimization of Local Environmental Quality by Comprehensive Computer Simulated Person as Sensor for Air Conditioning Control
Authors: Sung-Jun Yoo, Kazuhide Ito
Abstract:
In this study, a comprehensive computer simulated person (CSP), which integrates a computational human model (virtual manikin) and a respiratory tract model (virtual airway), was applied to the estimation of indoor environmental quality. Moreover, an inclusive prediction method was established by integrating computational fluid dynamics (CFD) analysis with the advanced CSP, which is combined with a physiologically based pharmacokinetic (PBPK) model and an unsteady thermoregulation model, for high-accuracy analysis targeting the micro-climate around the human body and the respiratory area. This comprehensive method can estimate not only contaminant inhalation but also the constant interaction in contaminant transfer between indoor spaces, i.e., the target area for indoor air quality (IAQ) assessment, and the respiratory zone for health risk assessment. This study focused on the use of the CSP as an air/thermal quality sensor indoors, i.e., the application of the comprehensive model to the assessment of IAQ and thermal environmental quality. A demonstrative analysis was performed in order to examine the applicability of the comprehensive model to a heating, ventilation, and air conditioning (HVAC) control scheme. The CSP was located at the center of a simple model room with dimensions of 3 m × 3 m × 3 m. Formaldehyde, generated from the floor material, was assumed to be the target contaminant, and flow field, sensible/latent heat, and contaminant transfer analyses in the indoor space were conducted using CFD simulation coupled with the CSP. In this analysis, thermal comfort was evaluated by thermoregulatory analysis, and respiratory exposure risks, represented by the adsorption flux/concentration at the airway wall surface, were estimated by PBPK-CFD hybrid analysis.
These analysis results concerning IAQ and thermal comfort will be fed back to the HVAC control and could be used to find a suitable ventilation rate and energy requirement for the air conditioning system.
Keywords: CFD simulation, computer simulated person, HVAC control, indoor environmental quality
Procedia PDF Downloads 361
441 Effect of Phthalates on Male Infertility: Myth or Truth?
Authors: Rashmi Tomar, A. Srinivasan, Nayan K. Mohanty, Arun K. Jain
Abstract:
Phthalates have been used as additives in industrial products since the 1930s and are universally considered ubiquitous environmental contaminants. The general population is exposed to phthalates through consumer products, as well as diet and medical treatments. Animal studies showing the existence of an association between some phthalates and testicular toxicity have generated public and scientific concern about the potential adverse effects of environmental exposures on male reproductive health. Unprecedented declines in fertility rates and semen quality have been reported during the last half of the 20th century in developed countries, and there is increasing interest in the potential relationship between exposure to environmental contaminants, including phthalates, and human male reproductive health. Phthalates may be associated with altered endocrine function and adverse effects on male reproductive development and function, but human studies are limited. The aim of the present study was the detection of phthalate compounds and the estimation of their metabolites in infertile and fertile males. Blood and urine samples were collected from 150 infertile patients and 75 fertile volunteers recruited through the Department of Urology, Safdarjung Hospital, New Delhi. Blood was collected in separate glass tubes from the antecubital vein of the patients, serum was separated, and phthalate levels in the serum samples were estimated by gas chromatography/mass spectrometry following the detailed NIOSH/OSHA protocol. Urine of infertile and fertile subjects was collected and extracted using a solid-phase extraction method and analyzed by HPLC. In conclusion, to the best of our knowledge, the present study is the first human study to show the presence of phthalates in human serum samples and their metabolites in urine samples.
Significant differences in several phthalates were observed between infertile and fertile healthy individuals.
Keywords: gas chromatography, HPLC, male infertility, phthalates, serum, toxicity, urine
Procedia PDF Downloads 363
440 Dynamic Externalities and Regional Productivity Growth: Evidence from Manufacturing Industries of India and China
Authors: Veerpal Kaur
Abstract:
The present paper investigates the role of dynamic agglomeration externalities in the regional productivity growth of the manufacturing sector in India and China. Taking 2-digit-level manufacturing sector data for the states and provinces of India and China, respectively, for the period 1998-99 to 2011-12, the paper examines the effect of dynamic externalities, namely Marshall-Arrow-Romer (MAR) specialization externalities, Jacobs's diversity externalities, and Porter's competition externalities, on the regional total factor productivity growth (TFPG) of the manufacturing sector in both economies. Regressions have been carried out on pooled data for all 2-digit manufacturing industries for India and China separately. The panel estimation is based on a fixed-effects-by-sector model. The results of the econometric exercise show that, in Indian regional manufacturing, labour-intensive industries benefit from diversity externalities while capital-intensive industries gain more from specialization in terms of TFPG. In China, diversity externalities and competition externalities hold better prospects for regional TFPG in both labour-intensive and capital-intensive industries. However, looking at coastal and non-coastal regions separately, specialization tends to assert a positive effect on TFPG in coastal regions, whereas it has a negative effect on the TFPG of non-coastal regions. Competition externalities exert a negative effect on the TFPG of non-coastal regions but a positive effect on the TFPG of coastal regions. Diversity externalities make a positive contribution to TFPG in both coastal and non-coastal regions. The results of the study therefore suggest that the importance of dynamic externalities should not be examined by pooling all industries and all regions together; this could hold differential implications for region-specific and industry-specific policy formulation.
Other important variables explaining regional-level TFPG in both India and China are the availability of infrastructure, the level of competitiveness, foreign direct investment, exports, and the geographical location of the region (especially in China).
Keywords: China, dynamic externalities, India, manufacturing, productivity
Procedia PDF Downloads 123
439 Accurate Mass Segmentation Using U-Net Deep Learning Architecture for Improved Cancer Detection
Authors: Ali Hamza
Abstract:
Accurate segmentation of breast ultrasound images is of paramount importance in enhancing the diagnostic capabilities of breast cancer detection. This study presents an approach utilizing the U-Net architecture to segment breast ultrasound images, aimed at improving the accuracy and reliability of mass identification within the breast tissue. The proposed method encompasses a multi-stage process. Initially, preprocessing techniques are employed to refine image quality and diminish noise interference. Subsequently, the U-Net architecture, a deep learning convolutional neural network (CNN), is employed for pixel-wise segmentation of regions of interest corresponding to potential breast masses. The U-Net's distinctive architecture, characterized by a contracting and an expansive pathway, enables accurate boundary delineation and detailed feature extraction. To evaluate the effectiveness of the proposed approach, an extensive dataset of breast ultrasound images encompassing diverse cases is employed. Quantitative performance metrics such as the Dice coefficient, Jaccard index, sensitivity, specificity, and Hausdorff distance are employed to comprehensively assess segmentation accuracy. Comparative analyses against traditional segmentation methods showcase the superiority of the U-Net architecture in capturing intricate details and accurately segmenting breast masses. The outcomes of this study emphasize the potential of the U-Net-based segmentation approach in bolstering breast ultrasound image analysis. The method's ability to reliably pinpoint mass boundaries holds promise for aiding radiologists in precise diagnosis and treatment planning. However, further validation and integration within clinical workflows are necessary to ascertain its practical clinical utility and facilitate seamless adoption by healthcare professionals.
In conclusion, leveraging the U-Net architecture for breast ultrasound image segmentation provides a robust framework that can significantly enhance diagnostic accuracy and advance the field of breast cancer detection. This approach represents a pivotal step towards empowering medical professionals with a more potent tool for early and accurate breast cancer diagnosis.
Keywords: image segmentation, U-Net, deep learning, breast cancer detection, diagnostic accuracy, mass identification, convolutional neural network
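The overlap metrics named above have simple definitions on binary masks; a minimal sketch with toy masks standing in for a predicted and a ground-truth breast mass:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) over binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def jaccard_index(pred, target, eps=1e-7):
    """Jaccard (IoU) = |A ∩ B| / |A ∪ B| over binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

# Toy 4x4 masks: the prediction over-segments the ground truth by one column.
gt = np.zeros((4, 4), dtype=np.uint8); gt[1:3, 1:3] = 1
pr = np.zeros((4, 4), dtype=np.uint8); pr[1:3, 1:4] = 1
d = dice_coefficient(pr, gt)   # 2*4 / (6 + 4) = 0.8
j = jaccard_index(pr, gt)      # 4 / 6 ≈ 0.667
```

Both metrics reach 1.0 only for a perfect match; Dice weights the intersection more heavily than Jaccard, which is why it is popular as a segmentation training loss.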
Procedia PDF Downloads 84
438 Predicting the Turbulence Intensity, Excess Energy Available and Potential Power Generated by Building Mounted Wind Turbines over Four Major UK Cities
Authors: Emejeamara Francis
Abstract:
The future of potential wind energy applications within suburban/urban areas currently faces various problems. These include insufficient assessment of the urban wind resource, uncertainty about the effectiveness of commercial gust-control solutions, and the unavailability of effective, affordable tools for scoping potential urban wind applications within built-up environments. To achieve an effective assessment of the potential of urban wind installations, an estimate of the total energy that would be available to them were effective control systems used, and an evaluation of the potential power generated by the wind system, are required. This paper presents a methodology for predicting the power generated by a wind system operating within an urban wind resource. The method was developed by using high-temporal-resolution wind measurements from eight potential sites within the urban and suburban environment as inputs to a vertical axis wind turbine multiple stream tube model. A relationship between the unsteady performance coefficient obtained from the stream tube model results and turbulence intensity is demonstrated. Hence, an analytical methodology for estimating the unsteady power coefficient at a potential turbine site is proposed. This is combined with analytical models developed to predict the wind speed and the excess energy content (EEC) available, in order to estimate the potential power generated by wind systems at different heights within a built environment. Estimates of turbulence intensity, wind speed, EEC, and turbine performance based on the current methodology allow a more complete assessment of the available wind resource and of potential urban wind projects.
This methodology is applied to four major UK cities, namely Leeds, Manchester, London and Edinburgh, and the potential to map the turbine performance at different heights within a typical urban city is demonstrated.
Keywords: small-scale wind, turbine power, urban wind energy, turbulence intensity, excess energy content
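The two site-characterisation quantities above can be computed directly from a high-temporal-resolution wind speed record. The following is a minimal sketch, not the authors' code: turbulence intensity is the standard deviation of wind speed over its mean, and the excess-energy measure used here is one common proxy (the fraction by which the mean cubed speed, proportional to kinetic energy flux, exceeds the cube of the mean speed), which may differ from the paper's exact EEC definition.

```python
import numpy as np

def turbulence_intensity(u):
    """Turbulence intensity: std of wind speed over its mean."""
    u = np.asarray(u, dtype=float)
    return u.std() / u.mean()

def excess_energy_content(u):
    """Proxy for excess energy in gusts: the kinetic energy flux carried
    by the fluctuating record, mean(u^3), relative to the flux implied by
    the mean speed alone, mean(u)^3. Zero for a perfectly steady wind."""
    u = np.asarray(u, dtype=float)
    return (np.mean(u**3) - np.mean(u)**3) / np.mean(u)**3
```

Both quantities vanish for a steady record and grow with gustiness, which is why high temporal resolution measurements are essential for urban siting.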
Procedia PDF Downloads 277
437 Prevalence of Elder Abuse and Effects of Social Factors on It
Authors: Ezat Vahidian, Babak Eshrati
Abstract:
Introduction: Elder abuse, a very complex issue with diverse definitions and names, has been very slow to capture the public eye and public policy since it is manifested at many levels. It requires the involvement of different types of professionals. While elder abuse is not a new phenomenon, the speed of population ageing world-wide is likely to lead to an increase in its incidence and prevalence. Elder abuse has devastating consequences for older persons such as poor quality of life, psychological distress, and loss of property and security. It is also associated with increased mortality and morbidity. Elder abuse is a problem that manifests itself in both rich and poor countries and at all levels of society. Purpose: The purpose of this study is to determine the prevalence of elder abuse and the effects of social factors on it in Markazi Province. Materials and methods: The study population was all of the elders in Markazi Province reachable by geographical address in the table of rural and urban household societies. The study was cross-sectional, with multi-phase sampling: the first phase was stratification into rural and urban areas, and the second was cluster sampling with equal clusters. The estimated sample size was 472 persons, increased to 1,110 persons by the design effect. Data were collected by questionnaire and analyzed with SPSS using the chi-square test. Results: This study showed that 70 persons were abused (42.8% male and 57.2% female); the mean age was 74.7 years. 64% were married and 31% were widowed. There was no significant association between elder abuse and area of living (p = 0.299), occupation (p = 0.104), education (p = 0.358) or age (p = 0.104); there were significant associations with physical impairment (p = 0.08) and movement impairment (p = 0.008). Conclusion: Results verify that maltreatment occurred in the aged persons.
Analysis of the data indicated that elder abuse exists in every socioeconomic group, at every level of education, in urban and rural areas, and among both men and women. The prevalence of elder abuse was 6.3% (70 persons), consistent with data from developed countries based on limited samples.
Keywords: elder abuse, education, occupation, area of living
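The headline prevalence and the chi-square comparisons reported above reduce to a few lines of arithmetic. A minimal sketch, not the study's SPSS workflow (the 2x2 table in the second function is illustrative, and the Wald interval shown is the simplest of several possible interval choices):

```python
import math

def prevalence_ci(cases, n, z=1.96):
    """Point prevalence with a Wald 95% confidence interval."""
    p = cases / n
    se = math.sqrt(p * (1 - p) / n)
    return p, (p - z * se, p + z * se)

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]],
    e.g. abuse status (rows) crossed with area of living (columns)."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# The study's numbers: 70 abused out of 1,110 surveyed.
p, (lo, hi) = prevalence_ci(70, 1110)
```

With the reported counts, `prevalence_ci(70, 1110)` gives about 6.3% with an interval of roughly 4.9-7.7%; a chi-square statistic above 3.84 would correspond to p < 0.05 on one degree of freedom.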
Procedia PDF Downloads 403
436 Advanced Magnetic Field Mapping Utilizing Vertically Integrated Deployment Platforms
Authors: John E. Foley, Martin Miele, Raul Fonda, Jon Jacobson
Abstract:
This paper presents the development and implementation of new and innovative data collection and analysis methodologies based on deployment of total field magnetometer arrays. Our research has focused on the development of a vertically-integrated suite of platforms all utilizing common data acquisition, data processing and analysis tools. These survey platforms include low-altitude helicopters and ground-based vehicles, including robots, for terrestrial mapping applications. For marine settings the sensor arrays are deployed from either a hydrodynamic bottom-following wing towed from a surface vessel or from a towed floating platform for shallow-water settings. Additionally, sensor arrays are deployed from tethered remotely operated vehicles (ROVs) for underwater settings where high maneuverability is required. While the primary application of these systems is the detection and mapping of unexploded ordnance (UXO), these systems are also used for various infrastructure mapping and geologic investigations. For each application, success is driven by the integration of magnetometer arrays, accurate geo-positioning, system noise mitigation, and stable deployment of the system in appropriate proximity to expected targets or features. Each of the systems collects geo-registered data compatible with a web-enabled data management system providing immediate access to data and metadata for remote processing, analysis and delivery of results. This approach allows highly sophisticated magnetic processing methods, including classification based on dipole modeling and remanent magnetization, to be efficiently applied to many projects. This paper also briefly describes the initial development of magnetometer-based detection systems deployed from low-altitude helicopter platforms and the subsequent successful transition of this technology to the marine environment.
Additionally, we present examples from a range of terrestrial and marine settings as well as ongoing research efforts related to sensor miniaturization for unmanned aerial vehicle (UAV) magnetic field mapping applications.
Keywords: dipole modeling, magnetometer mapping systems, sub-surface infrastructure mapping, unexploded ordnance detection
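The dipole modeling mentioned above rests on the standard point-dipole field equation, B = mu0/(4 pi) * (3 (m . r_hat) r_hat - m) / |r|^3. A minimal sketch, not the authors' processing code: a compact target such as a UXO item is fitted by adjusting the moment vector m and source position until the modeled anomaly matches the mapped total field.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def dipole_field(m, r):
    """Magnetic flux density (tesla) of a point dipole with moment m
    (A*m^2), observed at displacement r (m) from the source:
    B = mu0/(4*pi) * (3*(m.rhat)*rhat - m) / |r|^3."""
    m = np.asarray(m, dtype=float)
    r = np.asarray(r, dtype=float)
    rn = np.linalg.norm(r)
    rhat = r / rn
    return MU0 / (4 * np.pi) * (3 * np.dot(m, rhat) * rhat - m) / rn**3
```

On the dipole axis the field reduces to the familiar 2 * mu0 m / (4 pi r^3), a convenient check on any implementation.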
Procedia PDF Downloads 464
435 Estimation of Physico-Mechanical Properties of Tuffs (Turkey) from Indirect Methods
Authors: Mustafa Gok, Sair Kahraman, Mustafa Fener
Abstract:
In rock engineering applications, determining uniaxial compressive strength (UCS), Brazilian tensile strength (BTS), and basic index properties such as density, porosity, and water absorption is crucial for the design of both underground and surface structures. However, obtaining reliable samples for direct testing, especially from rocks that weather quickly and have low strength, is often challenging. In such cases, indirect methods provide a practical alternative to estimate the physical and mechanical properties of these rocks. In this study, tuff samples collected from the Cappadocia region (Nevşehir) in Turkey were subjected to indirect testing methods. Over 100 tests were conducted, using needle penetrometer index (NPI), point load strength index (PLI), and disc shear index (BPI) to estimate the uniaxial compressive strength (UCS), Brazilian tensile strength (BTS), density, and water absorption index of the tuffs. The relationships between the results of these indirect tests and the target physical properties were evaluated using simple and multiple regression analyses. The findings of this research reveal strong correlations between the indirect methods and the mechanical properties of the tuffs. Both uniaxial compressive strength and Brazilian tensile strength could be accurately predicted using NPI, PLI, and BPI values. The regression models developed in this study allow for rapid, cost-effective assessments of tuff strength in cases where direct testing is impractical. These results are particularly valuable for geological engineering applications, where time and resource constraints exist. This study highlights the significance of using indirect methods as reliable predictors of the mechanical behavior of weak rocks like tuffs. Further research is recommended to explore the application of these methods to other rock types with similar characteristics. 
Further research is required to compare the results with those of established direct test methods.
Keywords: Brazilian tensile strength, disc shear strength, indirect methods, tuffs, uniaxial compressive strength
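The simple and multiple regression analyses described above amount to least-squares fits of UCS (or BTS) against the index values. A minimal sketch with hypothetical index readings; the study's own data and fitted coefficients are not reproduced here:

```python
import numpy as np

# Hypothetical index-test readings (NPI, PLI, BPI) and measured UCS (MPa);
# the actual study fit such models to more than 100 tests on Cappadocia tuffs.
X = np.array([[12, 0.8, 4.1], [15, 1.1, 5.0], [18, 1.4, 5.9],
              [21, 1.7, 6.8], [25, 2.1, 8.0]], dtype=float)
ucs = np.array([8.0, 10.5, 13.1, 15.6, 19.0])

A = np.column_stack([np.ones(len(X)), X])       # intercept column first
coef, *_ = np.linalg.lstsq(A, ucs, rcond=None)  # multiple linear regression
pred = A @ coef
r2 = 1.0 - np.sum((ucs - pred) ** 2) / np.sum((ucs - ucs.mean()) ** 2)
```

The coefficient of determination r2 is the same goodness-of-fit figure reported for such regression models; a high value on independent data, not just the fitting data, is what makes the index tests usable as UCS predictors in the field.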
Procedia PDF Downloads 15
434 Impact of Urban Densification on Travel Behaviour: Case of Surat and Udaipur, India
Authors: Darshini Mahadevia, Kanika Gounder, Saumya Lathia
Abstract:
Cities, an outcome of natural growth and migration, are ever-expanding due to urban sprawl. In the Global South, urban areas are experiencing a switch from public transport to private vehicles, coupled with intensified urban agglomeration, leading to frequent, longer commutes by automobiles. This increase in travel distance and motorized vehicle kilometres leads to unsustainable cities. To achieve the nationally pledged GHG emission mitigation goal, the government is prioritizing a modal shift to low-carbon transport modes like mass transit and paratransit. Mixed land use and urban densification are crucial for the economic viability of these projects. Informed by a desktop assessment of mobility plans and in-person primary surveys, the paper explores the challenges around urban densification and travel patterns in two Indian cities of contrasting nature: Surat, a metropolitan industrial city with a 5.9 million population and a very compact urban form, and Udaipur, a heritage city attracting a large international tourist footfall, with limited scope for further densification. Dense, mixed-use urban areas often improve access to basic services and economic opportunities by reducing distances and enabling people who do not own personal vehicles to reach them on foot or by cycle. Yet residents travelling on different modes end up with similar trip lengths, highlighting the non-uniform distribution of land uses and the lack of planned transport infrastructure in the city and in urban-peri-urban networks. Additionally, it is imperative to manage these densities to reduce negative externalities like congestion, air/noise pollution, lack of public spaces, loss of livelihood, etc. The study presents a comparison of the relationship between transport systems and the built form in both cities.
The paper concludes with recommendations for managing densities in urban areas along with promoting low-carbon transport choices like improved non-motorized transport and public transport infrastructure and minimizing personal vehicle usage in the Global South.
Keywords: India, low-carbon transport, travel behaviour, trip length, urban densification
Procedia PDF Downloads 216
433 Evaluation of Genetic Fidelity and Phytochemical Profiling of Micropropagated Plants of Cephalantheropsis obcordata: An Endangered Medicinal Orchid
Authors: Gargi Prasad, Ashiho A. Mao, Deepu Vijayan, S. Mandal
Abstract:
The main objective of the present study was to optimize and develop an efficient protocol for in vitro propagation of a medicinally important orchid, Cephalantheropsis obcordata (Lindl.) Ormerod, along with genetic stability analysis of the regenerated plants. This plant has been traditionally used in Chinese folk medicine, and the decoction of the whole plant is known to possess anticancer activity. Nodal segments used as explants were inoculated on Murashige and Skoog (MS) medium supplemented with various concentrations of isopentenyl adenine (2iP). The rooted plants were successfully acclimatized in the greenhouse with a 100% survival rate. Inter-simple sequence repeat (ISSR) markers were used to assess the genetic fidelity of the in vitro raised plants against the mother plant. The ISSR profiles revealed monomorphic bands and an absence of polymorphism in all in vitro raised plantlets analyzed, confirming the genetic uniformity of the regenerants. Phytochemical analysis was done to compare the antioxidant activities and HPLC fingerprints of 80% aqueous ethanol extracts of the leaves and stem of in vitro and in vivo grown C. obcordata. The extracts were examined for their antioxidant activities by using the free radical 1,1-diphenyl-2-picrylhydrazyl (DPPH) scavenging method, 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) radical scavenging ability, reducing power capacity, and estimation of total phenolic content, flavonoid content and flavonol content. A simplified method for the detection of ascorbic acid, phenolic acid and flavonoid contents was also developed using reversed-phase high-performance liquid chromatography (HPLC). This is the first report on the micropropagation, genetic integrity study and quantitative phytochemical analysis of in vitro regenerated plants of C. obcordata.
Keywords: Cephalantheropsis obcordata, genetic fidelity, ISSR markers, HPLC
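The DPPH and ABTS radical-scavenging assays listed above both reduce to the same percentage-inhibition arithmetic: the relative drop in absorbance of the radical solution after adding the extract. A minimal sketch; the absorbance values in the usage example are hypothetical, not the study's measurements:

```python
def scavenging_percent(abs_control, abs_sample):
    """Radical scavenging activity (%) for DPPH/ABTS-type assays:
    the relative drop in absorbance versus the radical-only control."""
    return (abs_control - abs_sample) / abs_control * 100.0
```

For example, a control absorbance of 0.80 and a sample absorbance of 0.20 corresponds to 75% scavenging; plotting this against extract concentration yields the IC50 values usually reported for such assays.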
Procedia PDF Downloads 156
432 A Study on the Effect of Design Factors of Slim Keyboard’s Tactile Feedback
Authors: Kai-Chieh Lin, Chih-Fu Wu, Hsiang Ling Hsu, Yung-Hsiang Tu, Chia-Chen Wu
Abstract:
With the rapid development of computer technology, the design of computers and keyboards moves towards a trend of slimness. The change in mobile input devices directly influences users’ behavior. Although multi-touch applications allow entering text through a virtual keyboard, the performance, feedback, and comfort of the technology are inferior to those of a traditional keyboard, and while manufacturers launch mobile touch keyboards and projection keyboards, the performance has not been satisfactory. Therefore, this study examined the design factors of slim pressure-sensitive keyboards. The factors were evaluated with objective measures (accuracy and speed) and subjective ratings (operability, recognition, feedback, and difficulty) depending on the shape (circle, rectangle, and L-shaped), thickness (flat, 3 mm, and 6 mm), and actuation force (35±10 g, 60±10 g, and 85±10 g) of the keys. Moreover, MANOVA and Taguchi methods (regarding signal-to-noise ratios) were applied to find the optimal level of each design factor. Participants were divided into two groups by typing speed (threshold: 30 words per minute). Considering the multitude of variables and levels, the experiments were implemented using a fractional factorial design. A representative model of the research samples was established for input task testing. The findings of this study showed that participants with low typing speed primarily relied on vision to recognize the keys, while those with high typing speed relied on tactile feedback, which was affected by the thickness and force of the keys. In the objective and subjective evaluation, a combination of keyboard design factors that might result in higher performance and satisfaction (L-shaped, 3 mm, and 60±10 g) was identified as optimal. The learning curve was analyzed against a traditional standard keyboard to investigate the influence of user experience on keyboard operation.
The research results indicated that the optimal combination provided input performance inferior to that of a standard keyboard. The results could serve as a reference for the development of related products in industry and can be applied broadly to touch devices and input interfaces that people interact with.
Keywords: input performance, mobile device, slim keyboard, tactile feedback
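The Taguchi analysis above ranks factor levels by signal-to-noise ratio. For responses where larger is better (e.g. typing accuracy), the standard formula is S/N = -10 log10((1/n) * sum(1/y_i^2)), in dB; a minimal sketch (the example values are illustrative, not the study's measurements):

```python
import math

def sn_larger_is_better(ys):
    """Taguchi signal-to-noise ratio (dB) for a larger-the-better
    response: -10 * log10 of the mean of 1/y^2 over the trials."""
    return -10.0 * math.log10(sum(1.0 / y**2 for y in ys) / len(ys))
```

The factor level with the highest mean S/N across trials is taken as optimal; the logarithmic form rewards both a high mean response and low variability across repetitions.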
Procedia PDF Downloads 299
431 Comparison of Receiver Operating Characteristic Curve Smoothing Methods
Authors: D. Sigirli
Abstract:
The Receiver Operating Characteristic (ROC) curve is a commonly used statistical tool for evaluating the diagnostic performance of screening and diagnostic tests with continuous or ordinal scale results that aim to predict the probability of the presence or absence of a condition, usually a disease. When the test results are measured as numeric values, sensitivity and specificity can be computed across all possible threshold values which discriminate the subjects as diseased and non-diseased. There are infinitely many possible decision thresholds along the continuum of the test results. The ROC curve presents the trade-off between sensitivity and 1-specificity as the threshold changes. The empirical ROC curve, a non-parametric estimator of the ROC curve, is robust and represents the data accurately. However, especially for small sample sizes, it has a problem of variability, and as it is a step function there can be different false positive rates for a single true positive rate value and vice versa. Moreover, since the estimated ROC curve is jagged while the true ROC curve is smooth, the empirical curve underestimates the true ROC curve. Since the true ROC curve is assumed to be smooth, several smoothing methods have been explored. These include using kernel estimates, using log-concave densities, fitting a specified parametric distribution to the data by maximum likelihood, and using smooth versions of the empirical distribution functions. In the present paper, we aimed to propose a smooth ROC curve estimation based on a boundary-corrected kernel function and to compare the performances of ROC curve smoothing methods for diagnostic test results coming from different distributions in different sample sizes.
We performed a simulation study to compare the performances of the different methods under different scenarios with 1,000 repetitions. The performance of the proposed method was typically better than that of the empirical ROC curve and only slightly worse than the binormal model when the underlying samples were in fact generated from the normal distribution.
Keywords: empirical estimator, kernel function, smoothing, receiver operating characteristic curve
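A kernel-smoothed ROC estimate of the kind compared above can be sketched by replacing each group's empirical survival function with a Gaussian-kernel average. This is a minimal illustration, not the paper's estimator: the boundary correction the paper proposes is omitted, and the fixed bandwidth h is an assumption rather than a data-driven choice.

```python
import numpy as np
from math import erf, sqrt

def _phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def smooth_survival(sample, t, h):
    """Kernel-smoothed estimate of P(X > t): the average of
    Phi((x_i - t) / h) over the sample (Gaussian kernel, bandwidth h)."""
    return sum(_phi((x - t) / h) for x in sample) / len(sample)

def smooth_roc(healthy, diseased, h=0.3, n_grid=200):
    """Smooth (FPR, TPR) pairs over a grid of decision thresholds."""
    lo = min(min(healthy), min(diseased)) - 3.0 * h
    hi = max(max(healthy), max(diseased)) + 3.0 * h
    grid = np.linspace(lo, hi, n_grid)
    fpr = [smooth_survival(healthy, t, h) for t in grid]   # 1 - specificity
    tpr = [smooth_survival(diseased, t, h) for t in grid]  # sensitivity
    return fpr, tpr
```

Unlike the empirical step-function estimate, every threshold now yields a unique, continuously varying (FPR, TPR) pair.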
Procedia PDF Downloads 152
430 A Virtual Set-Up to Evaluate Augmented Reality Effect on Simulated Driving
Authors: Alicia Yanadira Nava Fuentes, Ilse Cervantes Camacho, Amadeo José Argüelles Cruz, Ana María Balboa Verduzco
Abstract:
Augmented reality promises to be present in future driving: its immersive technology can show directions and maps, indicating important places with graphic elements when the car driver requires the information. On the other side, driving is considered a multitasking activity and, for some people, a complex activity in which situations commonly occur that require the immediate attention of the car driver to make decisions that help avoid accidents; therefore, the main aim of the project is the instrumentation of a platform with biometric sensors that allows evaluating driving performance under the influence of augmented reality devices in order to detect the level of attention in drivers, since it is important to know the effect they produce. In this study, the physiological sensors EPOC X (EEG), ECG06 PRO and EMG Myoware are joined in the driving test platform with a Logitech G29 steering wheel and the simulation software City Car Driving, in which the level of traffic can be controlled, as well as the number of pedestrians within the simulation, obtaining driver interaction in real mode; data acquisition for storage is achieved through an MSP430 microcontroller. The sensors provide a continuous analog signal that needs signal conditioning. At this point, a signal amplifier is incorporated because the acquired signals have a sensitive range of 1.25 mm/mV, and filtering removes unwanted frequency bands so that the signal is interpretable and free of noise before being converted from analog to digital for analysis of the drivers' physiological signals; these values are stored in a database.
Based on this compilation, we work on the extraction of signal features and implement K-NN (k-nearest neighbors) classification and decision trees (both supervised learning methods) that enable the study of the data for the identification of patterns and determine, by classification, the different effects of augmented reality on drivers. The expected results of this project include a test platform instrumented with biometric sensors for data acquisition during driving and a database with the required variables to determine the effect caused by augmented reality on people in simulated driving.
Keywords: augmented reality, driving, physiological signals, test platform
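As an illustration of the K-NN step, the sketch below classifies a driver's state from two features. The feature choice (a heart-rate-variability statistic and an EEG band power), the labels, and the training values are assumptions for illustration, not the project's actual pipeline:

```python
import math
from collections import Counter

def knn_predict(train_x, train_y, query, k=3):
    """Label a query point by majority vote among its k nearest
    training samples (Euclidean distance)."""
    neighbors = sorted(zip(train_x, train_y),
                       key=lambda pair: math.dist(pair[0], query))
    votes = Counter(label for _, label in neighbors[:k])
    return votes.most_common(1)[0][0]

# Hypothetical (heart-rate variability, alpha-band power) feature vectors
train_x = [(0.9, 0.8), (1.0, 0.9), (0.8, 1.0),   # attentive drivers
           (0.2, 0.1), (0.3, 0.2), (0.1, 0.3)]   # distracted drivers
train_y = ["attentive"] * 3 + ["distracted"] * 3
```

K-NN needs no training phase beyond storing the labeled feature vectors, which suits an exploratory platform where feature sets are still evolving.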
Procedia PDF Downloads 141
429 Carbon Based Wearable Patch Devices for Real-Time Electrocardiography Monitoring
Authors: Hachul Jung, Ahee Kim, Sanghoon Lee, Dahye Kwon, Songwoo Yoon, Jinhee Moon
Abstract:
We fabricated a wearable patch device, including a novel patch-type flexible dry electrode based on carbon nanofibers (CNFs) and a silicone-based elastomer (MED 6215), for real-time ECG monitoring. There are many methods to make flexible conductive polymers by mixing in metal or carbon-based nanoparticles. In this study, CNFs were selected as the conductive nanoparticles because carbon nanotubes (CNTs) are more difficult to disperse uniformly in elastomer, while silver nanowires are relatively costly and easily oxidized in air. The wearable patch is composed of two parts: dry electrode parts for recording biosignals and sticky patch parts for mounting on the skin. The dry electrode parts were made using a vortexer and baking in a prepared mold. To optimize electrical performance and dispersion uniformity, we developed a unique mixing and baking process. Secondly, the sticky patch parts were made by patterning and detaching from a smooth-surface substrate after spin-coating a soft skin adhesive. In this process, the attachment and detachment strengths of the sticky patch were measured and optimized using a monitoring system. The assembled patch is flexible, stretchable, easily mountable on skin and directly connectable to the system. To evaluate the electrical characteristics and ECG (electrocardiography) recording performance, the wearable patch was tested while varying the CNF concentration and the thickness of the dry electrode. The results showed that CNF concentration and dry electrode thickness were the important variables for obtaining high-quality ECG signals without incidental distractions. A cytotoxicity test was conducted to prove biocompatibility, and a long-term wearing test showed no skin reactions such as itching or erythema. To minimize motion artifacts and line noise, we built a customized wireless, lightweight data acquisition system. ECG signals measured with this system are stable and were successfully monitored simultaneously.
To sum up, the fabricated wearable patch devices can readily be used for real-time ECG monitoring.
Keywords: carbon nanofibers, ECG monitoring, flexible dry electrode, wearable patch
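One simple software approach to the line noise mentioned above is to average the signal over exactly one period of the mains frequency: a moving-average comb filter whose nulls fall on the line tone and its harmonics. A hedged sketch, not the device's actual firmware; the sampling rate and line frequency here are assumptions, and the filter also attenuates some high-frequency ECG content:

```python
import numpy as np

def remove_line_noise(sig, fs, f_line=60.0):
    """Suppress mains interference with a moving-average comb filter:
    averaging over exactly one period of the line frequency nulls the
    line tone and its harmonics. Works best when fs is an integer
    multiple of f_line."""
    n = int(round(fs / f_line))
    kernel = np.ones(n) / n
    return np.convolve(np.asarray(sig, dtype=float), kernel, mode="same")
```

In practice a notch or adaptive filter is the usual choice, but the comb filter shows the principle with no filter-design machinery at all.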
Procedia PDF Downloads 185
428 Segmented Pupil Phasing with Deep Learning
Authors: Dumont Maxime, Correia Carlos, Sauvage Jean-François, Schwartz Noah, Gray Morgan
Abstract:
Context: The concept of the segmented telescope is unavoidable for building extremely large telescopes (ELTs) in the quest for spatial resolution, but it also allows one to fit a large telescope within a reduced volume (JWST) or into an even smaller one (a standard CubeSat). CubeSats have tight constraints on the available computational budget and on payload volume. At the same time, they undergo thermal gradients leading to large and evolving optical aberrations. The pupil segmentation nevertheless comes with an obvious difficulty: co-phasing the different segments. The CubeSat constraints prevent the use of a dedicated wavefront sensor (WFS), making the focal-plane images acquired by the science detector the most practical alternative. Yet, one of the challenges for wavefront sensing is the non-linearity between the image intensity and the phase aberrations. Plus, for Earth observation, the object is unknown and unrepeatable. Recently, several studies have suggested Neural Networks (NN) for wavefront sensing, especially convolutional NNs, which are well known for being non-linear and image-friendly problem solvers. Aims: We study in this paper the prospect of using NNs to measure the phasing aberrations of a segmented pupil from the focal-plane image directly, without dedicated wavefront sensing. Methods: In our application, we take the case of a deployable telescope fitting in a CubeSat for Earth observation, which triples the aperture size (compared to the 10 cm CubeSat standard) and therefore triples the angular resolution capacity. In order to reach the diffraction-limited regime in the visible wavelength, a wavefront error below lambda/50 is typically required. The telescope's focal-plane detector, used for imaging, will be used as a wavefront sensor. In this work, we study a point source, i.e.
the Point Spread Function (PSF) of the optical system, as an input to a VGG-net neural network, an architecture designed for image regression/classification. Results: This approach shows some promising results (about 2 nm RMS of residual WFE, which is below lambda/50, for 40-100 nm RMS of input WFE) with a relatively fast computational time of less than 30 ms, which translates to a small computational burden. These results allow further study of higher aberrations and noise.
Keywords: wavefront sensing, deep learning, deployable telescope, space telescope
Procedia PDF Downloads 104
427 Tri/Tetra-Block Copolymeric Nanocarriers as a Potential Ocular Delivery System of Lornoxicam: Experimental Design-Based Preparation, in-vitro Characterization and in-vivo Estimation of Transcorneal Permeation
Authors: Alaa Hamed Salama, Rehab Nabil Shamma
Abstract:
Introduction: Polymeric micelles that can deliver drugs to intended sites of the eye have attracted much scientific attention recently. The aim of this study was to develop an aqueous-based formulation of drug-loaded polymeric micelles that holds significant promise for ophthalmic drug delivery. This study investigated the synergistic performance of mixed polymeric micelles made of linear and branched poly(ethylene oxide)-poly(propylene oxide) for the more effective encapsulation of lornoxicam (LX) as a hydrophobic model drug. Methods: The co-micellization process of 10% binary systems combining different weight ratios of the highly hydrophilic poloxamers Synperonic® PE/P84 and Synperonic® PE/F127 and the hydrophobic poloxamine counterpart (Tetronic® T701) was investigated by means of photon correlation spectroscopy and cloud point measurements. The drug-loaded micelles were tested for their solubilizing capacity towards LX. Results: Results showed a sharp solubility increase from 0.46 mg/ml up to more than 4.34 mg/ml, representing about a 136-fold increase. An optimized formulation was selected to achieve maximum drug solubilizing power and clarity with the lowest possible particle size. The optimized formulation was characterized by ¹H-NMR analysis, which revealed complete encapsulation of the drug within the micelles. Further investigation by histopathological and confocal laser studies revealed the non-irritant nature and good corneal penetrating power of the proposed nano-formulation. Conclusion: An LX-loaded polymeric nanomicellar formulation was fabricated allowing easy application of the drug in the form of clear eye drops that do not cause blurred vision or discomfort, thus achieving high patient compliance.
Keywords: confocal laser scanning microscopy, histopathological studies, lornoxicam, micellar solubilization
Procedia PDF Downloads 449
426 Disaster Management Supported by Unmanned Aerial Systems
Authors: Agoston Restas
Abstract:
Introduction: This paper describes many initiatives and also shows recent practical examples of using Unmanned Aerial Systems (UAS) to support disaster management. Since the operation of manned aircraft at disasters is usually not only expensive but often impossible as well, in many cases managers forgo aerial activity. UAS can be an alternative and, moreover, cost-effective solution for supporting disaster management. Methods: This article uses a thematic division of UAS applications based on two key elements: one is the time flow of managing disasters, the other is the tactical requirements. Logically, UAS can be used in pre-disaster activity, in activity immediately after the occurrence of a disaster, and in activity after the primary disaster elimination. The paper addresses different disasters, like dangerous material releases, floods, earthquakes, forest fires and human-induced disasters. The research used function analysis, practical experiments, mathematical formulas, economic analysis and also expert estimation. The author gathered international examples and drew on his own experience in this field as well. Results and discussion: An earthquake is a rapidly escalating disaster where, many times, there is no other way to achieve rapid damage assessment than aerial reconnaissance. For special rescue teams, UAS application can help greatly in rapidly selecting locations where enough space remained for victims to survive. Floods are a typical slow-onset disaster. In contrast, managing floods is a very complex and difficult task. It requires continuous monitoring of dykes and of flooded and threatened areas. UAS can greatly help managers keep an area under observation. Forest fires are disasters where the tactical application of UAS is already well developed. It can be used for fire detection, intervention monitoring and also for post-fire monitoring.
In the case of a nuclear accident or hazardous material leakage, UAS can also be a very effective, or even the only, tool for supporting disaster management. The paper shows some efforts using UAS to avoid human-induced disasters in low-income countries as part of health cooperation.
Keywords: disaster management, floods, forest fires, Unmanned Aerial Systems
Procedia PDF Downloads 237
425 Total Plaque Area in Chronic Renal Failure
Authors: Hernán A. Perez, Luis J. Armando, Néstor H. García
Abstract:
Background and aims: Cardiovascular disease rates are very high in patients with chronic renal failure (CRF), but the underlying mechanisms are incompletely understood. Traditional cardiovascular risk factors do not explain the increased risk, and observational studies have observed paradoxical or absent associations between classical risk factors and mortality in dialysis patients. Large randomized controlled trials (the 4D Study, AURORA and the ALERT study) found that statin therapy in CRF does not reduce cardiovascular events. These results may be a consequence of the 'accelerated atherosclerosis' observed in these patients. The objective of this study was to investigate whether carotid total plaque area (TPA), a measure of carotid plaque burden, increases at progressively lower creatinine clearance in patients with CRF. We studied a cohort of patients with CRF not on dialysis, reasoning that risk factor associations might be more easily discerned before end-stage renal disease. Methods: The Blossom DMO Argentina ethics committee approved the study, and informed consent was obtained from each participant. We performed a cohort study in 412 patients with Stage 1, 2 and 3 CRF. Clinical and laboratory data were obtained. TPA was determined using bilateral carotid ultrasonography. The Modification of Diet in Renal Disease estimation formula was used to determine renal function. ANOVA was used when appropriate. Results: The Stage 1 CRF group (n=16, 43±2 yo) had a blood pressure of 123±2/78±2 mmHg, BMI 30±1, LDL cholesterol 145±10 mg/dl, HbA1c 5.8±0.4% and the lowest TPA, 25.8±6.9 mm². Stage 2 CRF (n=231, 50±1 yo) had a blood pressure of 132±1/81±1 mmHg, LDL cholesterol 125±2 mg/dl, HbA1c 6±0.1% and TPA 48±10 mm² (p < 0.05 vs CRF stage 1), while Stage 3 CRF (n=165, 59±1 yo) had a blood pressure of 134±1/81±1, LDL cholesterol 125±3 mg/dl, HbA1c 6±0.1% and TPA 71±6 mm² (p < 0.05 vs CRF stages 1 and 2).
Conclusion: Our data indicate that TPA increases as renal function deteriorates, and that it is not related to LDL cholesterol or triglyceride levels. We suggest that mechanisms other than the classic ones are responsible for the observed excess of cardiovascular disease in CKD patients; finally, determination of total plaque area should be used to measure the effects of anti-atherosclerotic therapy.
Keywords: hypertension, chronic renal failure, atherosclerosis, cholesterol
Procedia PDF Downloads 271
424 Hybrid Knowledge and Data-Driven Neural Networks for Diffuse Optical Tomography Reconstruction in Medical Imaging
Authors: Paola Causin, Andrea Aspri, Alessandro Benfenati
Abstract:
Diffuse Optical Tomography (DOT) is an emergent medical imaging technique which employs near-infrared (NIR) light to estimate the spatial distribution of optical coefficients in biological tissues for diagnostic purposes, in a noninvasive and non-ionizing manner. DOT reconstruction is a severely ill-conditioned problem due to prevalent scattering of light in the tissue. In this contribution, we present our research in adopting hybrid knowledge-driven/data-driven approaches which exploit the existence of well-assessed physical models and build upon them neural networks integrating the availability of data. Namely, since in this context regularization procedures are mandatory to obtain a reasonable reconstruction [1], we explore the use of neural networks as tools to include prior information on the solution. 2. Materials and Methods: The idea underlying our approach is to leverage neural networks to solve PDE-constrained inverse problems of the form

q* = argmin_q D(y, ỹ), (1)

where D is a loss function which typically contains a discrepancy measure (or data fidelity) term plus other possible ad-hoc designed terms enforcing specific constraints. In the context of inverse problems like (1), one seeks the optimal set of physical parameters q, given the set of observations y. Moreover, ỹ is the computable approximation of y, which may as well be obtained from a neural network but also in a classic way via the resolution of a PDE with given input coefficients (forward problem, Fig. 1 box ). Due to the severe ill-conditioning of the reconstruction problem, we adopt a two-fold approach: i) we restrict the solutions (optical coefficients) to lie in a lower-dimensional subspace generated by auto-decoder type networks.
This procedure forms priors on the solution (Fig. 1); ii) we use regularization procedures of the type q̂* = argmin_q D(y, ỹ) + R(q), where R(q) is a regularization functional depending on regularization parameters which can be fixed a priori or learned via a neural network in a data-driven modality. To further improve the generalizability of the proposed framework, we also infuse physics knowledge via soft penalty constraints (Fig. 1) in the overall optimization procedure (Fig. 1). 3. Discussion and Conclusion: DOT reconstruction is severely hindered by ill-conditioning. The combined use of data-driven and knowledge-driven elements is beneficial and allows one to obtain improved results, especially with a restricted dataset and in the presence of variable sources of noise.
Keywords: inverse problem in tomography, deep learning, diffuse optical tomography, regularization
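As a minimal illustration of the regularized formulation q̂* = argmin_q D(y, ỹ) + R(q) (not the authors' code), the sketch below solves a Tikhonov-regularized inverse problem by gradient descent, with a linear operator A standing in for the PDE forward model, a squared data-fidelity term as D, and R(q) = λ‖q‖² with λ fixed a priori:

```python
import numpy as np

# Hypothetical setup: A is a surrogate for the PDE forward operator,
# q_true the "true" optical coefficients, y the noisy observations.
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 20))
q_true = rng.normal(size=20)
y = A @ q_true + 0.01 * rng.normal(size=40)

lam = 0.1                                   # regularization parameter, fixed a priori
q = np.zeros(20)
step = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam)  # safe step for gradient descent
for _ in range(5000):
    # gradient of ||A q - y||^2 / 2 + lam * ||q||^2 / 2
    grad = A.T @ (A @ q - y) + lam * q
    q = q - step * grad

# cross-check against the closed-form Tikhonov solution
q_ref = np.linalg.solve(A.T @ A + lam * np.eye(20), A.T @ y)
print(np.allclose(q, q_ref, atol=1e-4))
```

In the hybrid framework described above, the linear solve would be replaced by a learned forward map or a PDE solver, and R(q) could itself be parameterized by a network rather than fixed.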
Procedia PDF Downloads 74
423 Rapid Fetal MRI Using SSFSE, FIESTA and FSPGR Techniques
Authors: Chen-Chang Lee, Po-Chou Chen, Jo-Chi Jao, Chun-Chung Lui, Leung-Chit Tsang, Lain-Chyr Hwang
Abstract:
Fetal Magnetic Resonance Imaging (MRI) is a challenging task because fetal movements can cause motion artifacts in MR images. The remedy for this problem is to use fast scanning pulse sequences. The Single-Shot Fast Spin-Echo (SSFSE) T2-weighted imaging technique is routinely performed and often used as a gold standard in clinical examinations. Fast Spoiled Gradient-Echo (FSPGR) T1-Weighted Imaging (T1WI) is often used to identify fat, calcification and hemorrhage. Fast Imaging Employing Steady-State Acquisition (FIESTA) is commonly used to identify fetal structures as well as the heart and vessels. The contrast of FIESTA images is related to T1/T2 and differs from that of SSFSE. The advantages and disadvantages of these two scanning sequences for fetal imaging have not yet been clearly demonstrated. This study aimed to compare these three rapid MRI techniques (SSFSE, FIESTA, and FSPGR) for fetal MRI examinations. The image qualities and influencing factors among these three techniques were explored. A 1.5T GE Discovery 450 clinical MR scanner with an eight-channel high-resolution abdominal coil was used in this study. Twenty-five pregnant women were recruited to undergo fetal MRI examinations with SSFSE, FIESTA and FSPGR scanning. Multi-oriented and multi-slice images were acquired. Afterwards, the MR images were interpreted and scored by two senior radiologists. The results showed that both SSFSE and T2W-FIESTA can provide good image quality among these three rapid imaging techniques. Vessel signals on FIESTA images are higher than those on SSFSE images. The Specific Absorption Rate (SAR) of FIESTA is lower than that of the other two techniques, but it is prone to banding artifacts. FSPGR-T1WI yields a lower Signal-to-Noise Ratio (SNR) because it is severely affected by maternal and fetal movements. The scan times for these three scanning sequences were 25 sec (T2W-SSFSE), 20 sec (FIESTA) and 18 sec (FSPGR).
In conclusion, all three rapid MR scanning sequences can produce high-contrast and high-spatial-resolution images. The scan time can be shortened by incorporating parallel imaging techniques, so that the motion artifacts caused by fetal movements can be reduced. A good understanding of the characteristics of these three rapid MRI techniques helps technologists obtain reproducible fetal anatomy images of high quality for prenatal diagnosis.
Keywords: fetal MRI, FIESTA, FSPGR, motion artifact, SSFSE
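The SNR comparison above can be made concrete with a common region-of-interest (ROI) approach: mean signal in a tissue ROI divided by the standard deviation of a background (air) ROI. The sketch below is an assumed, generic workflow on synthetic data, not the study's actual measurement protocol:

```python
import numpy as np

def roi_snr(image, signal_roi, noise_roi):
    """SNR = mean of a tissue ROI / standard deviation of a background-noise ROI."""
    return image[signal_roi].mean() / image[noise_roi].std()

# Synthetic 64x64 image: uniform tissue (signal ~100) plus Gaussian noise,
# with an air/background corner containing noise only.
rng = np.random.default_rng(1)
img = 100.0 + rng.normal(0, 5, size=(64, 64))
img[:16, :16] = rng.normal(0, 5, size=(16, 16))

signal = (slice(32, 48), slice(32, 48))      # tissue ROI
background = (slice(0, 16), slice(0, 16))    # air ROI
print(roi_snr(img, signal, background))
```

With real sequence data, computing this ratio per sequence (SSFSE, FIESTA, FSPGR) over matched ROIs gives a simple quantitative complement to the radiologists' scores.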
Procedia PDF Downloads 530
422 Applications of Hyperspectral Remote Sensing: A Commercial Perspective
Authors: Tuba Zahra, Aakash Parekh
Abstract:
Hyperspectral remote sensing refers to imaging of objects or materials in narrow, contiguous spectral bands. Hyperspectral images (HSI) enable the extraction of spectral signatures for the objects or materials observed. These images contain information about the reflectance of each pixel across the electromagnetic spectrum. Hyperspectral imaging enables the acquisition of data simultaneously in hundreds of spectral bands with narrow bandwidths and can provide detailed contiguous spectral curves that traditional multispectral sensors cannot offer. The contiguous, narrow bandwidth of hyperspectral data facilitates detailed surveying of Earth's surface features, which would otherwise not be possible with the relatively coarse bandwidths acquired by other types of imaging sensors. Hyperspectral imaging thus provides significantly higher spectral and spatial resolution. Several use cases represent the commercial applications of hyperspectral remote sensing, each illustrating just one of the ways that hyperspectral satellite imagery can support operational efficiency in the respective vertical. Some use cases are specific to VNIR bands, while others are specific to SWIR bands. This paper discusses the different commercially viable use cases that are significant for HSI application areas, such as agriculture, mining, oil and gas, defense, environment, and climate, to name a few. Theoretically, there are any number of use cases for each application area, but an attempt has been made to streamline them according to economic feasibility and commercial viability, and to present a review of the literature from this perspective. Some of the specific use cases in agriculture are crop species (sub-variety) detection, soil health mapping, pre-symptomatic crop disease detection, invasive species detection, crop condition optimization, yield estimation, and supply chain monitoring at scale.
Similarly, each of the industry verticals has a specific commercially viable use case that is discussed in the paper in detail.
Keywords: agriculture, mining, oil and gas, defense, environment and climate, hyperspectral, VNIR, SWIR
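To make the notions of a spectral signature and a band-specific use case concrete, the sketch below works on a synthetic reflectance cube (rows × columns × bands); the band indices and the NDVI vegetation index are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Hypothetical hyperspectral cube: 50x50 pixels, 200 narrow bands,
# reflectance values in (0, 1).
rng = np.random.default_rng(2)
cube = rng.uniform(0.0, 1.0, size=(50, 50, 200))

# Spectral signature of one pixel: its reflectance across all 200 bands.
signature = cube[10, 20, :]
assert signature.shape == (200,)

# A simple agriculture-style index, NDVI = (NIR - red) / (NIR + red);
# the red/NIR band indices below are assumptions for this synthetic cube.
red_band, nir_band = 60, 120
red = cube[:, :, red_band]
nir = cube[:, :, nir_band]
ndvi = (nir - red) / (nir + red + 1e-12)   # one index value per pixel
print(ndvi.shape)
```

Real workflows would substitute calibrated sensor bands and domain-specific indices (e.g., mineral absorption features in SWIR for mining), but the per-pixel-spectrum structure is the same.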
Procedia PDF Downloads 79