Search results for: sensor pattern noise
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4903

493 Lithological Mapping and Iron Deposits Identification in El-Bahariya Depression, Western Desert, Egypt, Using Remote Sensing Data Analysis

Authors: Safaa M. Hassan; Safwat S. Gabr, Mohamed F. Sadek

Abstract:

This study addresses lithological mapping and iron oxide detection in the old mine areas of the El-Bahariya Depression, Western Desert, using ASTER and Landsat-8 remote sensing data. Four old iron ore occurrences, namely the El-Gedida, El-Haraa, Ghurabi, and Nasir mine areas, are found in the El-Bahariya area. This study aims to find new high-potential areas for iron mineralization around the El-Bahariya Depression. Image processing methods such as principal component analysis (PCA) and band ratios (b4/b5, b5/b6, b6/b7, and 4/2, 6/7, band 6) were used for lithological identification and mapping, including the iron content in the investigated area. ASTER and Landsat-8 visible and short-wave infrared data were found to help map the ferruginous sandstones, iron oxides, and clay minerals in and around the old mine areas of the El-Bahariya Depression. The Landsat-8 band ratios and principal components of this study showed the distribution of the lithological units well, especially the ferruginous sandstones and iron zones (hematite and limonite), detected probable high-potential areas for iron mineralization that can be exploited in the future, and proved the ability of Landsat-8 and ASTER data to map these features. Minimum Noise Fraction (MNF), Mixture Tuned Matched Filtering (MTMF), and pixel purity index methods, as well as the Spectral Angle Mapper classifier algorithm, successfully discriminated the hematite and limonite content within the iron zones in the study area. Various ASTER image spectra and ASD field spectra of hematite, limonite, and the surrounding rocks were compared and found to be consistent in terms of the presence of absorption features in the range from 1.95 to 2.3 μm for hematite and limonite. The pixel purity index algorithm and two sub-pixel spectral methods, namely Mixture Tuned Matched Filtering (MTMF) and matched filtering (MF), were applied to ASTER bands to delineate iron oxide (hematite and limonite) rich zones within the rock units.
The results were validated in the field by comparing image spectra of spectrally anomalous zones with USGS resampled laboratory spectra of hematite and limonite samples obtained from ASD measurements. A number of iron oxide rich zones, in addition to the main surface exposures of the El-Gedida mine, were confirmed in the field. The proposed method is a successful application of spectral mapping of iron oxide deposits in the exposed rock units (i.e., ferruginous sandstone), and the present approach of combined ASTER and ASD hyperspectral data processing can be used to delineate iron-rich zones within similar geological provinces in any part of the world.
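The band-ratio and Spectral Angle Mapper steps described above can be sketched in a few lines. This is a minimal illustration, not the authors' pipeline; the bands and reference spectrum are assumed to be already loaded as NumPy arrays, and the numbers are made up:

```python
import numpy as np

def band_ratio(num, den, eps=1e-6):
    """Pixel-wise band ratio (e.g., b4/b5); eps avoids division by zero."""
    return num.astype(float) / (den.astype(float) + eps)

def spectral_angle(pixel, reference):
    """Spectral Angle Mapper: angle (radians) between a pixel spectrum
    and a reference spectrum; small angles mean a close spectral match."""
    cos = np.dot(pixel, reference) / (
        np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Toy 3-band pixel compared against a hypothetical iron-oxide reference
pixel = np.array([0.30, 0.22, 0.15])
reference = np.array([0.32, 0.20, 0.14])
angle = spectral_angle(pixel, reference)  # small angle: candidate iron zone
```

In practice the reference would be a resampled USGS/ASD library spectrum of hematite or limonite, and the resulting angle image would be thresholded to flag candidate iron zones.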

Keywords: Landsat-8, ASTER, lithological mapping, iron exploration, Western Desert

Procedia PDF Downloads 135
492 Understanding the Coping Experience of Mothers with Childhood Trauma Histories: A Qualitative Study

Authors: Chan Yan Nok

Abstract:

The present study is a qualitative investigation of the coping experiences of six Hong Kong Chinese mothers with childhood trauma histories, told from their first-person perspective. Expanding beyond the dominant discourse of the “intergenerational transmission of trauma”, this study explores the experiences and meanings of childhood trauma embedded in their narratives through thematic analysis and narrative analysis. The interviewees painted a nuanced picture of their process of coping and trauma resolution: first, acknowledging the trauma; second, feeling safe and starting to tell the story of the trauma; third, feeling and expressing the associated emotions; fourth, clarifying and coping with the impacts of the trauma; fifth, integration and transformation; and sixth, using their new understanding of the experience to live a better life. There was no “end” within the process of trauma resolution; instead, it is an ongoing process with a positive healing trajectory. Analysis of the mothers' stories revealed recurrent themes around continuous self-reflective awareness in the process of trauma coping. Rather than being necessarily negative and detrimental, childhood trauma could highlight the meanings of being a mother and reveal opportunities for continuous personal growth and self-enhancement. Utilizing the sense of inadequacy as a core driver of the trauma recovery process, while developing a heightened awareness of the unfinished business embedded in their “automatic patterns” of behaviors, emotions, and thoughts, can help these mothers become more flexible in formulating new methods for facing future predicaments. Future social work and parent education practices should help mothers deal with unresolved trauma, make sense of the impacts of childhood trauma, and discover the growth embedded in past traumatic experiences.
They should be facilitated in “acknowledging the reality of the trauma”, including understanding the complicated emotions arising from the traumatic experiences and voicing their struggles. In addition, helping these mothers become aware of short-term and long-term trauma impacts (i.e., secondary responses to the trauma) and explore effective coping strategies for “overcoming secondary responses to the trauma” is crucial for their future positive adjustment and transformation. By affirming their coping abilities and the lessons learnt from past experiences, mothers can reduce feelings of shame and powerlessness and enhance their parental capacity.

Keywords: childhood trauma, coping, mothers, self-awareness, self-reflection, trauma resolution

Procedia PDF Downloads 154
491 A Method for Evaluating Gender Equity of Cycling from Rawls Justice Perspective

Authors: Zahra Hamidi

Abstract:

Promoting cycling as an affordable, environmentally friendly mode of transport to replace private car use has been central to sustainable transport policies. Cycling is faster than walking and, combined with public transport, has the potential to extend the opportunities that people can access. In other words, cycling, besides its direct positive health impacts, can improve people's mobility and ultimately their quality of life. The transport literature well supports the close relationship between mobility, quality of life, and well-being. At the same time, inequity in the distribution of access and mobility has been associated with key aspects of injustice and social exclusion. Patterns of social exclusion and inequality in access are also often related to population characteristics such as age, gender, income, health, and ethnic background. Therefore, while investing in transport infrastructure, it is important to consider the equity of the access provided to different population groups. This paper proposes a method to evaluate the equity of cycling in a city from a Rawlsian egalitarian perspective. Since this perspective is concerned with differences between individuals and social groups, the method combines accessibility measures with the Theil index of inequality, which allows capturing the inequalities ‘within’ and ‘between’ groups. The paper specifically focuses on two population characteristics: gender and ethnic background. Following Rawls's equity principles, this paper measures accessibility by bike to a selection of urban activities that can be linked to the concept of social primary goods. Moreover, as a growing number of cities around the world have launched bike-sharing systems (BSS), this paper incorporates both private and public bike networks in the estimation of accessibility levels. Additionally, the typology of bike lanes (separated from or shared with roads), the presence of a bike-sharing system in the network, as well as bike facilities (e.g.
parking racks) have been included in the developed accessibility measures. Application of the proposed method to a real case study, the city of Malmö, Sweden, shows its effectiveness and efficiency. Although the accessibility levels were estimated based only on the gender and ethnic background characteristics of the population, the author suggests that the analysis can be applied to other contexts and further developed using other attributes, such as age, income, or health.
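The within/between decomposition of the Theil index mentioned above can be sketched as follows. This is a generic illustration of the Theil T index, not the paper's exact formulation; the accessibility scores and group labels are hypothetical:

```python
import numpy as np

def theil(x):
    """Theil T index of inequality for positive values x."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    return np.mean((x / mu) * np.log(x / mu))

def theil_decomposition(values, groups):
    """Decompose the Theil T index into within-group and between-group
    components; values are accessibility scores, groups are labels."""
    values = np.asarray(values, dtype=float)
    groups = np.asarray(groups)
    mu = values.mean()
    within = between = 0.0
    for g in np.unique(groups):
        xg = values[groups == g]
        share = xg.sum() / values.sum()   # group share of total access
        within += share * theil(xg)
        between += share * np.log(xg.mean() / mu)
    return within, between

# Toy example: two groups with unequal mean accessibility
vals = [10, 12, 11, 5, 6, 4]
grps = ["a", "a", "a", "b", "b", "b"]
w, b = theil_decomposition(vals, grps)
# w + b recovers the overall Theil index; a large b signals
# inequality 'between' groups (e.g., by gender or ethnic background)
```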

Keywords: accessibility, cycling, equity, gender

Procedia PDF Downloads 400
490 Usability Testing on Information Design through Single-Lens Wearable Device

Authors: Jae-Hyun Choi, Sung-Soo Bae, Sangyoung Yoon, Hong-Ku Yun, Jiyoung Kwahk

Abstract:

This study was conducted to investigate the effect of ocular dominance on recognition performance using a single-lens smart display designed for cycling. A total of 36 bicycle riders who had been cycling consistently were recruited and participated in the experiment. The participants were asked to perform tasks while riding a bicycle on a stationary stand for safety reasons. Independent variables of interest included ocular dominance, bike usage, age group, and information layout. Recognition time (i.e., the time required to identify specific information, measured with an eye-tracker), error rate (i.e., a false answer or failure to identify the information within 5 seconds), and user preference scores were measured, and statistical tests were conducted to identify significant results. Recognition time and error ratio showed significant differences by the ocular dominance factor, while the preference score did not. Recognition time was faster when the single-lens see-through display was worn on the dominant eye (average 1.12 sec) than on the non-dominant eye (average 1.38 sec). The error ratio of the information recognition task was significantly lower when the see-through display was worn on the dominant eye (average 4.86%) than on the non-dominant eye (average 14.04%). The interaction effect of ocular dominance and age group was significant with respect to recognition time and error ratio. The recognition time of users in their 40s was significantly longer than that of the other age groups when the display was placed on the non-dominant eye, while no difference was observed on the dominant eye. The error ratio showed the same pattern. Although no difference was observed for the main effects of ocular dominance and bike usage, the interaction effect between the two variables was significant with respect to preference score.
The preference score of daily bike users was higher when the display was placed on the dominant eye, whereas participants who use bikes for leisure purposes showed the opposite preference pattern. It was found to be more effective and efficient to wear a see-through display on the dominant eye than on the non-dominant eye, although user preference was not affected by ocular dominance. It is recommended to wear a see-through display on the dominant eye, since it is safer, helping the user recognize the presented information faster and more accurately, even if the user may not notice the difference.

Keywords: eye tracking, information recognition, ocular dominance, smart headwear, wearable device

Procedia PDF Downloads 267
489 Research on the Planning Spatial Mode of China's Overseas Industrial Park

Authors: Sidong Zhao, Xingping Wang

Abstract:

Recently, the government of China has provided strong support for the development of overseas industrial parks. The global distribution of China's overseas industrial parks has gradually moved from 'sparks of fire' to 'prairie fires.' This support and expansion have elevated overseas industrial parks into a strategy for constructing China's new open economic system and a typical representative of the 'Chinese wisdom' and 'China's plans' that China has contributed to globalization in the new era under the Belt and Road Initiative. As industrial parks are the basis of work and employment, a basic function of a city (Athens Charter), planning for the development of industrial parks has become a long-term focus of urban planning. Based on research on the planning, and analysis of the present development, of some typical Chinese overseas industrial parks, we found some interesting patterns. First, a large number of China's overseas industrial parks are located in less developed countries. These industrial parks have become significant drivers of the development of the host cities, and even of regions in those countries, especially in terms of local investment, employment, and tax revenue; consequently, the planning and development of overseas industrial parks have received extensive attention. Second, there are problems in a small number of the overseas parks, such as park plans that do not follow the host city's plans and a lack of implementation of the park plans. These problems have hindered the implementation of the planning and the sustainable development of the parks. Third, a unique pattern of spatial development has formed. In the dimension of regional spatial distribution, there are five characteristics: along the coast, along rivers, along main traffic lines and hubs, adjacent to central urban areas, and along regional economic connections.
In the dimension of the spatial relationship between the industrial park and the city, there is a growing and evolving trend of 'separation - integration - union'. In the dimension of the spatial mode of the industrial parks, there are different patterns of development, such as the specialized industrial park, the complex industrial park, the characteristic town, and the new industrial urban area. From the perspective of these trends and spatial modes, future planning of China's overseas industrial parks should emphasize the idea of 'building a city based on the industrial park'. In other words, the development of China's overseas industrial parks should move from being 'driven by policy' to being 'driven by the functions of the city', accelerating the formation of a system of China's overseas industrial parks and integrating the industrial parks with their cities.

Keywords: overseas industrial park, spatial mode, planning, China

Procedia PDF Downloads 192
488 The Relationship between Celebrity Worship and Religiosity: A Study in Turkish Context

Authors: Saadet Taşyürek Demirel, Halide Sena Koçyiğit, Rümeysa Fatma Çetin

Abstract:

Celebrity worship, characterized by excessive admiration and devotion towards public figures, often mirrors elements of religious fervor. This study delves into the intricate connection between celebrity worship and religiosity, particularly within the Turkish cultural context, where Islamic values predominantly shape societal norms. The investigation involves the adaptation of the Celebrity Attitude Scale into Turkish and scrutinizes the interplay between young individuals' religiosity and their extreme adulation of celebrities. Additionally, the study explores potential moderating factors, such as age and gender, that might influence this relationship. A cohort of 197 young adults, aged 19 to 30, participated in this research, responding to self-administered questionnaires that assessed their attitudes towards celebrities using the adapted Celebrity Attitude Scale, along with their self-reported religiosity. The relationship between religiosity and celebrity worship is hypothesized to exhibit a non-linear pattern. Specifically, we expect religiosity to positively predict celebrity worship tendencies among individuals with minimal to moderate religiosity levels. Conversely, a negative association between religiosity and celebrity worship is expected to manifest among participants exhibiting moderate to high levels of religiosity. The findings of this study will contribute to the comprehension of the intricate dynamics between celebrity worship and religiosity, offering insights specifically within the Turkish cultural context. By shedding light on this relationship, the study aims to enhance our understanding of the multifaceted influences that shape individuals' perceptions and behaviors towards both celebrities and religious inclinations. Methodology of the study: A quantitative study will be conducted, using factor analysis and correlational methods.
The factor structure of the scale will be determined with exploratory and confirmatory factor analysis, and its reliability and internal consistency will be assessed. Objectives of the study: This study examines the relationship between religiosity and celebrity worship among young adults in the Turkish context. The other aim of the study is to assess the Turkish validity and reliability of the Celebrity Attitude Scale and contribute it to the literature. Main contributions of the study: The study aims to introduce celebrity worship to the Turkish literature, assess the Celebrity Attitude Scale's reliability in a Turkish sample, explore manifestations of celebrity worship, and examine its link to religiosity. This research addresses the lack of Turkish sources on celebrity worship and extends understanding of the concept.

Keywords: celebrity, worship, religiosity, god

Procedia PDF Downloads 75
487 Religious Fundamentalism Prescribes Requirements for Marriage and Reproduction

Authors: Steven M. Graham, Anne V. Magee

Abstract:

Most world religions have sacred texts and traditions that provide instruction about and definitions of marriage, family, and family duties and responsibilities. Given that religious fundamentalism (RF) is defined as the belief that these sacred texts and traditions are literally and completely true to the exclusion of other teachings, RF should be predictive of the attitudes one holds about these topics. The goals of the present research were to: (1) explore the extent to which people think that men and women can be happy without marriage, a significant sexual relationship, a long-term romantic relationship, and having children; (2) determine the extent to which RF is associated with these beliefs; and (3) determine how RF is associated with considering certain elements of a relationship to be necessary for thinking of that relationship as a marriage. In Study 1, participants completed a reliable and valid measure of RF and answered questions about the necessity of various elements for a happy life. Higher RF scores were associated with the belief that both men and women require marriage, a sexual relationship, a long-term romantic relationship, and children in order to have a happy life. In Study 2, participants completed these same measures, and the pattern of results was replicated when controlling for overall religiosity. That is, RF predicted these beliefs over and above religiosity. Additionally, participants indicated the extent to which a variety of characteristics were necessary to consider a particular relationship to be a marriage. Controlling for overall religiosity, higher RF scores were associated with the belief that the following were required to consider a relationship a marriage: religious sanctification, a sexual component, sexual monogamy, emotional monogamy, family approval, children (or the intent to have them), cohabitation, and shared finances.
Interestingly, and unexpectedly, higher RF scores were correlated with less importance placed on mutual consent in considering a relationship a marriage. RF scores were uncorrelated with the importance placed on legal recognition or lifelong commitment, and these null findings do not appear to be attributable to ceiling effects or a lack of variability. These results suggest that RF constrains views about both the importance of marriage and family in one’s life and the characteristics required to consider a relationship a proper marriage. This could have implications for the mental and physical health of believers high in RF, either positive or negative, depending upon the extent to which their lives correspond to the templates prescribed by RF. Additionally, some of these correlations with RF were substantial enough (> .70) that the relevant items could serve as a brief, unobtrusive measure of RF. Future research will investigate these possibilities.

Keywords: attitudes about marriage, fertility intentions, measurement, religious fundamentalism

Procedia PDF Downloads 115
486 Inter-Complex Dependence of Production Technique and Preforms Construction on the Failure Pattern of Multilayer Homo-Polymer Composites

Authors: Ashraf Nawaz Khan, R. Alagirusamy, Apurba Das, Puneet Mahajan

Abstract:

Thermoplastic-based fibre composites are acquiring market share from conventional thermoset composites. However, replacing a thermoset with a thermoplastic composite has never been an easy task: the inherently high viscosity of thermoplastic resin leads to poor interface properties. In this work, a homo-polymer towpreg is produced through an electrostatic powder spray coating methodology. The produced flexible towpreg offers a short melt-flow distance during consolidation of the laminate, and this reduced melt-flow distance yields a homogeneous fibre/matrix distribution (and low void content) on consolidation. Composite laminates were fabricated with two manufacturing techniques: the conventional film-stacking (FS) technique and the powder-coating (PC) technique. This helps in understanding the distinct responses of the produced laminates under load, since the laminates produced through the two techniques comprise the same constituent fibre and matrix (at a constant fibre volume fraction). The changed behaviour is observed mainly due to the different fibre/matrix configurations within the laminate. Interface adhesion influences the load transfer between the fibre and matrix and therefore influences the elastic, plastic, and failure patterns of the laminates. Moreover, the effect of preform geometry (plain weave and satin weave structures) is also studied for the corresponding composite laminates in terms of various mechanical properties. Fracture analysis is carried out to study the effect of resin at the interlacement points through micro-CT analysis. The PC laminate reveals considerably smaller matrix-rich and matrix-deficient zones in comparison to the FS laminate. Different loads (tensile, shear, fracture toughness, and drop-weight impact tests) are applied to the laminates, and the corresponding damage behaviour is analysed at successive stages of failure. The PC composite shows superior mechanical properties in comparison to the FS composite.
The damage that occurs in the laminate is captured through SEM analysis to identify the prominent modes of failure, such as matrix cracking, fibre breakage, delamination, and debonding.

Keywords: composite, damage, fibre, manufacturing

Procedia PDF Downloads 133
485 Influence of Long-Term Variability in Atmospheric Parameters on Ocean State over the Head Bay of Bengal

Authors: Anindita Patra, Prasad K. Bhaskaran

Abstract:

The atmosphere and ocean form a dynamically linked system that governs the exchange of energy, mass, and gas at the air-sea interface. The exchange of energy takes place in the form of sensible heat, latent heat, and momentum, commonly referred to as fluxes along the atmosphere-ocean boundary. Large-scale features such as the El Niño-Southern Oscillation (ENSO) are a classic example of the interaction mechanism at the air-sea interface that governs the inter-annual variability of the Earth’s climate system. Most importantly, the ocean and atmosphere act in tandem as a coupled system, thereby maintaining the energy balance of the climate system, a manifestation of the coupled air-sea interaction process. The present work is an attempt to understand the long-term variability in atmospheric parameters (from the surface to upper levels) and investigate their role in influencing surface ocean variables; more specifically, the influence of atmospheric circulation variability on mean sea level pressure (SLP) is explored. The study reports a critical examination of both ocean and atmosphere parameters during the monsoon season over the head Bay of Bengal region. A trend analysis has been carried out for several atmospheric parameters, such as air temperature, geopotential height, and omega (vertical velocity), at different vertical levels in the atmosphere (from the surface to the troposphere), covering the period from 1992 to 2012. The Reanalysis 2 dataset from the National Centers for Environmental Prediction-Department of Energy (NCEP-DOE) was used in this study. The study shows that the variability in air temperature and omega corroborates the variation noticed in geopotential height. Further, the study advocates that, in the lower atmosphere, the geopotential heights depict a typical east-west contrast exhibiting a zonal dipole behavior over the study domain.
In addition, the study clearly shows that the variations at different levels in the atmosphere play a pivotal role in supporting the observed dipole pattern, as evidenced by the trends in SLP, the associated surface wind speed, and the significant wave height over the study domain.
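A trend analysis of the kind described, a least-squares slope per parameter and vertical level, can be sketched as follows. The annual series here is synthetic with an imposed trend, purely for illustration; the real analysis would use the NCEP-DOE Reanalysis 2 fields:

```python
import numpy as np

def linear_trend(years, series):
    """Least-squares linear trend (units per year) of an annual series."""
    slope, _intercept = np.polyfit(years, series, 1)
    return slope

years = np.arange(1992, 2013)
# Synthetic air-temperature anomaly series with an imposed 0.02 K/yr trend
rng = np.random.default_rng(0)
temp = 0.02 * (years - years[0]) + rng.normal(0.0, 0.05, years.size)
trend = linear_trend(years, temp)  # recovered slope, close to 0.02 K/yr
```

The same slope computation would be repeated for each parameter (air temperature, geopotential height, omega) at each pressure level to build the vertical trend profiles discussed above.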

Keywords: air temperature, geopotential height, head Bay of Bengal, long-term variability, NCEP reanalysis 2, omega, wind-waves

Procedia PDF Downloads 224
484 A Double-Blind, Randomized, Controlled Trial on N-Acetylcysteine for the Prevention of Acute Kidney Injury in Patients Undergoing Allogeneic Hematopoietic Stem Cell Transplantation

Authors: Sara Ataei, Molouk Hadjibabaie, Amirhossein Moslehi, Maryam Taghizadeh-Ghehi, Asieh Ashouri, Elham Amini, Kheirollah Gholami, Alireza Hayatshahi, Mohammad Vaezi, Ardeshir Ghavamzadeh

Abstract:

Acute kidney injury (AKI) is one of the complications of hematopoietic stem cell transplantation and is associated with increased mortality. N-acetylcysteine (NAC) is a thiol compound with antioxidant and vasodilatory properties that has been investigated for the prevention of AKI in several clinical settings. In the present study, we evaluated the effects of intravenous NAC on the prevention of AKI in allogeneic hematopoietic stem cell transplantation patients. A double-blind randomized placebo-controlled trial was conducted, and 80 patients were recruited to receive 100 mg/kg/day NAC or placebo as an intermittent intravenous infusion from day -6 to day +15. AKI was determined on the basis of the Risk, Injury, Failure, Loss, End-stage renal disease (RIFLE) and Acute Kidney Injury Network (AKIN) criteria as the primary outcome. We assessed urine neutrophil gelatinase-associated lipocalin (uNGAL) on days -6, -3, +3, +9, and +15 as the secondary outcome. Moreover, transplant-related outcomes and NAC adverse reactions were evaluated during the study period. Statistical analysis was performed using appropriate parametric and non-parametric methods, including Kaplan-Meier for AKI and generalized estimating equations for uNGAL. At the end of the trial, data from 72 patients were analyzed (NAC: 33 patients and placebo: 39 patients). Participants of each group were not different considering baseline characteristics. AKI was observed in 18% of NAC recipients and 15% of placebo group patients, and the occurrence pattern was not significantly different (p = 0.73). Moreover, no significant difference was observed between groups for uNGAL measures (p = 0.10). Transplant-related outcomes were similar for both groups, and all patients had successful engraftment. Three patients did not tolerate NAC because of abdominal pain, shortness of breath, and rash with pruritus, and were dropped from the intervention group before transplantation.
However, the frequency of adverse reactions was not significantly different between groups. In conclusion, our findings could not demonstrate any clinical benefit of high-dose NAC, particularly for AKI prevention, in allogeneic hematopoietic stem cell transplantation patients.

Keywords: acute kidney injury, N-acetylcysteine, hematopoietic stem cell transplantation, urine neutrophil gelatinase-associated lipocalin, randomized controlled trial

Procedia PDF Downloads 429
483 Development of a Multi-User Country Specific Food Composition Table for Malawi

Authors: Averalda van Graan, Joelaine Chetty, Malory Links, Agness Mwangwela, Sitilitha Masangwi, Dalitso Chimwala, Shiban Ghosh, Elizabeth Marino-Costello

Abstract:

Food composition data are becoming increasingly important as dealing with food insecurity and malnutrition, in its persistent form of under-nutrition, is now coupled with increasing over-nutrition and its related ailments in the developing world, of which Malawi is not spared. In the absence of a food composition database (FCDB) reflecting our dietary patterns, efforts were made to develop a country-specific FCDB for nutrition practice, research, and programming. The main objective was to develop a multi-user, country-specific food composition database and table from existing published and unpublished scientific literature. A multi-phased approach guided by the project framework was employed. Phase 1 comprised a scoping mission to assess the nutrition landscape for compilation activities. Phase 2 involved training a compiler and collecting data from various sources, primarily institutional libraries, online databases, and food industry nutrient data. Phase 3 covered the evaluation and compilation of data using FAO and INFOODS standards and guidelines. Phase 4 concluded the process with quality assurance. A total of 316 Malawian food items, categorized into eight food groups and covering 42 components, were captured. The majority were from the baby food group (27%), followed by the staple (22%) and animal (22%) food groups. Fats and oils comprised the fewest food items (2%), followed by fruits (6%). Proximate values are well represented; however, the percentage of missing data is large for some components, including Se (68%), I (75%), vitamin A (42%), and the lipid profile: saturated fat (53%), monounsaturated fat (59%), polyunsaturated fat (59%), and cholesterol (56%). The multi-phased approach following the project framework led to the development of the first Malawian FCDB and food composition table. The table reflects inherent Malawian dietary patterns and nutritional concerns, and the FCDB can be used by various professionals in nutrition and health.
Rising over-nutrition, non-communicable diseases (NCDs), and changing diets challenge us to provide nutrient profiles of processed foods and complete lipid profiles.

Keywords: analytical data, dietary pattern, food composition data, multi-phased approach

Procedia PDF Downloads 86
482 Detection of Curvilinear Structure via Recursive Anisotropic Diffusion

Authors: Sardorbek Numonov, Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Dongeun Choi, Byung-Woo Hong

Abstract:

The detection of curvilinear structures often plays an important role in the analysis of images. In particular, it is considered a crucial step in the diagnosis of chronic respiratory diseases to localize the fissures in chest CT imagery, where the lung is divided into five lobes by fissures that are characterized by linear features in appearance. However, the characteristic linear features of the fissures are often subtle due to the high intensity variability, pathological deformation, or image noise involved in the imaging procedure, which leads to uncertainty in the quantification of anatomical or functional properties of the lung. Thus, it is desirable to enhance the linear features present in chest CT images so that the distinctiveness of the lobe delineation is improved. We propose a recursive diffusion process that prefers coherent features based on an anisotropic analysis of the structure tensor. The local image features associated with certain scales and directions can be characterized by the eigenanalysis of the structure tensor, which is often regularized via isotropic diffusion filters. However, the isotropic diffusion filters involved in the computation of the structure tensor generally blur geometrically significant structures, degrading the discriminative power of the features in the feature space. Thus, it is necessary to take into consideration the local scale and direction of the features when computing the structure tensor. We therefore apply an anisotropic diffusion, accounting for the scale and direction of the features, in the computation of the structure tensor; the eigenanalysis of this tensor provides the geometrical structure of the features and determines the shape of the anisotropic diffusion kernel.
The recursive application of the anisotropic diffusion with the kernel the shape of which is derived from the structure tensor leading to the anisotropic scale-space where the geometrical features are preserved via the eigenanalysis of the structure tensor computed from the diffused image. The recursive interaction between the anisotropic diffusion based on the geometry-driven kernels and the computation of the structure tensor that determines the shape of the diffusion kernels yields a scale-space where geometrical properties of the image structure are effectively characterized. We apply our recursive anisotropic diffusion algorithm to the detection of curvilinear structure in the chest CT imagery where the fissures present curvilinear features and define the boundary of lobes. It is shown that our algorithm yields precise detection of the fissures while overcoming the subtlety in defining the characteristic linear features. The quantitative evaluation demonstrates the robustness and effectiveness of the proposed algorithm for the detection of fissures in the chest CT in terms of the false positive and the true positive measures. The receiver operating characteristic curves indicate the potential of our algorithm as a segmentation tool in the clinical environment. This work was supported by the MISP(Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by the IITP(Institute for Information and Communications Technology Promotion).
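The eigenanalysis at the heart of this method can be sketched in a few lines (a minimal illustration using numpy only, not the authors' implementation: the simple box smoothing stands in for the tensor regularization, which the paper replaces with an anisotropic step, and all function names are illustrative):

```python
import numpy as np

def structure_tensor(img, smooth=1):
    # Central-difference gradients of a 2-D grayscale image.
    gy, gx = np.gradient(img)
    Jxx, Jxy, Jyy = gx * gx, gx * gy, gy * gy
    # Box smoothing as a stand-in for the tensor regularization step.
    k = 2 * smooth + 1
    def box(a):
        p = np.pad(a, smooth, mode="edge")
        out = np.zeros_like(a)
        for dy in range(k):
            for dx in range(k):
                out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
        return out / (k * k)
    return box(Jxx), box(Jxy), box(Jyy)

def coherence(img):
    Jxx, Jxy, Jyy = structure_tensor(img)
    # Eigenvalues of the 2x2 tensor at every pixel, largest first.
    tr, det = Jxx + Jyy, Jxx * Jyy - Jxy * Jxy
    disc = np.sqrt(np.maximum(tr * tr / 4 - det, 0))
    l1, l2 = tr / 2 + disc, tr / 2 - disc
    # Coherence in [0, 1]: ~1 along curvilinear structure, ~0 in flat regions.
    return ((l1 - l2) / (l1 + l2 + 1e-12)) ** 2
```

In the recursive scheme described above, this coherence map (together with the eigenvectors) would shape the diffusion kernel, and the tensor would then be recomputed from the diffused image.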

Keywords: anisotropic diffusion, chest CT imagery, chronic respiratory disease, curvilinear structure, fissure detection, structure tensor

Procedia PDF Downloads 226
481 Efficient Reuse of Exome Sequencing Data for Copy Number Variation Callings

Authors: Chen Wang, Jared Evans, Yan Asmann

Abstract:

With the rapid evolution of next-generation sequencing techniques, whole-exome or exome-panel data have become a cost-effective way to detect small exonic mutations, but there has been a growing desire to detect copy number variations (CNVs) accurately as well. To address these research and clinical needs, we developed a sequencing coverage pattern-based method for copy number detection, data integrity checks, CNV calling, and visualization reporting. The methodology includes complete automation to increase usability, genome content-coverage bias correction, CNV segmentation, data quality reports, and publication-quality images. Poor-quality outlier samples are identified and removed automatically. Multiple experimental batches are routinely detected and reduced to a clean subset of samples before analysis. Algorithmic improvements were also made to somatic CNV detection as well as germline CNV detection in trio families. Additionally, a set of utilities is included to help users produce CNV plots for genes of interest. We demonstrate the somatic CNV enhancements by accurately detecting CNVs in exome-wide data from The Cancer Genome Atlas samples and in a lymphoma case study with paired tumor and normal samples. We also show efficient reuse of existing exome sequencing data for improved germline CNV calling in a trio family from phase III of the 1000 Genomes Project, detecting CNVs with various modes of inheritance. The performance of the developed method is evaluated by comparing its CNV calls with results from orthogonal copy number platforms.
Our case studies show that reusing exome sequencing data for CNV calling offers several notable benefits, including better quality control of exome sequencing data, improved joint analysis with single nucleotide variant calls, and novel genomic discoveries from under-utilized existing whole-exome and custom exome-panel data.
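The core of coverage-pattern CNV calling can be illustrated with a hypothetical minimal version (this is not the authors' pipeline; thresholds and the 1e-9 pseudocount are assumptions): per-exon coverage is median-normalized within each sample, tumor/normal log2 ratios are computed per exon, and the ratios are thresholded into calls.

```python
import math

def log2_ratios(tumor_cov, normal_cov):
    # Median-normalize each sample's per-exon coverage, then take
    # tumor/normal log2 ratios per exon.
    def norm(cov):
        med = sorted(cov)[len(cov) // 2]
        return [c / med for c in cov]
    return [math.log2((t + 1e-9) / (n + 1e-9))
            for t, n in zip(norm(tumor_cov), norm(normal_cov))]

def call_cnv(ratios, gain=0.58, loss=-0.68):
    # log2(3/2) ~ 0.58 marks a single-copy gain; log2(1/2) = -1 a full loss.
    return ["gain" if r >= gain else "loss" if r <= loss else "neutral"
            for r in ratios]
```

A real pipeline would additionally correct for GC/content bias and segment the ratios, as the abstract describes.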

Keywords: bioinformatics, computational genetics, copy number variations, data reuse, exome sequencing, next generation sequencing

Procedia PDF Downloads 251
480 Impact of Short-Term Drought on Vegetation Health Condition in the Kingdom of Saudi Arabia Using Space Data

Authors: E. Ghoneim, C. Narron, I. Iqbal, I. Hassan, E. Hammam

Abstract:

The scarcity of water is becoming a more prominent threat, especially in areas that are already arid. Although the Kingdom of Saudi Arabia (KSA) is an arid country, its southwestern region offers a high variety of botanical landscapes, many of which are wooded forests, while the eastern and northern regions contain large areas of groundwater-irrigated farmland. At present, some parts of KSA, including forests and farmlands, have witnessed protracted and severe drought due to changing rainfall patterns driven by global climate change. Such prolonged drought, lasting several consecutive years, is expected to cause deterioration of forested and pastured lands as well as crop failure (e.g., wheat yield) in the KSA. An analysis of vegetation drought vulnerability and severity during the growing season (September-April) over a fourteen-year period (2000-2014) in KSA was conducted using MODIS Terra imagery. The Vegetation Condition Index (VCI), derived from the Normalized Difference Vegetation Index (NDVI), and the Temperature Condition Index (TCI), derived from Land Surface Temperature (LST) data, were extracted from MODIS Terra images. The VCI and TCI were then combined to compute the Vegetation Health Index (VHI), which reveals the overall vegetation health of the area under investigation. A preliminary outcome of the modeled VHI over KSA, using averaged monthly vegetation data over the 14-year period, revealed that vegetation health is deteriorating over time in both naturally vegetated areas and irrigated farmlands. The derived drought map for KSA indicates that both extreme and severe drought occurrences increased considerably over the same period.
Moreover, based on the cumulative average of drought frequency in each governorate of KSA, the Makkah and Jizan governorates, to the east and southwest, experience extreme drought most frequently, whereas Tabuk, to the northwest, exhibits the lowest frequency of extreme drought. Areas where drought is extreme or severe would most likely suffer negative impacts on agriculture, ecosystems, tourism, and even human welfare. With the drought risk map, the Kingdom could make informed land management decisions, including where to continue agricultural endeavors, where to protect forested areas, and even where to develop new settlements.
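The index formulas underlying this analysis are standard and can be sketched directly (the equal weighting alpha = 0.5 is Kogan's conventional default and is an assumption here, since the abstract does not state the weights used; NDVI/LST extremes are the multi-year minima and maxima per pixel):

```python
def vci(ndvi, ndvi_min, ndvi_max):
    # Vegetation Condition Index: NDVI scaled against its historical range.
    return 100.0 * (ndvi - ndvi_min) / (ndvi_max - ndvi_min)

def tci(lst, lst_min, lst_max):
    # Temperature Condition Index, inverted: hotter than the historical
    # range means more thermal stress, hence a lower score.
    return 100.0 * (lst_max - lst) / (lst_max - lst_min)

def vhi(vci_val, tci_val, alpha=0.5):
    # Vegetation Health Index: weighted blend of moisture and thermal stress.
    return alpha * vci_val + (1.0 - alpha) * tci_val
```

Low VHI values (conventionally below about 40) flag drought-stressed vegetation.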

Keywords: drought, vegetation health condition, TCI, Saudi Arabia

Procedia PDF Downloads 378
479 Meniere's Disease and its Prevalence, Symptoms, Risk Factors and Associated Treatment Solutions for this Disease

Authors: Amirreza Razzaghipour Sorkhab

Abstract:

One of the most common disorders among humans is hearing impairment. This paper provides an evidence base that improves understanding of Meniere's disease and highlights the physical and mental health correlates of the disorder. Meniere's disease is more common in the elderly. The term idiopathic endolymphatic hydrops has been applied to this disease by some in the past. Meniere's disease demonstrates a genetic tendency, and a family history is found in 10% of cases, with an autosomal dominant inheritance pattern. The COCH gene may be one of the hereditary factors contributing to Meniere's disease, and the possibility of a COCH mutation should be considered in patients with Meniere's disease symptoms. Missense mutations in the COCH gene cause autosomal dominant sensorineural hearing loss and vestibular disorder. Meniere's disease is a complex, heterogeneous disorder of the inner ear characterized by episodes of vertigo lasting from minutes to hours, fluctuating sensorineural hearing loss, tinnitus, and aural fullness. The existing evidence supports the suggestion that age and sleep disorders are risk factors for Meniere's disease. Many factors have been reported to precipitate the progression of Meniere's disease, including endolymphatic hydrops, immunology, viral infection, inheritance, vestibular migraine, and altered intra-labyrinthine fluid dynamics. There is currently no treatment with a proven beneficial effect on hearing levels or on the long-term evolution of the disease; in the early stages hearing may improve between attacks, but permanent hearing loss occurs in the majority of cases. Recent publications have proposed a role for the intratympanic use of medication, mostly aminoglycosides, for the control of vertigo.
More than 85% of patients with Meniere's disease are helped either by changes in lifestyle and medical treatment or by minimally invasive surgical procedures such as intratympanic steroid therapy, intratympanic gentamicin therapy, and endolymphatic sac surgery. However, unilateral vestibular extirpation methods (intratympanic gentamicin, vestibular nerve section, or labyrinthectomy) are more predictable but invasive approaches to controlling the vertigo attacks. Medical therapy aimed at reducing endolymph volume, such as a low-sodium diet and diuretic use, is the typical initial treatment.

Keywords: meniere's disease, endolymphatic hydrops, hearing loss, vertigo, tinnitus, COCH gene

Procedia PDF Downloads 85
478 3D-Mesh Robust Watermarking Technique for Ownership Protection and Authentication

Authors: Farhan A. Alenizi

Abstract:

Digital watermarking has evolved over the past years into an important means of data authentication and ownership protection. Image and video watermarking are well established in the field of multimedia processing; more recently, watermarking techniques for 3D objects have emerged for the same purposes, as 3D mesh models are in increasing use across scientific, industrial, and medical applications. Like image watermarking techniques, 3D watermarking can take place in either the spatial or the transform domain. Unlike images and videos, whose frames have regular structure in both the spatial and temporal domains, 3D objects are represented as meshes that are essentially irregular samplings of surfaces; moreover, meshes can undergo a large variety of alterations that may be hard to handle. This makes the watermarking process more challenging. While transform-domain watermarking is preferable for images and videos, it remains difficult to implement on 3D meshes due to the huge number of vertices involved and the complicated topology and geometry, and hence the difficulty of performing the spectral decomposition, even though significant work has been done in the field. Spatial-domain watermarking has attracted significant attention in recent years; such methods can act either on the topology or on the geometry of the model. Exploiting the statistical characteristics of 3D mesh models, from both geometrical and topological aspects, has proven useful for hiding data; doing so with minimal surface distortion to the mesh has attracted significant research. A blind 3D mesh watermarking technique is proposed in this research. The watermarking method modifies the vertices' positions with respect to the center of the object.
An optimal method is developed to reduce the errors, minimizing the distortions that the 3D object may experience due to the watermarking process and reducing the computational complexity due to the iterations and other factors. The technique relies on displacing the vertices' locations by modifying the variances of the vertices' norms. Statistical analyses were performed to establish the distributions that best fit each mesh, and hence to set the bin sizes. Several optimizations were introduced concerning mesh local roughness, the statistical distributions of the norms, and the displacements of the mesh centers. To evaluate the algorithm's robustness against common geometry and connectivity attacks, the watermarked objects were subjected to uniform noise, Laplacian smoothing, vertex quantization, simplification, and cropping. Experimental results showed that the approach is robust in terms of both perceptual and quantitative quality, and against both geometry and connectivity attacks. Moreover, the probability of true positive detection was evaluated against the probability of false positive detection; to validate the accuracy of the test cases, receiver operating characteristic (ROC) curves were drawn, and they likewise showed robustness. 3D watermarking is still a new field, but a promising one.
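The general idea of embedding bits in the statistics of vertex norms can be illustrated with a toy sketch (a hypothetical simplification, not the authors' algorithm: one bit is hidden in a bin of vertex norms by a power-law remapping that raises or lowers the bin's normalized mean while preserving the bin's range):

```python
def embed_bit(norms, bit, k=0.8):
    # Remap norms within [lo, hi] by a power law: exponent k < 1 raises the
    # normalized mean (bit 1), exponent 1/k lowers it (bit 0). The bin's
    # extremes stay fixed, keeping the geometric distortion bounded.
    lo, hi = min(norms), max(norms)
    exp = k if bit else 1.0 / k
    return [lo + (hi - lo) * ((n - lo) / (hi - lo)) ** exp for n in norms]

def extract_bit(norms):
    # Blind extraction: compare the bin's normalized mean against 0.5
    # (assumes roughly uniform norms before embedding).
    lo, hi = min(norms), max(norms)
    mean = sum((n - lo) / (hi - lo) for n in norms) / len(norms)
    return 1 if mean > 0.5 else 0
```

The method in the abstract operates on variances rather than means and adds the roughness- and distribution-aware optimizations described above, but the embed/extract asymmetry is the same.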

Keywords: watermarking, mesh objects, local roughness, Laplacian Smoothing

Procedia PDF Downloads 154
477 Volatility Index, Fear Sentiment and Cross-Section of Stock Returns: Indian Evidence

Authors: Pratap Chandra Pati, Prabina Rajib, Parama Barai

Abstract:

Traditional finance theory neglects the role of sentiment in asset pricing. However, the behavioral approach to asset pricing, based on the noise trader model and limits to arbitrage, includes investor sentiment as a priced risk factor in the asset pricing model. Investor sentiment most strongly affects stocks that are vulnerable to speculation, hard to value, and risky to arbitrage: small stocks, high-volatility stocks, growth stocks, distressed stocks, young stocks, and non-dividend-paying stocks. Since its introduction by the Chicago Board Options Exchange (CBOE) in 1993, the volatility index (VIX) has been used both as a measure of expected future stock market volatility and as a measure of investor sentiment. The CBOE VIX, in particular, is often referred to as the 'investors' fear gauge' by the public media and prior literature. Upward spikes in the volatility index are associated with bouts of market turmoil and uncertainty. High levels of the volatility index indicate fear, anxiety, and pessimistic investor expectations about the stock market; conversely, low levels reflect a confident and optimistic investor attitude. Based on these considerations, we investigate whether market-wide fear, as measured by the volatility index, is a priced factor in the standard asset pricing model for the Indian stock market. First, we investigate the performance and validity of the Fama-French three-factor model and the Carhart four-factor model in the Indian stock market. Second, we explore whether the India volatility index, as a proxy for fear-based market sentiment, affects the cross-section of stock returns after controlling for well-established risk factors such as market excess return, size, book-to-market, and momentum. Asset pricing tests are performed using monthly data on CNX 500 index constituent stocks listed on the National Stock Exchange of India Limited (NSE) over a sample period extending from January 2008 to March 2017.
To examine whether the India volatility index, as an indicator of fear sentiment, is a priced risk factor, the change in India VIX is included as an explanatory variable in the Fama-French three-factor model as well as in the Carhart four-factor model. For the empirical testing, we use three different sets of test portfolios as the dependent variable in the asset pricing regressions. The first set is a 4x4 sort on size and B/M ratio. The second set is a 4x4 sort on size and the sensitivity beta to changes in IVIX. The third set is a 2x3x2 independent triple sort on size, B/M, and the sensitivity beta to changes in IVIX. We find evidence that size, value, and momentum factors continue to exist in the Indian stock market. However, the VIX index does not constitute a priced risk factor in the cross-section of returns. The inseparability of volatility and jump risk in the VIX is a possible explanation of these findings.
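The augmented factor regressions amount to a plain time-series OLS per test portfolio (a minimal sketch with synthetic data; the factor names and column order are illustrative, not the authors' dataset):

```python
import numpy as np

def factor_loadings(excess_ret, factors):
    # Time-series OLS of portfolio excess returns on the risk factors
    # (e.g., market excess return, SMB, HML, momentum, and the change in
    # India VIX). Returns the intercept (alpha) followed by one beta
    # per factor column.
    X = np.column_stack([np.ones(len(excess_ret)), factors])
    beta, *_ = np.linalg.lstsq(X, excess_ret, rcond=None)
    return beta
```

A significant loading on the ΔIVIX column across portfolios would indicate that fear sentiment is priced; the study finds it is not.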

Keywords: India VIX, Fama-French model, Carhart four-factor model, asset pricing

Procedia PDF Downloads 245
476 Molecular Implication of Interaction of Human Enteric Pathogens with Phylloplane of Tomato

Authors: Shilpi, Indu Gaur, Neha Bhadauria, Susmita Goswami, Prabir K. Paul

Abstract:

The cultivation and consumption of organically grown fruits and vegetables have increased severalfold. However, the presence of Human Enteric Pathogens (HEPs) on the surface of organically grown vegetables, causing gastrointestinal diseases, is most likely due to contaminated water and the fecal matter of farm animals. Human Enteric Pathogens are adapted to colonize the human gut, but they also colonize the plant surface. Microbes on the plant surface communicate with each other to establish quorum sensing. This cross-talk is important to study because enteric pathogens on the phylloplane have been reported to mask the beneficial resident bacteria of the plant. In the present study, HEPs and resident bacterial colonizers were identified using 16S rRNA sequencing. Microbial colonization patterns after interaction between Human Enteric Pathogens and natural bacterial residents on the tomato phylloplane were studied. Tomato plants raised under aseptic conditions were inoculated with a mixture of Serratia fonticola and Klebsiella pneumoniae. The molecules involved in the cross-talk between Human Enteric Pathogens and regular bacterial colonizers were isolated and identified using molecular techniques and HPLC. The colonization pattern was studied by the leaf imprint method after 48 hours of incubation. The associated protein-protein interactions in the host cytoplasm were studied using crosslinkers. From treated leaves, the cross-talk molecules and interacting proteins were separated on 1D SDS-PAGE and analyzed by MALDI-TOF-TOF. The study is critical to understanding the molecular aspects of HEP adaptation to the phylloplane. It revealed that human enteric pathogens interact aggressively among themselves and with resident bacteria, and that HEPs induce the establishment of a signaling cascade through protein-protein interactions in the host cytoplasm.
The study revealed that the adaptation of Human Enteric Pathogens to the phylloplane of Solanum lycopersicum involves complex molecular interactions between the microbes and the host, including microbe-microbe interactions leading to the establishment of quorum sensing. The outcome will help in minimizing the HEP load on fresh farm produce, thereby curtailing the incidence of food-borne diseases.

Keywords: crosslinkers, human enteric pathogens (HEPs), phylloplane, quorum sensing

Procedia PDF Downloads 272
475 A Network Economic Analysis of Friendship, Cultural Activity, and Homophily

Authors: Siming Xie

Abstract:

In social networks, the term homophily refers to the tendency of agents with similar characteristics to link with one another; it is robustly observed across many contexts and dimensions. The starting point of my research is the observation that the "type" of an agent is not a single exogenous variable. Agents, despite their differences in race, religion, and other hard-to-alter characteristics, may share interests and engage in activities that cut across those predetermined lines. This research aims to capture the interaction of homophily effects in a model where agents have two-dimensional characteristics (e.g., race and a personal hobby such as basketball, which one either likes or dislikes) and where there are biases both in meeting opportunities and in favor of same-type friendships. A novel feature of my model is a matching process with biased meeting probabilities on different dimensions, which helps in understanding the structuring process in multidimensional networks without missing layer interdependencies. The main contribution of this study is a welfare-based matching process for agents with multidimensional characteristics. In particular, this research shows that biases in meeting opportunities on one dimension lead to the emergence of homophily on the other dimension. The objective is to determine the pattern of homophily in network formation, which will shed light on our understanding of segregation and its remedies. By constructing a two-dimensional matching process, this study describes agents' homophilous behavior in a multidimensional social network and constructs a game in which minorities and majorities play different strategies in a society. It also shows that the optimal strategy is determined by relative group size: society suffers more from social segregation when the two racial groups are of similar size.
The research also has policy implications: cultivating shared characteristics among agents helps diminish social segregation, but only if the minority group is small enough. This research includes both theoretical models and empirical analysis. After formulating the friendship formation model, the author first uses MATLAB to perform iterative calculations, then derives the corresponding mathematical proofs of the results, and finally shows that the model is consistent with empirical evidence on high school friendships. The anonymized data come from The National Longitudinal Study of Adolescent Health (Add Health).
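The biased meeting process can be caricatured in a few lines (a toy sketch, not the paper's MATLAB model: each agent is a pair of traits, meetings are biased toward same-type partners on dimension 0, and a friendship forms only when the pair also shares dimension 1; all names and parameters are illustrative):

```python
import random

def form_links(agents, n_meetings, bias, rng):
    # With probability `bias` an agent meets a same-type partner on
    # dimension 0; otherwise a uniformly random partner. A friendship
    # forms only when the pair also agrees on dimension 1 (the hobby).
    links = []
    for _ in range(n_meetings):
        i = rng.randrange(len(agents))
        same = [j for j in range(len(agents))
                if j != i and agents[j][0] == agents[i][0]]
        if same and rng.random() < bias:
            j = rng.choice(same)
        else:
            j = rng.choice([k for k in range(len(agents)) if k != i])
        if agents[i][1] == agents[j][1]:
            links.append((i, j))
    return links

def same_type_share(links, agents, dim):
    # Fraction of realized friendships whose endpoints share trait `dim`.
    return (sum(agents[i][dim] == agents[j][dim] for i, j in links)
            / len(links) if links else 0.0)
```

When the two traits are correlated in the population, biasing meetings on dimension 0 raises the same-type share on dimension 1 as well, which is the cross-dimension homophily effect the abstract describes.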

Keywords: homophily, multidimension, social networks, friendships

Procedia PDF Downloads 163
474 Modelling Fluidization by Data-Based Recurrence Computational Fluid Dynamics

Authors: Varun Dongre, Stefan Pirker, Stefan Heinrich

Abstract:

Over the last decades, the numerical modelling of fluidized bed processes has become feasible even for industrial processes. Commonly, continuous two-fluid models are applied to describe large-scale fluidization. To allow for coarse grids, novel two-fluid models account for unresolved sub-grid heterogeneities. However, computational effort remains high, on the order of several hours of compute time for a few seconds of real time, thus preventing the representation of long-term phenomena such as heating or particle conversion processes. To overcome this limitation, data-based recurrence computational fluid dynamics (rCFD) has been put forward in recent years. rCFD can be regarded as a data-based method that relies on the numerical predictions of a conventional short-term simulation. These data are stored in a database and then used by rCFD to efficiently time-extrapolate the flow behavior in high spatial resolution. This study compares the numerical predictions of rCFD simulations with those of corresponding full CFD reference simulations for lab-scale and pilot-scale fluidized beds. In assessing the predictive capabilities of rCFD, we focus on solid mixing and secondary gas holdup. We observed that rCFD predictions are highly sensitive to numerical parameters such as the diffusivity associated with face swaps. We achieved a computational speed-up of four orders of magnitude (10,000 times faster than a classical TFM simulation), eventually allowing real-time simulations of fluidized beds. In the next step, we apply the checkerboarding technique by introducing gas tracers subject to convection and diffusion. We then analyze the concentration profiles, observing the mixing and transport of the gas tracers, gaining insight into their convective and diffusive patterns, and working towards heat and mass transfer methods.
Finally, we run rCFD simulations calibrated with numerical and physical parameters and compare them with conventional two-fluid model (full CFD) simulations. As a result, this study gives a clear indication of the applicability, predictive capabilities, and existing limitations of rCFD in the realm of fluidization modelling.
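The time-extrapolation idea behind rCFD can be reduced to a toy recurrence scheme (a deliberately minimal sketch, not the actual rCFD algorithm: flow snapshots from a short full simulation are stored, and long-term behavior is generated by repeatedly jumping to the most similar stored state and advancing to its successor instead of solving the flow equations again):

```python
import numpy as np

def recurrence_extrapolate(snapshots, n_steps, start=-1):
    # snapshots: list of flattened flow-state vectors from a short
    # reference simulation, in time order.
    db = np.asarray(snapshots, dtype=float)
    idx = start % len(db)
    path = []
    for _ in range(n_steps):
        # Nearest stored state by Euclidean distance, excluding the last
        # snapshot (which has no stored successor to advance to).
        d = np.linalg.norm(db[:-1] - db[idx], axis=1)
        idx = int(np.argmin(d)) + 1   # step to the successor of the match
        path.append(idx)
    return path
```

For a flow with recurrent states (as in a bubbling fluidized bed), the extrapolated trajectory cycles through the database at negligible cost, which is the source of the orders-of-magnitude speed-up reported above.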

Keywords: multiphase flow, recurrence CFD, two-fluid model, industrial processes

Procedia PDF Downloads 67
473 Integrated Geophysical Approach for Subsurface Delineation in Srinagar, Uttarakhand, India

Authors: Pradeep Kumar Singh Chauhan, Gayatri Devi, Zamir Ahmad, Komal Chauhan, Abha Mittal

Abstract:

The application of geophysical methods to study subsurface profiles for site investigation is becoming popular globally. These methods are non-destructive and image the subsurface at shallow depths. The seismic refraction method is one of the most common and efficient methods used for civil engineering site investigations, particularly for determining the seismic velocity of subsurface layers. Resistivity imaging is a geo-electrical technique used to image the subsurface, water-bearing zones, bedrock, and layer thickness. An integrated approach combining seismic refraction and 2D resistivity imaging provides a better and more reliable picture of the subsurface. These are economical and less time-consuming field surveys that provide high-resolution images of the subsurface. The geophysical surveys carried out in this study include seismic refraction and 2D resistivity imaging for the delineation of subsurface strata in different parts of Srinagar, Garhwal Himalaya, India. The aim was to map the shallow subsurface in terms of geological and geophysical properties, mainly P-wave velocity, resistivity, layer thickness, and lithology. Both sides of the Alaknanda river, which flows through the centre of the city, were covered by taking two profiles on each side with both methods. Seismic and electrical surveys were carried out at the same locations so that the results complement each other. The seismic refraction survey was carried out using an ABEM Terraloc 24-channel seismograph, and 2D resistivity imaging was performed using ABEM Terrameter LS equipment. The results show three distinct layers on both sides of the river down to a depth of 20 m: alluvium extending to about 3 m depth, a conglomerate zone between 3 m and 15 m, and compacted pebbles and cobbles beyond 15 m.
The P-wave velocity in the top layer is in the range of 400-600 m/s; in the second layer it varies from 700-1100 m/s, and in the third layer it is 1500-3300 m/s. The resistivity results show a similar pattern and are in good agreement with the seismic refraction results. The results obtained in this study were validated against an exposed river scar available at one site. The study established the efficacy of geophysical methods for subsurface investigations.
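For a simple two-layer case, the refractor depth follows from the standard textbook relations used in such surveys (a sketch only; the survey itself would use multi-layer inversion, and the numeric values in the test are illustrative, loosely based on the velocity ranges above):

```python
import math

def layer_depth_from_crossover(x_cross, v1, v2):
    # Depth to the refractor in a two-layer model from the crossover
    # distance x_cross, where direct and refracted arrivals coincide.
    return (x_cross / 2.0) * math.sqrt((v2 - v1) / (v2 + v1))

def layer_depth_from_intercept(t_i, v1, v2):
    # The same depth from the intercept time t_i of the refracted branch
    # on the travel-time plot.
    return t_i * v1 * v2 / (2.0 * math.sqrt(v2 ** 2 - v1 ** 2))
```

The two expressions are algebraically equivalent, so they provide a useful consistency check when picking arrivals on a travel-time plot.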

Keywords: 2D resistivity imaging, P-wave velocity, seismic refraction survey, subsurface

Procedia PDF Downloads 249
472 Reducing Later Life Loneliness: A Systematic Literature Review of Loneliness Interventions

Authors: Dhruv Sharma, Lynne Blair, Stephen Clune

Abstract:

Later life loneliness is a social issue that is increasing alongside an upward global population trend. As a society, one way that we have responded to this social challenge is through developing non-pharmacological interventions such as befriending services, activity clubs, meet-ups, etc. Through a systematic literature review, this paper suggests that currently there is an underrepresentation of radical innovation, and underutilization of digital technologies in developing loneliness interventions for older adults. This paper examines intervention studies that were published in English language, within peer reviewed journals between January 2005 and December 2014 across 4 electronic databases. In addition to academic databases, interventions found in grey literature in the form of websites, blogs, and Twitter were also included in the overall review. This approach yielded 129 interventions that were included in the study. A systematic approach allowed the minimization of any bias dictating the selection of interventions to study. A coding strategy based on a pattern analysis approach was devised to be able to compare and contrast the loneliness interventions. Firstly, interventions were categorized on the basis of their objective to identify whether they were preventative, supportive, or remedial in nature. Secondly, depending on their scope, they were categorized as one-to-one, community-based, or group based. It was also ascertained whether interventions represented an improvement, an incremental innovation, a major advance or a radical departure, in comparison to the most basic form of a loneliness intervention. Finally, interventions were also assessed on the basis of the extent to which they utilized digital technologies. Individual visualizations representing the four levels of coding were created for each intervention, followed by an aggregated visual to facilitate analysis. 
To keep the inquiry within scope and to present a coherent view of the findings, the analysis was primarily concerned with the level of innovation and the use of digital technologies. This analysis highlights a weak but positive correlation between the level of innovation and the use of digital technologies in designing and deploying loneliness interventions, and also emphasizes how certain existing interventions could be tweaked to migrate, for example, from incremental to radical innovation. The analysis also points out the value of including grey literature, especially from Twitter, in systematic literature reviews to obtain a contemporary view of the latest work in the area under investigation.

Keywords: ageing, loneliness, innovation, digital

Procedia PDF Downloads 116
471 Interpretation of Two Indices for the Prediction of Cardiovascular Risk in Pediatric Obesity

Authors: Mustafa M. Donma, Orkide Donma

Abstract:

Obesity and weight gain are associated with an increased risk of developing cardiovascular diseases and with the progression of liver fibrosis. The aspartate transaminase-to-platelet ratio index (APRI) and fibrosis-4 (FIB-4) were primarily conceived as formulas capable of differentiating hepatitis from cirrhosis. Recently, they have found clinical use as measures of liver fibrosis and cardiovascular risk. However, their status in children has not yet been evaluated in detail. The aim of this study is to determine APRI and FIB-4 status in obese (OB) children and to compare them with values found in children with normal body mass index (N-BMI). A total of sixty-eight children examined in the outpatient clinics of the Pediatrics Department of Tekirdag Namik Kemal University Medical Faculty were included in the study, in two groups. The first group comprised thirty-five children with N-BMI, whose age- and sex-dependent BMI indices lay between the 15th and 85th percentiles. The second group comprised thirty-three OB children whose BMI percentile values were between the 95th and 99th percentiles. Anthropometric measurements and routine biochemical tests were performed, and from these parameters the related indices BMI, APRI, and FIB-4 were calculated. Appropriate statistical tests were used to evaluate the study data, with statistical significance accepted at p<0.05. In the OB group, values found for APRI and FIB-4 were higher than those calculated for the N-BMI group; however, the difference between the N-BMI and OB groups was not statistically significant for either index. A similar pattern was detected for triglyceride (TRG) values. The correlation coefficient and degree of significance between APRI and FIB-4 were r=0.336 and p=0.065 in the N-BMI group, whereas they were r=0.707 and p=0.001 in the OB group.
Associations of these two indices with TRG showed that this parameter was strongly correlated (p<0.001) with both APRI and FIB-4 in the OB group, whereas no correlation was found in children with N-BMI. Triglycerides are associated with an increased risk of fatty liver, which can progress to severe clinical problems such as steatohepatitis, which in turn can lead to liver fibrosis. Triglycerides are also an independent risk factor for cardiovascular disease. In conclusion, the lack of correlation between TRG and APRI as well as FIB-4 in children with N-BMI, together with the strong correlations of TRG with these indices in OB children, indicates a possible early tendency towards the development of fatty liver in OB children. This finding also points to a potential risk of cardiovascular pathologies in OB children. The difference between the APRI-FIB-4 correlations in the N-BMI and OB groups (no correlation versus high correlation, respectively) may indicate the importance of including the age and alanine transaminase parameters, in addition to AST and PLT, in the FIB-4 formula.
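Both indices follow standard published formulas (a minimal sketch; the units assumed here are AST and ALT in IU/L, platelets in 10^9/L, and an AST upper limit of normal supplied by the caller, since the abstract does not state the laboratory's reference value):

```python
import math

def apri(ast, ast_uln, platelets):
    # APRI = (AST / upper limit of normal) / platelet count (10^9/L) * 100
    return (ast / ast_uln) / platelets * 100.0

def fib4(age, ast, alt, platelets):
    # FIB-4 = (age * AST) / (platelet count (10^9/L) * sqrt(ALT))
    return (age * ast) / (platelets * math.sqrt(alt))
```

Note that FIB-4, unlike APRI, incorporates age and ALT, which is the point made above about the differing correlation patterns of the two indices.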

Keywords: APRI, children, FIB-4, obesity, triglycerides

Procedia PDF Downloads 342
470 Data Calibration of the Actual versus the Theoretical Micro Electro Mechanical Systems (MEMS) Based Accelerometer Reading through Remote Monitoring of Padre Jacinto Zamora Flyover

Authors: John Mark Payawal, Francis Aldrine Uy, John Paul Carreon

Abstract:

This paper shows the application of Structural Health Monitoring (SHM) to bridges. Bridges are structures built to provide passage over physical obstructions such as rivers, chasms, or roads. The Philippines has a total of 8,166 national bridges, as published in the 2015 atlas of the Department of Public Works and Highways (DPWH), and only 2,924 (35.81%) of these bridges are in good condition. As a result, PHP 30.464 billion of the 2016 DPWH budget is allocated to road and bridge maintenance alone. This intensive spending is owed to the present practice of outdated manual inspection and assessment and to the poor structural health monitoring of Philippine infrastructure. As the School of Civil, Environmental, & Geological Engineering of Mapua Institute of Technology (MIT) continues its research-based projects, a partnership with the Department of Science and Technology (DOST) and the DPWH launched the application of SHM on Padre Jacinto Zamora Flyover. The flyover is located along Nagtahan Boulevard in Sta. Mesa, Manila, and connects Brgy. 411 and Brgy. 635. It serves vehicles going from Lacson Avenue to Mabini Bridge, passing over Legarda Flyover. The flyover was chosen among the many bridges in Metro Manila as the focus of the pilot testing due to its site accessibility and the complete as-built structural plans and specifications necessary for SHM, as provided by the Bureau of Design (BOD) of DPWH. This paper focuses on providing a method to calibrate theoretical readings from STAAD.Pro V8i and sync the data to actual MEMS accelerometer readings. It is observed that, while the design standards used in constructing the flyover were reflected in the model, the actual MEMS accelerometer readings display a large difference from the theoretical data obtained from STAAD.Pro V8i.
To achieve a true seismic response of the modeled bridge, that is, to sync the theoretical data to the actual sensor reading (the independent variable of this paper), a single-degree-of-freedom (SDOF) analysis of the flyover under free vibration without damping is carried out in STAAD.Pro V8i. The earthquake excitation and bridge responses are subjected to earthquake ground motion in the form of ground acceleration, or Peak Ground Acceleration (PGA). A translational acceleration load is used to simulate the ground motion of the time-history acceleration record in STAAD.Pro V8i.
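The undamped SDOF free-vibration analysis mentioned above reduces to the classical closed-form response u(t) = u0·cos(ωn·t) + (v0/ωn)·sin(ωn·t), with ωn = √(k/m); a minimal sketch, with mass and stiffness values that are illustrative only and not those of the flyover:

```python
import math

def sdof_free_vibration(m, k, u0, v0, t):
    """Undamped SDOF free-vibration displacement:
    u(t) = u0*cos(wn*t) + (v0/wn)*sin(wn*t), wn = sqrt(k/m)."""
    wn = math.sqrt(k / m)  # natural circular frequency, rad/s
    return u0 * math.cos(wn * t) + (v0 / wn) * math.sin(wn * t)

m, k = 2.0e5, 8.0e7        # illustrative mass (kg) and stiffness (N/m)
wn = math.sqrt(k / m)      # 20 rad/s
T = 2 * math.pi / wn       # natural period, about 0.314 s
print(round(T, 3))
# displacement a quarter-period after release from rest at u0 = 0.01 m
print(round(sdof_free_vibration(m, k, 0.01, 0.0, T / 4), 6))  # ~0 m
```

In the paper's workflow, the same natural period comes out of the STAAD.Pro V8i model and is compared against frequencies recovered from the MEMS accelerometer record.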

Keywords: accelerometer, analysis using single degree of freedom, micro electro mechanical system, peak ground acceleration, structural health monitoring

Procedia PDF Downloads 315
469 A Long Short-Term Memory Based Deep Learning Model for Corporate Bond Price Predictions

Authors: Vikrant Gupta, Amrit Goswami

Abstract:

The fixed income market forms the basis of the modern financial market; all other assets in financial markets derive their value from the bond market. Owing to their over-the-counter nature, corporate bonds have relatively little publicly available data and are thus researched far less than equities. Bond price prediction is a complex financial time-series forecasting problem and is considered crucial in the domain of finance. Bond prices are highly volatile and noisy, which makes it very difficult for traditional statistical time-series models to capture the complexity of the series patterns, leading to inefficient forecasts. To overcome the inefficiencies of statistical models, various machine learning techniques were initially used in the literature for more accurate forecasting of time series. However, simple machine learning methods such as linear regression, support vector machines, and random forests fail to provide efficient results when tested on highly complex sequences such as stock and bond prices. Hence, to capture these intricate sequence patterns, various deep-learning-based methodologies have been discussed in the literature. In this study, a recurrent neural network-based deep learning model using long short-term memory (LSTM) networks for the prediction of corporate bond prices is discussed. LSTMs have been widely used in the literature for sequence learning tasks in domains such as machine translation and speech recognition. In recent years, various studies have discussed the effectiveness of LSTMs in forecasting complex time-series sequences and have shown promising results compared to other methodologies. LSTMs are a special kind of recurrent neural network capable of learning long-term dependencies, thanks to a memory cell that traditional neural networks lack.
In this study, a simple LSTM, a stacked LSTM, and a masked LSTM model are compared with respect to varying input sequences (three days, seven days, and 14 days). To facilitate faster learning and to gradually decompose the complexity of the bond price sequence, Empirical Mode Decomposition (EMD) is applied, which improves the accuracy of the standalone LSTM model. With a variety of technical indicators and the EMD-decomposed time series, the masked LSTM outperformed the other two counterparts in terms of prediction accuracy. To benchmark the proposed model, the results are compared with traditional time-series models (ARIMA), shallow neural networks, and the three LSTM models discussed above. In summary, our results show that LSTM models provide more accurate results and should be explored further within the asset management industry.
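The varying input sequences described above amount to sliding-window sample construction before training; a framework-agnostic sketch (the price series is synthetic, and the windowing shown is an assumption about the preprocessing, not the authors' exact pipeline):

```python
def make_windows(series, lookback):
    """Split a price series into (input window, next-value target) pairs,
    as fed to an LSTM with the given lookback (e.g. 3, 7, or 14 days)."""
    samples = []
    for i in range(len(series) - lookback):
        x = series[i:i + lookback]  # lookback days of prices
        y = series[i + lookback]    # next day's price to predict
        samples.append((x, y))
    return samples

prices = [100.1, 100.4, 99.8, 100.9, 101.2, 100.7, 101.5]  # synthetic
windows = make_windows(prices, lookback=3)
print(len(windows))   # 4 samples from 7 prices
print(windows[0])     # ([100.1, 100.4, 99.8], 100.9)
```

With EMD preprocessing, the same windowing would be applied to each intrinsic mode function rather than to the raw price series.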

Keywords: bond prices, long short-term memory, time series forecasting, empirical mode decomposition

Procedia PDF Downloads 133
468 Auto Surgical-Emissive Hand

Authors: Abhit Kumar

Abstract:

The world is full of master-slave telemanipulators in which the doctor controls the console and the surgical arm performs the operation; these are passive robots. Because operating such consoles still requires doctors, the potential of robotics is not fully utilized, and the focus should shift to active robots. The Auto Surgical-Emissive Hand (AS-EH) applies this concept of active robotics: an anthropomorphic hand designed for autonomous surgical, emissive, and scanning operation. It is enabled with a three-way emission system, a laser beam, icy steam (-5°C to 5°C), and a thermal imaging camera (TIC), embedded in the palm of the hand and structured in the form of a three-way disc. The fingers of the AS-EH will have tactile, force, and pressure sensors rooted in them so that force, pressure, and physical contact with the external subject can be monitored. The main focus, however, is on the concept of 'emission'. The question arises how three unrelated methods can work together, merged in a single programmed hand: each of the three methods is used according to the needs of the external subject. The laser is emitted via a pin-sized outlet fed by a thin channel that connects internally to the palm of the surgical hand. It emits radiation sufficient to cut open the skin for the removal of metal scrap or other foreign material while the patient is under anesthesia, keeping the complexity of the operation very low. At the same time, the TIC, fitted with an accurate temperature compensator (ATC), provides a real-time feed of the surgery in the form of a heat image, allowing the temperature level to be analyzed; the ATC also helps determine elevated body temperature while the operation proceeds. The thermal imaging camera is mounted internally in the AS-EH and connected to external real-time software to provide live feedback. The icy steam provides a cooling effect before and after the operation. The underlying principle is simple: if a finger remains in icy water for a long time, blood flow slows, the area becomes numb and isolated, and even pinching produces no sensation, because the nerve impulse does not reach the brain and the sensory receptors are not activated. Using the same principle, icy steam at a temperature below 273 K can be emitted via a pin-sized hole onto the area of concern to frost it before the operation is performed; the steam can also be used to desensitize pain while the operation is in progress. The mathematical calculations, algorithms, and programs governing the working and movement of the hand will be installed in the system prior to the procedure. Since the AS-EH is a programmable hand, it comes with limitations; hence, this robot will perform only surgical procedures of low complexity.

Keywords: active robots, algorithm, emission, icy steam, TIC, laser

Procedia PDF Downloads 352
467 Adapting an Accurate Reverse-time Migration Method to USCT Imaging

Authors: Brayden Mi

Abstract:

Reverse time migration (RTM) has been widely used in the petroleum exploration industry since the early 1980s to reveal subsurface images and to detect rock and fluid properties. The seismic technology involves the construction of a velocity model, through interpretive model building, seismic tomography, or full waveform inversion, followed by reverse-time propagation of the acquired seismic data and of the original wavelet used in the acquisition. The methodology has matured from 2D imaging in simple media to present-day full 3D imaging in extremely complex geological conditions. Conventional ultrasound computed tomography (USCT) uses travel-time inversion to reconstruct the velocity structure of an organ. With the velocity structure, USCT data can be migrated with the "bent-ray" method. Its seismic counterpart is Kirchhoff depth migration, in which the source of reflective energy is traced by ray tracing and summed to produce a subsurface image. It is well known that ray-tracing-based migration has severe limitations in strongly heterogeneous media and with irregular acquisition geometries. Reverse time migration, on the other hand, fully accounts for the wave phenomena, including multiple arrivals and turning rays due to complex velocity structure. It is capable of fully reconstructing any image detectable within its acquisition aperture. RTM algorithms typically require a rather accurate velocity model and demand high computing power, and may not be applicable to the real-time imaging normally required in day-to-day medical operations. With the improvement of computing technology, however, this computational bottleneck may cease to be a challenge in the near future. Present-day RTM algorithms are typically implemented from a flat datum for the seismic industry. They can be modified to accommodate any acquisition geometry and aperture, as long as sufficient illumination is provided.
Such flexibility of RTM can be conveniently exploited for USCT imaging if the spatial coordinates of the transmitters and receivers are known and enough data is collected to provide full illumination. This paper proposes an implementation of a full 3D RTM algorithm for USCT imaging that produces an accurate 3D acoustic image based on the phase-shift-plus-interpolation (PSPI) method for wavefield extrapolation. In this method, each acquired data set (shot) is propagated back in time, and a known ultrasound wavelet is propagated forward in time, using PSPI wavefield extrapolation and a piecewise constant velocity model of the organ (breast). The imaging condition is then applied to produce a partial image. Although each image is subject to the limitation of its own illumination aperture, the stack of multiple partial images produces a full image of the organ, with a much-reduced noise level compared with the individual partial images.
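The imaging condition applied to each shot above, in its simplest zero-lag cross-correlation form, sums the product of the forward-propagated source wavefield and the back-propagated receiver wavefield over time at every image point; a toy sketch of that step and of stacking partial images (the wavefields are synthetic, and the PSPI extrapolation itself is not shown):

```python
def imaging_condition(src, rcv):
    """Zero-lag cross-correlation imaging condition:
    I(x) = sum over t of S(x, t) * R(x, t).
    src, rcv: wavefields as nested lists indexed [x][t]."""
    return [sum(s * r for s, r in zip(src_x, rcv_x))
            for src_x, rcv_x in zip(src, rcv)]

def stack(images):
    """Stack partial images from individual shots into the full image."""
    return [sum(vals) for vals in zip(*images)]

# toy wavefields at 3 image points over 4 time steps: the source and
# receiver fields coincide at points 0 and 2, so only those points image
S = [[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
R = [[0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 1]]
partial = imaging_condition(S, R)
print(partial)                    # [1, 0, 1]
print(stack([partial, partial]))  # [2, 0, 2]
```

Stacking is why the aperture limitation of any single shot washes out: coherent reflectivity adds across shots while uncorrelated noise does not.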

Keywords: illumination, reverse time migration (RTM), ultrasound computed tomography (USCT), wavefield extrapolation

Procedia PDF Downloads 68
466 Thermal Properties and Water Vapor Permeability for Cellulose-Based Materials

Authors: Stanislavs Gendelis, Maris Sinka, Andris Jakovics

Abstract:

Insulation materials made from natural sources have become more popular with the ecologisation of buildings, which entails the wide use of renewable materials. Such natural materials replace synthetic products whose manufacture consumes a large quantity of energy. The most common and cheapest natural materials in Latvia are cellulose-based (wood and agricultural plants). The ecological aspects of such materials are well known, but experimental data on their physical properties remain lacking. In this study, six different samples of wood wool panels and a mixture of hemp shives and lime (hempcrete) are analysed. Thermal conductivity and heat capacity measurements were carried out for the wood wool and cement panels using a calibrated hot-plate device. Water vapor permeability was tested for the hempcrete using the gravimetric dry cup method. The studied wood wool panels are an eco-friendly, harmless material widely used in the interior design of public and residential buildings where noise absorption and sound insulation are important; they are also suitable for high-humidity facilities (e.g., swimming pools). The panels differed in the width of the wood wool used, which is linked to their density. The measured thermal conductivities span a wide range, worsening as the wool width increases (0.066 W/(m·K) for the least dense sample, 0.091 W/(m·K) for the densest). Comparison with mineral insulation materials shows that the thermal conductivity of these panels is 2-3 times higher and comparable to that of plywood and fibreboard. The measured heat capacity lay in a narrower range; here, the dependence on the wool width was not as strong, because heat capacity relates to mass, not volume. The resulting heat capacity is a combination of two main components.
A comparison of the results for the different panels makes it possible to select the most suitable sample for a specific application, because the thermal insulation and heat capacity properties do not depend on the wool width in the same way. Hempcrete is much denser than conventional thermal insulation materials; its use therefore helps to reinforce the structural capacity of the constructional framework while remaining lightweight. By altering the proportions of the ingredients, hempcrete can be produced as a structural, thermal, or moisture-absorbent component. Water absorption and water vapor permeability are the most important properties of these materials. Information about absorption can be found in the literature, but there are no data on water vapor transmission properties. Water vapor permeability was tested for a sample of locally made hempcrete at different air humidity values to evaluate a possible difference. The results show only a slight influence of air humidity on the water vapor permeability. The measured 'sd value' is similar to that of mineral wool and wood fiberboard, meaning that, due to very low resistance, water vapor passes easily through the material. At the same time, the other properties of hempcrete, structural and thermal, are entirely different. As a result, experimentally based knowledge of the thermal and water vapor transmission properties of cellulose-based materials was significantly improved.
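The 'sd value' reported above is conventionally the vapour-diffusion-equivalent air-layer thickness, i.e. the diffusion resistance factor μ multiplied by the material thickness; a minimal sketch (the μ and thickness values are illustrative assumptions, not the measured hempcrete data):

```python
def sd_value(mu, thickness_m):
    """Vapour-diffusion-equivalent air-layer thickness: sd = mu * d,
    in metres. mu is dimensionless (mu = 1 for still air)."""
    return mu * thickness_m

# illustrative values: mu ~ 5 for a hempcrete-like material, 0.1 m layer
print(sd_value(5.0, 0.1))  # 0.5 m: low resistance, vapour passes easily
```

A low sd value (well under ~0.5 m is often classed as vapour-open) is what makes the material behave like mineral wool or wood fiberboard with respect to vapour transport, as the abstract notes.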

Keywords: heat capacity, hemp concrete, thermal conductivity, water vapor transmission, wood wool

Procedia PDF Downloads 218
465 Synthesis and Characterization of LiCoO2 Cathode Material by Sol-Gel Method

Authors: Nur Azilina Abdul Aziz, Tuti Katrina Abdullah, Ahmad Azmin Mohamad

Abstract:

Lithium-transition-metal oxides, such as LiCoO2, LiMn2O4, LiFePO4, and LiNiO2, have been used as cathode materials in high-performance lithium-ion rechargeable batteries. Among these cathode materials, LiCoO2 has the potential to be widely used in lithium-ion batteries because of its layered crystalline structure, good capacity, high cell voltage, high specific energy density, high power rate, low self-discharge, and excellent cycle life. This cathode material has been widely used in commercial lithium-ion batteries due to its low irreversible capacity loss and good cycling performance. However, several problems interfere with the production of material with good electrochemical properties, including the crystallinity, the average particle size, and the particle size distribution. In recent years, the synthesis of nanoparticles has been intensively investigated. Powders prepared by the traditional solid-state reaction have a large particle size and broad size distribution; solution methods, on the other hand, can reduce the particle size to the nanometer range and control the particle size distribution. In this study, LiCoO2 was synthesized using the sol-gel method, with lithium acetate and cobalt acetate as reactants. Stoichiometric amounts of the reactants were dissolved in deionized water. The solutions were stirred for 30 hours using a magnetic stirrer, followed by heating at 80°C under vigorous stirring until a viscous gel formed. The as-formed gel was calcined at 700°C for 7 h in a room atmosphere. The structure and morphology of the LiCoO2 were characterized using X-ray diffraction and scanning electron microscopy. The diffraction pattern of the material can be indexed on the basis of the α-NaFeO2 structure. The clear splitting of the hexagonal doublets (006)/(102) and (108)/(110) in the pattern indicates that the material formed in a well-ordered hexagonal structure.
No impurity phase can be seen in this range, probably due to the homogeneous mixing of the cations in the precursor. Furthermore, the SEM micrograph of the LiCoO2 shows an almost uniform particle size distribution, with particle sizes between 0.3 and 0.5 microns. In conclusion, LiCoO2 powder was successfully synthesized using the sol-gel method. The LiCoO2 showed a hexagonal crystal structure, and the prepared sample clearly indicates a pure LiCoO2 phase. The morphology of the sample showed that the particle size and its distribution are almost uniform.

Keywords: cathode material, LiCoO2, lithium-ion rechargeable batteries, Sol-Gel method

Procedia PDF Downloads 366
464 Metaphysics of the Unified Field of the Universe

Authors: Santosh Kaware, Dnyandeo Patil, Moninder Modgil, Hemant Bhoir, Debendra Behera

Abstract:

The unified field theory has been an area of intensive research for many decades. This paper focuses on the philosophy and metaphysics of unified field theory at the Planck scale and its relationship with superstring theory and quantum vacuum dynamics physics. We examine the epistemology of questions such as: (1) What is the unified field of the universe? (2) Can it actually (a) permeate the complete universe, (b) be localized in bound regions of the universe, (c) extend into the extra dimensions, or (d) live only in extra dimensions? (3) What should be the emergent ontological properties of the unified field? (4) How does the universe manifest through its quantum vacuum energies? (5) How is the space-time metric coupled to the unified field? We present a number of ansatzes, which we outline below. It is proposed that the unified field possesses consciousness as well as a memory, a recording of past history, analogous to the 'consistent histories' interpretation of quantum mechanics. We propose a Planck-scale geometry of the unified field with a circle-like topology, having 32 energy points on its periphery connected to each other by 10-dimensional meta-strings, which are the sources from which the fundamental forces and particles of the universe manifest through its quantum vacuum energies. It is also proposed that the sub-energy levels of the 'conscious unified field' are used for the creation, preservation, and rejuvenation of the universe over time by means of negentropy. These epochs can apply to the complete universe or to localized regions such as galaxies or clusters of galaxies. It is proposed that the unified field operates through geometric patterns of its quantum vacuum energies, manifesting as various elementary particles by giving spin to zero-point energy elements. The epistemological relationship between unified field theory and superstring theories is examined.
The properties of 'consciousness' and 'memory' cascade from the universe into macroscopic objects, and further onto the elementary particles, via a fractal pattern. Other properties of the fundamental particles, such as mass, charge, spin, and isospin, also spill out of such a cascade. The manifestations of the unified field can reach into parallel universes, or the 'multiverse', and essentially have an existence independent of space-time. It is proposed that the mass, length, and time scales of the unified theory are smaller than even the Planck scale, operating at a level we call 'Super Quantum Gravity (SQG)'.

Keywords: super string theory, Planck scale geometry, negentropy, super quantum gravity

Procedia PDF Downloads 265