Search results for: sparse autoencoder
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 193


43 Consistent Testing for an Implication of Supermodular Dominance with an Application to Verifying the Effect of Geographic Knowledge Spillover

Authors: Chung Danbi, Linton Oliver, Whang Yoon-Jae

Abstract:

Supermodularity, or complementarity, is a popular concept in economics that can characterize many objective functions, such as utility, social welfare, and production functions. Supermodular dominance captures a preference for greater interdependence among the inputs of those functions and can be applied to examine which input set would produce higher expected utility, social welfare, or production. We therefore propose and justify a consistent test for a useful implication of supermodular dominance. We also conduct Monte Carlo simulations to explore the finite-sample performance of our test, with critical values obtained from the recentered bootstrap method, with and without selective recentering, and from the subsampling method. Under various parameter settings, we confirm that our test has reasonably good size and power. Finally, we apply our test to compare geographic and distant knowledge spillovers in terms of their effects on social welfare, using the National Bureau of Economic Research (NBER) patent data. We expect localized citing to supermodularly dominate distant citing if geographic knowledge spillover engenders greater social welfare than distant knowledge spillover. Taking subgroups based on firm and patent characteristics, we find industry-wise and patent subclass-wise differences in the pattern of supermodular dominance between localized and distant citing. We also compare results across different time periods to see whether the development of the Internet and communication technology has changed the pattern of dominance. In addition, to deal appropriately with the sparse nature of the data, we apply high-dimensional methods to efficiently select relevant data.
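The recentered bootstrap used above to obtain critical values can be illustrated with a minimal sketch. This is not the authors' procedure: the statistic shown is a generic one-sided mean test, and `g_values` is a hypothetical stand-in for whatever moment inequality encodes the supermodular-dominance implication.

```python
import numpy as np

def recentered_bootstrap_test(g_values, alpha=0.05, n_boot=999, seed=0):
    """One-sided test of H0: E[g(X)] <= 0 using recentered bootstrap
    critical values. `g_values` is an illustrative stand-in for the
    moment condition implied by supermodular dominance."""
    rng = np.random.default_rng(seed)
    g = np.asarray(g_values, dtype=float)
    n = g.size
    stat = np.sqrt(n) * g.mean()                 # observed test statistic
    idx = rng.integers(0, n, size=(n_boot, n))   # bootstrap resample indices
    # Recentering: subtract the sample mean so resamples mimic the null.
    boot = np.sqrt(n) * (g[idx].mean(axis=1) - g.mean())
    crit = np.quantile(boot, 1 - alpha)          # bootstrap critical value
    return stat, crit, stat > crit
```

Subsampling, the other method mentioned, would draw blocks smaller than n instead of full-size resamples; selective recentering would recenter only the binding moments.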

Keywords: supermodularity, supermodular dominance, stochastic dominance, Monte Carlo simulation, bootstrap, subsampling

42 Automatic Content Curation of Visual Heritage

Authors: Delphine Ribes Lemay, Valentine Bernasconi, André Andrade, Lara Défayes, Mathieu Salzmann, Frédéric Kaplan, Nicolas Henchoz

Abstract:

Digitization and preservation of large heritage collections entail high maintenance costs to keep up with technical standards and ensure sustainable access. Creating impactful usage is instrumental to justifying the resources for long-term preservation. The Museum für Gestaltung of Zurich holds one of the world's largest poster collections, of which 52,000 posters have been digitized. In the process of building a digital installation to valorize the collection, one objective was to develop an algorithm capable of predicting the next poster to show according to the ones already displayed. The work presented here describes the steps to build an algorithm able to automatically create sequences of posters reflecting associations performed by curators and professional designers. This challenge has similarities with the domain of song-playlist algorithms. Recently, artificial intelligence techniques, and more specifically deep-learning algorithms, have been used to facilitate their generation. Promising results were obtained with Recurrent Neural Networks (RNNs) trained on manually generated playlists and paired with clusters of features extracted from songs. We applied the same principles to a more challenging medium: posters. First, a convolutional autoencoder was trained to extract poster features, using the 52,000 digital posters as the training set. The poster features were then clustered. Next, an RNN learned to predict the next cluster from the previous ones; its training set was composed of poster sequences extracted from a collection of books from the Museum für Gestaltung of Zurich dedicated to displaying posters. Finally, within the predicted cluster, the poster whose features have the smallest mean squared distance to those of the previous poster is selected.
To validate the predictive model, we compared sequences of 15 posters produced by our model to randomly and manually generated sequences. The manual sequences were created by a professional graphic designer. We asked 21 participants working as professional graphic designers to sort the sequences from the strongest graphic line to the weakest and to motivate their answer with a short description. The sequences produced by the designer were ranked first 60%, second 25%, and third 15% of the time. The sequences produced by our predictive model were ranked first 25%, second 45%, and third 30% of the time. The randomly produced sequences were ranked first 15%, second 29%, and third 55% of the time. Compared to the designer sequences, and as reported by participants, the model and random sequences lacked thematic continuity. According to these results, the proposed model generates better poster sequencing than random sampling, and occasionally even outperforms a professional designer. As a next step, the algorithm should offer the possibility to create sequences according to a selected theme. To conclude, this work shows the potential of artificial intelligence techniques to learn from existing content and to provide a tool for curating large sets of data, with a permanent renewal of the presented content.
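The final selection step, choosing the poster within the RNN-predicted cluster that is closest to the previous poster by mean squared distance, can be sketched as follows; the array names and tiny feature dimensions are illustrative stand-ins for the autoencoder embeddings.

```python
import numpy as np

def next_poster(prev_features, candidate_features, candidate_clusters, predicted_cluster):
    """Among posters assigned to the RNN-predicted cluster, return the index
    of the one whose features are closest (mean squared distance) to the
    previously displayed poster's features."""
    idxs = np.flatnonzero(candidate_clusters == predicted_cluster)  # posters in cluster
    dists = ((candidate_features[idxs] - prev_features) ** 2).mean(axis=1)
    return idxs[np.argmin(dists)]
```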

Keywords: Artificial Intelligence, Digital Humanities, serendipity, design research

41 Household Food Security and Poverty Reduction in Cameroon

Authors: Bougema Theodore Ntenkeh, Chi-bikom Barbara Kyien

Abstract:

The reduction of poverty and hunger sits at the heart of the United Nations 2030 Agenda for Sustainable Development; ending poverty and hunger are the first two Sustainable Development Goals. World Food Day, celebrated on 16 October every year, highlights the need for people to have physical and economic access at all times to enough nutritious and safe food to live a healthy and active life, while World Poverty Day, celebrated on 17 October, is an opportunity to acknowledge the struggle of people living in poverty, a chance for them to make their concerns heard, and for the community to recognize and support poor people in their fight against poverty. Evidence on the association between household food security and poverty reduction in Cameroon is not only sparse but mostly qualitative. This paper therefore investigates the effect of household food security on poverty reduction in Cameroon quantitatively, using data from the Cameroon Household Consumption Survey collected by the Government Statistics Office. Household food security is measured from five indicators combined through Multiple Correspondence Analysis, and poverty is captured as a dummy variable. Using a control function technique, with pre- and post-estimation tests for robustness, the study finds that household food security has a positive and significant effect on poverty reduction in Cameroon. A unit increase in the food security score reduces the probability of the household being poor by 31.8%, an effect that is statistically significant at the 1% level. The results further show that the age of the household head and household size increase household poverty, while households residing in urban areas are significantly less poor. The paper therefore recommends that households diversify their food intake to enhance an effective supply of labour in the job market as a strategy to reduce household poverty.
Furthermore, family planning should be encouraged as a strategy to reduce the birth rate and allow a more equitable distribution of household resources, including food. The government of Cameroon should also develop the rural areas, given that trends in urbanization are associated with the concentration of productive economic activities, leading to increased household income, increased household food security, and poverty reduction.

Keywords: food security, poverty reduction, SDGs, Cameroon

40 Effectiveness of High-Intensity Interval Training in Overweight Individuals between 25-45 Years of Age Registered in Sports Medicine Clinic, General Hospital Kalutara

Authors: Dimuthu Manage

Abstract:

Introduction: The prevalence of obesity and obesity-related non-communicable diseases is becoming a massive health concern worldwide. Physical activity is recognized as an effective solution to this problem. Published data on the effectiveness of High-Intensity Interval Training (HIIT) in improving health parameters in overweight and obese individuals in Sri Lanka are sparse; hence, this study was conducted. Methodology: This quasi-experimental study was conducted at the Sports Medicine Clinic, General Hospital, Kalutara. Participants engaged in a HIIT programme three times per week for six weeks. Data collection was based on precise measurements using structured and validated methods. Ethical clearance was obtained. Results: Forty-eight participants registered for the study, and only 52% completed it. The mean age was 32 (SD = 6.397) years, with 64% males. All the assessed anthropometric measurements (waist circumference (P<0.001), weight (P<0.001), and BMI (P<0.001)), body fat percentage (P<0.001), VO2 max (P<0.001), and lipid profile (HDL (P=0.016), LDL (P<0.001), cholesterol (P<0.001), triglycerides (P<0.010), and LDL:HDL (P<0.001)) showed statistically significant improvement after the HIIT intervention. Conclusions: This study confirms HIIT as a time-saving and effective exercise method that helps prevent obesity as well as non-communicable diseases. HIIT markedly improves body anthropometry, fat percentage, cardiopulmonary status, and lipid profile in overweight and obese individuals. As with the majority of studies, the design of the current study is subject to some limitations. First, the study lacked a comparison group; comparing HIIT with other training programmes would have given the findings more validity.
Although validated tools were used to measure the variables, and the same tools were used pre- and post-exercise with the available facilities, it would have been better to measure some of them using gold-standard methods. However, this evidence should be further assessed in larger-scale trials using comparison groups to generalize the efficacy of the HIIT exercise programme.
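The pre/post p-values reported above imply paired comparisons on the same participants; a minimal sketch of such a paired t-test, using hypothetical BMI values (not the study's data):

```python
import numpy as np

# Hypothetical pre/post BMI values for six participants (illustration only).
pre  = np.array([31.2, 29.8, 33.5, 30.1, 28.9, 32.4])
post = np.array([30.1, 29.0, 32.2, 29.5, 28.1, 31.0])

d = pre - post                                     # paired differences
t = d.mean() / (d.std(ddof=1) / np.sqrt(d.size))   # paired t statistic
# Compare |t| to the two-sided 5% critical value for df = 5 (about 2.571).
significant = abs(t) > 2.571
```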

Keywords: HIIT, lipid profile, BMI, VO2 max

39 Physical Activity and Mental Health: A Cross-Sectional Investigation into the Relationship of Specific Physical Activity Domains and Mental Well-Being

Authors: Katja Siefken, Astrid Junge

Abstract:

Background: Research indicates that physical activity (PA) protects against developing mental disorders, but knowledge regarding the optimal domain, intensity, type, context, and amount of PA to promote for the prevention of mental disorders is sparse and incoherent. The objective of this study is to determine the relationship between PA domains and mental well-being, and whether associations vary by domain, amount, context, intensity, and type of PA. Methods: 310 individuals (age: 25 yrs., SD 7; 73% female) completed a questionnaire on their personal PA behaviour patterns (IPQA) and their mental health (Center for Epidemiologic Studies Depression Scale (CES-D), Generalized Anxiety Disorder scale (GAD-7), and subjective physical well-being (FEW-16)). Linear and multiple regression analyses were used. Findings: Individuals who met the PA recommendation (N=269) reported higher scores on subjective physical well-being than those who did not (N=41). Whilst vigorous-intensity PA predicts subjective well-being (β = .122, p = .028), it also correlates with depression: the more vigorously physically active a person is, the higher the depression score (β = .127, p = .026). The strongest impact of PA on mental well-being is seen in the transport domain, with a positive linear association with subjective physical well-being (β = .175, p = .002) and negative linear associations with anxiety (β = -.142, p = .011) and depression (β = -.164, p = .004). Multiple regression analysis indicates similar results: time spent in active transport on the bicycle significantly lowers anxiety and depression scores and enhances subjective physical well-being. The more time a participant spends using the bicycle for transport, the lower the depression (β = -.143, p = .013) and anxiety scores (β = -.111, p = .050). Conclusions: Meeting the PA recommendations enhances subjective physical well-being, and active transport has a substantial impact on mental well-being.
These findings have implications for policymakers, employers, public health experts, and civil society. A stronger focus on promoting and protecting health through active transport is recommended. Inter-sectoral exchange beyond the health sector is required: health systems must engage other sectors in adopting policies that maximize possible health gains.
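The standardized coefficients (β) reported in this abstract come from regression on standardized variables; a minimal numpy sketch (not the authors' code) of how such betas can be computed:

```python
import numpy as np

def standardized_betas(X, y):
    """Standardized regression coefficients (betas): z-score the predictors
    and the outcome, then fit ordinary least squares with an intercept."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    yz = (y - y.mean()) / y.std(ddof=1)
    A = np.column_stack([np.ones(len(yz)), Xz])    # intercept + standardized predictors
    coef, *_ = np.linalg.lstsq(A, yz, rcond=None)
    return coef[1:]                                 # drop the intercept
```

With a single predictor, the standardized beta equals the Pearson correlation, which is why the abstract's βs can be read as effect sizes on a common scale.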

Keywords: active transport, mental well-being, health promotion, psychological disorders

38 Streamflow Modeling Using the PyTOPKAPI Model with Remotely Sensed Rainfall Data: A Case Study of Gilgel Ghibe Catchment, Ethiopia

Authors: Zeinu Ahmed Rabba, Derek D Stretch

Abstract:

Remote sensing contributes valuable information to streamflow estimates. Usually, streamflow is measured directly through ground-based hydrological monitoring stations. However, in many developing countries like Ethiopia, ground-based hydrological monitoring networks are sparse or nonexistent, which limits water resources management and hampers early flood-warning systems. In such cases, satellite remote sensing is an alternative means of acquiring such information. This paper discusses the application of remotely sensed rainfall data to streamflow modeling in the Gilgel Ghibe basin in Ethiopia. Ten years (2001-2010) of two satellite-based precipitation products (SBPPs), TRMM and WaterBase, were used. These products were combined with the PyTOPKAPI hydrological model to generate daily streamflows. The results were compared with streamflow observations at the Gilgel Ghibe Nr. Assendabo gauging station using four statistical measures (bias, R², Nash-Sutcliffe efficiency (NS), and RMSE). The statistical analysis indicates that the bias-adjusted SBPPs agree well with gauged rainfall compared to the bias-unadjusted ones. The SBPPs without bias adjustment tend to overestimate (high bias and high RMSE) extreme precipitation events and the corresponding simulated streamflow outputs, particularly during the wet months (June-September), and to underestimate the streamflow prediction over a few dry months (January and February). This shows that bias adjustment can be important for improving the performance of SBPPs in streamflow forecasting. We further conclude that the general streamflow patterns were well captured at daily time scales when using SBPPs after bias adjustment. However, the overall results demonstrate that streamflow simulated using gauged rainfall is superior to that obtained from remotely sensed rainfall products, including bias-adjusted ones.
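The four evaluation measures can be computed as below; the formulas follow common hydrological usage and are a sketch, not necessarily the authors' exact definitions.

```python
import numpy as np

def streamflow_skill(obs, sim):
    """Goodness-of-fit measures for simulated vs. observed streamflow:
    mean bias, coefficient of determination (R2), Nash-Sutcliffe
    efficiency (NS), and root mean square error (RMSE)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    bias = (sim - obs).mean()
    rmse = np.sqrt(((sim - obs) ** 2).mean())
    # NS = 1 means a perfect fit; NS <= 0 means no better than the mean of obs.
    ns = 1 - ((obs - sim) ** 2).sum() / ((obs - obs.mean()) ** 2).sum()
    r2 = np.corrcoef(obs, sim)[0, 1] ** 2
    return bias, r2, ns, rmse
```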

Keywords: Ethiopia, PyTOPKAPI model, remote sensing, streamflow, Tropical Rainfall Measuring Mission (TRMM), waterBase

37 Determinants of Quality of Life Among Refugees Aging Out of Place

Authors: Jonix Owino

Abstract:

Aging out of place refers to the physical and emotional experience of growing older in a foreign or unfamiliar environment. Refugees flee their home countries and migrate to foreign countries such as the United States for safety. The emotional and psychological distress experienced by refugees who are compelled to leave their home countries can compromise their ability to adapt to new countries, thereby affecting their well-being. In particular, the implications of immigration may be felt more acutely in later life stages, especially when life-long attachments have been made in the country of origin. However, aging studies in the United States have failed to conceptualize refugee aging experiences, especially for refugees who entered the country as adults. Specifically, little is known about quality of life among aging refugees: studies on whether quality of life varies among refugees by sociodemographic factors are limited, and studies examining the role of social connectedness in aging refugees' quality of life are sparse. The present study therefore investigates the sociodemographic factors (age, sex, country of origin, and length of residence) and social connection factors associated with quality of life among aging refugees. The study included 108 participants aged 50 years and above. The refugees represented in the study were from Bhutan, Burundi, and Somalia and were recruited from an upper Midwestern region of the United States. Participants completed an in-depth survey assessing social factors and well-being, and hierarchical regression was used for analysis. The results showed that females, older individuals, and refugees from Africa reported lower quality of life; length of residence was not associated with quality of life.
Furthermore, when controlling for sociodemographic factors, greater social integration was significantly associated with higher quality of life, and lower loneliness was likewise significantly associated with higher quality of life. The results also indicated a significant interaction between loneliness and sex in predicting quality of life: greater loneliness was associated with reduced quality of life for female refugees but not for males. The present study highlights cultural variations within refugee groups, which is important for determining how host communities can best support aging refugees' well-being and develop social programs that effectively address issues of aging among refugees.

Keywords: aging refugees, quality of life, social integration, migration and integration

36 Characterization of Complex Gold Ores for Preliminary Process Selection: The Case of Kapanda, Ibindi, Mawemeru, and Itumbi in Tanzania

Authors: Sospeter P. Maganga, Alphonce Wikedzi, Mussa D. Budeba, Samwel V. Manyele

Abstract:

This study characterizes complex gold ores (elemental and mineralogical composition, gold distribution, ore grindability, and mineral liberation) for preliminary process selection. About 200 kg of ore samples were collected from each location using systematic sampling by mass interval. The ores were dried, crushed, milled, and split into representative sub-samples (about 1 kg) for elemental and mineralogical composition analyses using X-ray fluorescence (XRF), fire assay with an atomic absorption spectrometry (AAS) finish, and X-ray diffraction (XRD), respectively. Gold distribution was studied on size-by-size fractions, while ore grindability was determined using the standard Bond test. Mineral liberation analysis was conducted using a ThermoFisher Scientific Mineral Liberation Analyzer (MLA) 650, with unsieved polished grain mounts (80% passing 700 µm) used as MLA feed. Two MLA measurement modes, X-ray modal analysis (XMOD) and sparse phase liberation-grain X-ray mapping analysis (SPL-GXMAP), were employed. At least two cyanide consumers (Cu, Fe, Pb, and Zn) and kinetics impeders (Mn, S, As, and Bi) were present in all locations investigated. Copper content at Kapanda (0.77% Cu) and Ibindi (7.48% Cu) exceeded the recommended threshold of 0.5% Cu for direct cyanidation. The gold ore at Ibindi ground faster than ores from the other locations, consistent with its highest grindability (2.119 g/rev) and lowest Bond work index (10.213 kWh/t). Pyrite-marcasite, chalcopyrite, galena, and siderite were identified as the major gold-, copper-, lead-, and iron-bearing minerals, respectively, with potential for economic extraction. However, only gold and copper can be recovered under conventional milling because of grain size issues (galena is exposed by only 10%) and process complexity (it is difficult to concentrate and smelt iron from siderite).
Therefore, the preliminary process selection is copper flotation followed by gold cyanidation for Kapanda and Ibindi ores, whereas gold cyanidation with additives such as glycine or ammonia is selected for Mawemeru and Itumbi ores because of low concentrations of Cu, Pb, Fe, and Zn minerals.
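The Bond work index cited above comes from the standard Bond ball-mill test; a sketch of the widely used empirical formula (a textbook form, not necessarily the authors' exact calculation, and the screen and size values in the example are purely illustrative):

```python
def bond_work_index(P1_um, Gbp, F80_um, P80_um):
    """Bond ball mill work index from standard test results (sketch of the
    common empirical formula). P1_um: closing screen aperture (microns);
    Gbp: net grams of undersize per mill revolution; F80_um / P80_um:
    80%-passing sizes of feed and product (microns).
    Returns kWh per metric tonne (1.102 converts from kWh/short ton)."""
    wi_short_ton = 44.5 / (P1_um ** 0.23 * Gbp ** 0.82 *
                           (10 / P80_um ** 0.5 - 10 / F80_um ** 0.5))
    return 1.102 * wi_short_ton
```

With Ibindi's grindability of 2.119 g/rev and illustrative screen/size values, the formula lands in the same range as the reported 10.213 kWh/t, reflecting the inverse relation between grindability and work index noted in the abstract.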

Keywords: complex gold ores, mineral liberation, ore characterization, ore grindability

35 International Retirement Migration of Westerners to Thailand: Well-Being and Future Migration Plans

Authors: Kanokwan Tangchitnusorn, Patcharawalai Wongboonsin

Abstract:

Following the 'Golden Age of Welfare', which brought post-war prosperity to European citizens in the 1950s, the world has witnessed increasing mobility across borders of older citizens of First World countries. In the 1990s, the international retirement migration (IRM) of older persons became a prominent trend whose explanation requires the integration of several fields of knowledge: migration studies, tourism studies, and social gerontology. However, while IRM to developed destinations in Europe (e.g., Spain, Malta, Portugal, Italy) and to developing countries like Mexico, Panama, and Morocco has been widely studied in recent decades owing to its large migration volume, the study of IRM to remoter destinations has been far sparser and more incomplete. Developing countries in Southeast Asia have noticed increasing numbers of retired expatriates, particularly Thailand, where the number of foreigners applying for a retirement visa increased from 10,709 in 2005 to 60,046 in 2014. Additionally, the majority of Thailand's retirement-visa applicants were Westerners: citizens of the United Kingdom, the United States, Germany, and the Nordic countries, respectively. As this trend has become popular in Thailand only in recent decades, little is known about the IRM populations, their well-being, and their future migration plans. This study aimed to examine the subjective well-being, or self-evaluations of one's own well-being, among Western retirees in Thailand, as well as their future migration plans, i.e., whether they plan to stay for life or otherwise. The author employed a mixed method to obtain both quantitative and qualitative data during October 2015 - May 2016, including 330 self-administered questionnaires (246 online and 84 hard-copy responses) and 21 in-depth interviews of Western residents in Nan (2), Pattaya (4), and Chiang Mai (15).
Drawing on the integration of previous subjective well-being measurements (the Personal Wellbeing Index (PWI), the Global AgeWatch Index, and the OECD guideline on measuring subjective well-being), this study measures the subjective well-being of Western retirees in Thailand in seven dimensions: standard of living, health status, personal relationships, social connections, environmental quality, personal security, and local infrastructure.

Keywords: international retirement migration, ageing, mobility, wellbeing, Western, Thailand

34 Towards Real-Time Classification of Finger Movement Direction Using Encephalography Independent Components

Authors: Mohamed Mounir Tellache, Hiroyuki Kambara, Yasuharu Koike, Makoto Miyakoshi, Natsue Yoshimura

Abstract:

This study explores the practicality of using electroencephalographic (EEG) independent components to predict eight-direction finger movements in pseudo-real time. Six healthy participants with individual head MRI images performed finger movements in eight directions with two different arm configurations. The analysis was performed in two stages. The first stage used independent component analysis (ICA) to separate the signals representing brain activity from non-brain-activity signals and to obtain the unmixing matrix. The resulting independent components (ICs) were inspected, and those reflecting brain activity were selected. The time series of the selected ICs were then used to predict the eight finger-movement directions using sparse logistic regression (SLR). The second stage used the previously obtained unmixing matrix, the selected ICs, and the model obtained by applying SLR to classify a different EEG dataset. This method was applied in two settings: the single-participant level and the group level. At the single-participant level, the EEG datasets used in the first and second stages originated from the same participant. At the group level, the EEG datasets used in the first stage were constructed by temporally concatenating each combination without repetition of the EEG datasets of five participants out of six, whereas the EEG dataset used in the second stage originated from the remaining participant. The average test classification results across datasets (mean ± S.D.) were 38.62 ± 8.36% at the single-participant level, significantly higher than the chance level (12.50 ± 0.01%), and 27.26 ± 4.39% at the group level, also significantly higher than the chance level (12.49 ± 0.01%).
The classification accuracy within [-45°, 45°] of the true direction was 70.03 ± 8.14% at the single-participant level and 62.63 ± 6.07% at the group level, which may be promising for some real-life applications. Clustering and contribution analyses further revealed the brain regions involved in finger movement and the temporal aspect of their contribution to the classification. These results show the possibility of using the ICA-based method in combination with other methods to build a real-time system to control prostheses.
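Stage two, reusing the stage-one unmixing matrix and the trained SLR model on new EEG data, can be sketched as follows; all matrices here are illustrative stand-ins for the fitted objects, not the study's data or exact feature construction.

```python
import numpy as np

def apply_pretrained_pipeline(eeg, unmixing, selected_ics, weights, bias):
    """Pseudo-real-time classification sketch: project new EEG (channels x
    samples) through a previously learned ICA unmixing matrix, keep only the
    brain-activity ICs selected in stage one, and score the flattened IC time
    series with a pre-trained linear (sparse logistic regression) model."""
    ics = unmixing @ eeg                  # channels x samples -> ICs x samples
    feats = ics[selected_ics].ravel()     # selected IC time series as a feature vector
    scores = weights @ feats + bias       # one score per movement direction (8 classes)
    return int(np.argmax(scores))         # predicted direction index
```

Because the unmixing matrix and classifier are fixed in advance, stage two reduces to two matrix products per window, which is what makes the pseudo-real-time setting feasible.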

Keywords: brain-computer interface, electroencephalography, finger motion decoding, independent component analysis, pseudo real-time motion decoding

33 Comparing Stability Index MAPping (SINMAP) Landslide Susceptibility Models in the Río La Carbonera, Southeast Flank of Pico de Orizaba Volcano, Mexico

Authors: Gabriel Legorreta Paulin, Marcus I. Bursik, Lilia Arana Salinas, Fernando Aceves Quesada

Abstract:

In volcanic environments, landslides and debris flows occur continually along the stream systems of large stratovolcanoes. This is the case on Pico de Orizaba volcano, the highest mountain in Mexico. The volcano has great potential to impact and damage human settlements and economic activities through landslides. People living along the lower valleys of Pico de Orizaba volcano are continuously exposed to hazard from the coalescence of upstream landslide sediments, which increases the destructive power of debris flows. These debris flows not only produce floods but also cause the loss of lives and property. Despite the importance of assessing such processes, there are few landslide inventory maps and landslide susceptibility assessments; as a result, no assessment of landslide susceptibility models has been conducted in Mexico to evaluate their advantages and disadvantages. In this study, a comprehensive assessment of landslide susceptibility models using GIS technology is carried out on the SE flank of Pico de Orizaba volcano. A detailed multi-temporal landslide inventory map of the watershed is used as the framework for the quantitative comparison of two landslide susceptibility maps, both created with the Stability Index MAPping (SINMAP) model: 1) using default geotechnical parameters, and 2) using geotechnical properties of volcanic soils obtained in the field. SINMAP combines the factor of safety derived from the infinite-slope stability model with the theory of a hydrologic model to produce the susceptibility map. It has been claimed that SINMAP analysis is reasonably successful in defining areas that intuitively appear susceptible to landsliding in regions with sparse information. The resulting susceptibility maps are validated by comparing them with the inventory map under the LOGISNET system, which provides comparison tools based on a histogram and a contingency table.
The results establish how the individual models predict landslide locations, along with their advantages and limitations. They also show that although the model tends to improve with the use of calibrated field data, the landslide susceptibility map does not perfectly represent existing landslides.
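The factor of safety that SINMAP combines with its steady-state hydrologic model follows the infinite-slope form; the sketch below is after the published SINMAP formulation, with illustrative (not calibrated) parameter defaults rather than the field values used in this study.

```python
import math

def sinmap_fs(theta_deg, a, C=0.25, phi_deg=35.0, RT=0.001, r=0.5):
    """Infinite-slope factor of safety with topographically driven wetness,
    as used in SINMAP (sketch). theta_deg: slope angle; a: specific
    catchment area (m); C: dimensionless combined cohesion; phi_deg: soil
    friction angle; RT: recharge/transmissivity ratio (1/m); r: water-to-
    soil density ratio. FS < 1 indicates predicted instability."""
    theta = math.radians(theta_deg)
    w = min(RT * a / math.sin(theta), 1.0)   # relative wetness, capped at saturation
    fs = (C + math.cos(theta) * (1 - w * r) *
          math.tan(math.radians(phi_deg))) / math.sin(theta)
    return fs
```

The two maps compared in the study differ only in the parameters fed to this kind of relation (defaults vs. field-measured soil properties), which is why calibration changes the susceptibility classes without changing the model structure.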

Keywords: GIS, landslide, modeling, LOGISNET, SINMAP

32 Advancements in AI Training and Education for a Future-Ready Healthcare System

Authors: Shamie Kumar

Abstract:

Background: Radiologists and radiographers (RR) need to educate themselves and their colleagues to ensure that AI is integrated safely, usefully, and meaningfully, in a direction that always benefits patients. AI education and training are fundamental to the way RR work and interact with AI, so that they feel confident using it as part of their clinical practice and understand it. Methodology: This exploratory research outlines the current education and training gaps for radiographers and radiologists in AI radiology diagnostics. It reviews the status of, skills required for, and challenges of education and teaching; the use of artificial intelligence within daily clinical practice; why such training is fundamental; and why learning about AI is essential for wider adoption. Results: Current knowledge among RR is very sparse and country-dependent, and with radiologists being the majority of end-users of AI, their targeted training and learning opportunities surpass those available to radiographers. Many papers suggest a lack of knowledge, understanding, and training in AI in radiology amongst RR; because of this, they cannot fully comprehend how AI works and integrates, the benefits of using it, or its limitations. There are indications that they wish to receive specific training; however, both professions need to engage actively in learning about AI and to develop the skills that enable them to use it effectively. Variability in the degree of commitment to AI is expected across the professions, as most do not understand its value; this only adds to the need to train and educate RR. Currently, there is little AI teaching in either undergraduate or postgraduate study programs, and it is not readily available.
In addition, other training programs, courses, workshops, and seminars are available, but most are short, single sessions rather than a continuation of learning, covering only a basic understanding of AI and peripheral topics such as ethics, legal issues, and the potential of AI. There appears to be an obvious gap between the content such training programs offer and what RR need and want to learn. As a result, there is a risk of ineffective learning outcomes and of attendees feeling a lack of clarity and depth of understanding of the practicalities of using AI in a clinical environment. Conclusion: Education, training, and courses need defined learning outcomes with relevant concepts, ensuring that theory and practice are taught as a continuation of the learning process, based on use cases specific to a clinical working environment. Undergraduate and postgraduate courses should be developed robustly and delivered by experts in the field; in addition, training and other programs should be delivered as continued professional development and aligned with accredited institutions for a degree of quality assurance.

Keywords: artificial intelligence, training, radiology, education, learning

Procedia PDF Downloads 53
31 Facile Wick and Oil Flame Synthesis of High-Quality Hydrophilic Carbon Nano Onions for Flexible Binder-Free Supercapacitor

Authors: Debananda Mohapatra, Subramanya Badrayyana, Smrutiranjan Parida

Abstract:

Carbon nano-onions (CNOs), spherical graphitic nanostructures composed of concentric shells of graphitic carbon, can be regarded as an intermediate state between fullerenes and graphite. As important members of the fullerene family, also known as multi-shelled fullerenes, they can be envisioned as promising supercapacitor electrodes with high energy and power density, since their curvature provides ions easy access to the electrode-electrolyte interface. Reports on CNOs as electrodes are still sparse despite their excellent electrochemical performance record, owing to their limited availability and the lack of convenient methods for high-yield preparation and purification. Keeping these pressing issues in mind, we present a facile, scalable, and straightforward flame-synthesis method for pure and highly dispersible CNOs uncontaminated by other forms of carbon; hence, a post-processing purification step is not necessary. To the best of our knowledge, this is the first time an extremely simple, lightweight, inexpensive, flexible, free-standing pristine CNO electrode has been developed without any binder element. A locally available, everyday cotton wipe was used to fabricate this electrode by a 'dipping and drying' process, providing outstanding stretchability and mechanical flexibility with strong adhesion between the CNOs and the porous wipe. The specific capacitance of 102 F/g, energy density of 3.5 Wh/kg, and power density of 1224 W/kg at a 20 mV/s scan rate are the highest values recorded and reported so far in a symmetrical two-electrode cell configuration with 1 M Na2SO4 electrolyte, indicating good synthesis conditions and an optimum pore size in agreement with the electrolyte ion size. This free-standing CNO electrode also showed excellent cyclic performance and stability, retaining 95% of its original capacity after 5000 charge-discharge cycles.
Furthermore, this unique method not only affords a binder-free, free-standing electrode but also provides a general way of fabricating such multifunctional, promising CNO-based nanocomposites for potential device applications in flexible solar cells and lithium-ion batteries.

Keywords: binder-free, flame synthesis, flexible, carbon nano onion

Procedia PDF Downloads 171
30 Data Clustering Algorithm Based on Multi-Objective Periodic Bacterial Foraging Optimization with Two Learning Archives

Authors: Chen Guo, Heng Tang, Ben Niu

Abstract:

Clustering splits objects into different groups based on similarity, making the objects have higher similarity within the same group and lower similarity across different groups. Thus, clustering can be treated as an optimization problem that maximizes intra-cluster similarity or inter-cluster dissimilarity. In real-world applications, datasets often have complex characteristics: sparsity, overlap, high dimensionality, etc. When facing such datasets, simultaneously optimizing two or more objectives can obtain better clustering results than optimizing a single objective. However, apart from objective-weighting methods, traditional clustering approaches have difficulty solving multi-objective data clustering problems. For this reason, researchers have investigated evolutionary multi-objective optimization algorithms for optimizing multiple clustering objectives. In this paper, the Data Clustering algorithm based on Multi-objective Periodic Bacterial Foraging Optimization with two Learning Archives (DC-MPBFOLA) is proposed. Specifically, first, to reduce the high computational complexity of the original BFO, periodic BFO is employed as the basic algorithmic framework and is then extended to a multi-objective form. Second, two learning strategies based on the two learning archives are proposed to guide the bacterial swarm in a better direction. On the one hand, the global best is selected from the global learning archive according to a convergence index and a diversity index. On the other hand, the personal best is selected from the personal learning archive according to the sum of weighted objectives. Based on these learning strategies, a chemotaxis operation is designed. Third, an elite learning strategy is designed to provide fresh impetus to the objects in the two learning archives.
When the objects in these two archives do not change for two consecutive iterations, randomly re-initializing one dimension of the objects prevents the proposed algorithm from falling into local optima. Fourth, to validate the performance of the proposed algorithm, DC-MPBFOLA is compared with four state-of-the-art evolutionary multi-objective optimization algorithms and one classical clustering algorithm on the evaluation indexes of several datasets. To further verify the effectiveness and feasibility of the designed strategies in DC-MPBFOLA, variants of DC-MPBFOLA are also proposed. Experimental results demonstrate that DC-MPBFOLA outperforms its competitors on all evaluation indexes and clustering partitions. These results also indicate that the designed strategies positively influence the performance improvement of the original BFO.
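As a concrete illustration of treating clustering as a two-objective problem, the following sketch computes one compactness objective (to minimize) and one separation objective (to maximize) for a candidate partition. The formulation and data are hypothetical, a minimal stand-in rather than the exact indexes used in DC-MPBFOLA:

```python
import numpy as np

def clustering_objectives(X, labels, centers):
    """Two illustrative clustering objectives for a candidate partition:
    compactness (minimize) and inter-center separation (maximize)."""
    # Intra-cluster compactness: mean distance of points to their own center
    compactness = np.mean(np.linalg.norm(X - centers[labels], axis=1))
    # Inter-cluster separation: smallest pairwise distance between centers
    diffs = centers[:, None, :] - centers[None, :, :]
    dists = np.linalg.norm(diffs, axis=2)
    separation = dists[np.triu_indices(len(centers), k=1)].min()
    return compactness, separation

# Toy data: two tight, well-separated clusters
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels = np.array([0, 0, 1, 1])
centers = np.array([[0.05, 0.0], [5.05, 5.0]])
comp, sep = clustering_objectives(X, labels, centers)
```

A multi-objective optimizer such as DC-MPBFOLA would search over partitions for a Pareto front trading these two quantities against each other.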

Keywords: data clustering, multi-objective optimization, bacterial foraging optimization, learning archives

Procedia PDF Downloads 110
29 Predictors of Quality of Life among Older Refugees Aging out of Place

Authors: Jonix Owino, Heather Fuller

Abstract:

Refugees flee from their home countries due to civil unrest, war, or persecution and migrate to Western countries such as the United States in search of a safe haven. Transitioning into a new society and culture can be challenging, affecting refugees' quality of life and well-being in the host communities. Moreover, as individuals age, they experience physical, cognitive, and socioemotional changes that may impact their quality of life. However, little is known about the predictors of quality of life among aging refugees. It is not clear how quality of life varies by age, that is, between midlife refugees and their older counterparts. In addition to age, other sociodemographic factors such as gender, socioeconomic status, or country of origin are likely to have differential associations with quality of life, yet research on such variations among older refugees is sparse. Thus, the present study seeks to explore factors associated with quality of life by asking the following research questions: 1) Do sociodemographic factors (such as age and gender) predict quality of life among older refugees? 2) Is there an association between social integration and quality of life? 3) Is there an association between migratory experiences (such as post-migratory adjustments) and quality of life? The present study recruited 90 refugees (primarily originating from Bhutan, Somalia, Burundi, and Sudan) aged 50 or older living in the US. The participants completed a structured questionnaire which assessed their sociodemographic attributes (e.g., age, gender, length of residence in the US, country of origin, employment, level of education, and marital status) and validated measures of social integration, post-migration living difficulties, and quality of life. Preliminary results suggest sociodemographic variability in quality of life among these refugees.
Further analyses will use hierarchical regression to address the following hypotheses: first, it is hypothesized that quality of life will vary by age and gender, such that younger refugees and men will report higher quality of life. Second, it is expected that refugees with greater levels of social integration will also report better quality of life. Finally, post-migration factors such as language barriers and family stress are hypothesized to predict poorer quality of life. Further results, including potential moderating effects of age and gender, will be analyzed, and the resulting findings interpreted and discussed. The findings from this study have potential implications for how communities can better support older refugees and develop social programs that effectively cater to their well-being. Conclusions will be drawn and discussed in light of policies related to both aging and refugee migration within the context of the US.
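Hierarchical regression, as planned here, enters predictor blocks in stages and reads off the increment in explained variance at each stage. A minimal sketch on synthetic data follows; the variable names, effect sizes, and data are illustrative only, not drawn from the study:

```python
import numpy as np

def r_squared(X, y):
    """OLS fit via least squares; returns the coefficient of determination."""
    X1 = np.column_stack([np.ones(len(y)), X])  # add intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1 - ss_res / ss_tot

rng = np.random.default_rng(0)
n = 200
age = rng.uniform(50, 80, n)
integration = rng.uniform(0, 10, n)
# Synthetic quality-of-life outcome (made-up coefficients)
qol = 0.5 * integration - 0.05 * age + rng.normal(0, 1, n)

# Step 1: demographics only; Step 2: add social integration
r2_step1 = r_squared(age[:, None], qol)
r2_step2 = r_squared(np.column_stack([age, integration]), qol)
delta_r2 = r2_step2 - r2_step1  # variance explained beyond demographics
```

The increment `delta_r2` is the quantity a hierarchical analysis reports for each added block of predictors.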

Keywords: aging out of place, migration, older refugees, quality of life, social integration

Procedia PDF Downloads 76
28 Cosmetic Recommendation Approach Using Machine Learning

Authors: Shakila N. Senarath, Dinesh Asanka, Janaka Wijayanayake

Abstract:

The necessity of cosmetic products arises from consumer needs for personal appearance and hygiene. A cosmetic product consists of various chemical ingredients that may help keep the skin healthy or may cause damage, and no chemical ingredient performs the same way on every person. The most appropriate way to select a healthy cosmetic product is to identify the characteristics of the skin first and then select the most suitable product with safe ingredients; the selection process is therefore complicated, and consumer surveys have shown that most of the time it is done improperly. This study suggests a content-based system that recommends cosmetic products based on human factors; skin type, gender, and price range are considered as such factors. The proposed system is implemented using machine learning, taking consumer skin type, gender, and price range as inputs. The consumer's skin type is derived using the Baumann Skin Type Questionnaire, a value-based approach comprising a number of questions that map the user to one of the 16 skin types of the Baumann Skin Type Indicator (BSTI). Two datasets were collected for the research: a user dataset and a cosmetic dataset. The user dataset was collected using a questionnaire given to the public. Product details are included in the cosmetic dataset, which covers five product categories (moisturizer, cleanser, sun protector, face mask, eye cream). TF-IDF (Term Frequency - Inverse Document Frequency) is applied to vectorize cosmetic ingredients in the generic cosmetic products dataset and the user-preferred dataset.
Using the TF-IDF vectors, each user-preferred product and each generic cosmetic product can be represented as a sparse vector. The similarity between each user-preferred product and each generic cosmetic product is calculated using the cosine similarity method, and a similarity matrix is used for the recommendation process. The higher the similarity, the better the match for the consumer: sorting a user's column of the similarity matrix in descending order retrieves the recommended products in ranked order. Even though the results return a list of similar products, since user information such as gender and the price range for purchasing has been gathered, further optimization can be done by weighting those parameters once a set of recommended products for a user has been retrieved.
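The vectorize-and-match pipeline described above can be sketched in a few lines. The ingredient lists below are hypothetical, and this minimal TF-IDF omits the smoothing variants a production system might use:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """TF-IDF vectors (as sparse dicts) for tokenized ingredient lists."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))  # document frequency
    idf = {t: math.log(n / df[t]) for t in df}
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: (tf[t] / len(doc)) * idf[t] for t in tf})
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse dict vectors."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical ingredient lists: one user-preferred product, two candidates
docs = [
    ["water", "glycerin", "niacinamide"],   # user-preferred
    ["water", "glycerin", "niacinamide"],   # candidate A (identical)
    ["water", "alcohol", "fragrance"],      # candidate B
]
vecs = tfidf_vectors(docs)
sims = [cosine(vecs[0], v) for v in vecs[1:]]  # similarity-matrix column
```

Note that "water", appearing in every product, receives zero IDF weight, which is exactly the behaviour that makes TF-IDF discount uninformative common ingredients.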

Keywords: content-based filtering, cosmetics, machine learning, recommendation system

Procedia PDF Downloads 108
27 A Robust Optimization of Chassis Durability/Comfort Compromise Using Chebyshev Polynomial Chaos Expansion Method

Authors: Hanwei Gao, Louis Jezequel, Eric Cabrol, Bernard Vitry

Abstract:

The chassis system is composed of complex elements that take up all the loads from the tire-ground contact area, and it therefore plays an important role in numerous specifications such as durability, comfort, and crash. During the development of new vehicle projects at Renault, durability validation is always the main focus, while deployment of comfort comes later in the project; design choices therefore sometimes have to be reconsidered because of the natural incompatibility between these two specifications. Robustness is also an important concern, as it relates to manufacturing costs as well as to performance after the ageing of components such as shock absorbers. In this paper, an approach is proposed that aims to realize a multi-objective optimization between chassis endurance and comfort while taking random factors into consideration. The adaptive-sparse polynomial chaos expansion (PCE) method with Chebyshev polynomial series is applied to predict the uncertainty intervals of a system's responses according to its uncertain-but-bounded parameters. The approach can be divided into three steps. First, an initial design of experiments is carried out to build the response surfaces that statistically represent a black-box system. Secondly, over several iterations, an optimum set is proposed and validated, forming a Pareto front. At the same time, the robustness of each response, serving as an additional objective, is calculated from the pre-defined parameter intervals and the response surfaces obtained in the first step. Finally, an inverse strategy is carried out to determine the parameter tolerance combination with a maximally acceptable degradation of the responses in terms of manufacturing costs. A quarter-car model has been tested as an example, applying road excitations from actual road measurements for both endurance and comfort calculations.
One indicator based on Basquin's law is defined to compare the global chassis durability of different parameter settings. Another indicator, related to comfort, is obtained from the vertical acceleration of the sprung mass. An optimum set with the best robustness was finally obtained, and reference tests confirm the good robustness prediction of the Chebyshev PCE method. This example demonstrates the effectiveness and reliability of the approach, in particular its ability to save computational cost for a complex system.
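The adaptive-sparse PCE machinery is involved, but the core idea of replacing a black-box response with a Chebyshev surrogate and reading uncertainty intervals off it can be sketched as follows. The one-dimensional response function and the interval are toy stand-ins, not the Renault chassis model:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Hypothetical black-box response (e.g. a comfort metric vs. one parameter)
def response(x):
    return np.sin(2 * x) + 0.3 * x ** 2

# Uncertain-but-bounded parameter interval (already scaled to [-1, 1])
lo, hi = -1.0, 1.0

# Small design of experiments at Chebyshev nodes, then a degree-8 surrogate
x_doe = np.cos(np.pi * (np.arange(9) + 0.5) / 9)
coef = C.chebfit(x_doe, response(x_doe), deg=8)

# Propagate the bounded uncertainty: evaluate the cheap surrogate densely
grid = np.linspace(lo, hi, 1001)
y = C.chebval(grid, coef)
interval = (y.min(), y.max())  # predicted response uncertainty interval
```

Once the surrogate is built, every subsequent optimization or tolerance-inversion query costs a polynomial evaluation instead of a full simulation, which is where the computational savings come from.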

Keywords: chassis durability, Chebyshev polynomials, multi-objective optimization, polynomial chaos expansion, ride comfort, robust design

Procedia PDF Downloads 129
26 Hybrid Model: An Integration of Machine Learning with Traditional Scorecards

Authors: Golnush Masghati-Amoli, Paul Chin

Abstract:

In recent years, with rapid increases in data availability and computing power, Machine Learning (ML) techniques have been called on in a range of industries for their strong predictive capability. However, the use of Machine Learning in commercial banking has been limited by a special challenge: numerous regulations require lenders to be able to explain their analytic models, not only to regulators but often to consumers. In other words, although Machine Learning techniques enable better prediction with a higher level of accuracy, they are adopted less frequently in commercial banking than in other industries, especially for scoring purposes, because they are often considered a black box and fail to provide information on why a certain risk score is given to a customer. To bridge this gap between the explainability and performance of Machine Learning techniques, a Hybrid Model has been developed at Dun and Bradstreet that blends Machine Learning algorithms with traditional approaches such as scorecards. The Hybrid Model maximizes the efficiency of traditional scorecards by merging their practical benefits, such as explainability and the ability to incorporate domain knowledge, with the deep insights of Machine Learning techniques, which can uncover patterns that scorecard approaches cannot. First, through the development of Machine Learning models, engineered features, latent variables, and feature interactions that demonstrate high information value in predicting customer risk are identified. Then, these features are employed to introduce observed non-linear relationships between the explanatory and dependent variables into traditional scorecards.
Moreover, instead of directly computing the Weight of Evidence (WoE) from good and bad data points, the Hybrid Model matches the score distribution generated by a Machine Learning algorithm, which provides an estimate of the WoE for each bin. This capability helps to build powerful scorecards in sparse cases that cannot be handled with traditional approaches. The proposed Hybrid Model is tested on different portfolios where a significant gap is observed between the performance of traditional scorecards and Machine Learning models. The results show that the Hybrid Model can improve the performance of traditional scorecards by introducing non-linear relationships between explanatory and target variables from Machine Learning models into traditional scorecards. It is also observed that in some scenarios the Hybrid Model can be almost as predictive as the Machine Learning techniques while remaining as transparent as traditional scorecards. Therefore, with the Hybrid Model, Machine Learning algorithms can be used in the commercial banking industry without concern about the difficulty of explaining the models for regulatory purposes.
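The WoE logic can be made concrete. The first function below is the classical count-based definition; the second is an illustrative sketch, not Dun and Bradstreet's actual method, of estimating WoE from an ML model's score distribution by treating predicted probabilities as fractional counts, so that sparsely populated bins remain usable:

```python
import numpy as np

def woe_from_counts(good, bad):
    """Classical per-bin Weight of Evidence: ln((good_i/G) / (bad_i/B))."""
    good = np.asarray(good, dtype=float)
    bad = np.asarray(bad, dtype=float)
    return np.log((good / good.sum()) / (bad / bad.sum()))

def woe_from_scores(scores, bin_edges):
    """Sketch of score-distribution matching: each record contributes its
    model-predicted probability of 'bad' as a fractional bad count."""
    scores = np.asarray(scores, dtype=float)
    total_bad = scores.sum()
    total_good = scores.size - total_bad
    bins = np.digitize(scores, bin_edges)
    woe = np.zeros(len(bin_edges) + 1)
    for b in range(len(bin_edges) + 1):
        p = scores[bins == b]
        if p.size:
            exp_bad, exp_good = p.sum(), p.size - p.sum()
            woe[b] = np.log((exp_good / total_good) / (exp_bad / total_bad))
    return woe

w_classic = woe_from_counts([90, 10], [10, 90])
w_hybrid = woe_from_scores(np.array([0.1, 0.1, 0.9, 0.9]), bin_edges=[0.5])
```

On this toy data the two estimates agree; the advantage of the score-based route appears when a bin holds too few actual goods or bads for the count ratio to be stable.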

Keywords: machine learning algorithms, scorecard, commercial banking, consumer risk, feature engineering

Procedia PDF Downloads 98
25 An Analysis of the Strategic Pathway to Building a Successful Mobile Advertising Business in Nigeria: From Strategic Intent to Competitive Advantage

Authors: Pius A. Onobhayedo, Eugene A. Ohu

Abstract:

Nigeria has one of the fastest-growing mobile telecommunications industries in the world. In the absence of fixed-connection access, the Internet is reached primarily via mobile devices. It therefore provides a test case for how to penetrate the mobile market in an emerging economy. We also hope to contribute to a sparse literature on strategies employed in building successful data-driven mobile businesses in emerging economies. We therefore sought to identify and analyse the strategic approach taken by a successful, locally born, mobile data-driven business in Nigeria. The analysis was carried out through the framework of the strategic intent and competitive advantages developed from the conception of the company to date. This study is based on an exploratory investigation of an innovative digital company based in Nigeria specializing in the mobile advertising business. The projected growth and high adoption of mobile in this African country, coinciding with the smartphone revolution triggered by the launch of the iPhone in 2007, opened a new entrepreneurial horizon for the founder of the company, who reached the conclusion that 'the future is mobile'. This vision led to the establishment of three digital businesses, designed for convergence and complementarity of medium and content. The mobile ad subsidiary soon grew to become a truly African network with operations and campaigns across West, East, and South Africa, successfully delivering campaigns in several African countries, including Nigeria, Kenya, South Africa, Ghana, Uganda, Zimbabwe, and Zambia, amongst others. The company recently declared a 40% year-end profit, nine times that of the previous financial year. This study drew on an in-depth interview with the company's founder, analysis of primary and secondary data from and about the business, and case studies of digital marketing campaigns.
We hinge our analysis on the strategic intent concept, which has been proposed as an engine that drives the quest for sustainable strategic advantage in the global marketplace. Our goal was specifically to identify the strategic intents of the founder and how these were transformed creatively into processes that may have led to distinct competitive advantages. Along with the strategic intents, we sought to identify the respective absorptive capacities that constituted favourable antecedents to the creation of such competitive advantages. Our recommendations and findings will be pivotal information for anyone wishing to invest in the world's fastest-growing technology business space: Africa.

Keywords: Africa, competitive advantage, competitive strategy, digital, mobile business, marketing, strategic intent

Procedia PDF Downloads 413
24 Structural Invertibility and Optimal Sensor Node Placement for Error and Input Reconstruction in Dynamic Systems

Authors: Maik Kschischo, Dominik Kahl, Philipp Wendland, Andreas Weber

Abstract:

Understanding and modelling real-world complex dynamic systems in biology, engineering, and other fields is often made difficult by incomplete knowledge of the interactions between system states and by unknown disturbances to the system. In fact, most real-world dynamic networks are open systems receiving unknown inputs from their environment. To understand a system and to estimate the state dynamics, these inputs need to be reconstructed from output measurements. Reconstructing the input of a dynamic system from its measured outputs is an ill-posed problem if only a limited number of states is directly measurable. A first requirement for solving this problem is the invertibility of the input-output map. In our work, we exploit the fact that invertibility of a dynamic system is a structural property which depends only on the network topology. Therefore, it is possible to check for invertibility using a structural invertibility algorithm which counts the number of node-disjoint paths linking inputs and outputs. The algorithm is efficient enough even for large networks with up to a million nodes. To understand the structural features influencing the invertibility of a complex dynamic network, we analyze synthetic and real networks using the structural invertibility algorithm. We find that invertibility largely depends on the degree distribution and that dense random networks are easier to invert than sparse inhomogeneous networks. We show that real networks are often very difficult to invert unless the sensor nodes are carefully chosen. To overcome this problem, we present a sensor node placement algorithm that achieves invertibility with a minimum set of measured states. This greedy algorithm is very fast and guaranteed to find an optimal sensor node set if one exists. Our results provide a practical approach to experimental design for open dynamic systems.
Since invertibility is a necessary condition for unknown input observers and data assimilation filters to work, it can be used as a preprocessing step to check whether these input reconstruction algorithms can be successful. If not, we can suggest additional measurements providing sufficient information for input reconstruction. Invertibility is also important for systems design and model building. Dynamic models are always incomplete, and synthetic systems act in an environment where they receive inputs or even attack signals from their exterior. Being able to monitor these inputs is an important design requirement, which can be achieved by our algorithms for invertibility analysis and sensor node placement.
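Counting node-disjoint paths between input and output sets reduces, by Menger's theorem, to a unit-capacity maximum flow after splitting each node. The following is a minimal sketch on a toy network, not the authors' implementation:

```python
from collections import defaultdict, deque

def num_node_disjoint_paths(edges, sources, sinks):
    """Max number of node-disjoint paths from input to output nodes,
    via unit-capacity max flow with node splitting (Edmonds-Karp)."""
    cap = defaultdict(int)
    adj = defaultdict(set)

    def add(u, v, c):
        cap[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)  # residual direction

    nodes = {u for e in edges for u in e} | set(sources) | set(sinks)
    for v in nodes:                       # split v into v_in -> v_out, cap 1
        add((v, 'in'), (v, 'out'), 1)
    for u, v in edges:                    # original directed edge u -> v
        add((u, 'out'), (v, 'in'), 1)
    S, T = 'S', 'T'
    for s in sources:
        add(S, (s, 'in'), 1)
    for t in sinks:
        add((t, 'out'), T, 1)

    flow = 0
    while True:                           # BFS for an augmenting path
        parent = {S: None}
        queue = deque([S])
        while queue and T not in parent:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if T not in parent:
            return flow
        v = T                             # push one unit along the path
        while parent[v] is not None:
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1

# Toy networks: a shared bottleneck node 'c' vs. two parallel channels
n_bottleneck = num_node_disjoint_paths(
    [('a', 'c'), ('b', 'c'), ('c', 'd'), ('c', 'e')], ['a', 'b'], ['d', 'e'])
n_parallel = num_node_disjoint_paths(
    [('a', 'd'), ('b', 'e')], ['a', 'b'], ['d', 'e'])
```

With two inputs and two outputs, the bottleneck network supports only one disjoint path, so the input-output map is not structurally invertible there, whereas the parallel network supports two.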

Keywords: data-driven dynamic systems, inversion of dynamic systems, observability, experimental design, sensor node placement

Procedia PDF Downloads 122
23 Event Data Representation Based on Time Stamp for Pedestrian Detection

Authors: Yuta Nakano, Kozo Kajiwara, Atsushi Hori, Takeshi Fujita

Abstract:

With the wave of electric vehicles (EVs), low-energy-consumption systems have become more and more important. One of the key technologies for realizing low energy consumption is the dynamic vision sensor (DVS), also called an event sensor or neuromorphic vision sensor. This sensor has several notable features, such as high temporal resolution (up to 1 Mframe/s) and a high dynamic range (120 dB). However, the property that can contribute most to low energy consumption is its sparsity: the sensor only captures pixels that undergo an intensity change. In other words, there is no signal in areas without any intensity change, so the sensor is more energy efficient than conventional sensors such as RGB cameras because redundant data are removed. On the other hand, the data are difficult to handle because the format is completely different from an RGB image: the acquired signals are asynchronous and sparse, and each signal is composed of an x-y coordinate, a polarity (+1 or -1), and a timestamp; it does not include an intensity such as an RGB value. Therefore, since existing algorithms cannot be used straightforwardly, a new processing algorithm has to be designed to cope with DVS data. To overcome the differences in data format, most prior art accumulates the events into frames and feeds them to deep learning models such as Convolutional Neural Networks (CNNs) for object detection and recognition. However, even when the data can be fed in this way, it is still difficult to achieve good performance due to the lack of intensity information. Although polarity is often used as a substitute for intensity, polarity information is clearly not rich enough. In this context, we propose to use the timestamp information as the data representation fed to deep learning.
Concretely, we first build frame data divided by a certain time period and then assign an intensity value according to the timestamp of each event within the frame; for example, a high value is given to a recent signal. We expected this data representation to capture the features of moving objects in particular, because timestamps represent the direction and speed of movement. Using the proposed method, we built our own dataset with a DVS fixed on a parked car, to develop an application for a surveillance system that can detect persons around the car. We consider the DVS one of the ideal sensors for surveillance purposes because it can run for a long time with low energy consumption in static scenes. For comparison, we reproduced a state-of-the-art method as a benchmark, which builds frames in the same way but feeds polarity information to the CNN. We then measured the object detection performance of the benchmark and our method on the same dataset. As a result, our method achieved an F1 score up to 7 points higher than the benchmark.
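The proposed representation can be sketched as follows. The coordinates, timestamps, and the linear recency mapping are illustrative choices, not the authors' exact parameters:

```python
import numpy as np

def timestamp_frame(events, shape, t_start, t_end):
    """Build one frame from DVS events in [t_start, t_end): pixel intensity
    encodes the recency of the latest event at that pixel."""
    frame = np.zeros(shape, dtype=np.float32)
    for x, y, polarity, t in events:  # polarity is ignored in this sketch
        if t_start <= t < t_end:
            # Linear recency: 0 at the window start, approaching 1 for the
            # newest events; keep the most recent value per pixel
            frame[y, x] = max(frame[y, x], (t - t_start) / (t_end - t_start))
    return frame

# Hypothetical events: (x, y, polarity, timestamp in microseconds)
events = [(1, 0, +1, 100), (1, 0, -1, 900), (2, 1, +1, 500)]
frame = timestamp_frame(events, shape=(2, 4), t_start=0, t_end=1000)
```

A moving edge leaves a gradient of recency values along its direction of travel, which is the cue the CNN can exploit that a pure polarity frame does not carry.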

Keywords: event camera, dynamic vision sensor, deep learning, data representation, object recognition, low energy consumption

Procedia PDF Downloads 65
22 Safety Validation of Black-Box Autonomous Systems: A Multi-Fidelity Reinforcement Learning Approach

Authors: Jared Beard, Ali Baheri

Abstract:

As autonomous systems become more prominent in society, ensuring their safe application becomes increasingly important. This is clearly demonstrated by autonomous cars traveling through a crowded city or robots traversing a warehouse with heavy equipment. Human environments can be complex, with high-dimensional state and action spaces. This gives rise to two problems: analytic solutions may not be possible, and in simulation-based approaches, searching the entire problem space can be computationally intractable, ruling out formal methods. To overcome this, approximate solutions may seek to find failures or estimate their likelihood of occurrence. One such approach is adaptive stress testing (AST), which uses reinforcement learning to induce failures in the system, on the premise that a learned model can help find new failure scenarios and make better use of simulations. Despite these strengths, AST struggles to find particularly sparse failures and can be inclined to rediscover solutions similar to those found previously. To help overcome this, multi-fidelity learning can be used: information from lower-fidelity simulations can be used to build up samples less expensively and to cover the solution space more effectively, finding a broader set of failures. Recent work in multi-fidelity learning has passed information bidirectionally using "knows what it knows" (KWIK) reinforcement learners to minimize the number of samples required in high-fidelity simulators (thereby reducing computation time and load). The contribution of this work is the development of a bidirectional multi-fidelity AST framework, an algorithm that uses multi-fidelity KWIK learners in an adversarial context to find failure modes.
Thus far, a KWIK learner has been used to train an adversary in a grid world to prevent an agent from reaching its goal, demonstrating the utility of KWIK learners in an AST framework. The next step is the implementation of the bidirectional multi-fidelity AST framework described. Testing will be conducted in a grid world containing an agent attempting to reach a goal position and an adversary tasked with intercepting the agent, as demonstrated previously. Fidelities will be modified by adjusting the size of a time step, with higher fidelity effectively allowing more responsive closed-loop feedback. Results will compare the single-KWIK AST learner with the multi-fidelity algorithm with respect to the number of samples, the distinct failure modes found, and the relative effect of learning over a number of trials.
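The defining KWIK behaviour, predicting only when certain and otherwise returning an explicit "I don't know" so that a costlier high-fidelity simulator can be queried, can be illustrated with a minimal deterministic-transition learner. The interface and grid-world example are hypothetical, not the authors' code:

```python
class KWIKTransitionLearner:
    """Minimal KWIK learner for deterministic transitions: it predicts a
    (state, action) pair only after observing it, and otherwise returns
    the 'I don't know' symbol, signalling that a simulator query is needed."""
    UNKNOWN = object()

    def __init__(self):
        self.model = {}

    def predict(self, state, action):
        return self.model.get((state, action), self.UNKNOWN)

    def observe(self, state, action, next_state):
        self.model[(state, action)] = next_state

learner = KWIKTransitionLearner()
first = learner.predict((0, 0), 'right')      # unknown: query the simulator
learner.observe((0, 0), 'right', (1, 0))      # learn from the simulation
known = learner.predict((0, 0), 'right')      # now answered from the model
```

In the multi-fidelity setting, the UNKNOWN responses are what route queries between fidelity levels, keeping expensive high-fidelity samples for genuinely uncertain transitions.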

Keywords: multi-fidelity reinforcement learning, multi-fidelity simulation, safety validation, falsification

Procedia PDF Downloads 129
21 Global-Scale Evaluation of Two Satellite-Based Passive Microwave Soil Moisture Data Sets (SMOS and AMSR-E) with Respect to Modelled Estimates

Authors: A. Alyaari, J. P. Wigneron, A. Ducharne, Y. Kerr, P. de Rosnay, R. de Jeu, A. Govind, A. Al Bitar, C. Albergel, J. Sabater, C. Moisy, P. Richaume, A. Mialon

Abstract:

Global Level-3 surface soil moisture (SSM) maps from the passive microwave soil moisture and Ocean Salinity satellite (SMOSL3) have been released. To further improve the Level-3 retrieval algorithm, evaluation of the accuracy of the spatio-temporal variability of the SMOS Level 3 products (referred to here as SMOSL3) is necessary. In this study, a comparative analysis of SMOSL3 with a SSM product derived from the observations of the Advanced Microwave Scanning Radiometer (AMSR-E) computed by implementing the Land Parameter Retrieval Model (LPRM) algorithm, referred to here as AMSRM, is presented. The comparison of both products (SMSL3 and AMSRM) were made against SSM products produced by a numerical weather prediction system (SM-DAS-2) at ECMWF (European Centre for Medium-Range Weather Forecasts) for the 03/2010-09/2011 period at global scale. The latter product was considered here a 'reference' product for the inter-comparison of the SMOSL3 and AMSRM products. Three statistical criteria were used for the evaluation, the correlation coefficient (R), the root-mean-squared difference (RMSD), and the bias. Global maps of these criteria were computed, taking into account vegetation information in terms of biome types and Leaf Area Index (LAI). We found that both the SMOSL3 and AMSRM products captured well the spatio-temporal variability of the SM-DAS-2 SSM products in most of the biomes. In general, the AMSRM products overestimated (i.e., wet bias) while the SMOSL3 products underestimated (i.e., dry bias) SSM in comparison to the SM-DAS-2 SSM products. In term of correlation values, the SMOSL3 products were found to better capture the SSM temporal dynamics in highly vegetated biomes ('Tropical humid', 'Temperate Humid', etc.) while best results for AMSRM were obtained over arid and semi-arid biomes ('Desert temperate', 'Desert tropical', etc.). 
When the seasonal cycles in the SSM time variations were removed to compute anomaly values, better correlation with the SM-DAS-2 SSM anomalies was obtained with SMOSL3 than with AMSRM in most of the biomes, with the exception of desert regions. Finally, we showed that the accuracy of the remotely sensed SSM products is strongly related to LAI. Both the SMOSL3 and AMSRM (slightly better) SSM products correlate well with the SM-DAS-2 products over regions with sparse vegetation, where LAI < 1 (these regions represent almost 50% of the pixels considered in this global study). In regions where LAI > 1, SMOSL3 outperformed AMSRM with respect to SM-DAS-2: SMOSL3 maintained almost consistent performance up to LAI = 6, whereas AMSRM performance deteriorated rapidly with increasing values of LAI.
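The three evaluation criteria used above are standard and can be sketched as follows; the arrays are toy numbers, not the actual SMOS/AMSR-E/SM-DAS-2 data.

```python
import numpy as np

def evaluate_ssm(product, reference):
    """Score a satellite SSM series against a reference series using the
    three criteria of the study: correlation (R), RMSD, and bias."""
    product = np.asarray(product, dtype=float)
    reference = np.asarray(reference, dtype=float)
    r = np.corrcoef(product, reference)[0, 1]            # correlation coefficient
    rmsd = np.sqrt(np.mean((product - reference) ** 2))  # root-mean-squared difference
    bias = np.mean(product - reference)                  # > 0: wet bias, < 0: dry bias
    return r, rmsd, bias

# Toy series: a product with a constant wet offset of 0.05 m3/m3
reference = np.array([0.10, 0.20, 0.30, 0.25, 0.15])
product = reference + 0.05
r, rmsd, bias = evaluate_ssm(product, reference)
# r = 1.0 (identical dynamics), rmsd = 0.05, bias = +0.05 (a wet bias)
```

Note how the three criteria separate cleanly: a constant offset leaves R untouched while showing up fully in the bias, which is why the study reports all three per biome.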

Keywords: remote sensing, microwave, soil moisture, AMSR-E, SMOS

Procedia PDF Downloads 332
20 Sensitivity and Uncertainty Analysis of Hydrocarbon-In-Place in Sandstone Reservoir Modeling: A Case Study

Authors: Nejoud Alostad, Anup Bora, Prashant Dhote

Abstract:

Kuwait Oil Company (KOC) has been producing from its major reservoirs, which are well defined, highly productive, and of superior reservoir quality. These reservoirs are maturing, and priority is shifting towards difficult reservoirs to meet future production requirements. This paper discusses the results of a detailed integrated study of one of the satellite complex fields discovered in the early 1960s. Following the acquisition of new 3D seismic data in 1998 and re-processing work in 2006, an integrated G&G study was undertaken to review the Lower Cretaceous prospectivity of this reservoir. Nine wells have been drilled in the area to date, with only three wells showing hydrocarbons in two formations. The average oil density is around 30° API (American Petroleum Institute), and the average porosity and water saturation of the reservoir are about 23% and 26%, respectively. The area is dissected by a number of NW-SE trending faults. Structurally, the area consists of horsts and grabens bounded by these faults and is hence compartmentalized. The Wara/Burgan formation consists of discrete, dirty sands with clean channel sand complexes. There is a dramatic change in the Upper Wara distributary channel facies, and the reservoir quality of the Wara and Burgan sections varies with the change of facies over the area. Predicting reservoir facies and their quality from sparse well data is therefore a major challenge for delineating the prospective area. To characterize the reservoir of the Wara/Burgan formation, an integrated workflow involving seismic, well, petrophysical, reservoir, and production engineering data has been used. Porosity and water saturation models were prepared and analyzed to predict the reservoir quality of the Wara and Burgan 3rd sand upper reservoirs. Subsequently, boundary conditions were defined for reservoir and non-reservoir facies by integrating facies, porosity, and water saturation.
Based on detailed analyses of the volumetric parameters, potential volumes of stock-tank oil initially in place (STOIIP) and gas initially in place (GIIP) were documented after running several probabilistic sensitivity analyses using the Monte Carlo simulation method. Sensitivity analysis on probabilistic models of reservoir horizons, petrophysical properties, and oil-water contacts, and their effect on reserves, clearly shows some alteration in the reservoir geometry. All these parameters have a significant effect on the oil in place. This study has helped to identify the uncertainties and risks of this prospect in particular, and the company is planning to develop the area by drilling new wells.
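A probabilistic STOIIP run of the kind described can be sketched with the standard volumetric formula, STOIIP = GRV × N/G × φ × (1 − Sw) / Bo, sampled by Monte Carlo. Only the ~23% porosity and ~26% water saturation averages come from the study; every other distribution below is an assumed placeholder, not the actual field data.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # Monte Carlo trials

# Illustrative input distributions (assumed, except phi and sw averages):
grv = rng.normal(500e6, 50e6, n)   # gross rock volume, m3 (assumed)
ntg = rng.uniform(0.5, 0.8, n)     # net-to-gross ratio (assumed)
phi = rng.normal(0.23, 0.02, n)    # porosity, study average ~23%
sw  = rng.normal(0.26, 0.03, n)    # water saturation, study average ~26%
bo  = rng.normal(1.2, 0.05, n)     # oil formation volume factor, rm3/sm3 (assumed)

# Volumetric STOIIP in stock-tank m3 for every trial
stoiip = grv * ntg * phi * (1.0 - sw) / bo

# Probabilistic estimates: P90 (conservative), P50 (median), P10 (optimistic)
p90, p50, p10 = np.percentile(stoiip, [10, 50, 90])
```

Sensitivity analysis then amounts to re-running the simulation while widening or shifting one input distribution at a time (e.g. the oil-water contact, which enters through GRV) and observing the spread of the P90-P10 range.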

Keywords: original oil-in-place, sensitivity, uncertainty, sandstone, reservoir modeling, Monte Carlo simulation

Procedia PDF Downloads 172
19 Nudging the Criminal Justice System into Listening to Crime Victims in Plea Agreements

Authors: Dana Pugach, Michal Tamir

Abstract:

Most criminal cases end with a plea agreement, an issue whose many aspects have been discussed extensively in the legal literature. One important feature, however, has gained little notice: crime victims' place in plea agreements following the federal Crime Victims' Rights Act of 2004. This law has provided victims some meaningful and potentially revolutionary rights, including the right to be heard in the proceeding and the right to appeal against a decision made while ignoring the victim's rights. While the victims' rights literature has always emphasized the importance of such rights, references to this provision in the general literature on plea agreements are sparse, if they exist at all. Furthermore, only a few cases mention this right. This article purports to bridge these two bodies of legal thinking – the vast literature concerning plea agreements and victims' rights research – by using behavioral economics. The article will, firstly, trace the possible structural reasons for the failure of this right to materialize. The relevant incentives of all actors involved will be identified, as well as the inherent consequential processes that lead to the malfunction of victims' rights. Secondly, the article will use nudge theory to suggest solutions that will enhance incentives for the repeat players in the system (prosecution, judges, defense attorneys) and lead to the strengthening of the weaker group's interests – those of the crime victims. The behavioral psychology literature recognizes that the framework in which an individual confronts a decision can significantly influence that decision. Richard Thaler and Cass Sunstein developed the idea of 'choice architecture' – 'the context in which people make decisions' – which can be manipulated to make particular decisions more likely. Choice architectures can be changed by adjusting 'nudges', influential factors that help shape human behavior without negating free choice.
The nudges require decision makers to make choices instead of falling back on a familiar default option. In accordance with this theory, we suggest a rule whereby a judge should inquire into the victim's view prior to accepting the plea. This suggestion leaves the judge's discretion intact, while at the same time nudging her not to go directly to the default decision, i.e., automatically accepting the plea. Creating nudges that force actors to make choices is particularly significant when an actor intends to deviate from routine behaviors but experiences significant time constraints, as in the case of judges and plea bargains. The article finally recognizes some far-reaching possible results of the suggestion. These include meaningful changes to the earlier stages of the criminal process, even before reaching court, in line with the current criticism of the plea agreement machinery.

Keywords: plea agreements, victims' rights, nudge theory, criminal justice

Procedia PDF Downloads 300
18 Germline Mutations of Mitogen-Activated Protein Kinase Signaling Pathway Genes in Children

Authors: Nouha Bouayed Abdelmoula, Rim Louati, Nawel Abdellaoui, Balkiss Abdelmoula, Oldez Kaabi, Walid Smaoui, Samir Aloulou

Abstract:

Background and Aims: Cardiofaciocutaneous syndrome (CFC) is an autosomal dominant disorder, with the vast majority of cases arising from a new mutation of the BRAF, MEK1, MEK2, or, rarely, KRAS genes. Here, we report a rare Tunisian case of CFC syndrome in whom we identified a SOS1 mutation. Methods: Genomic DNA was obtained from peripheral blood collected in an EDTA tube and extracted from leukocytes using the phenol/chloroform method according to standard protocols. High-resolution melting (HRM) analysis for screening of mutations in the entire coding sequence of PTPN11 was conducted first. Then, HRM assays targeting hot-spot mutation coding regions of the other genes of the RAS-MAPK (RAt Sarcoma viral oncogene homolog / Mitogen-Activated Protein Kinase) pathway – SOS1, SHOC2, KRAS, RAF1, NRAS, CBL, BRAF, MEK1, MEK2, HRAS, and RIT1 – were applied. Results: A heterozygous SOS1 point mutation clustered in exon 10, which encodes the PH domain of SOS1, was identified: c.1655G>A. The patient was a 9-year-old female born to a consanguineous couple. She exhibited pulmonic valvular stenosis as congenital heart disease. She had facial features and other malformations of Noonan syndrome, including macrocephaly, hypertelorism, ptosis, downslanting palpebral fissures, sparse eyebrows, a short and broad nose with an upturned tip, low-set ears, a high forehead commonly associated with bitemporal narrowing and prominent supraorbital ridges, a short and/or webbed neck, and short stature. However, the phenotype is also suggestive of CFC syndrome, with the presence of more severe ectodermal abnormalities, including curly hair, keloid scars, hyperkeratotic skin, deep plantar creases, and delayed permanent dentition with agenesis of the right maxillary first molar. Moreover, the familial history of the patient revealed recurrent brain malignancies in the paternal family and epileptic disease in the maternal family.
Conclusions: This case report of an overlapping RASopathy associated with SOS1 mutation and familial history of brain tumorigenesis is exceptional. The evidence suggests that RASopathies are truly cancer-prone syndromes, but the magnitude of the cancer risk and the types of cancer partially overlap.

Keywords: cardiofaciocutaneous syndrome, CFC, SOS1, brain cancer, germline mutation

Procedia PDF Downloads 119
17 Cognition in Context: Investigating the Impact of Persuasive Outcomes across Face-to-Face, Social Media and Virtual Reality Environments

Authors: Claire Tranter, Coral Dando

Abstract:

Gathering information from others is a fundamental goal for those concerned with investigating crime and protecting national and international security. Persuading an individual to move from an opposing to a converging viewpoint, and an understanding of the cognitive style behind this change, can serve to increase our understanding of traditional face-to-face interactions, as well as of the synthetic environments (SEs) often used for communication across geographical locations. SEs are growing in usage, and with this increase comes an increase in crime undertaken online. Communication technologies can allow people to mask their real identities, supporting anonymous communication, which can raise significant challenges for investigators when monitoring and managing these conversations inside SEs. To date, the psychological literature concerning how to maximise information gain in SEs for real-world interviewing purposes is sparse, and as such this aspect of social cognition is not well understood. Here, we introduce an overview of a novel programme of PhD research which seeks to enhance understanding of cross-cultural and cross-gender communication in SEs for maximising information gain. In a dyadic jury paradigm, participants interacted with a confederate who attempted to persuade them to the opposing verdict across three distinct environments: face-to-face, instant messaging, and a novel virtual reality environment utilising avatars. Participants discussed a criminal scenario, acting as a two-person (male; female) jury. Persuasion was manipulated by the confederate claiming, from the outset, a viewpoint (guilty v. not guilty) opposing that of the naïve participants. Pre- and post-discussion data, and observational digital recordings (voice and video) of participants' discussion performance, were collected. Information regarding cognitive style was also collected to ascertain participants' need for cognitive closure and biases towards jumping to conclusions.
Findings revealed that individuals communicating via an avatar in a virtual reality environment reacted in a similar way to, and were thus equally persuaded as, individuals communicating face-to-face. Anonymous instant messaging, however, created a resistance to persuasion in participants, with males showing a significant decline in persuasive outcomes compared to face-to-face communication. The findings reveal new insights, particularly regarding the interplay of persuasion, gender, and modality, with anonymous instant messaging enhancing resistance to persuasion attempts. This study illuminates how varying SEs can support new theoretical and applied understandings of how judgments are formed and modified in response to advocacy.

Keywords: applied cognition, persuasion, social media, virtual reality

Procedia PDF Downloads 123
16 Using Inverted 4-D Seismic and Well Data to Characterise Reservoirs from Central Swamp Oil Field, Niger Delta

Authors: Emmanuel O. Ezim, Idowu A. Olayinka, Michael Oladunjoye, Izuchukwu I. Obiadi

Abstract:

Monitoring of reservoir properties prior to well placement and production is a requirement for optimisation and efficient oil and gas production. This is usually done using well-log analyses and 3-D seismic, which are often prone to errors. However, 4-D (time-lapse) seismic, which incorporates numerous 3-D seismic surveys of the same field acquired with the same parameters and portrays the transient changes in the reservoir due to production effects over time, could be utilised because it generates better resolution. There is, however, a dearth of information on the applicability of this approach in the Niger Delta. This study was therefore designed to apply 4-D seismic, well-log, and geologic data to the monitoring of reservoirs in the EK field of the Niger Delta. It aimed at locating bypassed accumulations and ensuring effective reservoir management. The field (EK) covers an area of about 1200 km² of early Miocene (18 Ma) age. Data covering two 4-D vintages acquired over a fifteen-year interval were obtained from oil companies operating in the field. The data were analysed to determine the seismic structures, horizons, well-to-seismic tie (WST), and wavelets. Well logs and production history data from fifteen selected wells were also collected from the oil companies. Formation evaluation, petrophysical analysis, and inversion, alongside geological data, were undertaken using the Petrel, Shell-nDi, Techlog, and Jason software. Well-to-seismic ties, formation evaluation, and saturation monitoring using petrophysical and geological data and software were used to find bypassed hydrocarbon prospects. The seismic vintages were interpreted, and the amounts of change in the reservoir were defined by the differences between the acoustic impedance (AI) inversions of the base and monitor seismic. AI rock properties were estimated from all the seismic amplitudes using controlled sparse-spike inversion, and the estimated rock properties were used to produce AI maps.
The structural analysis showed the dominance of NW-SE trending rollover collapsed-crest anticlines in EK, with hydrocarbons trapped northwards. There were good ties in wells EK 27 and EK 39. The analysed wavelets revealed consistent amplitude and phase for the WST; hence, a good match between the inverted impedance and the well data. Evidence of large pay thickness, ranging from 2875 ms (11420 ft TVDSS) to about 2965 ms, was found around the EK 39 well, with good yield properties. The comparison between the base AI and the current monitor AI, together with the generated AI maps, revealed zones of untapped hydrocarbons and assisted in determining fluid movement. The inverted sections through EK 27 and EK 39 (within 3101 m - 3695 m) indicated depletion in the reservoirs. The extent of the present non-uniform gas-oil contact and oil-water contact movements was from 3554 to 3575 m. The 4-D seismic approach led to better reservoir characterization, well development, and the location of deeper and bypassed hydrocarbon reservoirs.
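The core 4-D step, differencing the base and monitor impedance inversions, can be sketched with the textbook recursive relation between reflectivity and acoustic impedance; this is a deliberate simplification of the controlled sparse-spike inversion used in the study, and all trace values below are hypothetical.

```python
import numpy as np

def impedance_from_reflectivity(r, ai0):
    """Recursive trace inversion, AI_{i+1} = AI_i * (1 + r_i) / (1 - r_i);
    a textbook simplification of the sparse-spike inversion in the study."""
    ai = [ai0]
    for ri in r:
        ai.append(ai[-1] * (1 + ri) / (1 - ri))
    return np.array(ai)

# Hypothetical base and monitor reflectivity traces at one map location;
# production effects soften the third interface in the monitor survey.
base_r    = np.array([0.05, -0.02, 0.10, 0.01])
monitor_r = np.array([0.05, -0.02, 0.04, 0.01])

ai_base    = impedance_from_reflectivity(base_r, 6.0e6)     # starting AI assumed
ai_monitor = impedance_from_reflectivity(monitor_r, 6.0e6)

# One sample of a 4-D difference map: the change appears only at and below
# the interface affected by production, which is what flags fluid movement.
delta_ai = ai_monitor - ai_base
```

Repeating this at every trace location yields the AI difference maps from which the depleted zones and contact movements described above are read off.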

Keywords: reservoir monitoring, 4-D seismic, well placements, petrophysical analysis, Niger delta basin

Procedia PDF Downloads 95
15 Need for Elucidation of Palaeoclimatic Variability in the High Himalayan Mountains: A Multiproxy Approach

Authors: Sheikh Nawaz Ali, Pratima Pandey, P. Morthekai, Jyotsna Dubey, Md. Firoze Quamar

Abstract:

High mountain glaciers are among the most sensitive recorders of climate change, because they respond to the combined effect of snowfall and temperature. The Himalayan glaciers have been studied at a good pace during the last decade. However, owing to its large ecological diversity and geographical variability, a major part of the Indian Himalaya remains uninvestigated, and hence the palaeoclimatic patterns, as well as the chronology of past glaciations in particular, remain controversial for the entire Indian Himalayan transect. Although the Himalayan glaciers are nourished by two important climatic systems, viz. the southwest summer monsoon and the mid-latitude westerlies, the relative influence of these systems is yet to be understood. Nevertheless, the existing chronology (mostly exposure ages) indicates that, irrespective of geographical position, glaciers seem to have grown during phases of enhanced Indian summer monsoon (ISM). The Himalayan mountain glaciers are referred to as the third pole or water tower of Asia, as they form a huge reservoir of fresh water supplies for the Asian countries. Mountain glaciers are sensitive probes of the local climate, and thus they present both an opportunity and a challenge to interpret climates of the past and to predict future changes. The principal objective of all palaeoclimatic studies is to develop models/scenarios for the future. However, it has been found that the glacial chronologies bracket only the major phases of climatic events, and other climatic proxies are sparse in the Himalaya. This is the reason why compilations of data on rapid climatic change during the Holocene show major gaps in this region. Sedimentation in proglacial lakes, conversely, is more continuous and hence can be used to reconstruct a more complete record of past climatic variability, modulated by the changing ice volume of the valley glacier.
The Himalayan region has numerous proglacial lacustrine deposits formed during the late Quaternary period. However, only a few such deposits have been studied so far. Therefore, it is high time that efforts were made to systematically map the moraines located in different climatic zones, reconstruct the local and regional moraine stratigraphy, and use multiple dating techniques to bracket the events of glaciation. Besides this, emphasis must be placed on carrying out multiproxy studies on the lacustrine sediments, which will provide high-resolution palaeoclimatic data from the alpine region of the Himalaya. Although the Himalayan glaciers fluctuated in accordance with changing climatic conditions (natural forcing), it is too early to arrive at any conclusion. It is crucial to generate multiproxy data sets covering wider geographical and ecological domains, taking into consideration the multiple parameters that directly or indirectly influence the glacier mass balance as well as the local climate of a region.

Keywords: glacial chronology, palaeoclimate, multiproxy, Himalaya

Procedia PDF Downloads 230
14 Reconstruction of Signal in Plastic Scintillator of PET Using Tikhonov Regularization

Authors: L. Raczynski, P. Moskal, P. Kowalski, W. Wislicki, T. Bednarski, P. Bialas, E. Czerwinski, A. Gajos, L. Kaplon, A. Kochanowski, G. Korcyl, J. Kowal, T. Kozik, W. Krzemien, E. Kubicz, Sz. Niedzwiecki, M. Palka, Z. Rudy, O. Rundel, P. Salabura, N.G. Sharma, M. Silarski, A. Slomski, J. Smyrski, A. Strzelecki, A. Wieczorek, M. Zielinski, N. Zon

Abstract:

The J-PET scanner, which allows for single-bed imaging of the whole human body, is currently under development at the Jagiellonian University. The J-PET detector improves the TOF resolution due to the use of fast plastic scintillators. Since registration of the waveform of signals with duration times of a few nanoseconds is not feasible, novel front-end electronics allowing for sampling in the voltage domain at four thresholds were developed. To take full advantage of these fast signals, a novel scheme for recovery of the signal waveform, based on ideas from Tikhonov regularization (TR) and Compressive Sensing methods, is presented. The prior distribution of the sparse representation is evaluated based on the linear transformation of a training set of signal waveforms using Principal Component Analysis (PCA) decomposition. Besides the advantage of including the additional information from training signals, a further benefit of the TR approach is that the signal recovery problem has an optimal solution which can be determined explicitly. Moreover, from Bayesian theory, the properties of the regularized solution, especially its covariance matrix, may be easily derived. This step is crucial to introduce and prove the formula for calculating the signal recovery error. It has been proven that the average recovery error is approximately inversely proportional to the number of samples at voltage levels. The method is tested using signals registered by means of a single detection module of the J-PET detector, built from a 30 cm long BC-420 plastic scintillator strip. It is demonstrated that the experimental and theoretical functions describing the recovery errors in the J-PET scenario are largely consistent. The specificity and limitations of the signal recovery method in this application are discussed.
It is shown that the PCA basis offers a high level of information compression and an accurate recovery with just eight samples, from four voltage levels, for each signal waveform. Moreover, it is demonstrated that using the recovered signal waveform, instead of the samples at four voltage levels alone, improves the spatial resolution of the hit-position reconstruction. The experiment shows that the spatial resolution evaluated based on the information from four voltage levels, without recovery of the signal waveform, is equal to 1.05 cm. After applying the information from the four voltage levels to the recovery of the signal waveform, the spatial resolution improves to 0.94 cm. Moreover, this result is only slightly worse than the one evaluated using the original raw signal, for which the spatial resolution is 0.93 cm. This is very important information, since limiting the number of threshold levels in the electronic devices to four leads to a significant reduction in the overall cost of the scanner. The developed recovery scheme is general and may be incorporated in any other investigation where prior knowledge about the signals of interest may be utilized.
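The recovery scheme can be sketched as Tikhonov-regularized least squares in a PCA basis learned from training waveforms. Everything below is illustrative: the pulses are synthetic stand-ins for real J-PET waveforms, the sampling is simplified to eight fixed time points (real threshold sampling records crossing times at fixed voltages), and the truncation to four components and the regularization weight are assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic scintillator-like pulses (linear rise into exponential decay)
t = np.linspace(0.0, 20.0, 200)  # ns, dense time grid

def pulse(amp, t0, tau):
    return np.where(t > t0, amp * (t - t0) * np.exp(-(t - t0) / tau), 0.0)

# Training set used to build the PCA prior
train = np.array([pulse(rng.uniform(0.8, 1.2), rng.uniform(1.0, 3.0),
                        rng.uniform(2.0, 4.0)) for _ in range(500)])
mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
B = Vt[:4].T  # keep four principal components (illustrative truncation)

# A "measured" signal known only at 8 time points, mimicking the eight
# samples obtained from four thresholds on the rising and falling edges
x_true = pulse(1.0, 2.0, 3.0)
idx = np.linspace(10, 150, 8).astype(int)  # hypothetical sample positions
y = x_true[idx]

# Tikhonov-regularized recovery of the PCA coefficients:
#   c = argmin ||B[idx] c - (y - mean[idx])||^2 + lam * ||c||^2
A = B[idx]
lam = 1e-3  # assumed regularization weight
c = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ (y - mean[idx]))
x_rec = mean + B @ c  # recovered full waveform on the dense grid

rel_err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
```

Because the penalized problem is quadratic, the solution is the closed-form expression solved above, which is the "optimal solution determined explicitly" property the abstract highlights; the covariance of the estimate follows from the same linear form.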

Keywords: plastic scintillators, positron emission tomography, statistical analysis, Tikhonov regularization

Procedia PDF Downloads 416