Search results for: cumulative probabilities
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 553

73 Nano-Pesticides: Recent Emerging Tool for Sustainable Agricultural Practices

Authors: Ekta, G. K. Darbha

Abstract:

Nanotechnology offers the potential of simultaneously increasing the efficiency of pesticides compared to their bulk counterparts and reducing their harmful environmental impacts in agriculture. The term nanopesticide covers a range of pesticide formulations that combine several surfactants, polymers, metal ions, etc., with sizes ranging from 1-1000 nm, and that exhibit the unusual behavior of nanomaterials (high efficacy and high specific surface area). Commercial pesticide formulations used by farmers today cannot be used effectively due to a number of associated problems. For example, more than 90% of applied formulations are either lost in the environment or fail to reach the target area required for effective pest control. Around 20-30% of pesticides are lost through emissions. A number of factors (application methods, physicochemical properties of the formulations, and environmental conditions) can influence the extent of loss during application. Among the various formulations, polymer-based formulations show the greatest potential due to their greater efficacy, slow release, and protection of the active ingredient against premature degradation compared to other commercial formulations. However, nanoformulations can significantly affect the fate of the active ingredient and may release new species by reacting with existing soil contaminants. The environmental fate of these newly generated species, which is essential for field-scale experiments, is still not well explored; much therefore remains to be learned about the environmental fate, nanotoxicology, transport properties, and stability of such formulations. In our preliminary work, we have synthesized a polymer-based nanoformulation of the commercially used herbicide atrazine. Atrazine belongs to the triazine class of herbicides and is used for the effective control of seed-germinated dicot weeds and grasses.
It functions by binding to the plastoquinone-binding protein in PS-II. Plant death results from starvation and oxidative damage caused by the breakdown of the electron transport system. The stability of the nanoformulation suspension containing the herbicide has been evaluated using parameters such as polydispersity index, particle diameter, and zeta potential under environmentally relevant conditions (pH 4-10, temperature 25°C to 65°C), and the stability of encapsulation has been studied for different amounts of added polymer. Morphological characterization has been performed using SEM.

Keywords: atrazine, nanoformulation, nanopesticide, nanotoxicology

Procedia PDF Downloads 256
72 Music Genre Classification Based on Non-Negative Matrix Factorization Features

Authors: Soyon Kim, Edward Kim

Abstract:

In order to retrieve information from the massive stream of songs in the music industry, music search by title, lyrics, artist, mood, and genre has become more important. Despite the subjectivity and controversy over the definition of music genres across different nations and cultures, automatic genre classification systems that facilitate the process of music categorization have been developed. Manual genre selection by music producers is provided as statistical data for designing automatic genre classification systems. In this paper, an automatic music genre classification system utilizing non-negative matrix factorization (NMF) is proposed. Short-term characteristics of the music signal can be captured with timbre features such as the mel-frequency cepstral coefficient (MFCC), decorrelated filter bank (DFB), octave-based spectral contrast (OSC), and octave band sum (OBS). Long-term time-varying characteristics of the music signal can be summarized with (1) statistical features such as the mean, variance, minimum, and maximum of the timbre features and (2) modulation spectrum features such as the spectral flatness measure, spectral crest measure, spectral peak, spectral valley, and spectral contrast of the timbre features. In addition to these conventional long-term feature vectors, NMF-based feature vectors are proposed for use in genre classification. In the training stage, NMF basis vectors were extracted for each genre class. The NMF features were calculated in the log spectral magnitude domain (NMF-LSM) as well as in the basic feature vector domain (NMF-BFV). For NMF-LSM, the entire full-band spectrum was used. For NMF-BFV, however, only the low-band spectrum was used, since the high-frequency modulation spectrum of the basic feature vectors did not contain important information for genre classification.
In the test stage, using the set of pre-trained NMF basis vectors, the genre classification system extracted the NMF weighting values of each genre as the NMF feature vectors. A support vector machine (SVM) was used as the classifier. The GTZAN multi-genre music database, composed of 10 genres with 100 songs per genre, was used for training and testing. To increase the reliability of the experiments, 10-fold cross validation was used. For a given input song, the extracted NMF-LSM feature vector was composed of 10 weighting values corresponding to the classification probabilities for the 10 genres. An NMF-BFV feature vector likewise had a dimensionality of 10. Combined with the basic long-term features, i.e., the statistical and modulation spectrum features, the NMF features improved accuracy with only a slight increase in feature dimensionality. The conventional basic features by themselves yielded 84.0% accuracy, but the basic features with NMF-LSM and NMF-BFV provided 85.1% and 84.2% accuracy, respectively. The basic features required a dimensionality of 460, whereas NMF-LSM and NMF-BFV each required a dimensionality of only 10. Combining the basic features, NMF-LSM, and NMF-BFV with an SVM using a radial basis function (RBF) kernel produced a significantly higher classification accuracy of 88.3% with a feature dimensionality of 480.
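As a rough illustration of the test-stage idea, the sketch below (not the authors' implementation; the toy spectra and basis vectors are invented) estimates NMF weighting values against a fixed, pre-trained basis via multiplicative updates:

```python
import numpy as np

def nmf_weights(V, W, n_iter=200, eps=1e-9):
    """Estimate non-negative weights H with V ≈ W @ H, holding the
    pre-trained basis W fixed (standard multiplicative updates)."""
    rng = np.random.default_rng(0)
    H = rng.random((W.shape[1], V.shape[1]))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
    return H

# Toy setup: two "genre" basis vectors over a 4-bin spectrum.
W = np.array([[1.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0],
              [0.0, 1.0]])
V = np.array([[2.0], [2.0], [0.0], [0.0]])  # input song matching basis 0
H = nmf_weights(V, W)
# H[0, 0] dominates H[1, 0], so the first basis (genre) is the best match
```

In the paper's setting, W would hold the per-genre basis vectors learned in the training stage, and the resulting column of H would serve as the 10-dimensional NMF feature vector fed to the SVM.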

Keywords: mel-frequency cepstral coefficient (MFCC), music genre classification, non-negative matrix factorization (NMF), support vector machine (SVM)

Procedia PDF Downloads 303
71 Assessment of Taiwan Railway Occurrences Investigations Using Causal Factor Analysis System and Bayesian Network Modeling Method

Authors: Lee Yan Nian

Abstract:

Safety investigation differs from administrative investigation in that the former is conducted by an independent agency, and its purpose is to prevent future accidents rather than to apportion blame or determine liability. Before October 2018, Taiwan railway occurrences were investigated by the local supervisory authority. A characteristic of this kind of investigation is that enforcement actions, such as administrative penalties, are usually imposed on the persons or units involved in the occurrence. On October 21, 2018, following a Taiwan Railway accident that caused 18 fatalities and injured another 267 people, it was quickly decided to establish an agency to independently investigate this catastrophic railway accident. The Taiwan Transportation Safety Board (TTSB) was then established on August 1, 2019 to take charge of investigating major aviation, marine, railway, and highway occurrences. The objective of this study is to assess the effectiveness of safety investigations conducted by the TTSB. In this study, the major railway occurrence investigation reports published by the TTSB are used for modeling and analysis. According to the TTSB's classification of railway occurrences, the accident types of Taiwan railway occurrences can be categorized as derailment, fire, Signal Passed at Danger, and others. A Causal Factor Analysis System (CFAS) developed by the TTSB is used to identify the influencing causal factors and their causal relationships in the investigation reports. All terminologies used in the CFAS are equivalent to those of the Human Factors Analysis and Classification System (HFACS), except for "Technical Events", which was added to classify causal factors resulting from mechanical failure. Accordingly, the Bayesian network structure of each occurrence category is established based on the causal factors identified in the CFAS.
In the Bayesian networks, the prior probabilities of the identified causal factors are obtained from their frequency of occurrence in the investigation reports. The conditional probability table of each node is determined from domain experts' experience and judgement. The resulting networks are quantitatively assessed under different scenarios to evaluate their forward prediction and backward diagnostic capabilities. Finally, the established Bayesian network for derailment is assessed using investigation reports of the same accident, which was investigated by the TTSB and the local supervisory authority respectively. Based on the assessment results, the findings of the administrative investigation are more closely tied to errors of front-line personnel than to organizational factors. Safety investigation can identify not only the unsafe acts of individuals but also the in-depth causal factors of organizational influences. The results show that the proposed methodology can identify differences between safety investigation and administrative investigation. Therefore, effective intervention strategies in the associated areas can be better addressed for safety improvement and future accident prevention through safety investigation.
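To make the forward-prediction and backward-diagnosis steps concrete, here is a minimal sketch of a two-factor Bayesian network; the factor names, priors, and CPT values are purely illustrative, not taken from the TTSB reports:

```python
from itertools import product

# Hypothetical two-factor network: causal factors A ("unsafe supervision")
# and B ("technical event") are parents of occurrence D ("derailment").
p_a = 0.3   # illustrative prior, as if taken from factor counts in reports
p_b = 0.2
# Illustrative CPT for P(D=1 | A, B), as if elicited from domain experts
cpt_d = {(0, 0): 0.01, (0, 1): 0.40, (1, 0): 0.30, (1, 1): 0.85}

def joint(a, b, d):
    """Joint probability P(A=a, B=b, D=d) under the network factorization."""
    pa = p_a if a else 1 - p_a
    pb = p_b if b else 1 - p_b
    pd = cpt_d[(a, b)] if d else 1 - cpt_d[(a, b)]
    return pa * pb * pd

# Forward prediction: marginal probability of a derailment.
p_d = sum(joint(a, b, 1) for a, b in product((0, 1), repeat=2))

# Backward diagnosis: P(A=1 | D=1) by Bayes' rule.
p_a_given_d = sum(joint(1, b, 1) for b in (0, 1)) / p_d
# Observing a derailment raises the belief in factor A above its prior.
```

The study's networks have many more nodes per occurrence category, but both assessment directions reduce to the same two computations shown here.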

Keywords: administrative investigation, Bayesian network, causal factor analysis system, safety investigation

Procedia PDF Downloads 123
70 The Effects of Nano Zerovalent Iron (nZVI) and Magnesium Oxide Nanoparticles on Methane Production during Anaerobic Digestion of Waste Activated Sludge

Authors: Passkorn Khanthongthip, John T. Novak

Abstract:

Many studies have reported that nZVI and MgO NPs are often found in waste activated sludge (WAS). However, little is known about the impact of these NPs on WAS stabilization. The aims of this study were to investigate the effects of both NPs on WAS anaerobic digestion for methane production and to examine the change in methanogenic population under these different environments using qPCR. Four dosages (2, 50, 100, and 200 mg/g-TSS) of MgO NPs were added to four different bottles containing WAS to investigate the impact of MgO NPs on methane production during WAS anaerobic digestion. The effects of nZVI on methane production during WAS anaerobic digestion were investigated in another four bottles using the same methods described above, except that the MgO NPs were replaced by nZVI. A bottle of WAS undergoing anaerobic digestion without nanoparticle addition was also operated to serve as a control. It was found that the relative amounts of methane production, compared to the control system, in the WAS anaerobic digestion bottles with 2, 50, 100, and 200 mg/g-TSS MgO NPs were 98, 62, 28, and 14%, respectively. This suggests that higher MgO NP dosages resulted in lower methane production. Batch-test data on the effects of the corresponding released Mg2+ indicated that 50 mg/g-TSS MgO NPs or higher could inhibit methane production by at least 25%. Moreover, the volatile fatty acid (VFA) concentration was 328, 384, 928, 3,684, and 7,848 mg/L for the control and the four WAS anaerobic digestion bottles with 2, 50, 100, and 200 mg/g-TSS MgO NPs, respectively. Higher VFA concentrations could reduce pH and subsequently decrease methanogen growth, resulting in lower methane production. The relative numbers of total gene copies of methanogens in samples taken from the WAS anaerobic digestion bottles were approximately 99, 68, 38, and 24% of the control for the addition of 2, 50, 100, and 200 mg/g-TSS, respectively.
Clearly, the more MgO NPs present in the sludge anaerobic digestion system, the fewer methanogens remained. In contrast, the relative amounts of methane production found in the four WAS anaerobic digestion bottles with 2, 50, 100, and 200 mg/g-TSS nZVI were 102, 128, 112, and 104% of the control, respectively. Measurement of the methanogenic population indicated that the relative contents of methanogen gene copies were 101, 132, 120, and 112% of those found in the control, respectively. Additionally, the cumulative VFA was 320, 234, 308, and 330 mg/L, respectively. This reveals that nZVI addition could help increase the methanogenic population. A larger methanogen population accelerated VFA degradation for greater methane production, resulting in lower VFA accumulation in the digesters. Moreover, batch-test data on the effects of the corresponding released Fe2+ suggest that the addition of approximately 50 mg/g-TSS nZVI increased methane production by 20%. In conclusion, the presence of MgO NPs appeared to diminish methane production during WAS anaerobic digestion, with higher MgO NP dosages resulting in greater inhibition. In contrast, nZVI addition increased the methanogenic population, which facilitated methane production.

Keywords: magnesium oxide nanoparticles, methane production, methanogenic population, nano zerovalent iron

Procedia PDF Downloads 295
69 Analysis of Ozone Episodes in Forest and Vegetation Areas Using the HYSPLIT Model: A Case Study of the North-West Side of the Biga Peninsula, Turkey

Authors: Deniz Sari, Selahattin İncecik, Nesimi Ozkurt

Abstract:

Surface ozone, regarded as one of the most critical pollutants of the 21st century, threatens human health, forests, and vegetation. In rural areas specifically, surface ozone has significant effects on agricultural production and trees. In this study, in order to understand surface ozone levels in rural areas, we focus on the north-western side of the Biga Peninsula, which is covered by mountainous and forested terrain. Ozone concentrations were measured for the first time with passive sampling at 10 sites and two online monitoring stations in this rural area from 2013 to 2015. The AOT40 (Accumulated hourly O3 concentrations Over a Threshold of 40 ppb) cumulative index was calculated from the daytime hourly O3 measurements during light hours (08:00–20:00) exceeding the threshold of 40 ppb, accumulated over 3 months (May, June, and July) for agricultural crops and over six months (April to September) for forest trees. AOT40 is defined by EU Directive 2008/50/EC to evaluate whether ozone pollution is a risk for vegetation and is calculated from hourly ozone concentrations recorded by monitoring systems. In the present study, we performed trajectory analysis with the Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model to follow the long-range transport sources contributing to the high ozone levels in the region. The ozone episodes observed between 2013 and 2015 were analysed using the HYSPLIT model developed by the NOAA-ARL. In addition, cluster analysis was used to identify homogeneous groups of air mass transport patterns by grouping similar trajectories in terms of air mass movement. Backward trajectories produced for the 3 years by the HYSPLIT model were assigned to different clusters according to their moving speed and direction using a k-means clustering algorithm. According to the cluster analysis results, northerly flows into the study area cause high ozone levels in the region.
The results show that the ozone values in the study area are above the critical levels for forest and vegetation defined in EU Directive 2008/50/EC.
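The AOT40 index described above can be sketched in a few lines; the measurement values and the exact daylight window are illustrative assumptions:

```python
def aot40(hourly_ppb, daylight=range(8, 20)):
    """AOT40: accumulated excess over 40 ppb, summed across daylight
    hours whose O3 concentration exceeds 40 ppb.  `hourly_ppb` maps
    (day, hour) -> concentration in ppb; the result is in ppb·h."""
    return sum(c - 40.0
               for (day, hour), c in hourly_ppb.items()
               if hour in daylight and c > 40.0)

# Two illustrative days of measurements (hours not listed are below 40 ppb)
obs = {(1, 9): 55.0,   # contributes 15 ppb·h
       (1, 13): 38.0,  # below the 40 ppb threshold: ignored
       (1, 21): 70.0,  # outside the 08:00-20:00 window: ignored
       (2, 12): 62.0}  # contributes 22 ppb·h
total = aot40(obs)     # 37.0 ppb·h
```

In practice the sum runs over the full three- or six-month window defined in the directive, and the result is compared against the vegetation-protection critical levels.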

Keywords: AOT40, Biga Peninsula, HYSPLIT, surface ozone

Procedia PDF Downloads 255
68 Research on Territorial Ecological Restoration in Mianzhu City, Sichuan, under the Dual Evaluation Framework

Authors: Wenqian Bai

Abstract:

Background: In response to the post-pandemic directives of Xi Jinping concerning the new era of ecological civilization, China has embarked on ecological restoration projects across its territorial spaces. This initiative faces challenges such as complex evaluation metrics and subpar informatization standards. Methodology: This research focuses on Mianzhu City, Sichuan Province, to assess its resource and environmental carrying capacities and the appropriateness of land use for development from ecological, agricultural, and urban perspectives. The study incorporates a range of spatial data to evaluate factors like ecosystem services (including water conservation, soil retention, and biodiversity), ecological vulnerability (addressing issues like soil erosion and desertification), and resilience. Utilizing the Minimum Cumulative Resistance model along with the ‘Three Zones and Three Lines’ strategy, the research maps out ecological corridors and significant ecological networks. These frameworks support the ecological restoration and environmental enhancement of the area. Results: The study identifies critical ecological zones in Mianzhu City's northwestern region, highlighting areas essential for protection and particularly crucial for water conservation. The southeastern region is categorized as a generally protected ecological zone with respective ratings for water conservation functionality and ecosystem resilience. The research also explores the spatial challenges of three ecological functions and underscores the substantial impact of human activities, such as mining and agricultural expansion, on the ecological baseline. The proposed spatial arrangement for ecological restoration, termed ‘One Mountain, One Belt, Four Rivers, Five Zones, and Multiple Corridors’, strategically divides the city into eight major restoration zones, each with specific tasks and projects. 
Conclusion: With its significant ‘mountain-plain’ geography, Mianzhu City acts as a crucial ecological buffer for the Yangtze River's upper reaches. Future development should focus on enhancing ecological corridors in agriculture and urban areas, controlling soil erosion, and converting farmlands back to forests and grasslands to foster ecosystem rehabilitation.
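The Minimum Cumulative Resistance idea used in the methodology can be illustrated with a small grid sketch; the resistance values are invented, and the Dijkstra-style accumulation is a simplified stand-in for the full model used in the study:

```python
import heapq

def mcr(resistance, sources):
    """Minimum Cumulative Resistance surface: for every grid cell, the
    least total resistance accumulated along any 4-neighbour path from
    an ecological source cell (a Dijkstra-style accumulation)."""
    rows, cols = len(resistance), len(resistance[0])
    cost = [[float("inf")] * cols for _ in range(rows)]
    heap = []
    for r, c in sources:
        cost[r][c] = 0.0
        heapq.heappush(heap, (0.0, r, c))
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > cost[r][c]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + resistance[nr][nc]
                if nd < cost[nr][nc]:
                    cost[nr][nc] = nd
                    heapq.heappush(heap, (nd, nr, nc))
    return cost

grid = [[1, 1, 5],   # invented resistance values (e.g. high = built-up land)
        [1, 9, 1],
        [1, 1, 1]]
surface = mcr(grid, sources=[(0, 0)])  # hypothetical source at top-left
# Low-cost ridges in `surface` are candidate ecological corridors.
```

On real data the sources would be the identified critical ecological zones and the resistance surface would be derived from land use, so least-cost paths between sources trace the ecological corridors mapped in the study.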

Keywords: ecological restoration, resource and environmental carrying capacity, land development suitability, ecosystem services, ecological vulnerability, ecological networks

Procedia PDF Downloads 39
67 Electrophysiological Correlates of Statistical Learning in Children with and without Developmental Language Disorder

Authors: Ana Paula Soares, Alexandrina Lages, Helena Oliveira, Francisco-Javier Gutiérrez-Domínguez, Marisa Lousada

Abstract:

From an early age, exposure to a spoken language allows us to implicitly capture the structure underlying the succession of speech sounds in that language and to segment it into meaningful units (words). Statistical learning (SL), i.e., the ability to pick up patterns in the sensory environment even without the intention or consciousness of doing so, is thus assumed to play a central role in the acquisition of the rule-governed aspects of language and possibly to lie behind the language difficulties exhibited by children with developmental language disorder (DLD). The research conducted so far has, however, led to inconsistent results, which might stem from the behavioral tasks used to test SL. In a classic SL experiment, participants are first exposed to a continuous stream (e.g., of syllables) in which, unbeknownst to the participants, stimuli are grouped into triplets that always appear together in the stream (e.g., 'tokibu', 'tipolu'), with no pauses between them (e.g., 'tokibutipolugopilatokibu') and without any information regarding the task or the stimuli. Following exposure, SL is assessed by asking participants to discriminate triplets previously presented ('tokibu') from new sequences never presented together during exposure ('kipopi'), i.e., to perform a two-alternative forced-choice (2-AFC) task. Despite the widespread use of the 2-AFC task to test SL, it has come under increasing criticism, as it is an offline post-learning task that only assesses the result of the learning that occurred during the previous exposure phase and that might be affected by factors beyond the computation of the regularities embedded in the input. The regularity typically computed is the likelihood of two syllables occurring together, a statistic known as the transitional probability (TP).
One solution to overcome these limitations is to assess SL as exposure to the stream unfolds, using online techniques such as event-related potentials (ERPs), which are highly sensitive to the time course of learning in the brain. Here we collected ERPs to examine the neurofunctional correlates of SL in preschool children with DLD and in chronological-age-matched controls with typical language development (TLD), who were exposed to an auditory stream embedding eight three-syllable nonsense words, four presenting high TPs and four low TPs, to further analyze whether the ability of DLD and TLD children to extract word-like units from the stream was modulated by the words' predictability. Moreover, to ascertain whether prior knowledge of the to-be-learned regularities affected the neural responses to high- and low-TP words, children performed the auditory SL task first under implicit and subsequently under explicit conditions. Although behavioral evidence of SL was not obtained in either group, the neural responses elicited during the exposure phases of the SL tasks differentiated children with DLD from children with TLD. Specifically, the results indicated that only children from the TLD group showed neural evidence of SL, particularly in the SL task performed under explicit conditions, first for the low-TP and subsequently for the high-TP 'words'. Taken together, these findings support the view that children with DLD show deficits in the extraction of the regularities embedded in auditory input, which might underlie their language difficulties.
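The transitional probabilities that drive word segmentation in such streams can be computed with a short sketch; the syllables and the three hypothetical 'words' below are invented for illustration:

```python
from collections import Counter

def transitional_probs(stream):
    """TP(s1 -> s2) = count(pair s1 s2) / count(s1 as a pair's first element)."""
    pairs = Counter(zip(stream, stream[1:]))
    firsts = Counter(stream[:-1])
    return {(a, b): n / firsts[a] for (a, b), n in pairs.items()}

# A stream built from three hypothetical 'words': to-ki-bu, ti-po-lu, go-pi-la
stream = "to ki bu ti po lu go pi la to ki bu go pi la ti po lu".split()
tp = transitional_probs(stream)
# Within-word transitions are fully predictable (TP = 1.0), while
# word-boundary transitions are not ('bu' is followed by 'ti' or 'go').
```

It is exactly this drop in TP at word boundaries that learners are assumed to exploit when segmenting word-like units from continuous speech.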

Keywords: developmental language disorder, statistical learning, transitional probabilities, word segmentation

Procedia PDF Downloads 188
66 Tiebout and Crime: How Crime Affects the Income Tax Capacity

Authors: Nik Smits, Stijn Goeminne

Abstract:

Despite the extensive literature on the relation between crime and migration, little is known about how crime affects the tax capacity of local communities. This paper empirically investigates whether the Flemish local income tax base yield is sensitive to changes in the local crime level. The underlying assumptions are threefold. First, in a Tiebout world, rational voters, who hold the local government accountable for the safety of its citizens, move out when the local level of security deviates too far from what they want it to be. Second, if migration is due to crime, wealthier citizens are expected to move first: looking for a place elsewhere implies transaction costs, which wealthier citizens are more likely to be able to pay. Third, as a consequence, the average income per capita, and thus the income distribution, will be affected, which in turn will influence the local income tax base yield. The decreasing average income per capita, if not compensated by increasing earnings of the citizens who stay or of new citizens entering the locality, must result in a decreasing local income tax base yield. In the absence of compensation from higher-level governments, decreasing local tax revenues could prove disastrous for a crime-ridden municipality. When communities do not succeed in reducing the number of offences, this can be the onset of a cumulative process of urban deterioration. A spatial panel data model containing several proxies for the local level of crime in 306 Flemish municipalities, covering the period 2000-2014, is used to test the relation between crime and the local income tax base yield. In addition to this direct relation, the underlying assumptions are investigated as well. Preliminary results show a modest but positive relation between local violent crime rates and the efflux of citizens, persistent up to a two-year lag.
This positive effect is dampened by increasing crime rates in neighboring municipalities. The change in violent crimes and, to a lesser extent, thefts and extortions reduces the influx of citizens with a one-year lag. Again, this effect is diminished by external effects from neighboring municipalities, meaning that increasing crime rates in neighboring municipalities (especially violent crimes) have a positive effect on the local influx of citizens. Crime also has a depressing effect on the average income per capita within a municipality, whereas increasing crime rates in neighboring municipalities increase it. Notwithstanding the previous results, crime does not seem to significantly affect the local tax base yield. The results suggest that the depressing effect of crime on the income base is compensated by a limited but wealthier influx of new citizens.
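As a schematic illustration of how a panel model can estimate the crime-yield relation (the actual study uses a spatial panel model with neighbor effects, which this sketch omits), here is a minimal fixed-effects (within) estimator on synthetic data:

```python
import numpy as np

def within_estimator(y, x, groups):
    """Fixed-effects (within) estimator: demean y and x by municipality to
    strip out time-invariant municipal effects, then fit the slope by OLS."""
    y = np.asarray(y, dtype=float).copy()
    x = np.asarray(x, dtype=float).copy()
    groups = np.asarray(groups)
    for g in np.unique(groups):
        m = groups == g
        y[m] -= y[m].mean()
        x[m] -= x[m].mean()
    return float(x @ y / (x @ x))

# Synthetic two-municipality panel: tax yield falls by 0.5 per unit of
# crime, on top of municipality-specific levels (10 and 20).
groups = np.array([0, 0, 0, 1, 1, 1])
crime = np.array([1.0, 2.0, 3.0, 1.0, 2.0, 3.0])
tax_yield = np.array([10.0, 20.0])[groups] - 0.5 * crime
beta = within_estimator(tax_yield, crime, groups)  # recovers -0.5
```

The within transformation is what lets such models separate the crime effect from permanent differences between municipalities; the spatial extension additionally adds neighboring municipalities' crime rates as regressors.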

Keywords: crime, local taxes, migration, Tiebout mobility

Procedia PDF Downloads 307
65 Intensity Modulated Radiotherapy of Nasopharyngeal Carcinomas: Patterns of Loco Regional Relapse

Authors: Omar Nouri, Wafa Mnejja, Nejla Fourati, Fatma Dhouib, Wicem Siala, Ilhem Charfeddine, Afef Khanfir, Jamel Daoud

Abstract:

Background and objective: Induction chemotherapy (IC) followed by concomitant chemoradiotherapy with the intensity modulated radiation therapy (IMRT) technique is currently the recommended treatment modality for locally advanced nasopharyngeal carcinomas (NPC). The aim of this study was to evaluate the prognostic factors predicting loco-regional relapse with this new treatment protocol. Patients and methods: A retrospective study of 52 patients with NPC treated between June 2016 and July 2019. All patients received IC according to the protocol of the Head and Neck Radiotherapy Oncology Group (Gortec) NPC 2006 (3 TPF courses) followed by concomitant chemoradiotherapy with weekly cisplatin (40 mg/m²). Patients received IMRT with a simultaneous integrated boost (SIB) of 33 daily fractions at a dose of 69.96 Gy for the high-risk volume, 60 Gy for the intermediate-risk volume, and 54 Gy for the low-risk volume. Median age was 49 years (range 19-69) with a sex ratio of 3.3. Forty-five tumors (86.5%) were classified as stage III-IV according to the 2017 UICC TNM classification. Loco-regional relapse (LRR) was defined as a local and/or regional progression occurring at least 6 months after the end of treatment. Survival analysis was performed according to the Kaplan-Meier method, and the log-rank test was used to compare anatomical, clinical, and therapeutic factors that may influence loco-regional relapse-free survival (LRFS). Results: After a median follow-up of 42 months, 6 patients (11.5%) experienced LRR. A metastatic relapse was also noted in 3 of these patients (50%). Target volume coverage was optimal for all patients with LRR. Four relapses (66.6%) were in the high-risk target volume and two (33.3%) were borderline. Three-year LRFS was 85.9%.
Four factors predicted loco-regional relapse: a histologic type other than undifferentiated carcinoma (UCNT) (p=0.027), a macroscopic pre-chemotherapy tumor volume exceeding 100 cm³ (p=0.005), a reduction in IC doses exceeding 20% (p=0.016), and a total cumulative cisplatin dose of less than 380 mg/m² (p=0.034). TNM classification and response to IC did not impact loco-regional relapse. Conclusion: In nasopharyngeal carcinoma, tumors with a high initial volume and/or a histologic type other than UCNT have a higher risk of loco-regional relapse. They therefore require more aggressive therapeutic approaches and a suitable monitoring protocol.
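The Kaplan-Meier estimator used for the survival analysis can be sketched as follows; the follow-up times and event indicators below are invented for illustration and are not the study's data:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimate of relapse-free survival.  `events[i]` is 1
    if a relapse was observed at times[i] and 0 if the patient was
    censored.  Returns [(t, S(t))] at each time with an observed event."""
    surv, curve = 1.0, []
    for t in sorted(set(times)):
        d = sum(e for tt, e in zip(times, events) if tt == t)  # events at t
        n = sum(1 for tt in times if tt >= t)                  # still at risk
        if d:
            surv *= 1 - d / n
            curve.append((t, surv))
    return curve

# Invented follow-up times in months (1 = loco-regional relapse, 0 = censored)
times = [6, 10, 10, 15, 20, 30]
events = [1, 1, 0, 1, 0, 0]
curve = kaplan_meier(times, events)
# S(6) = 5/6, S(10) = 5/6 * 4/5 = 2/3, S(15) = 2/3 * 2/3 = 4/9
```

The log-rank test then compares such curves between patient subgroups (e.g. UCNT vs. non-UCNT histology) to flag the prognostic factors reported above.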

Keywords: intensity modulated radiotherapy, loco-regional relapse, nasopharyngeal carcinoma, prognostic factors

Procedia PDF Downloads 128
64 Comparison of Developed Statokinesigram and Marker Data Signals by Model Approach

Authors: Boris Barbolyas, Kristina Buckova, Tomas Volensky, Cyril Belavy, Ladislav Dedik

Abstract:

Background: Human balance control is often studied on the basis of the statokinesigram. In this study, the approach to human postural reaction analysis is based on combining the stabilometry output signal with retroreflective marker data signal processing, analysis, and interpretation. The study also presents another original application of the Method of Developed Statokinesigram Trajectory (MDST). Methods: The participants maintained quiet bipedal standing for 10 s on a stabilometry platform. Subsequently, bilateral vibration stimuli were applied to the Achilles tendons for a 20 s interval. The vibration stimuli caused the human postural system to assume a new pseudo-steady state. The vibration frequencies were 20, 60, and 80 Hz. The participants' body segments (head, shoulders, hips, knees, ankles, and little fingers) were marked with 12 retroreflective markers. Marker positions were captured by a six-camera BTS SMART DX system. The registration of the postural reaction lasted 60 s, with a sampling frequency of 100 Hz. The measured data were processed with the Method of Developed Statokinesigram Trajectory. Regression analysis of the developed statokinesigram trajectory (DST) data and the retroreflective marker developed trajectory (DMT) data was used to find out which marker trajectories correlate most with the stabilometry platform output signals. Scaling coefficients (λ) between DST and DMT were also evaluated by linear regression analysis. Results: Scaling coefficients for the marker trajectories were identified for all body segments. The head marker trajectories reached the maximal value, and the ankle marker trajectories had the minimal value, of the scaling coefficient. The hip, knee, and ankle markers were approximately symmetrical in terms of the scaling coefficient. Notable differences in the scaling coefficient were detected in the head and shoulder marker trajectories, which were not symmetrical. The model of postural system behavior was identified by the MDST.
Conclusion: The value of the scaling factor identifies which body segment is predisposed to postural instability. Hypothetically, if the statokinesigram represents the overall response of the human postural system to the vibration stimuli, then the marker data represent the particular postural responses of the individual segments. It can then be assumed that the cumulative sum of the particular marker postural responses is equal to the statokinesigram.
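The scaling coefficient λ between DST and DMT amounts to a through-origin least-squares fit, which can be sketched as follows; the trajectory values are invented to mirror the head-maximum/ankle-minimum pattern reported above:

```python
def scaling_coefficient(dst, dmt):
    """Least-squares scaling coefficient lambda with dmt ≈ lambda * dst,
    i.e. a through-origin linear regression of the marker trajectory
    (DMT) on the developed statokinesigram trajectory (DST)."""
    sxy = sum(x * y for x, y in zip(dst, dmt))
    sxx = sum(x * x for x in dst)
    return sxy / sxx

# Invented trajectories: a head marker moving twice as far as the CoP
# trajectory and an ankle marker moving half as far.
dst = [0.0, 1.0, 2.0, 3.0, 4.0]
head = [2.0 * x for x in dst]
ankle = [0.5 * x for x in dst]
lam_head = scaling_coefficient(dst, head)    # 2.0 (maximal, as for the head)
lam_ankle = scaling_coefficient(dst, ankle)  # 0.5 (minimal, as for the ankle)
```

A larger λ means the segment's excursion amplifies the platform signal, which is why the head markers, with the maximal coefficient, flag the segment most predisposed to instability.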

Keywords: center of pressure (CoP), method of developed statokinesigram trajectory (MDST), model of postural system behavior, retroreflective marker data

Procedia PDF Downloads 350
63 The Concurrent Effect of Autistic and Schizotypal Traits on Convergent and Divergent Thinking

Authors: Ahmad Abu-Akel, Emilie De Montpellier, Sophie Von Bentivegni, Lyn Luechinger, Alessandro Ishii, Christine Mohr

Abstract:

Convergent and divergent thinking are two main components of creativity that have been viewed as complementary. While divergent thinking refers to the fluency and flexibility of generating new ideas, convergent thinking refers to the ability to systematically apply rules and knowledge to arrive at the optimal solution or idea. These creativity components have been shown to be susceptible to variation in subclinical expressions of autistic and schizotypal traits within the general population. Research, albeit inconclusively, has mainly linked positive schizotypal traits with divergent thinking and autistic traits with convergent thinking. However, cumulative evidence suggests that these trait dimensions co-occur in the same individual more often than would be expected by chance and that their concurrent effect can be diametric and even interactive. The current study aimed at investigating the concurrent effect of these trait dimensions on tasks assessing convergent and divergent thinking abilities. We predicted that individuals with high positive schizotypal traits alone would perform particularly well on the divergent thinking task, whilst those with high autistic traits alone would perform particularly well on the convergent thinking task. Crucially, we also predicted that individuals who are high on both autistic and positive schizotypal traits would perform particularly well on both the divergent and convergent thinking tasks. This was investigated in a non-clinical sample of 142 individuals (Males = 45%; Mean age = 21.45, SD = 2.30), sufficient to minimally observe an effect size f² ≥ .10. Divergent thinking was evaluated using the Alternative Uses Task, and convergent thinking with the Anagrams Task. Autistic and schizotypal traits were respectively assessed with the Autism Quotient Questionnaire (AQ) and the Oxford-Liverpool Inventory of Feelings and Experiences (O-LIFE).
Regression analyses revealed that the positive association of autistic traits with convergent thinking scores was qualified by an interaction with positive schizotypal traits. Specifically, positive schizotypal traits were negatively associated with convergent thinking scores when AQ scores were relatively low, but this trend was reversed when AQ scores were high. Conversely, the positive effect of AQ scores on convergent thinking progressively increased with increasing positive schizotypal traits. The results of the divergent thinking task are currently being analyzed and will be reported at the conference. The association of elevated autistic and positive schizotypal traits with convergent thinking may represent a unique profile of creative thinkers who are able to simultaneously draw on trait-specific advantages conferred by autistic and positive schizotypal traits, such as local and global processing. This suggests that main-effect models tell an incomplete story regarding the effect of autistic and positive schizotypal traits on creativity-related processes. Future creativity research should consider their interaction and the benefits conferred by their co-presence.
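The interaction pattern described, with the simple slope of schizotypy reversing sign between low and high AQ, corresponds to a standard moderated regression. A minimal sketch with simulated data (the variable names, effect sizes, and the ±1 SD probing points are illustrative assumptions, not the study's estimates):

```python
import numpy as np

def simple_slopes(y, aq, schizo):
    """OLS with an AQ x positive-schizotypy interaction term, then the
    'simple slope' of schizotypy on the outcome at low vs. high AQ
    (probed at -1 SD and +1 SD of AQ, a common convention)."""
    X = np.column_stack([np.ones_like(aq), aq, schizo, aq * schizo])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    lo, hi = aq.mean() - aq.std(), aq.mean() + aq.std()
    return b, b[2] + b[3] * lo, b[2] + b[3] * hi

# Simulated data in which the schizotypy slope is negative at low AQ
# but positive at high AQ, mirroring the reported reversal.
rng = np.random.default_rng(2)
aq = rng.normal(0.0, 1.0, 300)
sch = rng.normal(0.0, 1.0, 300)
y = 1.0 + 0.5 * aq - 0.3 * sch + 0.4 * aq * sch
b, slope_lo, slope_hi = simple_slopes(y, aq, sch)
```

A significant positive coefficient on the product term (`b[3]` here) is what "qualified by an interaction" means operationally.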

Keywords: autism, schizotypy, convergent thinking, divergent thinking, comorbidity

Procedia PDF Downloads 180
62 Near-Miss Deep Learning Approach for Neuro-Fuzzy Risk Assessment in Pipelines

Authors: Alexander Guzman Urbina, Atsushi Aoyama

Abstract:

The sustainability of the traditional technologies employed in energy and chemical infrastructure poses a major challenge for our society. When making decisions about the safety of industrial infrastructure, accidental risk values become relevant points of discussion. The challenge, however, is the reliability of the models employed to obtain the risk data: such models usually involve a large number of variables and large amounts of uncertainty. The most efficient techniques to overcome these problems are built using Artificial Intelligence (AI), and more specifically hybrid systems such as Neuro-Fuzzy algorithms. Therefore, this paper aims to introduce a hybrid algorithm for risk assessment trained on near-miss accident data. As mentioned above, the sustainability of traditional technologies in energy and chemical infrastructure constitutes one of the major challenges that today’s societies and firms are facing. Beyond that, adapting those technologies to the effects of climate change in sensitive environments is a critical concern for safety and risk management. In this regard, we argue that the social consequences of catastrophic risks are increasing rapidly, due mainly to the concentration of people and energy infrastructure in hazard-prone areas, aggravated by a lack of knowledge about the risks. In addition to these social consequences, and because the industrial sector is critical infrastructure whose failure would have a large impact on the economy, industrial safety has become a critical issue for society. Regarding this safety concern, pipeline operators and regulators have been performing risk assessments in an attempt to evaluate accurately the probabilities of failure of the infrastructure and the consequences associated with those failures.
However, estimating accidental risks in critical infrastructure involves substantial effort and cost due to the number of variables involved, the complexity, and the lack of information. Therefore, this paper aims to introduce a well-trained algorithm for risk assessment using deep learning, capable of dealing efficiently with this complexity and uncertainty. The advantage of deep learning trained on near-miss accident data is that it can be employed in risk assessment as an efficient engineering tool to treat the uncertainty of risk values in complex environments. The basic idea of the Near-Miss Deep Learning Approach for Neuro-Fuzzy Risk Assessment in Pipelines is to improve the validity of the risk values by learning from near-miss accidents and imitating the human expertise of scoring risks and setting tolerance levels. In summary, the method of Deep Learning for Neuro-Fuzzy Risk Assessment involves a regression technique called the group method of data handling (GMDH), which determines the optimal configuration of the risk assessment model and its parameters using polynomial theory.
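GMDH, as referenced here, is in its simplest form a self-organizing regression that fits a quadratic "partial description" to every pair of inputs and keeps the candidate with the lowest error on an external validation set. A minimal single-layer sketch (illustrative only; the function names and the train/validation split are our assumptions, not the authors' implementation):

```python
import itertools
import numpy as np

def fit_quadratic_pair(x1, x2, y):
    """Least-squares fit of the classic GMDH partial description:
    y ~ a + b*x1 + c*x2 + d*x1^2 + e*x2^2 + f*x1*x2."""
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict(coef, x1, x2):
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    return A @ coef

def gmdh_layer(X_train, y_train, X_val, y_val):
    """Try every pair of inputs and return (error, pair, coefficients)
    for the partial description with the lowest mean squared error on
    the external validation set -- GMDH's model-selection criterion."""
    best = None
    for i, j in itertools.combinations(range(X_train.shape[1]), 2):
        coef = fit_quadratic_pair(X_train[:, i], X_train[:, j], y_train)
        err = np.mean((predict(coef, X_val[:, i], X_val[:, j]) - y_val) ** 2)
        if best is None or err < best[0]:
            best = (err, (i, j), coef)
    return best
```

A full GMDH network would stack such layers, feeding the surviving outputs forward until the validation error stops improving.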

Keywords: deep learning, risk assessment, neuro fuzzy, pipelines

Procedia PDF Downloads 292
61 Signaling Theory: An Investigation on the Informativeness of Dividends and Earnings Announcements

Authors: Faustina Masocha, Vusani Moyo

Abstract:

For decades, dividend announcements have been presumed to contain important signals about the future prospects of companies, and the same has been presumed about management earnings announcements. Despite both dividend and earnings announcements being considered informative, a number of researchers have questioned their credibility and found both to contain only short-term signals. Regarding dividend announcements, some authors argued that although they may contain important information that changes share prices, and consequently results in the accumulation of abnormal returns, they are less informative than other signaling tools such as earnings announcements. Yet this claim has been refuted by other researchers, who found the effect of earnings to be transitory and of little value to shareholders, as indicated by the small abnormal returns earned during the period surrounding earnings announcements. Considering the above, it is apparent that both dividends and earnings have been hypothesized to have a signaling impact, which prompts one to question which of these two signaling tools is more informative. To answer this question, two follow-up questions were asked. The first sought to determine which event has the greater effect on share prices, while the second focused on which event influences trading volume the most. To answer the first question and evaluate the effect that each of these events had on share prices, an event study methodology was employed on a sample of the top 10 JSE-listed companies, using data collected from 2012 to 2019, to determine whether shareholders gained abnormal returns (ARs) around announcement dates. The event that resulted in the most persistent and largest ARs was considered more informative.
For the second follow-up question, an investigation was conducted to determine whether dividend or earnings announcements influenced trading patterns, resulting in abnormal trading volumes (ATV) around announcement time; the event that resulted in the greater ATV was considered more informative. Using an estimation period of 20 days, an event window of 21 days, and hypothesis testing, it was found that announcements of earnings increases resulted in the largest ARs and Cumulative Abnormal Returns (CARs) and had a lasting effect, whereas the effect of dividend announcements lasted only until day +3. This supports empirical arguments that the signaling effect of dividends has been diminishing. It was also found that when reported earnings declined relative to the previous period, trading volume increased, resulting in ATV. Although dividend announcements did result in abnormal returns, these were smaller than those earned around earnings announcements, which refutes a number of theoretical and empirical arguments that found dividends to be more informative than earnings announcements.
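The event-study machinery described, with a 20-day estimation window, a 21-day event window, and ARs cumulated into CARs, can be sketched with a standard market model; this is a generic illustration rather than the authors' code, and the assumption that the event window immediately follows the estimation window is ours:

```python
import numpy as np

def market_model_car(stock_ret, market_ret, est_len=20, event_len=21):
    """Market-model event study: estimate alpha and beta by OLS on the
    estimation window, compute abnormal returns (actual minus expected)
    over the event window, and cumulate them into CARs. The two return
    series cover the estimation window followed by the event window."""
    r_est, m_est = stock_ret[:est_len], market_ret[:est_len]
    beta, alpha = np.polyfit(m_est, r_est, 1)        # OLS slope, intercept
    r_evt = stock_ret[est_len:est_len + event_len]
    m_evt = market_ret[est_len:est_len + event_len]
    ar = r_evt - (alpha + beta * m_evt)              # abnormal returns
    return ar, ar.cumsum()                           # ARs and CARs
```

In the study's design this would be run per firm per announcement, with the day-by-day ARs averaged across events before testing for significance.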

Keywords: dividend signaling, event study methodology, information content of earnings, signaling theory

Procedia PDF Downloads 172
60 A Minimally Invasive Approach Using Bio-Miniatures Implant System for Full Arch Rehabilitation

Authors: Omid Allan

Abstract:

The advent of ultra-narrow-diameter implants initially offered an alternative to wider conventional implants. However, their design limitations restricted their applicability primarily to overdentures and cement-retained fixed prostheses, often with unpredictable long-term outcomes. The introduction of the new Miniature Implants has revolutionized the field of implant dentistry, leading to a more streamlined approach. The utilization of Miniature Implants has emerged as a promising alternative to the traditional approach, which entails traumatic sequential bone drilling and the use of conventional implants for full and partial arch restorations. The innovative "BioMiniatures" Implant System serves as a groundbreaking bridge connecting mini implants with standard implant systems. This system allows practitioners to harness the advantages of ultra-small implants, enabling minimally invasive insertion and facilitating the application of fixed screw-retained prostheses, which were previously available only with conventional wider implant systems. This approach streamlines full and partial arch rehabilitation with minimal or even no bone drilling, significantly reducing surgical risks and complications for clinicians while minimizing patient morbidity. The ultra-narrow diameter and self-advancing features of these implants eliminate the need for invasive and technically complex procedures such as bone augmentation and guided bone regeneration (GBR), particularly in cases involving thin alveolar ridges. Furthermore, the absence of a microgap between the implant and abutment eliminates the potential for micro-leakage and micro-pumping effects, effectively mitigating the risk of marginal bone loss and future peri-implantitis. The cumulative experience of restoring over 50 full and partial arch edentulous cases with this system has yielded an outstanding success rate exceeding 97%.
The long-term success with a stable marginal bone level in the study firmly establishes these implants as a dependable alternative to conventional implants, especially for full arch rehabilitation cases. Full arch rehabilitation with these implants holds the promise of providing a simplified solution for edentulous patients who typically present with atrophic narrow alveolar ridges, eliminating the need for extensive GBR and bone augmentation to restore their dentition with fixed prostheses.

Keywords: mini-implant, biominiatures, miniature implants, minimally invasive dentistry, full arch rehabilitation

Procedia PDF Downloads 74
59 Targeting and Developing the Remaining Pay in an Ageing Field: The Ovhor Field Experience

Authors: Christian Ihwiwhu, Nnamdi Obioha, Udeme John, Edward Bobade, Oghenerunor Bekibele, Adedeji Awujoola, Ibi-Ada Itotoi

Abstract:

Understanding the complexity in the distribution of hydrocarbon in a simple structure with flow baffles and connectivity issues is critical to targeting and developing the remaining pay in a mature asset. Subtle facies changes (heterogeneity) can have a drastic impact on reservoir fluid movement, and this can be crucial to identifying sweet spots in mature fields. This study aims to evaluate selected reservoirs in the Ovhor Field, Niger Delta, Nigeria, with the objective of optimising production from the field by targeting undeveloped oil reserves and bypassed pay, and of gaining an improved understanding of the selected reservoirs to increase the company’s reservoir limits. The task at the Ovhor field is complicated by poor stratigraphic seismic resolution over the field. 3-D geological (sedimentology and stratigraphy) interpretation, results from quantitative interpretation, and a proper understanding of production data have been used in recognizing flow baffles and undeveloped compartments in the field. The full-field 3-D model has been constructed so as to capture the heterogeneities and the various compartments in the field, to aid the proper simulation of fluid flow for future production prediction, proper history matching, and the design of well trajectories to adequately target undeveloped oil. Reservoir property models (porosity, permeability, and net-to-gross) have been constructed by biasing log-interpreted properties to a defined environment-of-deposition model whose interpretation captures the heterogeneities expected in the studied reservoirs. At least two scenarios have been modelled for most of the studied reservoirs to capture the range of uncertainties we are dealing with. The total original oil-in-place volume for the four reservoirs studied is 157 MMstb.
The cumulative oil and gas production from the selected reservoirs is 67.64 MMstb and 9.76 Bscf respectively, with a current production rate of about 7035 bopd and 4.38 MMscf/d (as at 31/08/2019). Dynamic simulation and production forecasting on the four reservoirs gave undeveloped reserves of about 3.82 MMstb from two identified oil restoration activities: side-tracking and re-perforation of existing wells. This integrated approach led to the identification of bypassed oil in some areas of the selected reservoirs and an improved understanding of the studied reservoirs. New wells have been, and are being, drilled to test the results of our studies, and the results so far have been confirmatory and satisfying.

Keywords: facies, flow baffle, bypassed pay, heterogeneities, history matching, reservoir limit

Procedia PDF Downloads 129
58 The Potential of Role Models in Enhancing Smokers' Readiness to Change (Decision to Quit Smoking): A Case Study of Saudi National Anti-Smoking Campaign

Authors: Ghada M. AlSwayied, Anas N. AlHumaid

Abstract:

Smoking has been linked to thousands of deaths worldwide. Around three million adults continue to use tobacco each day in Saudi Arabia, a sign that smoking is prevalent among the Saudi population and is clearly a public health threat. Although anti-smoking awareness efforts run continuously, smoking noticeably continues to increase as a common practice, especially among young adults across the world. It was therefore essential to ask what motivates smokers to think about quitting. Can a graphic, emotional ad focusing on health consequences really make a difference? A case study was conducted on the Annual Anti-Smoking National Campaign run by the Saudi Ministry of Health in May 2017, to assess the campaign's effects on the number of calls, the number of clinic visits, and online access to health messages during and after the campaign period from May to August, compared with the previous campaign in 2016. An educational video was selected as the primary tool to deliver the smoking health message. The Minister of Health, acting as a role model for young adults, delivered a direct message to smokers while avoiding the use of smoking cues: citing the serious consequences of smoking, he announced the cancellation of the media campaign and the redirection of its budget to smoking cessation clinics. The positive responses and interactions with the campaign were remarkable, achieving high rates of recall and recognition. During the campaign, the number of calls to book a visit reached 45,880, total online views ran to 1,253,879, and clinic visits rose by a cumulative 213 percent. Notably, a total of 15,192 patients visited the clinics over three months, compared with merely 4,850 patients over the previous year's campaign period.
Furthermore, around half of the patients who visited the clinics were aged 26 to 40. There was great progress in enhancing public awareness of 'where to go' for help in making a quit attempt. With regard to the stages-of-change theory, it was predicted that, by following this direct-message technique, the proportion of patients in the contemplation and preparation stages would increase. No process evaluation was obtained to assess the implementation of the campaign's activities.
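The reported cumulative rise of 213 percent is consistent with the patient counts given in the abstract (15,192 visits versus 4,850 in the previous campaign period); a one-line check:

```python
# Cumulative percentage increase in clinic visits implied by the reported counts
visits_2017, visits_2016 = 15192, 4850
increase_pct = (visits_2017 - visits_2016) / visits_2016 * 100
```

This confirms that the 213 percent figure is an increase relative to the 2016 baseline, not a share of total visits.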

Keywords: smoking, health promotion, role model, educational material, intervention, community health

Procedia PDF Downloads 149
57 Measurement of in-situ Horizontal Root Tensile Strength of Herbaceous Vegetation for Improved Evaluation of Slope Stability in the Alps

Authors: Michael T. Lobmann, Camilla Wellstein, Stefan Zerbe

Abstract:

Vegetation plays an important role for the stabilization of slopes against erosion processes, such as shallow erosion and landslides. Plant roots reinforce the soil, increase soil cohesion and often cross possible shear planes. Hence, plant roots reduce the risk of slope failure. Generally, shrub and tree roots penetrate deeper into the soil vertically, while roots of forbs and grasses are concentrated horizontally in the topsoil and organic layer. Therefore, shrubs and trees have a higher potential for stabilization of slopes with deep soil layers than forbs and grasses. Consequently, research mainly focused on the vertical root effects of shrubs and trees. Nevertheless, a better understanding of the stabilizing effects of grasses and forbs is needed for better evaluation of the stability of natural and artificial slopes with herbaceous vegetation. Despite the importance of vertical root effects, field observations indicate that horizontal root effects also play an important role for slope stabilization. Not only forbs and grasses, but also some shrubs and trees form tight horizontal networks of fine and coarse roots and rhizomes in the topsoil. These root networks increase soil cohesion and horizontal tensile strength. Available methods for physical measurements, such as shear-box tests, pullout tests and singular root tensile strength measurement can only provide a detailed picture of vertical effects of roots on slope stabilization. However, the assessment of horizontal root effects is largely limited to computer modeling. Here, a method for measurement of in-situ cumulative horizontal root tensile strength is presented. A traction machine was developed that allows fixation of rectangular grass sods (max. 30x60cm) on the short ends with a 30x30cm measurement zone in the middle. On two alpine grass slopes in South Tyrol (northern Italy), 30x60cm grass sods were cut out (max. depth 20cm). 
Grass sods were pulled apart, measuring the horizontal tensile strength over a 30 cm width over time. The horizontal tensile strength of the sods was measured and compared for different soil depths, hydrological conditions, and root physiological properties. The results improve our understanding of horizontal root effects on slope stabilization and can be used for improved evaluation of grass slope stability.

Keywords: grassland, horizontal root effect, landslide, mountain, pasture, shallow erosion

Procedia PDF Downloads 166
56 The Trade Flow of Small Association Agreements When Rules of Origin Are Relaxed

Authors: Esmat Kamel

Abstract:

This paper aims to shed light on the extent to which the Agadir Association agreement has fostered interregional trade between the E.U_26 and the Agadir_4 countries, once we control for the evolution of the Agadir countries' exports to the rest of the world. The next question concerns any remarkable variation in the spatial/sectoral structure of exports, and the extent to which it has been induced by the Agadir agreement itself, precisely after the adoption of rules of origin and the PANEURO diagonal cumulation scheme. The paper's empirical dataset, covering the timeframe 2000-2009, was designed to account for sector-specific export and intermediate flows; the bilateral structured gravity model was custom-tailored to capture sector- and regime-specific rules of origin, and the Poisson Pseudo-Maximum Likelihood estimator was used to estimate the gravity equation. The methodological approach of this work is threefold. First, a hierarchical cluster analysis was conducted to classify final export flows showing a certain degree of linkage between each other. The analysis resulted in three main sectoral clusters of exports between the Agadir_4 and E.U_26: cluster 1 for petrochemical-related sectors, cluster 2 for durable goods, and cluster 3 for heavy-duty machinery and spare-parts sectors. Second, the export flows from the three clusters that were treated with diagonal rules of origin were compared with an equally comparable untreated control group through a double-differences approach. Third, the results were verified through a robustness check applying propensity score matching, to validate that the same sectoral final export and intermediate flows increased when rules of origin were relaxed.
Throughout this analysis, the interaction term combining the treatment effect and time turned out to be at least partially significant for 13 of the 17 covered sectors, further asserting that treatment with diagonal rules of origin contributed to increasing the Agadir_4 countries' final and intermediate exports to the E.U_26 by 335% on average, and to changing the structure and composition of Agadir_4 exports to the E.U_26 countries.
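The PPML gravity estimator with a treatment-by-time interaction can be sketched in a few lines; this is a generic illustration under our own assumptions (simulated dummies, Newton-Raphson fitting), not the paper's specification. A coefficient β on the interaction translates into a percentage effect of exp(β) − 1, which is how figures such as the reported 335% arise:

```python
import numpy as np

def ppml(X, y, n_iter=50):
    """Poisson pseudo-maximum-likelihood (Santos Silva & Tenreyro style)
    estimator for a gravity equation y_i ~ exp(x_i'b), fitted by
    Newton-Raphson on the Poisson pseudo-log-likelihood.
    X should include a constant column."""
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ b)
        grad = X.T @ (y - mu)            # score of the pseudo-likelihood
        hess = X.T @ (X * mu[:, None])   # Fisher information
        b = b + np.linalg.solve(hess, grad)
    return b

# Difference-in-differences design: constant, treated, post-period,
# and the treated x post interaction (all simulated for illustration).
rng = np.random.default_rng(1)
n = 400
treat = rng.integers(0, 2, n).astype(float)
post = rng.integers(0, 2, n).astype(float)
X = np.column_stack([np.ones(n), treat, post, treat * post])
true_b = np.array([1.0, 0.3, 0.2, 0.5])
y = np.exp(X @ true_b)                   # noise-free trade flows
b = ppml(X, y)
```

The interaction coefficient `b[3]` carries the double-differences treatment effect; its percentage interpretation is `exp(b[3]) - 1`.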

Keywords: agadir association agreement, structured gravity model, hierarchal cluster analysis, double differences estimation, propensity score matching, diagonal and relaxed rules of origin

Procedia PDF Downloads 319
55 Post-Harvest Biopreservation of Fruit and Vegetables with Application of Lactobacillus Strains

Authors: Judit Perjessy, Zsolt Zalan, Ferenc Hegyi, Eniko Horvath-Szanics, Krisztina Takacs, Andras Nagy, Adel Klupacs, Erika Koppany-Szabo, Zhirong Wang, Kaituo Wang, Muying Du, Jianquan Kan

Abstract:

Post-harvest diseases cause great economic losses in fruit and vegetables, so preventing this deterioration is of great importance. Fungicides are extensively used against the fungi that cause most of these diseases; however, there are increasing consumer concerns over the presence of pesticide residues in food. An alternative and, in recent years, increasingly studied method for preventing these diseases is biocontrol, in which antagonistic microorganisms are used to control the fungi. The genus Lactobacillus is well known and extensively studied, but its applicability as a biocontrol agent in the post-harvest preservation of fruit and vegetables is poorly investigated, even though these bacteria can be found on the surface of plants and have great antimicrobial activity. In our study, we investigated the chitinase activity, the antifungal effect, and the applicability of several Lactobacillus strains in order to select potential biocontrol agents. We also investigated the environmental parameters governing the expression of a chitinase-encoding gene, as well as the relationship between actual antifungal activity and potential chitinase activity. Mixed cultures were developed to enhance the antifungal activity, and the optimal mold-spore-to-bacteria concentration ratio was determined for appropriate efficacy. Of the 43 Lactobacillus strains investigated, five (L. acidophilus N2, L. delbrueckii subsp. bulgaricus B397, L. sp. 2231, L. sakei subsp. sakei 2471, L. buchneri 1145) possess a chitinase-coding gene. Proteins with molecular weights and separation properties similar to bacterial chitinases, and possessing chitin-binding properties, were detected in these strains; nevertheless, they were inactive, lacking chitinolytic activity.
In terms of cumulative inhibitory activity, our results showed that certain strains performed significantly better than others; e.g., L. rhamnosus VT1 and L. casei 154 showed a great general antifungal effect against 11 molds of the genera Penicillium and Botrytis isolated from spoiled fruit and vegetables. Some mixed cultures (L. rhamnosus VT1 - L. plantarum 299v) also showed significant antifungal effects against the indigenous molds on the surface of apple fruit during an industrial storage experiment. Thus, they could be promising for post-harvest biopreservation.

Keywords: biocontrol, chitinase, Lactobacillus, post-harvest

Procedia PDF Downloads 154
54 Covariate-Adjusted Response-Adaptive Designs for Semi-Parametric Survival Responses

Authors: Ayon Mukherjee

Abstract:

Covariate-adjusted response-adaptive (CARA) designs use the available responses to skew the treatment allocation in a clinical trial towards the treatment found, at an interim stage, to be best for a given patient's covariate profile. Extensive research has been done on various aspects of CARA designs with the patient responses assumed to follow a parametric model. However, the range of application of such designs is limited in real-life clinical trials, where the responses infrequently fit a particular parametric form, even though robust estimates of the covariate-adjusted treatment effects are obtained under the parametric assumption. To balance these two requirements, designs are developed that are free from distributional assumptions about the survival responses, relying only on the assumption of proportional hazards for the two treatment arms. The proposed designs are developed by deriving two types of optimum allocation designs, and also by using a distribution function to link the past allocation, covariate, and response histories to the present allocation. The optimal designs are based on biased coin procedures with a bias towards the better treatment arm: the doubly-adaptive biased coin design (DBCD) and the efficient randomized adaptive design (ERADE). The treatment allocation proportions for these designs converge to the expected target values, which are functions of the Cox regression coefficients that are estimated sequentially. These expected target values are derived from constrained optimization problems and are updated as information accrues with the sequential arrival of patients. The design based on the link function is derived using the distribution function of a probit model whose parameters are adjusted based on the covariate profile of the incoming patient.
To apply such designs, the treatment allocation probabilities are sequentially modified based on the treatment allocation history, the response history, previous patients' covariates, and the covariates of the incoming patient. Given this information, an expression is obtained for the conditional probability of allocating a patient to a treatment arm. Based on simulation studies, it is found that the ERADE is preferable to the DBCD when the main aim is to minimize the variance of the observed allocation proportion and to maximize the power of the Wald test for a treatment difference; however, the former procedure, being discrete, tends to be slower in converging towards the expected target allocation proportion. The link-function-based design achieves the highest skewness of patient allocation to the best treatment arm and is thus, ethically, the best design. Other comparative merits of the proposed designs are highlighted, and their preferred areas of application are discussed. It is concluded that the proposed CARA designs can be considered suitable alternatives to traditional balanced randomization designs in survival trials in terms of the power of the Wald test, provided that response data are available during the recruitment phase of the trial to enable adaptations of the designs. Moreover, the proposed designs enable more patients to be treated with the better treatment during the trial, making the designs more ethically attractive to patients. An existing clinical trial has been redesigned using these methods.
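The DBCD referred to above is commonly implemented with the Hu-Zhang allocation function, which biases the next assignment towards the sequentially estimated target whenever the observed allocation proportion has drifted away from it. A minimal sketch (this standard functional form and the tuning parameter gamma are our assumptions, not necessarily the authors' exact choice):

```python
def dbcd_prob(x, rho, gamma=2.0):
    """Hu-Zhang doubly-adaptive biased coin allocation function:
    probability of assigning the next patient to arm A, given the
    current observed allocation proportion x of arm A and the
    sequentially estimated target proportion rho (0 < x, rho < 1).
    Larger gamma pulls the allocation back to the target faster."""
    num = rho * (rho / x) ** gamma
    den = num + (1.0 - rho) * ((1.0 - rho) / (1.0 - x)) ** gamma
    return num / den
```

In a CARA setting, rho would itself be recomputed from the sequentially estimated Cox regression coefficients and the incoming patient's covariates before each assignment.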

Keywords: censored response, Cox regression, efficiency, ethics, optimal allocation, power, variability

Procedia PDF Downloads 165
53 Method of Complex Estimation of Text Perusal and Indicators of Reading Quality in Different Types of Commercials

Authors: Victor N. Anisimov, Lyubov A. Boyko, Yazgul R. Almukhametova, Natalia V. Galkina, Alexander V. Latanov

Abstract:

Modern commercials presented on billboards, on TV, and on the Internet contain a lot of information about the product or service in text form. However, this information cannot always be perceived and understood by consumers. Typical sociological focus-group studies often cannot reveal important features of the interpretation and understanding of information read in text messages. In addition, there is no reliable method to determine the degree of understanding of the information contained in a text: merely viewing a text does not mean that the consumer has perceived and understood its meaning. At the same time, tools based on marketing analysis allow only indirect estimation of the process of reading and understanding a text. Therefore, the aim of this work is to develop a valid method of recording objective indicators in real time for assessing the fact of reading and the degree of text comprehension. Psychophysiological parameters recorded during reading can form the basis of such an objective method. We studied the relationship between multimodal psychophysiological parameters and the process of text comprehension during reading using correlation analysis. We used eye-tracking technology to record eye movement parameters to estimate visual attention, electroencephalography (EEG) to assess cognitive load, and polygraphic indicators (skin-galvanic reaction, SGR) that reflect the emotional state of the respondent during reading. We revealed reliable interrelations between perceiving the information and the dynamics of psychophysiological parameters during reading the text in commercials. Eye movement parameters reflected the difficulties arising in respondents while perceiving ambiguous parts of the text. EEG dynamics in the alpha band were related to the cumulative effect of cognitive load. SGR dynamics were related to the emotional state of the respondent and to the meaning of the text and the type of commercial.
EEG and polygraph parameters together also reflected the mental difficulties of respondents in understanding the text and showed significant differences between cases of low and high text comprehension. We also revealed differences in psychophysiological parameters for different types of commercials (static vs. video; financial vs. cinema vs. pharmaceutics vs. mobile communication, etc.). Conclusions: our methodology allows a multimodal evaluation of text perusal and of the quality of text reading in commercials. In general, our results indicate the possibility of designing an integral model that estimates comprehension of a commercial's text on a percentage scale based on all the observed markers.

Keywords: reading, commercials, eye movements, EEG, polygraphic indicators

Procedia PDF Downloads 166
52 Behavioral Patterns of Adopting Digitalized Services (E-Sport versus Sports Spectating) Using Agent-Based Modeling

Authors: Justyna P. Majewska, Szymon M. Truskolaski

Abstract:

The growing importance of digitalized services in the so-called new economy, including the e-sports industry, has become evident recently. Various demographic and technological changes lead consumers to modify their needs, not regarding the services themselves but the method of their application (attracting customers, forms of payment, new content, etc.). In the case of leisure related to competitive spectating, there is a growing need to participate in events whose content is not sports competition but a computer-game challenge: e-sport. The literature in this area has so far focused on determining the number of e-sport fans, with elements of simple statistical description (mainly demographic characteristics such as age, gender, and place of residence). Meanwhile, the development of the industry is influenced by a combination of many different, intertwined demographic, personality, and psychosocial characteristics of customers, as well as the characteristics of their environment. Therefore, there is a need for deeper recognition of the determinants of the behavioral patterns behind customers' selection of digitalized services, which, in the absence of large available data sets, can be achieved using econometric simulations: multi-agent modeling. The cognitive aim of the study is to reveal internal and external determinants of customers' behavioral patterns, taking into account various scenarios of economic development (the pace of digitization and technological development, socio-demographic changes, etc.). In the paper, an agent-based model with heterogeneous agents (characteristics of the customers themselves and of their environment) was developed, which allowed us to identify a three-stage development scenario: i) initial interest, ii) standardization, and iii) full professionalization. The probabilities governing the transition process were estimated using the Method of Simulated Moments.
The estimation of the agent-based model parameters and the sensitivity analysis reveal crucial factors that have driven the rising trend in e-sport spectating and, in a wider perspective, the development of digitalized services. Among the psychosocial characteristics of customers, these are the level of familiarization with the rules of games and sports disciplines, active and passive participation history, and the individual perception of challenging activities. Environmental factors include the general reception of games, the number and level of recognition of community builders, and the level of technological development of streaming and community-building platforms. However, the crucial factor underlying the good predictive power of the model is the level of professionalization. In the initial interest phase, the entry barriers for new customers are high; they decrease during the standardization phase and increase again in the full professionalization phase, when new customers perceive the required participation history as inaccessible. In this case, they are prone to switch to new methods of service application – in the case of e-sport vs. sports, to new content and more modern methods of its delivery. In a wider context, the findings in the paper support the idea of a life cycle of services regarding the methods of their application, from “traditional” to digitalized.
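The phase-dependent entry barriers described above can be sketched as a toy agent-based simulation. Everything below (the barrier values, the single "familiarity" trait, the adoption rule) is an illustrative assumption, not the estimated model:

```python
import random

# Hypothetical entry-barrier levels for the three phases named in the
# abstract (illustrative values, not the estimated model parameters).
BARRIERS = {
    "initial_interest": 0.8,
    "standardization": 0.4,
    "full_professionalization": 0.7,
}

def simulate_adoption(n_agents=1000, phase="standardization", seed=42):
    """Each heterogeneous agent adopts e-sport spectating when its
    familiarity with the rules of games (a random trait) exceeds the
    phase's entry barrier."""
    rng = random.Random(seed)
    familiarity = [rng.random() for _ in range(n_agents)]
    barrier = BARRIERS[phase]
    return sum(f > barrier for f in familiarity) / n_agents

# Adoption peaks in the standardization phase, where barriers are lowest.
rates = {phase: simulate_adoption(phase=phase) for phase in BARRIERS}
```

In a full model, the barrier and trait parameters would be calibrated so that simulated transition moments match observed ones, which is the role the Method of Simulated Moments plays in the paper.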

Keywords: agent-based modeling, digitalized services, e-sport, spectator motives

Procedia PDF Downloads 172
51 Combining a Continuum of Hidden Regimes and a Heteroskedastic Three-Factor Model in Option Pricing

Authors: Rachid Belhachemi, Pierre Rostan, Alexandra Rostan

Abstract:

This paper develops a discrete-time option pricing model for index options. The model consists of two key ingredients. First, daily stock return innovations are driven by a continuous hidden threshold mixed skew-normal (HTSN) distribution which generates the conditional non-normality needed to fit daily index returns. The most important feature of the HTSN is the inclusion of a latent state variable with a continuum of states, unlike traditional mixture distributions where the state variable is discrete with a small number of states. The HTSN distribution belongs to the class of univariate probability distributions where the parameters of the distribution capture the dependence between the variable of interest and the continuous latent state variable (the regime). The distribution has an interpretation in terms of a mixture distribution with time-varying mixing probabilities. It has been shown empirically that this distribution outperforms its main competitor, the mixed normal (MN) distribution, in terms of capturing the stylized facts known for stock returns, namely, volatility clustering, leverage effect, skewness, kurtosis and regime dependence. Second, heteroscedasticity in the model is captured by a three-exogenous-factor GARCH model (GARCHX), where the factors are taken from a principal component analysis (PCA) of various world indices; the model is then applied to option pricing. The factors of the GARCHX model are extracted from a matrix of world indices by applying PCA. The empirically determined factors are uncorrelated and represent truly different common components driving the returns. Both the factors and the eight parameters inherent to the HTSN distribution aim at capturing the impact of the state of the economy on price levels, since the distribution parameters have economic interpretations in terms of conditional volatilities and correlations of the returns with the hidden continuous state.
The PCA identifies statistically independent factors affecting the random evolution of a given pool of assets – in our paper, a pool of international stock indices – and sorts them by order of relative importance. The PCA computes a historical cross-asset covariance matrix and identifies principal components representing independent factors. In our paper, the factors are used to calibrate the HTSN-GARCHX model and are ultimately responsible for the nature of the distribution of the random variables being generated. We benchmark our model against the MN-GARCHX model, following the same PCA methodology, and against the standard Black-Scholes model. We show that our model outperforms the benchmark in terms of RMSE in dollar losses for put and call options, which in turn outperforms the analytical Black-Scholes model by capturing the stylized facts known for index returns, namely, volatility clustering, leverage effect, skewness, kurtosis and regime dependence.
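The PCA step can be sketched as follows, using synthetic returns in place of the actual world-index matrix (the dimensions and data are assumptions for illustration only):

```python
import numpy as np

# Synthetic stand-in for the matrix of world-index returns.
rng = np.random.default_rng(0)
n_days, n_indices = 500, 8
returns = rng.standard_normal((n_days, n_indices)) * 0.01

# PCA via eigendecomposition of the historical cross-asset covariance matrix.
centered = returns - returns.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending order
order = np.argsort(eigvals)[::-1]        # sort by explained variance
loadings = eigvecs[:, order[:3]]         # keep the top 3 factors (GARCHX uses 3)
factors = centered @ loadings            # factor time series

# As the abstract notes, the extracted factors are uncorrelated.
corr = np.corrcoef(factors, rowvar=False)
```

The three factor series would then enter the GARCHX variance equation as exogenous regressors.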

Keywords: continuous hidden threshold, factor models, GARCHX models, option pricing, risk-premium

Procedia PDF Downloads 297
50 Epidemiology of Gestational Choriocarcinoma: A Systematic Review

Authors: Farah Amalina Mohamed Affandi, Redhwan Ahmad Al-Naggar, Seok Mui Wang, Thanikasalam Kathiresan

Abstract:

Gestational choriocarcinoma is a condition in which there is an abnormal growth or a tumor inside a woman’s uterus after conception. It is a type of gestational trophoblastic disease that is relatively rare and malignant. The current epidemiological data on this disease are inadequate. The purposes of this study are to examine the epidemiology of choriocarcinoma and its risk factors based on all available population-based and hospital-based data on the disease. In this study, we searched the MEDLINE and Cumulative Index to Nursing and Allied Health Literature (CINAHL) databases using the keywords ‘choriocarcinoma’, ‘gestational’, ‘gestational choriocarcinoma’ and ‘epidemiology’. We included only human studies published in English between 1995 and 2015 to ensure up-to-date evidence. Case studies, case reports, animal studies, letters to the editor, news, and review articles were excluded. Retrieved articles were screened in three phases. In the first phase, any articles that did not match the inclusion criteria based solely on their titles were excluded. In the second phase, the abstracts of the remaining articles were screened thoroughly; any articles that did not meet our inclusion criteria were excluded. In the final phase, the full texts of the remaining articles were read and assessed to exclude articles that did not meet the inclusion criteria or that fulfilled the exclusion criteria. Duplicate articles were removed, and systematic reviews and meta-analyses were excluded. Extracted data were summarized descriptively in tables and figures. The reference lists of included studies were thoroughly reviewed in search of other relevant studies. A total of ten studies met all the selection criteria: nine were retrospective studies and one was a cohort study. A total of 4563 cases of choriocarcinoma were reviewed from Korea, Japan, South Africa, the USA, New Mexico, Finland, Turkey, China, Brazil and the Netherlands.
The studies included different age ranges, with mean ages of 28.5 to 30.0 years. All studies investigated the disease’s incidence rate; only two examined the risk factors or associations of the disease. Approximately 20% of the studies showed a reduction in the incidence of choriocarcinoma, while the other 80% showed inconsistent rates. Associations of age, fertility age, occupation and socio-demographic status with the disease remain unclear. There is limited information on the epidemiological aspects of gestational choriocarcinoma. The observed results indicated a decrease in the incidence rate of gestational choriocarcinoma globally, which could be due to the reduction in the incidence of molar pregnancy and the efficacy of treatment, mainly chemotherapy.

Keywords: epidemiology, gestational choriocarcinoma, incidence, prevalence, risk factor

Procedia PDF Downloads 330
49 Assessment of the Efficacy of Routine Medical Tests in Screening Medical Radiation Staff in Shiraz University of Medical Sciences Educational Centers

Authors: Z. Razi, S. M. J. Mortazavi, N. Shokrpour, Z. Shayan, F. Amiri

Abstract:

Long-term exposure to low doses of ionizing radiation occurs in radiation health care workplaces. Although doses in the health professions are generally very low, there are still matters of concern. The radiation safety program promotes occupational radiation safety through accurate and reliable monitoring of radiation workers in order to manage radiation protection effectively. To achieve this goal, it has become mandatory to implement periodic health examinations. As a result, working populations with a common occupational radiation history are screened on the basis of hematological alterations. This paper calls into question the effectiveness of blood component analysis as a screening program, which is mandatory for medical radiation workers in some countries. The study details the distribution and trends of changes in blood components, including white blood cells (WBCs), red blood cells (RBCs) and platelets, as well as the cumulative doses received from occupational radiation exposure. The study was conducted among 199 participants and 100 control subjects at the medical imaging departments of the central hospital of Shiraz University of Medical Sciences during the years 2006–2010. Descriptive and analytical statistics, with P < 0.05 considered statistically significant, were used for data analysis. The results of this study show that there is no significant difference between the radiation workers and controls regarding WBC and platelet counts over the 4 years. We also found no statistically significant difference between the two groups with respect to RBCs. In addition, no statistically significant difference was observed with respect to RBCs when the data were analyzed separately by gender, which was done because of the lower reference range for normal RBC levels in women compared to men.
Moreover, in a separate evaluation of WBC count against the personnel’s work experience and annual exposure dose, no linear correlation was found among the three variables. Since the hematological findings were within the range of control levels, it can be concluded that the radiation dose (which was not more than 7.58 mSv in this study) was too small to stimulate any quantifiable change in the medical radiation workers’ blood counts. Thus, the use of a more accurate screening method, based on the working profile of the radiation workers and their accumulated dose, is suggested. In addition, the complexity of radiation-induced effects and the influence of various factors on blood count alteration should be taken into account.

Keywords: blood cell count, mandatory testing, occupational exposure, radiation

Procedia PDF Downloads 461
48 Effect of Irrigation and Hydrogel on the Water Use Efficiency of Zero-Tilled Green-Gram Relay System in the Eastern Indo-Gangetic Plain

Authors: Benukar Biswas, S. Banerjee, P. K. Bandhyopadhyaya, S. K. Patra, S. Sarkar

Abstract:

Jute can be sown as a relay crop between the lines of 15-20-day-old green gram for additional pulse yield without reducing the yield of jute. The main problem of this system is water use efficiency (WUE). Increases in water productivity and reductions in production cost have been reported for zero-tilled crops. The hydrogel can hold water up to 400 times its weight and can release 95% of the retained water. The present field study was carried out during 2015-16 at BCKV (tropical sub-humid, 1560 mm annual rainfall, 22°58′ N, 88°51′ E, 9.75 m AMSL, sandy loam soil, aeric Haplaquept, pH 6.75, organic carbon 5.4 g kg-1, available N 85 kg ha-1, P2O5 15.3 kg ha-1 and K2O 40 kg ha-1) with four levels of irrigation regimes: no irrigation (RF), cumulative pan evaporation 250 mm (CPE250), CPE125 and CPE83, and three levels of hydrogel: no hydrogel (H0), 2.5 kg ha-1 (H2.5) and 5 kg ha-1 (H5). Throughout the crop growing period, a positive linear relationship held between Leaf Area Index (LAI) and evapotranspiration rate. The strength of the relationship between ETa and LAI increased and peaked at 7 WAS (R2 = 0.78), when green gram was at maturity and the two crops covered nearly the entire base area. The relation started weakening from 13 WAS due to jute leaf shading. A linear relationship between system yield and ET was also obtained in the present study; 75% of the variation in system yield could be predicted by ET alone. Effective rainfall decreased with increasing irrigation frequency due to the enhanced water supply, in contrast to hydrogel application, owing to the difference in water storage capacity. Irrigation contributed a major source of variability in ET. Higher irrigation frequency resulted in higher ET loss, ranging from 574 mm in RF to 764 mm in CPE83. Hydrogel application also increased water storage on a sustained basis and supplied it to the crops, resulting in higher ET, from 639 mm in H0 to 671 mm in H5.
WUE ranged from 0.4 kg m-3 (RF) to 0.63 kg m-3 (CPE83 H5). WUE increased with increased application of irrigation water, from 0.42 kg m-3 in RF to 0.57 kg m-3 in CPE83. Hydrogel application significantly improved the WUE, from 0.45 kg m-3 in H0 to 0.50 kg m-3 in H2.5 and 0.54 kg m-3 in H5. Under a relatively dry root zone (RF), both evaporation and transpiration remained at suboptimal levels, resulting in lower ET as well as lower system yield. The green gram – jute relay system can be water-use efficient, with 38% higher yield, with the application of hydrogel at 2.5 kg ha-1 under the deficit irrigation regime of CPE125 over the rainfed system without the gel. Application of the gel conditioner improved water storage, checked excess water loss from the system, and mitigated the ET demand of the relay system for a longer time. Hence, irrigation frequency was reduced from five times at CPE83 to only three times at CPE125.
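As a back-of-envelope check of the WUE figures: WUE is system yield divided by water consumed, and 1 mm of ET over 1 ha equals 10 m³ of water. The yields below are hypothetical values chosen only to reproduce the reported efficiencies from the reported ET totals:

```python
def wue(yield_kg_ha, et_mm):
    """Water use efficiency in kg m^-3: yield (kg ha^-1) over ET (m^3 ha^-1).
    1 mm of ET depth over 1 ha = 10 m^3 of water."""
    et_m3_ha = et_mm * 10.0
    return yield_kg_ha / et_m3_ha

# Reported ET: 574 mm (RF) and 764 mm (CPE83); yields are hypothetical,
# back-calculated to land near the reported WUE values.
wue_rf = wue(2296, 574)      # ~0.40 kg m^-3, the RF figure
wue_cpe83 = wue(4355, 764)   # ~0.57 kg m^-3, the CPE83 figure
```

The check illustrates why CPE83 can show both the highest ET loss and the highest WUE: its yield gain outpaces its extra water use.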

Keywords: zero tillage, deficit irrigation, hydrogel, relay system

Procedia PDF Downloads 233
47 Risk and Emotion: Measuring the Effect of Emotion and Other Visceral Factors on Decision Making under Risk

Authors: Michael Mihalicz, Aziz Guergachi

Abstract:

Background: The science of modelling choice preferences has evolved over centuries into an interdisciplinary field contributing to several branches of Microeconomics and Mathematical Psychology. Early theories in Decision Science rested on the logic of rationality, but as it and related fields matured, descriptive theories emerged capable of explaining systematic violations of rationality through cognitive mechanisms underlying the thought processes that guide human behaviour. Cognitive limitations are not, however, solely responsible for systematic deviations from rationality, and many are now exploring the effect of visceral factors as the more dominant drivers. The current study builds on the existing literature by exploring sleep deprivation, thermal comfort, stress, hunger, fear, anger and sadness as moderators of three distinct elements that define individual risk preference under Cumulative Prospect Theory. Methodology: This study is designed to compare the risk preferences of participants experiencing an elevated affective or visceral state to those in a neutral state using nonparametric elicitation methods across three domains. Two experiments will be conducted simultaneously using different methodologies. The first will determine visceral states and risk preferences randomly over a two-week period by prompting participants to complete an online survey remotely. In each round of questions, participants will be asked to self-assess their current state using Visual Analogue Scales before answering a series of lottery-style elicitation questions. The second experiment will be conducted in a laboratory setting using psychological primes to induce a desired state. In this experiment, emotional states will be recorded using emotion analytics and used as a basis for comparison between the two methods.
Significance: The expected results include a series of measurable and systematic effects on the subjective interpretations of gamble attributes and evidence supporting the proposition that a portion of the variability in human choice preferences unaccounted for by cognitive limitations can be explained by interacting visceral states. Significant results will promote awareness about the subconscious effect that emotions and other drive states have on the way people process and interpret information, and can guide more effective decision making by informing decision-makers of the sources and consequences of irrational behaviour.
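For orientation, the Cumulative Prospect Theory elements that the elicitation questions target can be sketched with the standard Tversky-Kahneman (1992) functional forms. The parameter values below are the canonical published estimates, not this study's:

```python
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """S-shaped value function: concave for gains, convex and steeper
    for losses (lam > 1 encodes loss aversion)."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

def weight(p, gamma=0.61):
    """Inverse-S probability weighting: overweights small probabilities
    and underweights large ones."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def cpt_value(gain, p):
    """CPT evaluation of a simple one-outcome gamble: win `gain` with
    probability p, nothing otherwise."""
    return weight(p) * value(gain)
```

A visceral-state moderator would then be modeled as a shift in alpha, lam, or gamma between the neutral and elevated-state groups.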

Keywords: decision making, emotions, prospect theory, visceral factors

Procedia PDF Downloads 149
46 Systematic Identification of Noncoding Cancer Driver Somatic Mutations

Authors: Zohar Manber, Ran Elkon

Abstract:

Accumulation of somatic mutations (SMs) in the genome is a major driving force of cancer development. Most SMs in the tumor's genome are functionally neutral; however, some cause damage to critical processes and provide the tumor with a selective growth advantage (termed cancer driver mutations). Current research on functional significance of SMs is mainly focused on finding alterations in protein coding sequences. However, the exome comprises only 3% of the human genome, and thus, SMs in the noncoding genome significantly outnumber those that map to protein-coding regions. Although our understanding of noncoding driver SMs is very rudimentary, it is likely that disruption of regulatory elements in the genome is an important, yet largely underexplored mechanism by which somatic mutations contribute to cancer development. The expression of most human genes is controlled by multiple enhancers, and therefore, it is conceivable that regulatory SMs are distributed across different enhancers of the same target gene. Yet, to date, most statistical searches for regulatory SMs have considered each regulatory element individually, which may reduce statistical power. The first challenge in considering the cumulative activity of all the enhancers of a gene as a single unit is to map enhancers to their target promoters. Such mapping defines for each gene its set of regulating enhancers (termed "set of regulatory elements" (SRE)). Considering multiple enhancers of each gene as one unit holds great promise for enhancing the identification of driver regulatory SMs. However, the success of this approach is greatly dependent on the availability of comprehensive and accurate enhancer-promoter (E-P) maps. To date, the discovery of driver regulatory SMs has been hindered by insufficient sample sizes and statistical analyses that often considered each regulatory element separately. 
In this study, we analyzed more than 2,500 whole-genome sequence (WGS) samples provided by The Cancer Genome Atlas (TCGA) and The International Cancer Genome Consortium (ICGC) in order to identify such driver regulatory SMs. Our analyses took into account the combinatorial aspect of gene regulation by considering all the enhancers that control the same target gene as one unit, based on E-P maps from three genomics resources. The identification of candidate driver noncoding SMs is based on their recurrence. We searched for SREs of genes that are "hotspots" for SMs (that is, they accumulate SMs at a significantly elevated rate). To test the statistical significance of recurrence of SMs within a gene's SRE, we used both global and local background mutation rates. Using this approach, we detected - in seven different cancer types - numerous "hotspots" for SMs. To support the functional significance of these recurrent noncoding SMs, we further examined their association with the expression level of their target gene (using gene expression data provided by the ICGC and TCGA for samples that were also analyzed by WGS).
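A recurrence test of the kind described, pooling SMs across all enhancers in a gene's SRE and comparing the count against a background mutation rate, can be sketched as a simple Poisson tail test. The function names and the rates used below are illustrative assumptions, not the study's actual statistics:

```python
from math import exp

def poisson_sf(k, mu):
    """P(X >= k) for X ~ Poisson(mu), via the complementary CDF."""
    term = exp(-mu)
    cdf = 0.0
    for i in range(k):
        cdf += term
        term *= mu / (i + 1)
    return max(0.0, 1.0 - cdf)

def sre_hotspot_pvalue(n_mutations, sre_length_bp, background_rate_per_bp):
    """Is the pooled SM count across a gene's set of regulatory elements
    (SRE) higher than the background mutation rate predicts?"""
    expected = sre_length_bp * background_rate_per_bp
    return poisson_sf(n_mutations, expected)
```

In practice the background rate would be estimated both globally and locally, as the abstract notes, and the resulting p-values corrected for multiple testing across genes.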

Keywords: cancer genomics, enhancers, noncoding genome, regulatory elements

Procedia PDF Downloads 104
45 Characterization of Berberine Hydrochloride Nanoparticles

Authors: Bao-Fang Wen, Meng-Na Dai, Gao-Pei Zhu, Chen-Xi Zhang, Jing Sun, Xun-Bao Yin, Yu-Han Zhao, Hong-Wei Sun, Wei-Fen Zhang

Abstract:

Drug-loaded nanoparticles containing berberine hydrochloride (BH/FA-CTS-NPs) were prepared. The physicochemical characteristics of BH/FA-CTS-NPs and their inhibitory effect on HeLa cells were investigated. Folic acid-conjugated chitosan (FA-CTS) was prepared by an amino reaction between folic acid active ester and chitosan molecules; BH/FA-CTS-NPs were prepared using an ionic cross-linking technique with BH as a model drug. The morphology and particle size were determined by Transmission Electron Microscopy (TEM). The average diameters and polydispersity index (PDI) were evaluated by Dynamic Light Scattering (DLS). The interactions between the various components of the nanocomplex were characterized by Fourier Transform Infrared Spectroscopy (FT-IR). The entrapment efficiency (EE), drug loading (DL) and in vitro release were studied by UV spectrophotometry. The anti-proliferative, anti-migratory and anti-invasive effects of BH/FA-CTS-NPs were investigated using MTT assays, wound healing assays, Annexin-V-FITC single staining assays, and flow cytometry, respectively. A subcutaneous HeLa xenograft tumor model in nude mice was established and treated with different drugs to observe the in vivo effect of BH/FA-CTS-NPs on HeLa tumors. The BH/FA-CTS-NPs prepared in this experiment had a regular shape and uniform particle size, with no aggregation. The results of DLS showed that the mean particle size, PDI and zeta potential of BH/FA-CTS NPs were (249.2 ± 3.6) nm, 0.129 ± 0.09 and 33.6 ± 2.09 mV, respectively, and the average diameter and PDI were stable over 90 days. The results of FT-IR demonstrated that the characteristic peaks of FA-CTS and BH/FA-CTS-NPs confirmed that FA-CTS cross-linked successfully and that BH was encapsulated in the NPs. The EE and DL were (79.3 ± 3.12)% and (7.24 ± 1.41)%, respectively.
The results of the in vitro release study indicated that the cumulative release of BH/FA-CTS NPs was (89.48 ± 2.81)% in phosphate-buffered saline (PBS, pH 7.4) within 48 h. The results of MTT and wound healing assays indicated that BH/FA-CTS NPs not only inhibited the proliferation of HeLa cells in a concentration- and time-dependent manner but also induced apoptosis. The subcutaneous xenograft tumor formation rate of the human cervical cancer cell line HeLa in nude mice was 98% two weeks after inoculation. Compared with the BH group and the BH/CTS-NPs group, xenograft tumor growth in the BH/FA-CTS-NPs group was markedly slower, indicating that BH/FA-CTS-NPs could significantly inhibit the growth of HeLa xenograft tumors. BH/FA-CTS NPs with a sustained release effect could be prepared successfully by the ionic cross-linking method. Considering these properties, blocking proliferation and impairing migration of the HeLa cell line, BH/FA-CTS NPs could be an important compound for consideration in the treatment of cervical cancer.
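For reference, EE and DL are conventionally computed from the UV-spectrophotometric assay as follows. The masses below are hypothetical, chosen only to land near the reported values of roughly 79.3% EE and 7.24% DL:

```python
def entrapment_efficiency(total_drug_mg, free_drug_mg):
    """EE% = encapsulated drug / total drug added, where encapsulated
    drug is total drug minus the free (unentrapped) drug measured by UV."""
    return 100.0 * (total_drug_mg - free_drug_mg) / total_drug_mg

def drug_loading(total_drug_mg, free_drug_mg, nanoparticle_mass_mg):
    """DL% = encapsulated drug / total nanoparticle mass."""
    return 100.0 * (total_drug_mg - free_drug_mg) / nanoparticle_mass_mg

# Hypothetical assay masses back-calculated from the reported figures.
ee = entrapment_efficiency(10.0, 2.07)   # close to the reported 79.3 %
dl = drug_loading(10.0, 2.07, 109.5)     # close to the reported 7.24 %
```

The two measures answer different questions: EE reports how much of the added drug ended up inside the particles, while DL reports how drug-rich the particles themselves are.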

Keywords: folic-acid, chitosan, berberine hydrochloride, nanoparticles, cervical cancer

Procedia PDF Downloads 122
44 Incorporating Spatial Transcriptome Data into Ligand-Receptor Analyses to Discover Regional Activation in Cells

Authors: Eric Bang

Abstract:

Interactions between receptors and ligands are crucial for many essential biological processes, including neurotransmission and metabolism. Ligand-receptor analyses that examine cell behavior and interactions often utilize cell type-specific RNA expression from single-cell RNA sequencing (scRNA-seq) data. Using CellPhoneDB, a public repository of ligands, receptors, and ligand-receptor interactions, cell-cell interactions were explored in a scRNA-seq dataset from kidney tissue, and the results were portrayed with dot plots and heat maps. For each cell type, each ligand-receptor pair was aligned with the interacting cell type, and the probabilities of these associations were calculated, with corresponding P values reflecting the average expression values between the pairs and their significance. Using single-cell data (sample kidney cell references), genes in the dataset were cross-referenced with those in the existing CellPhoneDB dataset. For example, a gene such as Pleiotrophin (PTN) present in the single-cell data also needed to be present in the CellPhoneDB dataset. Using the single-cell transcriptomics data via Slide-seq and the reference data, the CellPhoneDB program defines cell types and plots them in different formats, the two main ones being dot plots and heat maps. The dot plot displays derived measures of the cell-to-cell interaction scores and p-values: each row shows a ligand-receptor pair, and each column shows the two interacting cell types. CellPhoneDB defines interactions and interaction levels from the gene expression level, and since the p-value is on a -log10 scale, the larger dots represent more significant interactions. By performing an interaction analysis, a significant interaction was discovered for myeloid and T-cell ligand-receptor pairs, including those between Secreted Phosphoprotein 1 (SPP1) and Fibronectin 1 (FN1), which is consistent with previous findings.
It was proposed that an effective protocol would involve a filtration step, in which cell types would be filtered out depending on which ligand-receptor pair is activated in that part of the tissue, as well as the incorporation of the CellPhoneDB data into a streamlined workflow pipeline. The filtration step would take the form of a Python script that expedites the manual process necessary for dataset filtration; being in Python allows it to be integrated with the CellPhoneDB dataset for future workflow analysis. The manual process involves filtering cell types based on which ligand/receptor pair is activated in kidney cells. One limitation is that some pairings are activated in multiple cell types at a time, so the data must be manipulated manually prior to analysis. Using the filtration script, accurate sorting is incorporated into the CellPhoneDB workflow rather than waiting until the output is produced and then applying the spatial data afterwards. It was envisioned that this would reveal where in the tissue various ligands and receptors are interacting with different cell types, allowing for easier identification of which cells are being impacted and why, for the purpose of disease treatment. The hope is that this new computational method, utilizing spatially explicit ligand-receptor association data, can be used to uncover previously unknown specific interactions within kidney tissue.
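A minimal sketch of the proposed Python filtration step, assuming a CellPhoneDB-style p-values table. The column names, toy values, and function name here are assumptions for illustration, not CellPhoneDB's actual API:

```python
import pandas as pd

def filter_significant_pairs(pvalues, cell_pair, alpha=0.05):
    """Keep only ligand-receptor rows significant for one cell-type pair
    column (e.g. 'Myeloid|T-cell') of a CellPhoneDB-style p-values table."""
    mask = pvalues[cell_pair] < alpha
    return pvalues.loc[mask, ["interacting_pair", cell_pair]]

# Toy table standing in for a CellPhoneDB pvalues output file.
toy = pd.DataFrame({
    "interacting_pair": ["SPP1_FN1", "PTN_PTPRZ1", "WNT5A_FZD1"],
    "Myeloid|T-cell": [0.001, 0.30, 0.65],
})
hits = filter_significant_pairs(toy, "Myeloid|T-cell")
```

Run on the toy table, only the SPP1-FN1 row survives, mirroring the significant myeloid/T-cell interaction reported above; spatial filtering would add a further mask over the tissue regions where each pair is active.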

Keywords: bioinformatics, ligands, kidney tissue, receptors, spatial transcriptome

Procedia PDF Downloads 139