Search results for: conditional adversarial autoencoders
50 Moderation Role of Effects of Forms of Upward versus Downward Counterfactual Reasoning on Gambling Cognition and Decision of Nigerians
Authors: Larry O. Awo, George N. Duru
Abstract:
There are growing public and mental health concerns over the availability of gambling platforms and shops in Nigeria and the high level of youth involvement in gambling. Early theorizing maintained that gambling involvement is driven by the quest for resource gains. However, evidence shows that the economic model of gambling tends to explain the involvement of gambling business owners (sport lottery operators: SLOs), as most gamblers lose more than they win. This loss, according to the law of effect, ought to discourage decisions to gamble. However, the quest to recover losses has often initiated and prolonged gambling sessions. Therefore, it became pertinent to investigate mental contemplations, such as counterfactual reasoning (upward versus downward) about what “would, should, or could” have been, and feelings of the illusion of control (IOC) over gambling outcomes, as risk or protective factors in gambling decisions. The present study sought to understand the differential contributions and conditional effects of upward versus downward counterfactual reasoning as pathways through which the association between IOC and the gambling decisions of Nigerian youths (N = 120, mean age = 18.05, SD = 3.81) could be explained. The study adopted a randomized group design, and data were obtained by means of a stimulus material (the Gambling Episode; GE) and self-report measures of IOC and gambling decision. A one-way analysis of variance (ANOVA) showed that participants in the upward counterfactual reasoning group (M = 22.08) differed significantly from their colleagues in the downward counterfactual reasoning group (M = 17.33) on the decision to gamble [F(1,112) = 23, p < .01].
A HAYES PROCESS macro moderation analysis showed that 1) IOC and upward counterfactual reasoning were positively associated with the decision to gamble (B = 14.21, t = 6.10, p < .01 and B = 7.22, t = 2.07, p < .01), 2) upward counterfactual reasoning did not moderate the association between IOC and gambling decision (p > .05), and 3) downward counterfactual reasoning negatively moderated the association between IOC and gambling decision (B = .07, t = 2.18, p < .05), such that the association was strong at low levels of downward counterfactual reasoning but waned at high levels. The implication of these findings is that IOC and upward counterfactual reasoning are risk factors that promote gambling behavior, while downward counterfactual reasoning protects individuals from gambling activities. It is therefore concluded that downward counterfactual reasoning strategies should be included in gambling therapy and treatment packages, as they could diminish feelings of IOC, negative feelings about missed positive outcomes, and the urge to gamble.
Keywords: counterfactual reasoning, gambling cognition, gambling decision, nigeria, youths
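The moderation analysis described above can be illustrated with a minimal sketch: a PROCESS-style Model 1 is an OLS regression with an interaction term, followed by simple-slope probes at low and high moderator values. All data below are synthetic and the effect sizes are invented for illustration, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120

# Hypothetical standardized scores: illusion of control (X), downward
# counterfactual reasoning (W), and gambling decision (Y).
ioc = rng.normal(0, 1, n)
downward_cf = rng.normal(0, 1, n)
# Simulate a negative moderation: the IOC effect wanes as W rises.
y = 2.0 + 1.5 * ioc + 0.3 * downward_cf - 0.7 * ioc * downward_cf + rng.normal(0, 1, n)

# Moderation model (PROCESS Model 1): Y = b0 + b1*X + b2*W + b3*X*W
X = np.column_stack([np.ones(n), ioc, downward_cf, ioc * downward_cf])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b1, b2, b3 = beta

# Simple slopes of IOC at low (-1 SD) and high (+1 SD) moderator levels
slope_low = b1 + b3 * (-1.0)
slope_high = b1 + b3 * (1.0)
print(f"interaction b3 = {b3:.2f}")
print(f"IOC slope at low W = {slope_low:.2f}, at high W = {slope_high:.2f}")
```

A negative interaction coefficient reproduces the reported pattern: the IOC-to-gambling slope is steep when downward counterfactual reasoning is low and flattens when it is high.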
Procedia PDF Downloads 113
49 The Volume–Volatility Relationship Conditional to Market Efficiency
Authors: Massimiliano Frezza, Sergio Bianchi, Augusto Pianese
Abstract:
The relation between stock price volatility and trading volume represents a controversial issue which has received remarkable attention over the past decades. An extensive literature shows a positive relation between price volatility and trading volume in financial markets, but the causal relationship which originates such an association is an open question, from both a theoretical and an empirical point of view. In this regard, various models, which can be considered complementary rather than competitive, have been introduced to explain this relationship. They include the long-debated Mixture of Distributions Hypothesis (MDH), the Sequential Arrival of Information Hypothesis (SAIH), the Dispersion of Beliefs Hypothesis (DBH), and the Noise Trader Hypothesis (NTH). In this work, we analyze whether stock market efficiency can explain the diversity of results achieved over the years. For this purpose, we propose an alternative measure of market efficiency, based on the pointwise regularity of a stochastic process: the Hurst–Hölder dynamic exponent. In particular, we model the stock market by means of multifractional Brownian motion (mBm), which displays the property of a time-changing regularity. Such models have in common the fact that they locally behave as a fractional Brownian motion, in the sense that their local regularity at time t0 (measured by the local Hurst–Hölder exponent in a neighborhood of t0) equals the exponent of a fractional Brownian motion of parameter H(t0). Assuming that the stock price follows an mBm, we introduce and theoretically justify the Hurst–Hölder dynamical exponent as a measure of market efficiency. This allows us to measure, at any time t, the market's departures from the martingale property, i.e., from efficiency as stated by the Efficient Market Hypothesis.
This approach is applied to financial markets. Using data for the S&P 500 index from 1978 to 2017, we find that when efficiency is not accounted for, a positive contemporaneous relationship emerges and is stable over time. Conversely, it disappears as soon as efficiency is taken into account. In particular, this association is more pronounced during time frames of high volatility and tends to disappear when the market becomes fully efficient.
Keywords: volume–volatility relationship, efficient market hypothesis, martingale model, Hurst–Hölder exponent
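As a rough sketch of the regularity-based efficiency idea, the Hurst exponent of a price path can be estimated from how the variance of increments scales with the lag; under the Efficient Market Hypothesis the (log-)price is a martingale and H should sit near 0.5. The estimator below is a simplified global version (the paper's Hurst–Hölder exponent is estimated pointwise, e.g. over a sliding window), applied here to a simulated Brownian path rather than to S&P 500 data.

```python
import numpy as np

def hurst_variance(path, scales=(1, 2, 4, 8, 16)):
    """Estimate the Hurst exponent of a path from the scaling of
    increment variances: Var[X(t+s) - X(t)] ~ s^(2H)."""
    log_s, log_v = [], []
    for s in scales:
        inc = path[s:] - path[:-s]
        log_s.append(np.log(s))
        log_v.append(np.log(np.var(inc)))
    slope, _ = np.polyfit(log_s, log_v, 1)
    return slope / 2.0

# Standard Brownian motion is the efficient (martingale) benchmark: H = 0.5.
rng = np.random.default_rng(42)
bm = np.cumsum(rng.normal(0, 1, 100_000))
H = hurst_variance(bm)
print(f"estimated H = {H:.3f}")
```

A pointwise H(t) persistently above or below 0.5 would flag the local departures from efficiency that the abstract describes.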
Procedia PDF Downloads 83
48 An Assessment of the Impacts of Agro-Ecological Practices towards the Improvement of Crop Health and Yield Capacity: A Case of Mopani District, Limpopo, South Africa
Authors: Tshilidzi C. Manyanya, Nthaduleni S. Nethengwe, Edmore Kori
Abstract:
The UNFCCC, FAO, GCF, IPCC and other global structures advocate for agro-ecology to address food security and sovereignty. However, most of the expected outcomes concerning agro-ecology have not been empirically tested for universal application. Agro-ecology is theorised to increase crop health on agro-ecological farms and decrease it on conventional farms. Increased crop health means increased carbon sequestration and thus less CO2 in the atmosphere. This is in line with the view that global warming is anthropogenically enhanced through GHG emissions. Agro-ecology mainly affects crop health, soil carbon content and yield on the cultivated land. Economic sustainability is directly related to yield capacity, which is theorised to increase by 3-10% over a period of 3-10 years as a result of agro-ecological implementation. This study aimed to empirically assess the practicality and validity of these assumptions. The study utilized mainly GIS and RS techniques to assess the effectiveness of agro-ecology in crop health improvement from satellite images. The assessment involved a longitudinal study (2013–2015) assessing the changes that occur after a farm retrofits from conventional agriculture to agro-ecology. The assumptions guided the objectives of the study. For each objective, an agro-ecological farm was compared with a conventional farm under the same climatic conditions and occupying the same general location. Crop health was assessed using satellite images analysed through ArcGIS and Erdas. This entailed the production of NDVI and re-classified outputs of the farm area. The NDVI ranges over the entire period of study were then compared in a stacked histogram for each farm to assess trends. Yield capacity was calculated based on the production records acquired from the farmers and plotted in a stacked bar graph as percentages of a total for each farm.
The results of the study showed decreasing crop health trends over 80% of the conventional farms and increasing trends over 80% of the organic farms. Yield capacity showed similar patterns to those of crop health. The study thus showed that agro-ecology is an effective strategy for crop-health improvement and yield increase.
Keywords: agro-ecosystem, conventional farm, dialectical, sustainability
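The NDVI outputs mentioned above come from simple band arithmetic that any raster package can reproduce. The sketch below assumes hypothetical near-infrared and red reflectance arrays in place of the actual ArcGIS/Erdas workflow, and the re-classification thresholds (0.2 and 0.5) are illustrative, not the ones used in the study.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index from NIR and red bands."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

# Hypothetical 2x2 reflectance tiles: healthy vegetation reflects strongly
# in near-infrared and absorbs red light, pushing NDVI toward +1.
nir = np.array([[0.60, 0.55], [0.20, 0.10]])
red = np.array([[0.10, 0.08], [0.18, 0.12]])
index = ndvi(nir, red)

# Re-classify into crop-health categories, as done for the farm maps:
# 0 = bare/poor, 1 = moderate, 2 = healthy vegetation.
classes = np.digitize(index, bins=[0.2, 0.5])
print(index.round(2))
print(classes)
```

Stacking the per-year class histograms, as the study does, then shows whether a farm's pixels drift toward the healthy class after retrofitting.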
Procedia PDF Downloads 218
47 Balance Control Mechanisms in Individuals With Multiple Sclerosis in Virtual Reality Environment
Authors: Badriah Alayidi, Emad Alyahya
Abstract:
Background: Most people with Multiple Sclerosis (MS) report worsening balance as the condition progresses. Poor balance control is also well known to be a significant risk factor for both falling and fear of falling. The increased risk of falls with disease progression thus makes balance control an essential target of gait rehabilitation amongst people with MS. Intervention programs have developed various methods to improve balance control, and accumulating evidence suggests that exercise programs may help people with MS improve their balance. Among these methods, virtual reality (VR) is growing in popularity as a balance-training technique owing to its potential benefits, including better compliance and greater user satisfaction. However, it is not clear whether a VR environment induces different balance control mechanisms in MS compared to healthy individuals or traditional environments. Therefore, this study aims to examine how individuals with MS control their balance in a VR setting. Methodology: The proposed study takes an empirical approach to estimate and determine the role of balance responses in persons with MS using a VR environment. It will use primary data collected through patient observations, physiological and biomechanical evaluation of balance, and data analysis. Results: A preliminary systematic review and meta-analysis indicated variability in the outcomes used to assess balance responses in people with MS. The preliminary results of these assessments have the potential to provide essential indicators of the progression of MS and contribute to the individualization of treatment and evaluation of the interventions’ effectiveness. The literature describes patients who have had the opportunity to experiment in VR settings and then used what they learned in the real world, suggesting that VR settings could be more appealing than conventional settings.
The findings of the proposed study will be beneficial for estimating and determining the effect of VR on balance control in persons with MS. In previous studies, VR was shown to be an interesting approach to neurological rehabilitation, but more data are needed to support this approach in MS. Conclusions: The proposed study enables an assessment of balance and evaluation of a variety of physiological implications related to neural activity, as well as biomechanical implications related to movement analysis.
Keywords: multiple sclerosis, virtual reality, postural control, balance
Procedia PDF Downloads 80
46 Analyzing the Connection between Productive Structure and Communicable Diseases: An Econometric Panel Study
Authors: Julio Silva, Lia Hasenclever, Gilson G. Silva Jr.
Abstract:
The aim of this paper is to check for possible convergence in health measures (age-standardized rates of morbidity and mortality) for communicable diseases between developed and developing countries, conditional on productive structure features. Understanding the interrelations between health patterns and economic development is particularly important in the context of low- and middle-income countries, where economic development comes along with deep social inequality. Developing countries with less diversified productive structures (measured through the complexity index) but highly heterogeneous inter-sectoral labor productivity (using the inter-sectoral coefficient of variation of labor productivity as a proxy) have, on average, lower health levels for communicable diseases compared to developed countries with highly diversified productive structures and low labor market heterogeneity. Structural heterogeneity and productive diversification may influence health levels even after accounting for per capita income. We set up a panel dataset for 139 countries from 1995 to 2015, combining several data sources on economic development, health, health system coverage, and environmental and socioeconomic aspects. This information was obtained from the World Bank, the International Labour Organization, the Atlas of Economic Complexity, the United Nations (Development Report) and the Institute for Health Metrics and Evaluation database. Evidence from econometric panel models shows that the level of communicable diseases has a positive relationship with structural heterogeneity, even after considering other factors such as per capita income. On the other hand, the recent process of convergence in terms of communicable diseases has been driven by other factors not directly related to productive structure, such as health system coverage and environmental aspects. This evidence suggests a joint dynamic between the unequal distribution of communicable diseases and aspects of countries' productive structures.
This set of evidence is quite important for public policy in meeting the health aims of the Millennium Development Goals. It also highlights the process of structural change as fundamental to shifting levels of health in terms of communicable diseases, and it can contribute to the debate on the relation between economic development and changes in health patterns.
Keywords: economic development, inequality, population health, structural change
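A minimal sketch of the kind of econometric panel model used here is a fixed-effects (within) regression, which nets out time-invariant country characteristics before relating disease burden to structural heterogeneity. The panel below is simulated for illustration; the coefficients and variables are invented stand-ins, not the study's actual specification.

```python
import numpy as np

def within_fe(y, X, groups):
    """Fixed-effects (within) estimator: demean y and X by country,
    then run pooled OLS on the demeaned data."""
    y = y.astype(float)
    X = X.astype(float)
    for g in np.unique(groups):
        m = groups == g
        y[m] -= y[m].mean()
        X[m] -= X[m].mean(axis=0)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Hypothetical panel: 50 countries x 20 years. Communicable-disease burden
# rises with structural heterogeneity, net of log income and a country effect.
rng = np.random.default_rng(1)
n_c, n_t = 50, 20
groups = np.repeat(np.arange(n_c), n_t)
alpha = rng.normal(0, 2, n_c)[groups]              # country fixed effects
heterogeneity = rng.normal(0, 1, n_c * n_t)
log_income = rng.normal(0, 1, n_c * n_t)
burden = alpha + 0.8 * heterogeneity - 0.5 * log_income + rng.normal(0, 1, n_c * n_t)

beta = within_fe(burden, np.column_stack([heterogeneity, log_income]), groups)
print(f"heterogeneity: {beta[0]:.2f}, log income: {beta[1]:.2f}")
```

Because the country effects are demeaned away, the heterogeneity coefficient is recovered even though each country has its own baseline burden.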
Procedia PDF Downloads 148
45 A Modified Estimating Equations in Derivation of the Causal Effect on the Survival Time with Time-Varying Covariates
Authors: Yemane Hailu Fissuh, Zhongzhan Zhang
Abstract:
A systematic observation from a defined time of origin up to a certain failure or censoring event is known as survival data. Survival analysis is a major area of interest in biostatistics and biomedical research. Causality analysis lies at the heart of most scientific and medical research inquiries. Thus, the main concern of this study is to investigate the causal effect of treatment on survival time conditional on possibly time-varying covariates. The theory of causality often differs from the simple association between the response variable and predictors. Causal estimation is a scientific concept for comparing a pragmatic effect between two or more experimental arms. To evaluate the average treatment effect on the survival outcome, the estimating equation was adjusted for time-varying covariates under semi-parametric transformation models. The proposed model yields consistent estimators for the unknown parameters and the unspecified monotone transformation functions. In this article, the proposed method estimated an unbiased average causal effect of treatment on the survival time of interest. The modified estimating equations of semiparametric transformation models have the advantage of including time-varying effects in the model. Finally, the finite-sample performance characteristics of the estimators were demonstrated through simulation and the Stanford heart transplant data. To this end, the average effect of a treatment on survival time was estimated after adjusting for biases raised due to the high correlation of left-truncation and possibly time-varying covariates. The bias in covariates was corrected by estimating the density function for left-truncation. Besides, to relax the independence assumption between failure time and truncation time, the model incorporated the left-truncation variable as a covariate.
Moreover, the expectation-maximization (EM) algorithm iteratively obtained the unknown parameters and unspecified monotone transformation functions. To summarize the idea, the ratio of cumulative hazard functions between the treated and untreated experimental groups gives a sense of the average causal effect for the entire population.
Keywords: a modified estimation equation, causal effect, semiparametric transformation models, survival analysis, time-varying covariate
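The closing idea, reading the average causal effect off the ratio of cumulative hazards between arms, can be sketched with a Nelson-Aalen estimator on simulated, uncensored exponential data. This is only an illustration of the hazard-ratio summary; the paper's semiparametric transformation model, EM steps and truncation adjustments are not reproduced here.

```python
import numpy as np

def nelson_aalen(times, events):
    """Nelson-Aalen estimate of the cumulative hazard at each event time."""
    order = np.argsort(times)
    times, events = times[order], events[order]
    n = len(times)
    at_risk = n - np.arange(n)           # size of the risk set at each time
    increments = events / at_risk        # d_i / n_i at each ordered time
    return times, np.cumsum(increments)

# Hypothetical exponential survival data: the treated arm has half the
# hazard of the control arm, so the cumulative-hazard ratio is about 0.5.
rng = np.random.default_rng(7)
t_treat = rng.exponential(2.0, 2000)     # hazard 0.5
t_ctrl = rng.exponential(1.0, 2000)      # hazard 1.0
tt, H_treat = nelson_aalen(t_treat, np.ones_like(t_treat))
tc, H_ctrl = nelson_aalen(t_ctrl, np.ones_like(t_ctrl))

# Compare the cumulative hazards at a common horizon t = 1.0
h1 = H_treat[np.searchsorted(tt, 1.0) - 1]
h0 = H_ctrl[np.searchsorted(tc, 1.0) - 1]
print(f"cumulative hazard ratio at t=1: {h1 / h0:.2f}")
```

A ratio below one summarizes a protective treatment effect across the whole population, which is the interpretation the abstract gives to this quantity.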
Procedia PDF Downloads 180
44 Impact of Diabetes Mellitus Type 2 on Clinical In-Stent Restenosis in First Elective Percutaneous Coronary Intervention Patients
Authors: Leonard Simoni, Ilir Alimehmeti, Ervina Shirka, Endri Hasimi, Ndricim Kallashi, Verona Beka, Suerta Kabili, Artan Goda
Abstract:
Background: Diabetes Mellitus type 2, small vessel calibre, stented vessel length, complex lesion morphology, and prior bypass surgery have been identified as risk factors for In-Stent Restenosis (ISR). However, there are some contradictory results about body mass index (BMI) as a risk factor for ISR. Purpose: We aimed to identify clinical, lesional and procedural factors that can predict clinical ISR in our patients. Methods: We enrolled 759 patients who underwent first-time elective PCI with Bare Metal Stents (BMS) from September 2011 to December 2013 in our Department of Cardiology and followed them for at least 1.5 years, with a median of 862 days (2 years and 4 months). Only patients re-admitted with ischemic heart disease underwent control coronary angiography; no routine angiographic control was performed. Patients were categorized into ISR and non-ISR groups and compared. Multivariate analysis (binary logistic regression, forward conditional method) was used to identify independent predictive risk factors. P was considered statistically significant when <0.05. Results: ISR compared to non-ISR individuals had a significantly lower BMI (25.7±3.3 vs. 26.9±3.7, p=0.004), higher-risk anatomy (LM + 3-vessel CAD) (23% vs. 14%, p=0.03), a higher number of stents per person (2.1±1.1 vs. 1.75±0.96, p=0.004), a greater length of stents per person (39.3±21.6 vs. 33.3±18.5, p=0.01), and lower use of clopidogrel and ASA together (95% vs. 99%, p=0.012). They also had a higher, although not statistically significant, prevalence of Diabetes Mellitus (42% vs. 32%, p=0.072) and a greater number of treated vessels (1.36±0.5 vs. 1.26±0.5, p=0.08). In the multivariate analysis, Diabetes Mellitus type 2 and multiple stents were independent predictive risk factors for In-Stent Restenosis, OR 1.66 [1.03-2.68], p=0.039, and OR 1.44 [1.16-1.78], p=0.001, respectively.
On the other hand, higher BMI and the use of clopidogrel and ASA together emerged as protective factors, OR 0.88 [0.81-0.95], p=0.001 and OR 0.2 [0.06-0.72], p=0.013, respectively. Conclusion: Diabetes Mellitus and multiple stents are strong predictive risk factors, whereas the use of clopidogrel and ASA together is protective against clinical In-Stent Restenosis. Paradoxically, high BMI is a protective factor for In-Stent Restenosis, probably related to a larger vessel diameter and consequently a larger diameter of the stents implanted in these patients. Further studies are needed to clarify this finding.
Keywords: body mass index, diabetes mellitus, in-stent restenosis, percutaneous coronary intervention
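The multivariate step, a binary logistic regression yielding odds ratios, can be sketched as follows. The cohort below is simulated with log-odds chosen to mimic the reported ORs (1.66 for diabetes, 1.44 per stent, 0.88 per BMI unit); it is an illustration of the method on invented data, not a reanalysis of the study, and it omits the forward-conditional variable selection.

```python
import numpy as np

def fit_logistic(X, y, lr=0.2, n_iter=10_000):
    """Plain gradient-descent logistic regression; returns coefficients
    (first entry is the intercept)."""
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

# Hypothetical cohort: diabetes and stent count raise ISR risk,
# higher BMI lowers it (log-odds mimic the reported odds ratios).
rng = np.random.default_rng(3)
n = 5000
diabetes = rng.binomial(1, 0.35, n).astype(float)
n_stents = rng.poisson(1.8, n).astype(float)
bmi = rng.normal(26.5, 3.5, n)
logit = (-1.0 + np.log(1.66) * diabetes + np.log(1.44) * n_stents
         + np.log(0.88) * (bmi - 26.5))
isr = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit))).astype(float)

w = fit_logistic(np.column_stack([diabetes, n_stents, bmi - 26.5]), isr)
odds_ratios = np.exp(w[1:])
print("OR (diabetes, stents, BMI):", odds_ratios.round(2))
```

Exponentiating each coefficient gives the odds ratio reported in the abstract: values above one mark risk factors, values below one mark protective factors such as BMI here.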
Procedia PDF Downloads 213
43 Enhancing the Effectiveness of Witness Examination through Deposition System in Korean Criminal Trials: Insights from the U.S. Evidence Discovery Process
Authors: Qi Wang
Abstract:
With the expansion of trial-centered principles, the importance of witness examination in Korean criminal proceedings has been increasingly emphasized. However, several practical challenges have emerged in courtroom examinations, including concerns about witnesses’ memory deterioration due to prolonged trial periods, the possibility of inaccurate testimony due to courtroom anxiety and tension, risks of testimony retraction, and witnesses’ refusal to appear. These issues have led to a decline in the effective utilization of witness testimony. This study analyzes the deposition system, which is widely used in the U.S. evidence discovery process, and examines its potential implementation within the Korean criminal procedure framework. Furthermore, it explores the scope of application, procedural design, and measures to prevent potential abuse if the system were to be adopted. Under the adversarial litigation structure that has evolved through several amendments to the Criminal Procedure Act, the deposition system, although conducted pre-trial, serves as a preliminary procedure to facilitate efficient and effective witness examination during trial. This system not only aligns with the goal of discovering substantive truth but also upholds the practical ideals of trial-centered principles while promoting judicial economy. Furthermore, with the legal foundation established by Article 266 of the Criminal Procedure Act and related provisions, this study concludes that the implementation of the deposition system is both feasible and appropriate for the Korean criminal justice system. The specific functions of depositions include providing case-related information to refresh witnesses’ memory as a preliminary to courtroom examination, pre-reviewing existing statement documents to enhance trial efficiency, and conducting preliminary examinations on key issues and anticipated questions. 
The subsequent courtroom witness examination focuses on verifying testimony through public and cross-examination, identifying and analyzing contradictions in testimony, and conducting double verification of testimony credibility under judicial supervision. Regarding operational aspects, both prosecution and defense may request depositions, subject to court approval. The deposition process involves video or audio recording, complete documentation by court reporters, and the preparation of transcripts, with copies provided to all parties and the original included in court records. The admissibility of deposition transcripts is recognized under Article 311 of the Criminal Procedure Act. Given prosecutors’ advantageous position in evidence collection, which may lead to indifference or avoidance of depositions, the study emphasizes the need to reinforce prosecutors’ public interest status and objective duties. Additionally, it recommends strengthening pre-employment ethics education and post-violation disciplinary measures for prosecutors.
Keywords: witness examination, deposition system, Korean criminal procedure, evidence discovery, trial-centered principle
Procedia PDF Downloads 20
42 Contextual Toxicity Detection with Data Augmentation
Authors: Julia Ive, Lucia Specia
Abstract:
Understanding and detecting toxicity is an important problem in supporting safer human interactions online. Our work focuses on the important problem of contextual toxicity detection, where automated classifiers are tasked with determining whether a short textual segment (usually a sentence) is toxic within its conversational context. We use “toxicity” as an umbrella term to denote a number of variants commonly named in the literature, including hate, abuse and offence, among others. Detecting toxicity in context is a non-trivial problem and has been addressed by very few previous studies. These previous studies have analysed the influence of conversational context on human perception of toxicity in controlled experiments and concluded that humans rarely change their judgements in the presence of context. They have also evaluated contextual detection models based on state-of-the-art Deep Learning and Natural Language Processing (NLP) techniques. Counterintuitively, they reached the general conclusion that computational models tend to suffer performance degradation in the presence of context. We challenge these empirical observations by devising better contextual predictive models that also rely on NLP data augmentation techniques to create larger and better data. In our study, we start by further analysing the human perception of toxicity in conversational data (i.e., tweets), in the absence versus presence of context, in this case, previous tweets in the same conversational thread. We observed that the conclusions of previous work on human perception are mainly due to data issues: the contextual data available does not provide sufficient evidence that context is indeed important (even for humans). The data problem is common in current toxicity datasets: cases labelled as toxic are either obviously toxic (i.e., overt toxicity containing swear words, racist terms, etc.), and thus context is not needed for a decision, or are ambiguous, vague or unclear even in the presence of context; in addition, the data contains labelling inconsistencies. To address this problem, we propose to automatically generate contextual samples where toxicity is not obvious without context (i.e., covert cases) or where different contexts can lead to different toxicity judgements for the same tweet. We generate toxic and non-toxic utterances conditioned on the context or on target tweets using a range of techniques for controlled text generation (e.g., Generative Adversarial Networks and steering techniques). On the contextual detection models, we posit that their poor performance is due to limitations in both the data they are trained on (the same problems stated above) and the architectures they use, which are not able to leverage context in effective ways. To improve on that, we propose text classification architectures that take the hierarchy of conversational utterances into account. In experiments benchmarking ours against previous models on existing and automatically generated data, we show that both data and architectural choices are very important. Our model achieves substantial performance improvements compared to baselines that are non-contextual, or contextual but agnostic of the conversation structure.
Keywords: contextual toxicity detection, data augmentation, hierarchical text classification models, natural language processing
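The architectural idea, representing the target utterance and a pooled summary of its conversational context as separate levels, can be sketched in a toy form. The bag-of-words encoder below is a stand-in for the neural sentence encoders such a model would use; the vocabulary and example tweets are invented.

```python
import numpy as np

VOCAB = {"you": 0, "are": 1, "great": 2, "awful": 3, "so": 4, "this": 5, "is": 6}

def encode(utterance):
    """Bag-of-words utterance vector (stand-in for a sentence encoder)."""
    v = np.zeros(len(VOCAB))
    for tok in utterance.lower().split():
        if tok in VOCAB:
            v[VOCAB[tok]] += 1.0
    return v

def hierarchical_features(context, target):
    """Two-level representation: the target utterance plus a pooled
    summary of the preceding conversational turns."""
    ctx = (np.mean([encode(u) for u in context], axis=0)
           if context else np.zeros(len(VOCAB)))
    return np.concatenate([encode(target), ctx])

# Same target tweet under two different conversational threads: the target
# half of the feature vector is identical, the context half differs, so a
# downstream classifier can reach different toxicity verdicts.
target = "you are so awful"
feats_a = hierarchical_features(["this is great"], target)
feats_b = hierarchical_features(["you are awful"], target)
print((feats_a[:7] == feats_b[:7]).all())
print((feats_a[7:] == feats_b[7:]).all())
```

Keeping the two levels separate is what lets the classifier weight the conversation thread independently of the target sentence, which flat concatenation of the raw text does not guarantee.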
Procedia PDF Downloads 176
41 Efficiency and Equity in Italian Secondary School
Authors: Giorgia Zotti
Abstract:
This research comprehensively investigates the multifaceted interplay between school performance, individual backgrounds, and regional disparities within the landscape of Italian secondary education. Leveraging data from the INVALSI 2021-2022 database, the analysis scrutinizes two fundamental distributions of educational achievement: standardized Invalsi test scores and official grades in Italian and Mathematics, focusing specifically on final-year secondary school students in Italy. The study initially employs Data Envelopment Analysis (DEA) to assess school performance. This involves constructing a production function encompassing inputs (hours spent at school) and outputs (Invalsi scores in Italian and Mathematics, along with official grades in Italian and Math). The DEA approach is applied in both of its versions: traditional and conditional. The latter incorporates environmental variables such as school type, size, demographics, technological resources, and socio-economic indicators. Additionally, the analysis delves into regional disparities by leveraging the Theil index, providing insight into disparities within and between regions. Moreover, within the framework of inequality-of-opportunity theory, the study quantifies inequality of opportunity in students' educational achievements, applying the parametric approach in its ex-ante version and considering circumstances such as parental education and occupation, gender, school region, birthplace, and language spoken at home. A Shapley decomposition is then applied to understand how much each circumstance affects the outcomes. The outcomes of this investigation unveil pivotal determinants of school performance, notably highlighting the influence of school type (Liceo) and socioeconomic status.
The research unveils regional disparities, elucidating instances where specific schools outperform others in official grades compared to Invalsi scores, shedding light on the intricate nature of regional educational inequalities. Furthermore, it emphasizes a heightened inequality of opportunity within the distribution of Invalsi test scores in contrast to official grades, underscoring pronounced disparities at the student level. This analysis provides insights for policymakers, educators, and stakeholders, fostering a nuanced understanding of the complexities within Italian secondary education.
Keywords: inequality, education, efficiency, DEA approach
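The Theil index used for the regional-disparity analysis decomposes exactly into within-region and between-region components. A minimal sketch on synthetic test scores follows; the regions, sample sizes, and score distributions are invented for illustration.

```python
import numpy as np

def theil_t(x):
    """Theil T index of inequality for positive values."""
    x = np.asarray(x, dtype=float)
    mean = x.mean()
    return np.mean((x / mean) * np.log(x / mean))

def theil_decomposition(scores, regions):
    """Split the overall Theil T into within-region and between-region parts."""
    scores = np.asarray(scores, dtype=float)
    total = theil_t(scores)
    mu = scores.mean()
    between = within = 0.0
    for r in np.unique(regions):
        grp = scores[regions == r]
        share = grp.sum() / scores.sum()   # group's share of total "score mass"
        between += share * np.log(grp.mean() / mu)
        within += share * theil_t(grp)
    return total, within, between

# Hypothetical test scores for two regions with different means
rng = np.random.default_rng(5)
north = rng.normal(210, 15, 400).clip(min=1)
south = rng.normal(190, 15, 400).clip(min=1)
scores = np.concatenate([north, south])
regions = np.array(["N"] * 400 + ["S"] * 400)

total, within, between = theil_decomposition(scores, regions)
print(f"total={total:.5f} within={within:.5f} between={between:.5f}")
```

The identity total = within + between holds exactly, which is what makes the Theil index convenient for attributing inequality to gaps between regions versus dispersion inside them.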
Procedia PDF Downloads 81
40 Bayesian Estimation of Hierarchical Models for Genotypic Differentiation of Arabidopsis thaliana
Authors: Gautier Viaud, Paul-Henry Cournède
Abstract:
Plant growth models have been used extensively for the prediction of the phenotypic performance of plants. However, they most often remain calibrated for a given genotype and therefore do not take into account genotype-by-environment interactions. One way of achieving such an objective is to consider Bayesian hierarchical models. Three levels can be identified in such models: the first level describes how a given growth model determines the phenotype of the plant as a function of individual parameters; the second level describes how these individual parameters are distributed within a plant population; the third level corresponds to the attribution of priors on the population parameters. Thanks to the Bayesian framework, choosing appropriate priors for the population parameters makes it possible to derive analytical expressions for the full conditional distributions of these population parameters. As plant growth models are of a nonlinear nature, individual parameters cannot be sampled explicitly, and a Metropolis step must be performed. This allows for the use of a hybrid Gibbs–Metropolis sampler. A generic approach was devised for the implementation of both general state-space models and estimation algorithms within a programming platform. It was designed using the Julia language, which combines an elegant syntax with metaprogramming capabilities and exhibits high efficiency. Results were obtained for Arabidopsis thaliana on both simulated and real data. An organ-scale Greenlab model for the latter is thus presented, in which the surface areas of each individual leaf can be simulated. It is assumed that the error made in the measurement of leaf areas is proportional to the leaf area itself; multiplicative normal noise is therefore used for the observations.
Real data were obtained via image analysis of zenithal images of Arabidopsis thaliana over a period of 21 days, using a two-step segmentation and tracking algorithm which notably takes advantage of the Arabidopsis thaliana phyllotaxy. Since the model formulation is rather flexible, the data for a single individual need not be available at all times, nor do the observation times need to be the same for all the different individuals. This makes it possible to discard data from image analysis when it is not considered reliable enough, thereby providing low-biased data in large quantity for leaf areas. The proposed model precisely reproduces the dynamics of Arabidopsis thaliana’s growth while accounting for the variability between genotypes. In addition to the estimation of the population parameters, the level of variability is an interesting indicator of the genotypic stability of model parameters. A promising perspective is to test whether some of the latter should be considered as fixed effects.
Keywords: bayesian, genotypic differentiation, hierarchical models, plant growth models
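The hybrid Gibbs-Metropolis sampler can be sketched on a deliberately tiny nonlinear growth model: the individual parameters get random-walk Metropolis updates (the nonlinearity blocks direct sampling), while the population mean has a conjugate normal full conditional and gets a Gibbs update. The model, dimensions, and priors below are simplified stand-ins for the organ-scale Greenlab model, and the sketch is in Python rather than the Julia platform described.

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy nonlinear "growth model": leaf area grows as exp(theta_i) * t
def model(theta, t):
    return np.exp(theta) * t

# Simulate 20 plants, 10 time points each, with theta_i ~ N(mu, tau^2)
n_plants, mu_true, tau, sigma = 20, 0.5, 0.2, 0.3
t = np.linspace(1, 10, 10)
theta_true = rng.normal(mu_true, tau, n_plants)
y = np.array([model(th, t) + rng.normal(0, sigma, len(t)) for th in theta_true])

def log_lik(theta_i, y_i):
    return -0.5 * np.sum((y_i - model(theta_i, t)) ** 2) / sigma**2

# Hybrid sampler: Metropolis for each nonlinear individual parameter,
# Gibbs for the population mean (flat prior -> conjugate normal update).
theta = np.zeros(n_plants)
mu = 0.0
mu_draws = []
for it in range(3000):
    for i in range(n_plants):                           # Metropolis steps
        prop = theta[i] + rng.normal(0, 0.05)
        log_acc = (log_lik(prop, y[i]) - log_lik(theta[i], y[i])
                   - 0.5 * ((prop - mu) ** 2 - (theta[i] - mu) ** 2) / tau**2)
        if np.log(rng.uniform()) < log_acc:
            theta[i] = prop
    mu = rng.normal(theta.mean(), tau / np.sqrt(n_plants))  # Gibbs step
    if it >= 1000:                                       # discard burn-in
        mu_draws.append(mu)

print(f"posterior mean of mu: {np.mean(mu_draws):.2f} (true {mu_true})")
```

The posterior mean of the population parameter recovers the simulated value, and the spread of the individual-level draws plays the role of the between-genotype variability discussed above.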
Procedia PDF Downloads 305
39 Social Factors That Contribute to Promoting and Supporting Resilience in Children and Youth following Environmental Disasters: A Mixed Methods Approach
Authors: Caroline McDonald-Harker, Julie Drolet
Abstract:
In the last six years, Canada has experienced two major and catastrophic environmental disasters: the 2013 Southern Alberta flood and the 2016 Fort McMurray, Alberta wildfire. These two disasters resulted in damages exceeding 12 billion dollars, the costliest disasters in Canadian history. In the aftermath of these disasters, many families faced the loss of homes, places of employment, schools, and recreational facilities, and also experienced social, emotional, and psychological difficulties. Children and youth are among the most vulnerable to the devastating effects of disasters due to the physical, cognitive, and social factors related to their developmental life stage. Yet children and youth also have the capacity to be resilient and act as powerful catalysts for change in their own lives and wider communities following disaster. Little is known, particularly from a sociological perspective, about the specific factors that contribute to resilience in children and youth, and effective ways to support their overall health and well-being. This paper focuses on the voices and experiences of children and youth residing in these two disaster-affected communities in Alberta, Canada and specifically examines: 1) How children and youth’s lives are impacted by the tragedy, devastation, and upheaval of disaster; 2) Ways that children and youth demonstrate resilience when directly faced with the adversarial circumstances of disaster; and 3) The cumulative internal and external factors that contribute to bolstering and supporting resilience among children and youth post-disaster. This paper discusses the characteristics associated with high levels of resilience in 183 children and youth ages 5 to 17, based on quantitative and qualitative data obtained through a mixed methods approach.
Child and youth participants were administered the Children and Youth Resilience Measure (CYRM-28) in order to examine factors that influence resilience processes, including individual, caregiver, and context factors. The CYRM-28 was then supplemented with qualitative interviews with children and youth to contextualize the CYRM-28 resiliency factors and provide further insight into their overall disaster experience. Findings reveal that high levels of resilience among child and youth participants are associated with both individual factors and caregiver factors, specifically positive outlook, effective communication, peer support, and physical and psychological caregiving. Individual and caregiver factors helped mitigate the negative effects of disaster, thus bolstering resilience in children and youth. This paper discusses the implications that these findings have for understanding the specific mechanisms that support the resiliency processes and overall recovery of children and youth following disaster; the importance of bridging the gap between children and youth’s needs and the services and supports provided to them post-disaster; and the need to develop resiliency processes and practices that empower children and youth as active agents of change in their own lives following disaster. These findings contribute to furthering knowledge about pragmatic and representative changes to resources, programs, and policies surrounding disaster response, recovery, and mitigation. Keywords: children and youth, disaster, environment, resilience
38 Innovations and Challenges: Multimodal Learning in Cybersecurity
Authors: Tarek Saadawi, Rosario Gennaro, Jonathan Akeley
Abstract:
There is rapidly growing demand for professionals to fill positions in Cybersecurity. This is recognized as a national priority both by government agencies and the private sector. Cybersecurity is a very wide technical area which encompasses all measures that can be taken in an electronic system to prevent criminal or unauthorized use of data and resources. This requires defending computers, servers, networks, and their users from any kind of malicious attack. The need to address this challenge has been recognized globally but is particularly acute in the New York metropolitan area, home to some of the largest financial institutions in the world, which are prime targets of cyberattacks. In New York State alone, there are currently around 57,000 jobs in the Cybersecurity industry, with more than 23,000 unfilled positions. The Cybersecurity Program at City College is a collaboration between the Departments of Computer Science and Electrical Engineering. In Fall 2020, The City College of New York matriculated its first students in the Cybersecurity Master of Science program. The program was designed to fill gaps in the previous offerings and evolved out of an established partnership with Facebook on Cybersecurity Education. City College has designed a program where courses, curricula, syllabi, materials, labs, etc., are developed in cooperation and coordination with industry whenever possible, ensuring that students graduating from the program will have the necessary background to seamlessly segue into industry jobs. The Cybersecurity Program has created multiple pathways for prospective students to obtain the necessary prerequisites to apply, in order to build a more diverse student population. The program can also be pursued on a part-time basis, which makes it available to working professionals. Since City College’s Cybersecurity M.S.
program was established to equip students with the advanced technical skills needed to thrive in a high-demand, rapidly-evolving field, it incorporates a range of pedagogical formats. From its outset, the Cybersecurity program has sought to provide both the theoretical foundations necessary for meaningful work in the field and labs and applied learning projects aligned with the skillsets required by industry. These efforts have involved collaboration with outside organizations and with visiting professors designing new courses on topics such as Adversarial AI, Data Privacy, Secure Cloud Computing, and Blockchain. Although the program was initially designed with a single asynchronous course in the curriculum, with the rest of the classes to be offered in person, the advent of the COVID-19 pandemic necessitated a move to fully online learning. The shift to online learning has provided lessons for future development by offering examples of some inherent advantages of the medium in addition to its drawbacks. This talk will address the structure of the newly implemented Cybersecurity Master’s Program and discuss the innovations, challenges, and possible future directions. Keywords: cybersecurity, new york, city college, graduate degree, master of science
37 Meeting the Energy Balancing Needs in a Fully Renewable European Energy System: A Stochastic Portfolio Framework
Authors: Iulia E. Falcan
Abstract:
The transition of the European power sector towards a clean, renewable energy (RE) system faces the challenge of meeting power demand in times of low wind speed and low solar radiation, at a reasonable cost. This is likely to be achieved through a combination of 1) energy storage technologies, 2) development of the cross-border power grid, 3) installed overcapacity of RE, and 4) dispatchable power sources, such as biomass. This paper uses NASA-derived hourly data on weather patterns of sixteen European countries for the past twenty-five years, and load data from the European Network of Transmission System Operators-Electricity (ENTSO-E), to develop a stochastic optimization model. This model aims to understand the synergies between the four classes of technologies mentioned above and to determine the optimal configuration of the energy technologies portfolio. While this issue has been addressed before, it was done using deterministic models that extrapolated historic data on weather patterns and power demand, and that ignored the risk of an unbalanced grid, a risk stemming from both the supply and the demand side. This paper aims to explicitly account for the inherent uncertainty in the energy system transition. It articulates two levels of uncertainty: a) the inherent uncertainty in future weather patterns and b) the uncertainty of fully meeting power demand. The first level of uncertainty is addressed by developing probability distributions for future weather data, and thus expected power output from RE technologies, rather than assuming known future power output. The second level of uncertainty is operationalized by introducing a Conditional Value at Risk (CVaR) constraint in the portfolio optimization problem. By setting the risk threshold at different levels (1%, 5% and 10%), important insights are revealed regarding the synergies of the different energy technologies, i.e., the circumstances under which they behave as either complements or substitutes to each other.
The paper concludes that allowing for uncertainty in expected power output, rather than extrapolating historic data, paints a more realistic picture and reveals important departures from the results of deterministic models. In addition, explicitly acknowledging the risk of an unbalanced grid, and assigning it different thresholds, reveals non-linearity in the cost functions of different technology portfolio configurations. This finding has significant implications for the design of the European energy mix. Keywords: cross-border grid extension, energy storage technologies, energy system transition, stochastic portfolio optimization
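As an illustration of the CVaR constraint described above, the following minimal Python sketch computes the empirical Conditional Value at Risk of an hourly power-shortfall series, i.e., the mean shortfall in the worst alpha-fraction of hours. The function name and the toy data are invented for demonstration; they are not taken from the paper's model.

```python
# Hypothetical sketch: empirical Conditional Value at Risk (CVaR) of an
# hourly power shortfall series. Values and thresholds are illustrative.

def cvar(shortfalls, alpha):
    """Mean of the worst alpha-fraction of shortfall observations."""
    ordered = sorted(shortfalls, reverse=True)          # worst (largest) first
    k = max(1, int(round(alpha * len(ordered))))        # number of tail hours
    return sum(ordered[:k]) / k

# Toy hourly shortfall series (MW unserved; 0 = demand fully met)
shortfalls = [0, 0, 0, 0, 0, 0, 0, 0, 5, 12, 0, 0, 0, 0, 0, 0, 0, 0, 0, 20]
print(cvar(shortfalls, 0.10))  # mean of worst 10% (2 hours): (20 + 12) / 2 = 16.0
```

In a portfolio optimization, this quantity would be constrained to stay below a tolerance for each candidate technology mix, which is what makes the 1%, 5% and 10% thresholds comparable.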
36 Formulation and Test of a Model to Explain the Complexity of Road Accident Events in South Africa
Authors: Dimakatso Machetele, Kowiyou Yessoufou
Abstract:
Whilst several studies have indicated that road accident events might be more complex than thought, we have a limited scientific understanding of this complexity in South Africa. The present project proposes and tests a more comprehensive metamodel that integrates multiple causality relationships among variables previously linked to road accidents. This was done by fitting a structural equation model (SEM) to data collected from various sources. The study also fitted a GARCH model (Generalized Auto-Regressive Conditional Heteroskedasticity) to predict the future of road accidents in the country. The analysis shows that the number of road accidents has been increasing since 1935. The road fatality rate follows a polynomial shape given by the equation y = -0.0114x²+1.2378x-2.2627 (R²=0.76), with y = death rate and x = year. This trend results in an average death rate of 23.14 deaths per 100,000 people. Furthermore, the analysis shows that the number of crashes could be significantly explained by the total number of vehicles (P < 0.001), the number of registered vehicles (P < 0.001), the number of unregistered vehicles (P = 0.003), and the population of the country (P < 0.001). Contrary to expectation, the number of driver licenses issued and the total distance traveled by vehicles do not correlate significantly with the number of crashes (P > 0.05). Furthermore, the analysis reveals that the number of casualties could be linked significantly to the number of registered vehicles (P < 0.001) and the total distance traveled by vehicles (P = 0.03). As for the number of fatal crashes, the analysis reveals that the total number of vehicles (P < 0.001), the number of registered (P < 0.001) and unregistered vehicles (P < 0.001), the population of the country (P < 0.001), and the total distance traveled by vehicles (P < 0.001) correlate significantly with the number of fatal crashes.
However, the number of casualties and, again, the number of driver licenses do not seem to determine the number of fatal crashes (P > 0.05). Finally, the number of crashes is predicted to be roughly constant over time at 617,253 accidents for the next 10 years, with the worst-case scenario suggesting that this number may reach 1,896,667. The number of casualties is also predicted to be roughly constant at 93,531 over time, although this number may reach 661,531 in the worst-case scenario. Although the number of fatal crashes may decrease over time, it is forecasted to reach 11,241 fatal crashes within the next 10 years, with the worst-case scenario estimated at 19,034 within the same period. Finally, the number of fatalities is also predicted to be roughly constant at 14,739, but may reach 172,784 in the worst-case scenario. Overall, the present study reveals the complexity of road accidents and allows us to propose several recommendations aimed at reducing the trend of road accidents, casualties, fatal crashes, and deaths in South Africa. Keywords: road accidents, South Africa, statistical modelling, trends
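Since the forecasts above come from a GARCH model, a minimal sketch of the GARCH(1,1) conditional-variance recursion may make the model class concrete. The parameter values and residual series below are invented for demonstration; they are not the fitted values from the study.

```python
# Illustrative GARCH(1,1) variance recursion:
#   sigma^2_t = omega + alpha1 * eps^2_{t-1} + beta1 * sigma^2_{t-1}
# Parameters are made-up for demonstration, not the study's estimates.

def garch_variance_path(residuals, omega, alpha1, beta1):
    """Conditional variance path, started at the unconditional variance."""
    sigma2 = [omega / (1.0 - alpha1 - beta1)]  # unconditional variance
    for eps in residuals[:-1]:
        sigma2.append(omega + alpha1 * eps ** 2 + beta1 * sigma2[-1])
    return sigma2

path = garch_variance_path([1.0, -2.0, 0.5, 0.0], omega=0.2, alpha1=0.1, beta1=0.8)
print(path)  # variance rises after the large |eps| = 2.0 shock, then decays
```

Forecasting then consists of iterating the same recursion forward with expected squared residuals, which is how the roughly constant long-run levels reported above arise.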
35 ‘Do We Really Belong Here?’ Transnationalism and the Temporality of Naturalized Citizenship
Authors: Edward Shizha
Abstract:
Citizenship is not only political; it is also a socio-cultural status that naturalized immigrants desire. However, the outcomes of citizenship desirability are determined by forces outside the individual’s control, based on legislation and laws that are designed at the macro and exosystemic levels by politicians and policy makers. These laws are then applied to determine the status (permanency or temporariness) of citizenship for immigrants and refugees, but the same laws do not apply to non-immigrant citizens who attain it by birth. While, theoretically, citizenship has generally been considered an irrevocable legal status and the highest and most secure legal status one can hold in a state, it is not inviolate for immigrants. While Article 8 of the United Nations Convention on the Reduction of Statelessness provides grounds for revocation of citizenship obtained by immigrants and refugees in host countries, nation-states have their own laws, tied to the convention, that provide grounds for revocation. Ever since the 9/11 attacks in the USA, there has been a rise in conditional citizenship and the state’s withdrawal of citizenship through revocation laws that denaturalize citizens, who end up not merely losing their citizenship but also the right to reside in the country of immigration. Because immigrants can be perceived as a security threat, the securitization of citizenship and legislative changes have been adopted specifically to allow greater discretionary power in stripping people of their citizenship. The paper ‘Do We Really Belong Here?’ Transnationalism and the Temporality of Naturalized Citizenship examines literature on the temporality of naturalized citizenship and questions whether citizenship, for newcomers (immigrants and refugees), is a protected human right or a privilege. The paper argues that citizenship in a host country is a well sought-after status by newcomers.
The question is whether their citizenship, if granted, has a permanent or temporary status and whether it is treated in the same way as that of non-immigrant citizens. The paper further argues that, despite citizenship having generally been considered an irrevocable status in most Western countries, in practice, if not in law, citizenship for immigrants and refugees comes with strings attached because of policies and laws that control naturalized citizenship. These laws can be used to denationalize naturalized citizens through revocations for those stigmatized as ‘undesirables’, who are threatened with deportation. Whereas non-immigrant citizens (those who attain it by birth) have an absolute right to their citizenship, this is seldom the case for immigrants. This paper takes a multidisciplinary approach, using Urie Bronfenbrenner’s ecological systems theory (the macrosystem and exosystem) to examine and review literature on the temporality of naturalized citizenship and to question whether citizenship is a protected right or a privilege for immigrants. The paper challenges the human rights violation of citizenship revocation and argues for equality of treatment for all citizens regardless of how they acquired their citizenship. The fragility of naturalized citizenship undermines the basic rights and securities that citizenship status can provide to the person as an inclusive practice in a diverse society. Keywords: citizenship, citizenship revocation, dual citizenship, human rights, naturalization, naturalized citizenship
34 A Matched Case-Control Study to Assess the Association of Chikungunya Severity among Blood Groups and Other Determinants in Tesseney, Gash Barka Zone, Eritrea
Authors: Ghirmay Teklemicheal, Samsom Mehari, Sara Tesfay
Abstract:
Objectives: A total of 1074 suspected chikungunya cases were reported in Tesseney Province, Gash Barka region, Eritrea, during an outbreak. This study aimed to assess the possible association of chikungunya severity with ABO blood groups and other potential determinants. Methods: A sex-matched and age-matched case-control study was conducted during the outbreak. For each severe case, one control subject was selected from the mild chikungunya cases, and a second control subject, free of chikungunya, was selected from the neighborhood of the case. The study was conducted from October 15, 2018, to November 15, 2018. Odds ratios (ORs) were calculated, and conditional (fixed-effect) logistic regression methods were applied to analyze and interpret the data. Results: In this outbreak, 137 severe suspected chikungunya cases, 137 mild suspected chikungunya patients, and 137 controls free of chikungunya from the neighborhoods of cases were analyzed. The difference between non-O individuals and those with the O blood group was significant, with a p-value of 0.002. A separate comparison between the A and O blood groups was also significant, with a p-value of 0.002.
However, there was no significant difference in the severity of chikungunya among the B, AB, and O blood groups, with p-values of 0.113 and 0.708, respectively. A strong association of chikungunya severity was found with hypertension and diabetes (p-value < 0.0001), whereas there was no association between chikungunya severity and asthma (p-value = 0.695), and also no association with pregnancy (p-value = 0.881), ventilator use (p-value = 0.181), air conditioner use (p-value = 0.247), not using a latrine versus using a pit latrine (p-value = 0.318), using a septic versus a pit latrine (p-value = 0.567), or using a flush versus a pit latrine (p-value = 0.194). Conclusions: Non-O blood group individuals were found to be at greater risk of the severe form of chikungunya disease than their O blood group counterparts. By the same token, individuals with chronic disease were more prone to severe forms of the disease in comparison with individuals without chronic disease. Prioritization is recommended for patients with chronic diseases and non-O blood groups, since they were found to be susceptible to severe chikungunya disease. Identification of the human cell surface receptor(s) for CHIKV is necessary for further understanding of its pathophysiology in humans. Therefore, molecular and functional studies will be helpful in disclosing the association between blood group antigens and CHIKV infections. Keywords: Chikungunya, Chikungunya virus, disease outbreaks, case-control studies, Eritrea
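The effect measure behind these comparisons is the odds ratio from a 2x2 table. The sketch below shows the computation; the cell counts are invented for illustration (exposure = non-O blood group, outcome = severe chikungunya) and are not the study's data.

```python
# Minimal sketch of the odds ratio (OR) from a 2x2 table.
# Counts are hypothetical, not taken from the study.

def odds_ratio(a, b, c, d):
    """OR = (a*d)/(b*c) for the table [[a, b], [c, d]]:
    rows = cases/controls, columns = exposed/unexposed."""
    return (a * d) / (b * c)

# e.g. 90 severe cases with non-O blood vs 47 with O;
#      70 non-O vs 67 O among controls
print(odds_ratio(90, 47, 70, 67))  # OR > 1 suggests higher risk for non-O
```

In the matched design described above, conditional logistic regression refines this by computing the likelihood within each matched case-control stratum rather than pooling all subjects into one table.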
33 India’s Foreign Policy toward Its South Asian Neighbors: Retrospect and Prospect
Authors: Debasish Nandy
Abstract:
India’s foreign policy towards all of its neighboring countries is determined by multi-dimensional factors. India’s relations with its South Asian neighbors can be classified into three categories. In the first category are four countries (Sri Lanka, Bangladesh, Nepal, and Afghanistan) whose bilateral relationships have encompassed cooperation, irritants, problems, and crises at different points in time. With Pakistan, the relationship has been perpetually adversarial. The third category includes Bhutan and the Maldives, whose relations are marked by friendship and cooperation, free of any bilateral problems. Needless to say, Jawaharlal Nehru emphasized friendly relations with the neighboring countries. Subsequent Prime Ministers of India, especially I.K. Gujral, advocated peaceful and friendly relations with the subcontinental countries. Gujral offered a distinctive framework for fostering bilateral relations with the neighbors, known as the ‘Gujral Doctrine’. A dramatic change has been witnessed in Indian foreign policy since 1991. In the post-Cold War period, India’s national security has been seriously threatened by terrorism originating from Pakistan, Afghanistan, and partly Bangladesh. India requires cooperative security, which can only be built on mutual understanding among the South Asian countries. Additionally, the countries of South Asia need to develop the concept of ‘Cooperative Security’ to explain the underlying logic of regional cooperation. According to C. Rajamohan, ‘cooperative security could be understood as policies of governments, which see themselves as former adversaries or potential adversaries, to shift from or avoid confrontationist policies.’ Cooperative security essentially reflects a policy of dealing peacefully with conflicts, not merely by abstention from violence or threats, but by active engagement in negotiation, a search for practical solutions, and a commitment to preventive measures.
Cooperative security assumes the existence of a condition in which the two sides possess the military capabilities to harm each other; establishing it involves a complex process of confidence building. South Asian nations have often engaged with hostility toward each other, and extra-regional powers have long exerted influence in the region. South Asian nations are busy purchasing military equipment; despite weakened economies, these states spend huge amounts of money on their security. India is the major power in this region in every respect, and the big state-small state syndrome is a negative factor in this regard. However, India will have to take the initiative to extend ‘track II diplomacy’, or soft diplomacy, for its own security as well as the security of the region. Confidence building measures could help rejuvenate not only SAARC but also build trust and mutual confidence between India and its neighbors in South Asia. In this paper, I will focus on different aspects of India’s policy towards its South Asian neighbors. It will also be examined how India deals with these countries by using a mixed type of diplomacy, combining idealistic and realistic points of view. Security and cooperation are two major determinants of India’s foreign policy towards its South Asian neighbors. Keywords: bilateral, diplomacy, infiltration, terrorism
32 High-Speed Particle Image Velocimetry of the Flow around a Moving Train Model with Boundary Layer Control Elements
Authors: Alexander Buhr, Klaus Ehrenfried
Abstract:
Trackside induced airflow velocities, also known as slipstream velocities, are an important criterion for the design of high-speed trains. The maximum permitted values are given by the Technical Specifications for Interoperability (TSI) and have to be checked in the approval process. For train manufacturers, it is of great interest to know in advance how new train geometries would perform in TSI tests. The Reynolds number in moving model experiments is lower compared to full scale. In particular, the limited model length leads to a thinner boundary layer at the rear end. The hypothesis is that the boundary layer rolls up into characteristic flow structures in the train wake, in which the maximum flow velocities can be observed. The idea is to enlarge the boundary layer using roughness elements at the train model head so that the ratio between the boundary layer thickness and the car width at the rear end is comparable to that of a full-scale train. This may lead to similar flow structures in the wake and better prediction accuracy for TSI tests. In this case, the design of the roughness elements is limited by the moving model rig. Small rectangular roughness shapes are used to achieve a sufficient effect on the boundary layer, while the elements are robust enough to withstand the high accelerating and decelerating forces during the test runs. For this investigation, High-Speed Particle Image Velocimetry (HS-PIV) measurements on an ICE3 train model have been carried out in the moving model rig of the DLR in Göttingen, the so-called tunnel simulation facility Göttingen (TSG). The flow velocities within the boundary layer are analysed in a plane parallel to the ground. The height of the plane corresponds to a test position in the EN standard (TSI). Three different shapes of roughness elements are tested. The boundary layer thickness and displacement thickness, as well as the momentum thickness and the form factor, are calculated along the train model.
Conditional sampling is used to analyse the size and dynamics of the flow structures at the time of maximum velocity in the wake behind the train. As expected, larger roughness elements increase the boundary layer thickness and lead to larger flow velocities in the boundary layer and in the wake flow structures. The boundary layer thickness, displacement thickness and momentum thickness are increased by using larger roughness elements, especially when these are applied at heights close to the measuring plane. The roughness elements also cause high fluctuations in the form factors of the boundary layer. Behind the roughness elements, the form factors rapidly approach constant values. This indicates that the boundary layer, while growing slowly along the second half of the train model, has reached a state of equilibrium. Keywords: boundary layer, high-speed PIV, ICE3, moving train model, roughness elements
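The integral quantities named above (displacement thickness, momentum thickness, and the form factor H = displacement thickness / momentum thickness) can be sketched numerically. The velocity profile below is an illustrative 1/7-power law, not the measured PIV data, and the function names are ours.

```python
# Sketch: trapezoidal estimates of the boundary-layer displacement
# thickness (delta*) and momentum thickness (theta); H = delta* / theta.
# Profile is an illustrative 1/7-power law, not measured data.

def integral_thicknesses(y, u, U):
    """Trapezoidal integration of (1 - u/U) and (u/U)(1 - u/U) over y."""
    dstar = theta = 0.0
    for i in range(len(y) - 1):
        dy = y[i + 1] - y[i]
        g1, g2 = u[i] / U, u[i + 1] / U
        dstar += 0.5 * ((1 - g1) + (1 - g2)) * dy
        theta += 0.5 * (g1 * (1 - g1) + g2 * (1 - g2)) * dy
    return dstar, theta

delta, U, n = 1.0, 1.0, 2001
y = [delta * i / (n - 1) for i in range(n)]
u = [U * (yi / delta) ** (1 / 7) for yi in y]  # 1/7-power-law profile
dstar, theta = integral_thicknesses(y, u, U)
print(round(dstar / theta, 2))  # form factor H; analytic value for 1/7 law is 9/7
```

A turbulent flat-plate layer has H near 1.3, while values rising toward 2 or more indicate a layer far from equilibrium, which is why the constant form factors behind the roughness elements are read as an equilibrium state.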
31 Deep Learning Based Text to Image Synthesis for Accurate Facial Composites in Criminal Investigations
Authors: Zhao Gao, Eran Edirisinghe
Abstract:
The production of an accurate sketch of a suspect based on a verbal description obtained from a witness is an essential task for most criminal investigations. The criminal investigation system employs specifically trained professional artists to manually draw a facial image of the suspect according to the descriptions of an eyewitness for subsequent identification. With the advancement of Deep Learning, Recurrent Neural Networks (RNN) have shown great promise in Natural Language Processing (NLP) tasks. Additionally, Generative Adversarial Networks (GAN) have proven to be very effective in image generation. In this study, a trained GAN, conditioned on textual features such as keywords automatically encoded from a verbal description of a human face using an RNN, is used to generate photo-realistic facial images for criminal investigations. The intention of the proposed system is to map corresponding features onto text generated from verbal descriptions. With this, it becomes possible to generate many reasonably accurate alternatives which the witness can use to identify a suspect. This reduces subjectivity in decision making both by the eyewitness and the artist, while giving the witness an opportunity to evaluate and reconsider decisions. Furthermore, the proposed approach benefits law enforcement agencies by reducing the time taken to physically draw each potential sketch, thus improving response times and mitigating potentially malicious human intervention. Using the publicly available 'CelebFaces Attributes Dataset' (CelebA), supplemented with verbal descriptions as training data, the proposed architecture is able to effectively produce facial structures from given text. Word embeddings are learnt by applying the RNN architecture in order to perform semantic parsing, the output of which is fed into the GAN for synthesizing photo-realistic images.
Rather than the grid search method, a metaheuristic search based on genetic algorithms is applied to evolve the network, with the intent of achieving optimal hyperparameters in a fraction of the time of a typical brute-force approach. In addition to the ‘CelebA’ training database, further novel test cases are supplied to the network for evaluation. Witness reports detailing criminals from Interpol or other law enforcement agencies are sampled on the network. Using the descriptions provided, samples are generated and compared with the ground truth images of a criminal in order to calculate the similarities. Two factors are used for performance evaluation: the Structural Similarity Index (SSIM) and the Peak Signal-to-Noise Ratio (PSNR). High scores on these performance metrics would demonstrate the accuracy of the approach and support its adoption as an effective tool for law enforcement agencies. The proposed approach to criminal facial image generation has the potential to increase the ratio of criminal cases that can ultimately be resolved using eyewitness information gathering. Keywords: RNN, GAN, NLP, facial composition, criminal investigation
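Of the two evaluation measures named above, PSNR is the simpler to make concrete. The sketch below computes it for 8-bit images flattened to lists; the pixel values are toy data, not outputs of the described GAN.

```python
# Sketch: Peak Signal-to-Noise Ratio (PSNR) between a reference image
# and a generated image, both flattened to lists of 8-bit pixel values.
# Pixel data below are illustrative toy values.
import math

def psnr(reference, generated, max_val=255.0):
    """PSNR = 10 * log10(MAX^2 / MSE); infinite for identical images."""
    mse = sum((r - g) ** 2 for r, g in zip(reference, generated)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images: no noise at all
    return 10.0 * math.log10(max_val ** 2 / mse)

print(psnr([52, 55, 61, 59], [52, 55, 61, 59]))  # identical -> inf
print(round(psnr([52, 55, 61, 59], [50, 55, 60, 59]), 1))
```

SSIM, the other measure, additionally compares local luminance, contrast, and structure windows, so the two metrics together capture both pixel-level error and perceptual similarity.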
30 Quantifying Multivariate Spatiotemporal Dynamics of Malaria Risk Using Graph-Based Optimization in Southern Ethiopia
Authors: Yonas Shuke Kitawa
Abstract:
Background: Although malaria incidence has fallen sharply over the past few years, the rate of decline varies by district, time, and malaria type. Despite this decline, malaria remains a major public health threat in various districts of Ethiopia. Consequently, the present study is aimed at developing a predictive model that helps to identify the spatio-temporal variation in malaria risk by multiple Plasmodium species. Methods: We propose a multivariate spatio-temporal Bayesian model to obtain a more coherent picture of the temporally varying spatial variation in disease risk. The spatial autocorrelation in such a data set is typically modeled by a set of random effects that are assigned a conditional autoregressive prior distribution. However, the autocorrelation considered in such cases depends on a binary neighborhood matrix specified through the border-sharing rule. Here, we propose a graph-based optimization algorithm for estimating the neighborhood matrix that better represents the spatial correlation, treating the areal units as the vertices of a graph and the neighbor relations as the set of edges. Furthermore, we used aggregated malaria counts in southern Ethiopia from August 2013 to May 2019. Results: We found that precipitation, temperature, and humidity are positively associated with the malaria threat in the area. On the other hand, enhanced vegetation index, nighttime light (NTL), and distance from coastal areas are negatively associated. Moreover, nonlinear relationships were observed between malaria incidence and precipitation, temperature, and NTL. Additionally, lagged effects of temperature and humidity have a significant effect on malaria risk for either species. A more elevated risk of P. falciparum was observed following the rainy season, and unstable transmission of P. vivax was observed in the area. Finally, P. vivax risks are less sensitive to environmental factors than those of P. falciparum.
Conclusion: Improved inference was gained by employing the proposed approach in comparison to the commonly used border-sharing rule. Additionally, different covariates were identified, including delayed effects, and elevated risks for both species were observed in districts found in the central and western regions. As malaria transmission operates in a spatially continuous manner, a spatially continuous model should be employed when it is computationally feasible. Keywords: disease mapping, MSTCAR, graph-based optimization algorithm, P. falciparum, P. vivax, weight matrix
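The neighborhood-matrix idea described above (districts as vertices, neighbor relations as edges) can be sketched as follows. The edge list is invented for illustration; the paper's algorithm searches over such edge sets rather than fixing them by the border-sharing rule, and the function name is ours.

```python
# Sketch: binary neighborhood (adjacency) matrix for a CAR prior,
# built from a graph whose vertices are areal units (districts) and
# whose edges are neighbor relations. Edges here are hypothetical.

def neighborhood_matrix(n_areas, edges):
    """Symmetric binary matrix W with W[i][j] = 1 iff units i and j are neighbors."""
    W = [[0] * n_areas for _ in range(n_areas)]
    for i, j in edges:
        W[i][j] = W[j][i] = 1
    return W

# Four toy districts; edge (0, 2) is one the border-sharing rule would miss
W = neighborhood_matrix(4, [(0, 1), (1, 2), (2, 3), (0, 2)])
row_sums = [sum(row) for row in W]  # number of neighbors per district
print(row_sums)  # -> [2, 2, 3, 1]
```

In a CAR prior, each district's random effect is then centered on the average of its neighbors' effects, weighted by the rows of W, which is why the choice of edges directly shapes the smoothing of the risk surface.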
29 Media Response to Kashmir Conflict: How the Press Differed in Highlighting Protest Shutdowns between 1990-2010
Authors: Danish Gadda
Abstract:
Kashmir has been a bleeding spot in South Asian politics since 1947, when the subcontinent was bifurcated into Hindu India and Muslim Pakistan by the departing British colonisers. Kashmir could not accede to either of the two new-born, sovereign nations until a tribal invasion from Pakistan forced an unfortunate change of events. India, acting on a conditional accession signed by Kashmir’s last monarch, sent its army to defend the Kashmir Valley, with a promise, made subsequently, that the region’s fate would be decided by the natives through an internationally monitored plebiscite. The country, however, broke its promise, choosing not to withdraw its military to allow the plebiscite, and instead strengthened its claim over Kashmir, which it later started describing as its integral part. War, fought in the shape of three and a half bloody battles, ensued between India and Pakistan, even as the United Nations’ intervention managed a ceasefire as early as the 1950s, though not before Kashmir had come to be divided into its India-controlled and Pakistan-controlled halves. Prolonged, the dispute over Kashmir took a violent turn in 1989-90 with the start of an anti-India armed rebellion. Kashmiris have been fighting for their right to self-determination, and bringing their own life to a grinding halt has been one of their preferred forms of protest against Indian rule. This form of resistance is locally called ‘Hartal’ and recognised as a shutdown; such shutdowns have often been prolonged and violent. Since 1989-90, the shutdowns have become only more frequent and forceful, and there are marked days on which Kashmir shuts down in protest every year, like a ritual.
This paper is based on a study of how the Indian and Kashmiri press covered the shutdowns observed in the troubled valley on four such days: January 26 (Indian Republic Day), February 11 (the day on which India executed a prominent Kashmiri resistance leader), August 15 (India’s Independence Day), and October 27 (the day on which the Indian military landed in Kashmir). The coverage given by the two sections of the press to the shutdowns observed on these days has been studied using a multi-tier content analysis approach: 1) the difference in the number of shutdowns covered by the two sections is examined, 2) the placement of the stories in the two sections of the press is analysed, 3) the discourse highlighted by the two sections of the press is compared, and 4) the editorials written by the two sections of the press about the shutdowns are analysed. The findings show that the Indian and the local press have been focussing on two predictable extremes of the situation: the Indian press has favoured the state, while the Kashmiri, or local, press has focussed on the narrative opposing the state’s. The difference is noticed in the quantitative as well as the qualitative aspects of their coverage.
Keywords: Indo-Pak tension, Kashmir conflict, protest shutdowns, South-Asian politics
Procedia PDF Downloads 234
28 Regulatory Governance as a De-Parliamentarization Process: A Contextual Approach to Global Constitutionalism and Its Effects on New Arab Legislatures
Authors: Abderrahim El Maslouhi
Abstract:
The paper aims to analyze an often-overlooked dimension of global constitutionalism, which is the rise of the regulatory state and its impact on parliamentary dynamics in transition regimes. In contrast to Majone’s technocratic vision of convergence towards a single regulatory system based on competence and efficiency, national transpositions of regulatory governance and, in general, the relationship to global standards primarily depend upon a number of distinctive parameters. These include the policy formation process, the speed of change, the depth of parliamentary tradition, and greater or lesser vulnerability to the normative conditionality of donors, interstate groupings and transnational regulatory bodies. Based on a comparison between three post-Arab Spring countries -Morocco, Tunisia, and Egypt, whose constitutions underwent substantive review in the period 2011-2014- and some European Union member states, the paper intends, first, to assess the degree of permeability to global constitutionalism in different contexts. A noteworthy divide emerges from this comparison. Whereas European constitutions still seem impervious to the lexicon of global constitutionalism, the influence of the latter is obvious in the recently drafted constitutions of Morocco, Tunisia, and Egypt. This is evidenced by their reference to notions such as ‘governance’, ‘regulators’, ‘accountability’, ‘transparency’, ‘civil society’, and ‘participatory democracy’. Second, the study provides a contextual account of the internal and external rationales underlying the constitutionalization of regulatory governance in the cases examined.
Unlike European constitutionalism, where parliamentarism and the tradition of representative government function as a structural mechanism that moderates the de-parliamentarization effect induced by global constitutionalism, Arab constitutional transitions have led to a paradoxical situation: contrary to public demands for further parliamentarization, the 2011 constitution-makers opted for a de-parliamentarization pattern. This is particularly reflected in the procedures established by constitutions and ordinary legislation to handle the interaction between lawmakers and regulatory bodies. Once the ‘constitutional’ and ‘independent’ nature of these agencies is formally endorsed, the birth of these ‘fourth power’ entities, which are neither elected nor directly responsible to elected officials, raises the question of their accountability. Third, the paper shows that, even among the three selected countries, the intensity of de-parliamentarization varies significantly. In contrast to the radical stance of the Moroccan and Egyptian constituents, who showed greater concern to shield regulatory bodies from legislatures’ scrutiny, the Tunisian case indicates a certain tendency to provide lawmakers with some essential control instruments (e.g. exclusive appointment power, adversarial discussion of regulators’ annual reports, and a dismissal power later held unconstitutional). In sum, the comparison reveals that the transposition of the regulatory state model and, more generally, sensitivity to the legal implications of global conditionality essentially rely on the evolution of real-world power relations at both the national and international levels.
Keywords: Arab legislatures, de-parliamentarization, global constitutionalism, normative conditionality, regulatory state
Procedia PDF Downloads 141
27 Analysis of Unconditional Conservatism and Earnings Quality before and after the IFRS Adoption
Authors: Monica Santi, Evita Puspitasari
Abstract:
The International Financial Reporting Standards (IFRS) constitute a principles-based accounting standard. On this basis, the IASB eliminated the conservatism concept from the accounting framework. Conservatism represents a prudent reaction to uncertainty, intended to ensure that the uncertainties and risks inherent in business situations are adequately considered. The concept has two ingredients: conditional conservatism, or ex-post (news-dependent) prudence, and unconditional conservatism, or ex-ante (news-independent) prudence. IFRS in substance disregards unconditional conservatism because it can cause understated assets or overstated liabilities, rendering the financial statements irrelevant since the information does not represent the underlying facts. This is why the IASB eliminated the conservatism concept. Its elimination, however, has not reduced the practice of unconditional conservatism in financial reporting. We therefore expected earnings quality to be affected by this situation, even though the IFRS implementation was expected to increase earnings quality. The objective of this study was to provide empirical findings on unconditional conservatism and earnings quality before and after the IFRS adoption. An accrual-based earnings measure was used as the proxy for unconditional conservatism: if earnings per accruals were negative (positive), the company was classified as conservative (not conservative). Earnings quality was defined as the ability of current earnings to reflect future earnings, considering earnings persistence and stability. We used the earnings response coefficient (ERC) as the proxy for earnings quality. The ERC measures the extent of a security’s abnormal market return in response to the unexpected component of the reported earnings of the firm issuing that security; a higher ERC indicates higher earnings quality.
Manufacturing companies listed on the Indonesia Stock Exchange (IDX) were used as the sample, with 2009-2010 representing the period before the IFRS adoption and 2011-2013 the period after. Data were analyzed using the Mann-Whitney test and regression analysis, with firm size as a control variable on the consideration that firm size affects a company’s earnings quality. This study found that unconditional conservatism had not changed between the periods before and after the IFRS adoption. The findings for earnings quality differed: earnings quality decreased after the IFRS adoption, implying that it was higher before. The study also found that unconditional conservatism had a positive but insignificant influence on earnings quality. These results imply that the implementation of the IFRS has neither decreased the practice of unconditional conservatism nor increased the earnings quality of the manufacturing companies; thus, we concluded that the implementation of the IFRS did not increase earnings quality.
Keywords: earnings quality, earnings response coefficient, IFRS adoption, unconditional conservatism
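The before/after comparison above rests on the Mann-Whitney test. As a minimal sketch of its mechanics (the accrual-based conservatism scores below are hypothetical, not the study's data), the U statistic simply counts, over all cross-group pairs, how often one group's score exceeds the other's:

```python
def mann_whitney_u(a, b):
    """U statistic for samples a and b: the number of pairs (a_i, b_j)
    with a_i > b_j, counting ties as one half. Under the null hypothesis
    of identical distributions, E[U] = len(a) * len(b) / 2."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Hypothetical accrual-based conservatism scores before/after adoption.
pre = [-0.12, -0.05, 0.03, -0.08]
post = [-0.10, -0.06, 0.02, -0.07]
u = mann_whitney_u(pre, post)  # a value near 4*4/2 = 8 suggests no shift
```

In practice one would rely on a library routine (e.g. `scipy.stats.mannwhitneyu`), which also supplies the p-value and handles large-sample normal approximations.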
Procedia PDF Downloads 264
26 Conditional Relation between Migration, Demographic Shift and Human Development in India
Authors: Rakesh Mishra, Rajni Singh, Mukunda Upadhyay
Abstract:
Over the last few decades, the prima facie focus of development in India has shifted towards the working population. There has been a paradigm shift in the development approach with the realization that the present demographic dividend has to be harnessed for sustainable development. Rapid urbanization and improved socioeconomic conditions have catalyzed various forms of migration, resulting in a massive transfer of workforce between states. The workforce of any country plays a crucial role in the development both of the places from which people have out-migrated and of the places where they currently reside. In India, people are found to migrate from relatively less developed states to well urbanized and developed states to satisfy their needs. Linking migration to the HDI at the place of destination, the regression coefficient (β̂) shows a positive association between them: the higher the HDI of a place, the higher the chance of earning there, and hence the more likely migrants are to choose that place as a new destination, and vice versa. The push factor is, however, offset by the cost of rearing, which discourages in-migrants from moving to the metro cities or megacities of these states while increasing their mobility towards suburban areas, and vice versa. The main objective of the study is to examine the role of migration in shaping the dividend of the place of destination as well as of the people at their place of usual residence, with special focus on the highly urban states of India. The idealized scenario of Indian migrants points to some new theories in the making. On analyzing the demographic dividend of these places, we find that Uttar Pradesh provides the maximum dividend to Maharashtra, West Bengal and Delhi, and that the demographic dividend of migrants is quite comparable to the natives’ share of the demographic dividend in these places.
On analyzing data from the National Sample Survey 64th round and the Census of India 2001, we observed that for males in rural areas the share of unemployed persons declined by 9 percentage points (from 45% before migration to 36% after migration), and for females in rural areas the decline was nearly 12 percentage points (from 79% before migration to 67% after migration). The shares of unemployed males in both rural and urban areas, which were substantial before migration, were reduced after migration, while the share of unemployed females in rural as well as urban areas remained almost negligible both before and after migration. The increase in the number of employed after migration thus indicates changes in associated cofactors such as health and education at the place of destination and, arithmetically, at the place from which the migrants moved out. This paper presents evidence on the patterns of prevailing migration dynamics and the corresponding demographic benefits in India and its states, examines trends and effects, and discusses plausible explanations.
Keywords: migration, demographic shift, human development index, multilevel analysis
Procedia PDF Downloads 390
25 The Impact of Team Heterogeneity and Team Reflexivity on Entrepreneurial Decision-Making: An Empirical Study in China
Authors: Chang Liu, Rui Xing, Liyan Tang, Guohong Wang
Abstract:
Entrepreneurial actions are based on entrepreneurial decisions, and the quality of those decisions influences entrepreneurial activities and subsequent new venture performance. Uncertain surroundings place heightened demands on the team as a whole and on each team member. A diverse team composition provides rich information on which a team can draw when making complex decisions. However, team heterogeneity may also cause emotional conflicts, which are adverse to team outcomes; the effects of team heterogeneity on team outcomes are therefore complex. Although team heterogeneity is an essential factor influencing entrepreneurial decision-making, there is a lack of empirical analysis of the conditions under which team heterogeneity plays a positive role in promoting decision-making quality. Entrepreneurial teams constantly struggle with complex tasks, and how a team shapes its teamwork is key to resolving recurring issues. As a collective regulatory process, team reflexivity is characterized by continuous joint evaluation and discussion of team goals, strategies, and processes, and by their adaptation to current or anticipated circumstances. It enables diverse information to be shared and overtly discussed. Instead of interpreting opposing opinions as hostile, team members take them as useful insights from different perspectives. Team reflexivity thus leads to better integration of expertise and avoids the interference of negative emotions and conflict. Therefore, we propose that team reflexivity is a conditional factor shaping the impact of team heterogeneity on high-quality entrepreneurial decisions. In this study, we identify team heterogeneity as a crucial determinant of entrepreneurial decision quality. Integrating the literature on decision-making and team heterogeneity, we investigate the relationship between team heterogeneity and entrepreneurial decision-making quality, treating team reflexivity as a moderator.
We tested our hypotheses using the hierarchical regression method on data gathered from 63 teams and 205 individual members in 45 new firms in China's first-tier cities, such as Beijing, Shanghai, and Shenzhen. The research found that both teams' education heterogeneity and teams' functional background heterogeneity were significantly positively related to entrepreneurial decision-making quality, and the positive relation was stronger in teams with a high level of team reflexivity. Teams' specialization-of-education heterogeneity, by contrast, was negatively related to decision-making quality, and this negative relationship was weaker in teams with a high level of team reflexivity. We offer two contributions to the decision-making and entrepreneurial team literatures. First, our study enriches the understanding of the role of entrepreneurial team heterogeneity in entrepreneurial decision-making quality. Unlike previous entrepreneurial decision-making literature, which focuses more on the decision-making modes of entrepreneurs and top management teams, this study is a significant attempt to highlight that entrepreneurial team heterogeneity makes a unique contribution to generating high-quality entrepreneurial decisions. Second, this study introduced team reflexivity as a moderating variable to explore the boundary conditions under which entrepreneurial team heterogeneity plays its role.
Keywords: decision-making quality, entrepreneurial teams, education heterogeneity, functional background heterogeneity, specialization of education heterogeneity
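The moderation pattern reported above (heterogeneity-quality slopes that strengthen or weaken with reflexivity) can be read off a standard interaction model. A minimal sketch with hypothetical coefficients, not the study's estimates:

```python
def predict(b0, b1, b2, b3, x, m):
    """Moderated regression prediction: y = b0 + b1*x + b2*m + b3*x*m,
    where x is team heterogeneity and m is team reflexivity."""
    return b0 + b1 * x + b2 * m + b3 * x * m

def simple_slope(b1, b3, m):
    """Slope of heterogeneity on decision quality at reflexivity level m:
    dy/dx = b1 + b3*m. A positive b3 means the heterogeneity effect
    strengthens as reflexivity rises."""
    return b1 + b3 * m

# Hypothetical coefficients: a positive interaction, so the effect of
# heterogeneity is stronger under high reflexivity.
b0, b1, b2, b3 = 2.0, 0.30, 0.20, 0.15
low = simple_slope(b1, b3, -1.0)   # slope at low reflexivity
high = simple_slope(b1, b3, 1.0)   # slope at high reflexivity
```

Probing such "simple slopes" at low and high moderator values (often one standard deviation below and above the mean) is the usual way to interpret the interaction term in hierarchical regression.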
Procedia PDF Downloads 122
24 Healthcare Associated Infections in an Intensive Care Unit in Tunisia: Incidence and Risk Factors
Authors: Nabiha Bouafia, Asma Ben Cheikh, Asma Ammar, Olfa Ezzi, Mohamed Mahjoub, Khaoula Meddeb, Imed Chouchene, Hamadi Boussarsar, Mansour Njah
Abstract:
Background: Hospital-acquired infections (HAI) cause significant morbidity, mortality, length of stay and hospital costs, especially in the intensive care unit (ICU), because of patients' debilitated immune systems and exposure to invasive devices. The aims of this study were to determine the rate and the risk factors of HAI in the ICU of a university hospital in Tunisia. Materials/Methods: A prospective study was conducted in the 8-bed adult medical ICU of a university hospital (Sousse, Tunisia) over 14 months, from September 15th, 2015 to November 15th, 2016. Patients admitted for more than 48 h were included, and their surveillance ended at discharge from the ICU or death. HAIs were defined according to standard Centers for Disease Control and Prevention criteria. Risk factors were analyzed by conditional stepwise logistic regression, with p < 0.05 considered significant. Results: During the study, 192 patients were admitted for more than 48 hours. Their mean age was 59.3 ± 18.2 years, and 57.1% were male. Acute respiratory failure was the main reason for admission (72%). The mean SAPS II score at admission was 32.5 ± 14 (range: 6-78). Exposure to mechanical ventilation (MV) and to a central venous catheter (CVC) was observed in 169 (88%) and 144 (75%) patients, respectively. Seventy-three patients (38.02%) developed 94 HAIs, for an incidence density of 41.53 per 1000 patient-days. The mortality rate among patients with HAIs was 65.8% (n = 48). Regarding the type of infection, ventilator-associated pneumonia (VAP) and central venous catheter-associated infections (CVC-AI) were the most frequent, with incidence densities of 14.88 per 1000 days of MV for VAP and 20.02 per 1000 CVC-days for CVC-AI. There were also 5 peripheral venous catheter-associated infections, 2 urinary tract infections, and 21 other HAIs.
Gram-negative bacteria were the most common organisms identified in HAIs: multidrug-resistant Acinetobacter baumannii (45%) and Klebsiella pneumoniae (10.96%) were the most frequently isolated. Univariate analysis showed that transfer from another hospital department (p = 0.001), intubation (p < 10⁻⁴), tracheostomy (p < 10⁻⁴), age (p = 0.028), grade of acute respiratory failure (p = 0.01), duration of sedation (p < 10⁻⁴), number of CVCs (p < 10⁻⁴), length of mechanical ventilation (p < 10⁻⁴) and length of stay (p < 10⁻⁴) were associated with a high risk of HAIs in the ICU. Multivariate analysis revealed the following independent risk factors for HAIs: transfer from another hospital department (OR = 13.44, 95% CI [3.9, 44.2], p < 10⁻⁴), duration of sedation (OR = 1.18, 95% CI [1.049, 1.325], p = 0.006), high number of CVCs (OR = 2.78, 95% CI [1.73, 4.487], p < 10⁻⁴), and length of stay in the ICU (OR = 1.14, 95% CI [1.066, 1.22], p < 10⁻⁴). Conclusion: Prevention of nosocomial infections in ICUs is a priority of health care systems all around the world, yet their control requires an understanding of the epidemiological data collected in these units.
Keywords: healthcare associated infections, incidence, intensive care unit, risk factors
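The headline figures above follow from two routine epidemiological computations: an incidence density (events per 1000 units of person-time) and odds ratios with Woolf-type confidence intervals. A minimal sketch; the 2x2 counts in the example are illustrative, not the study's raw data:

```python
import math

def incidence_density(n_events, person_time, per=1000):
    """Events per `per` units of person-time (here, patient-days)."""
    return n_events / person_time * per

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a Woolf (log-based) 95% CI from a 2x2 table:
    a = exposed cases,   b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# 94 HAIs over roughly 2263 patient-days (an assumed denominator)
# reproduces a density of about 41.5 per 1000 patient-days.
density = incidence_density(94, 2263)
ratio, lo, hi = odds_ratio_ci(10, 90, 5, 95)  # illustrative counts
```

The study's adjusted ORs come from conditional stepwise logistic regression rather than raw 2x2 tables, but the crude computation above is the standard first look at each candidate risk factor.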
Procedia PDF Downloads 370
23 Generation of Knowledge with Self-Learning Methods for Ophthalmic Data
Authors: Klaus Peter Scherer, Daniel Knöll, Constantin Rieder
Abstract:
Problem and Purpose: Intelligent systems are available and helpful for supporting human decision processes, especially when complex surgical eye interventions are necessary and must be performed. Normally, such a decision support system consists of a knowledge-based module, which is responsible for the real assistance power, provided through explanation and logical reasoning processes. The interview-based acquisition and generation of the complex knowledge itself is crucial, because there are many correlations between the complex parameters. In this project, therefore, (semi-)automated self-learning methods are researched and developed to enhance the quality of such a decision support system. Methods: For ophthalmic data sets of real hospital patients, advanced data mining procedures are very helpful. In particular, subgroup analysis methods are developed, extended and used to analyze and uncover the correlations and conditional dependencies within the structured patient data. After finding causal dependencies, a ranking must be performed to generate rule-based representations. For this, anonymized patient data are transformed into a special machine-readable format. The imported data serve as input to conditional probability algorithms that calculate the parameter distributions with respect to a given goal parameter. Results: In the field of knowledge discovery, advanced methods and applications were applied to produce operation- and patient-related correlations. New knowledge was generated by finding causal relations between the operational equipment, the medical instances and the patient-specific history through a dependency ranking process. After transformation into association rules, logically based representations were available for the clinical experts to evaluate the new knowledge. The structured data sets take account of about 80 parameters as characteristic features per patient.
For patient groups of different sizes (100, 300, 500), both single-target and multi-target values were set for the subgroup analysis, so the newly generated hypotheses could be interpreted with regard to their dependence on, or independence of, the number of patients. Conclusions: The aim and advantage of such a semi-automated self-learning process are the extension of the knowledge base by finding new parameter correlations. The discovered knowledge is transformed into association rules and serves as the rule-based representation of the knowledge in the knowledge base. Moreover, more than one goal parameter of interest can be considered by the semi-automated learning process. With ranking procedures, the strongest premises, as well as conjunctively associated conditions, can be found for concluding the goal parameter of interest. In this way, knowledge hidden in structured tables or lists can be extracted as a rule-based representation, which is a real assistance power for communication with the clinical experts.
Keywords: expert system, knowledge-based support, ophthalmic decision support, self-learning methods
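The subgroup-to-rule step described above can be sketched as a support/confidence/lift computation over structured patient records. A minimal illustration with invented attribute names (`instrument`, `outcome`), not the project's actual 80-parameter schema:

```python
def rule_stats(rows, antecedent, consequent):
    """Support, confidence and lift of the rule antecedent -> consequent.
    rows are dicts mapping attribute names to values; confidence is the
    conditional probability P(consequent | antecedent), and lift compares
    it to the base rate P(consequent)."""
    def matches(row, pattern):
        return all(row.get(k) == v for k, v in pattern.items())

    n = len(rows)
    n_ante = sum(matches(r, antecedent) for r in rows)
    n_cons = sum(matches(r, consequent) for r in rows)
    n_both = sum(matches(r, antecedent) and matches(r, consequent) for r in rows)
    support = n_both / n
    confidence = n_both / n_ante if n_ante else 0.0
    lift = confidence * n / n_cons if n_cons else 0.0
    return support, confidence, lift

# Toy records: does instrument A co-occur with a good outcome?
records = ([{"instrument": "A", "outcome": "good"}] * 3
           + [{"instrument": "A", "outcome": "poor"}] * 1
           + [{"instrument": "B", "outcome": "good"}] * 2
           + [{"instrument": "B", "outcome": "poor"}] * 4)
support, confidence, lift = rule_stats(records, {"instrument": "A"},
                                       {"outcome": "good"})
```

A lift above 1 flags a conditional dependency worth ranking; the project's dependency ranking would then order such candidate rules before presenting them to the clinical experts.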
Procedia PDF Downloads 254
22 A Comparison of Methods for Estimating Dichotomous Treatment Effects: A Simulation Study
Authors: Jacqueline Y. Thompson, Sam Watson, Lee Middleton, Karla Hemming
Abstract:
Introduction: The odds ratio (estimated via logistic regression) is a well-established and common approach to estimating covariate-adjusted binary treatment effects when comparing a treatment and a control group on dichotomous outcomes. Its popularity is primarily due to its stability and robustness to model misspecification. The situation is different, however, for the relative risk and the risk difference, which are arguably easier to interpret and better suited to specific designs such as non-inferiority studies. So far, there is no equivalent, widely accepted approach to estimating an adjusted relative risk or risk difference in clinical trials, partly owing to the lack of a comprehensive evaluation of the available candidate methods. Methods/Approach: A simulation study is designed to evaluate the performance of the relevant candidate methods for estimating relative risks, representing both conditional and marginal estimation approaches. We consider the log-binomial generalised linear model (GLM) with iteratively weighted least-squares (IWLS) and model-based standard errors (SEs); the log-binomial GLM with convex optimisation and model-based SEs; the log-binomial GLM with convex optimisation and permutation tests; the modified-Poisson GLM with IWLS and robust SEs; log-binomial generalised estimating equations (GEE) with robust SEs; marginal standardisation with delta-method SEs; and marginal standardisation with permutation-test SEs. Independent and identically distributed datasets are simulated from a randomised controlled trial to evaluate these candidate methods. Simulations are replicated 10000 times for each scenario across all combinations of sample size (200, 1000, and 5000), outcome prevalence (10%, 50%, and 80%), and covariate effects (ranging from -0.05 to 0.7) representing weak, moderate or strong relationships.
Treatment effects (0, -0.5, and 1 on the log scale) cover the null (H0) and alternative (H1) hypotheses, to evaluate coverage and power in realistic scenarios. Performance measures (bias, mean square error (MSE), relative efficiency, and convergence rates) are evaluated across scenarios covering a range of sample sizes, event rates, covariate prognostic strengths, and model misspecifications. Potential Results, Relevance & Impact: There are several methods for estimating unadjusted and adjusted relative risks, but it is unclear which is the most efficient, preserves the type-I error rate, is robust to model misspecification, or is the most powerful when adjusting for non-prognostic and prognostic covariates. GEE estimates may be biased when the outcome distributions are not marginal binary data. It also appears that marginal standardisation and convex optimisation may perform better than the GLM IWLS log-binomial approach.
Keywords: binary outcomes, statistical methods, clinical trials, simulation study
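Of the marginal approaches listed, marginal standardisation is the easiest to sketch: fit an outcome model, then average predicted risks over the whole sample with treatment forced to 1 and to 0. The logistic coefficients below are hypothetical stand-ins for a fitted model, not output of an actual fit:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def marginal_effects(b0, b_treat, b_x, xs):
    """Marginal standardisation from a (hypothetically fitted) logistic
    model P(Y=1) = sigmoid(b0 + b_treat*treat + b_x*x): average the
    predicted risks over all subjects with treat forced to 1 and to 0,
    then report the relative risk and the risk difference."""
    p1 = sum(sigmoid(b0 + b_treat + b_x * x) for x in xs) / len(xs)
    p0 = sum(sigmoid(b0 + b_x * x) for x in xs) / len(xs)
    return p1 / p0, p1 - p0

# Illustrative coefficients and covariate values; with b_x = 0 this
# reduces to sigmoid(-0.5) / sigmoid(-1.0).
rr, rd = marginal_effects(-1.0, 0.5, 0.0, [0.0, 1.0, 2.0])
```

In a real analysis the coefficients would come from a GLM fit, and the delta-method or permutation-test standard errors named above would be attached to the resulting relative risk and risk difference.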
Procedia PDF Downloads 118
21 How Can Food Retailing Benefit from Neuromarketing Research: The Influence of Traditional and Innovative Tools of In-Store Communication on Consumer Reactions
Authors: Jakub Berčík, Elena Horská, Ľudmila Nagyová
Abstract:
Nowadays, the point of sale remains one of the few channels of communication that is not yet oversaturated and has great potential for the future. The fact that purchasing decisions are significantly affected by emotions, and that up to 75% of them are made at the point of sale, only demonstrates its importance. The share of impulse purchases is about 60-75%, depending on the product category. Nevertheless, it is above all habit that predetermines the content of the shopping cart, and the role of in-store communication is therefore to disrupt the routine and prompt the customer to try something new. This is why it is essential to learn to work with this relatively young branch of marketing communication as efficiently as possible. A new global trend in this discipline is evaluating the effectiveness of particular tools of in-store communication. To increase efficiency, it is necessary to become familiar with the factors affecting the customer both consciously and unconsciously, and that is a task for neuromarketing and sensory marketing. It is generally known that customers remember negative experiences much longer and more intensely than positive ones; therefore, it is essential for marketers to avoid them. The final effect of POP (point of purchase) and POS (point of sale) tools is conditional not only on their quality and design but also on their location at the point of sale, which contributes to the overall positive atmosphere in the store. In-store advertising is therefore increasingly in the center of attention, and companies are willing to spend even a third of their marketing communication budget on it. The paper deals with comprehensive, interdisciplinary research on the impact of traditional as well as innovative tools of in-store communication on the attention and emotional state (valence and arousal) of consumers in the food market.
The research integrates measurements with an eye camera (eye tracker) and an electroencephalograph (EEG) in real grocery stores as well as under laboratory conditions, with the purpose of recognizing attention and emotional responses among respondents under the influence of selected tools of in-store communication. The object of the research includes traditional (e.g. wobblers, stoppers, floor graphics) and innovative (e.g. displays, wobblers with LED elements, interactive floor graphics) tools of in-store communication in the fresh unpackaged food segment. Using a mobile 16-channel electroencephalograph (EEG) from the company EPOC, a mobile eye camera (eye tracker) from the company Tobii, and a stationary eye camera (eye tracker) from the company Gazepoint, we observe attention and emotional state (valence and arousal) to reveal true consumer preferences toward traditional and novel communication tools at the point of sale of the selected foodstuffs. The paper concludes by suggesting possibilities for a rational, effective and energy-efficient combination of in-store communication tools by which the retailer can accomplish not only a captivating and attractive presentation of the displayed goods but ultimately also an increase in the store's retail sales.
Keywords: electroencephalograph (EEG), emotion, eye tracker, in-store communication
Procedia PDF Downloads 393