Search results for: Julia Sets
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1328

218 Actionable Personalised Learning Strategies to Improve a Growth-Mindset in an Educational Setting Using Artificial Intelligence

Authors: Garry Gorman, Nigel McKelvey, James Connolly

Abstract:

This study will evaluate a growth mindset intervention with Junior Cycle Coding and Senior Cycle Computer Science students in Ireland, where gamification will be used to incentivise growth mindset behaviour. An artificial intelligence (AI) driven personalised learning system will be developed to present computer programming learning tasks in a manner that is best suited to the individuals’ own learning preferences while incentivising and rewarding the growth mindset behaviours of persistence, a mastery response to challenge, and challenge seeking. This research endeavours to measure mindset with before and after surveys (conducted nationally) and by recording growth mindset behaviour whilst playing a digital game. This study will harness the capabilities of AI and aims to determine how a personalised learning (PL) experience can impact the mindset of a broad range of students. The focus of this study will be to determine how personalising the learning experience influences female and disadvantaged students' sense of belonging in the computer science classroom when tasks are presented in a manner that is best suited to the individual. Whole Brain Learning will underpin this research and will be used as a framework to guide the research in identifying key areas such as thinking and learning styles, cognitive potential, motivators and fears, and emotional intelligence. This research will be conducted in multiple school types over one academic year. Digital games will be played multiple times over this period, and the data gathered will be used to inform the AI algorithm. The three data sets are described as follows: (i) before and after survey data to determine the grit scores and mindsets of the participants, (ii) the growth mindset data from the game, which will measure multiple growth mindset behaviours, such as persistence, response to challenge and use of strategy, and (iii) the AI data to guide PL. This study will highlight the effectiveness of an AI-driven personalised learning experience. The data will position AI within the Irish educational landscape, with a specific focus on the teaching of computer science. These findings will benefit coding and computer science teachers by providing a clear pedagogy for the effective delivery of personalised learning strategies for computer science education. This pedagogy will help prevent students from developing a fixed mindset while helping pupils to exhibit persistence of effort, use of strategy, and a mastery response to challenges.

Keywords: computer science education, artificial intelligence, growth mindset, pedagogy

Procedia PDF Downloads 66
217 Predicting Personality and Psychological Distress Using Natural Language Processing

Authors: Jihee Jang, Seowon Yoon, Gaeun Son, Minjung Kang, Joon Yeon Choeh, Kee-Hong Choi

Abstract:

Background: Self-report multiple-choice questionnaires have been widely utilized to quantitatively measure one’s personality and psychological constructs. Despite several strengths (e.g., brevity and utility), self-report multiple-choice questionnaires have considerable inherent limitations. With the rise of machine learning (ML) and natural language processing (NLP), researchers in the field of psychology are widely adopting NLP to assess psychological constructs and predict human behaviors. However, there is a lack of connection between the work being performed in computer science and that in psychology, due to small data sets and unvalidated modeling practices. Aims: The current article introduces the study method and procedure of phase II, which includes the interview questions for the five-factor model (FFM) of personality developed in phase I. This study aims to develop semi-structured interview and open-ended questions for FFM-based personality assessments, specifically designed with experts in the field of clinical and personality psychology (phase 1), and to collect personality-related text data using the interview questions and self-report measures on personality and psychological distress (phase 2). The purpose of the study includes examining the relationship between the natural language data obtained from the interview questions, the FFM personality constructs, and psychological distress, to demonstrate the validity of natural language-based personality prediction. Methods: The phase I (pilot) study was conducted on fifty-nine native Korean adults to acquire personality-related text data from the semi-structured interview and open-ended questions based on the FFM of personality. The interview questions were revised and finalized with feedback from the external expert committee, consisting of personality and clinical psychologists. Based on the established interview questions, a total of 425 Korean adults were recruited using a convenience sampling method via an online survey. The text data collected from interviews were analyzed using natural language processing. The results of the online survey, including demographic data, depression, anxiety, and personality inventories, were analyzed together in the model to predict individuals’ FFM personality traits and level of psychological distress (phase 2).
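
To make the prediction step concrete, here is a minimal, hypothetical sketch (not the study's actual pipeline) of regressing a five-factor trait score on interview text using TF-IDF features and ridge regression; the corpus and scores below are invented placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

# Invented toy corpus: interview answers paired with a self-report trait
# score (say, extraversion on a 1-5 scale); real inputs would be the 425
# participants' transcripts and inventory scores.
texts = [
    "I love meeting new people at parties",
    "I prefer quiet evenings reading alone",
    "Organising group trips energises me",
    "Crowds drain me and I recharge in solitude",
]
scores = [4.5, 1.8, 4.2, 1.5]

vec = TfidfVectorizer(ngram_range=(1, 2))
X = vec.fit_transform(texts)                 # sparse document-term matrix
model = Ridge(alpha=1.0).fit(X, scores)      # trait score ~ text features
print(model.predict(vec.transform(["I enjoy hosting large gatherings"])))
```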

Keywords: personality prediction, psychological distress prediction, natural language processing, machine learning, the five-factor model of personality

Procedia PDF Downloads 58
216 Changing Employment Relations Practices in Hong Kong: Cases of Two Multinational Retail Banks since 1997

Authors: Teresa Shuk-Ching Poon

Abstract:

This paper sets out to examine the changing employment relations practices in Hong Kong’s retail banking sector over a period of more than 10 years. The major objective of the research is to examine whether and to what extent local institutional influences have overshadowed global market forces in shaping strategic management decisions and employment relations practices in Hong Kong, with a view to drawing implications for comparative employment relations studies. Examining the changing pattern of employment relations, this paper finds the industrial relations strategic choice model (Kochan, McKersie and Cappelli, 1984) appropriate to use as a framework for the study. Four broad aspects of employment relations are examined, including work organisation and job design; staffing and labour adjustment; performance appraisal, compensation and employee development; and labour unions and employment relations. Changes in the employment relations practices of two multinational retail banks operating in Hong Kong are examined in detail. The retail banking sector in Hong Kong is chosen as a case to examine as it is a highly competitive segment of the financial service industry that is very much susceptible to global market influences. This is well illustrated by the fact that Hong Kong was hit hard by both the Asian and the Global Financial Crises. This sector is also subject to increasing institutional influences, especially after the return of Hong Kong’s sovereignty to the People’s Republic of China (PRC) in 1997. The case study method is used as it is a research design well suited to capturing the complex institutional and environmental context that is the subject-matter examined in the paper. The paper concludes that the operation of the retail banks in Hong Kong has been subject to both institutional and global market changes at different points in time. Information obtained from the two cases examined tends to support the conclusion that the relative significance of institutional as against global market factors in influencing retail banks’ operation and their employment relations practices depends very much on the time at which these influences emerged and the scale and intensity of these influences. This case study highlights the importance of placing comparative employment relations studies within a context where employment relations practices in different countries or different regions/cities within the same country can be examined and compared over a longer period of time to make the comparison more meaningful.

Keywords: employment relations, institutional influences, global market forces, strategic management decisions, retail banks, Hong Kong

Procedia PDF Downloads 376
215 Private Coded Computation of Matrix Multiplication

Authors: Malihe Aliasgari, Yousef Nejatbakhsh

Abstract:

The era of Big Data and the immensity of real-life datasets compel computation tasks to be performed in a distributed fashion, where the data is dispersed among many servers that operate in parallel. However, massive parallelization leads to computational bottlenecks due to faulty servers and stragglers. Stragglers refer to a few slow or delay-prone processors that can bottleneck the entire computation, because one has to wait for all the parallel nodes to finish. The problem of straggling processors has been well studied in the context of distributed computing. Recently, it has been pointed out that, for the important case of linear functions, it is possible to improve over repetition strategies in terms of the tradeoff between performance and latency by carrying out linear precoding of the data prior to processing. The key idea is that, by employing suitable linear codes operating over fractions of the original data, a function may be completed as soon as a sufficient number of processors, determined by the minimum distance of the code, have completed their operations. Matrix-matrix multiplication over practically large data sets faces computational and memory-related difficulties, which is why such operations are carried out on distributed computing platforms. In this work, we study the problem of distributed matrix-matrix multiplication W = XY under storage constraints, i.e., when each server is allowed to store a fixed fraction of each of the matrices X and Y; this operation is a fundamental building block of many science and engineering fields such as machine learning, image and signal processing, wireless communication, and optimization. Both non-secure and secure matrix multiplication are studied. We consider the setup in which the identity of the matrix of interest should be kept private from the workers, and we obtain the recovery threshold of the colluding model, that is, the number of workers that need to complete their tasks before the master server can recover the product W. We then study secure and private distributed matrix multiplication W = XY in which the matrix X is confidential, while the matrix Y is selected in a private manner from a library of public matrices. We present the best currently known trade-off between communication load and recovery threshold. In other words, we design an achievable PSGPD scheme for any arbitrary privacy level by trivially concatenating a robust PIR scheme for arbitrary colluding workers and private databases with the proposed SGPD code, which provides a smaller computational complexity at the workers.
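
As an illustration of the coding idea described above (a basic polynomial code in the style of the literature this abstract builds on, without the privacy layer, and not the paper's PSGPD/SGPD construction), the following sketch distributes W = XY so that any p·q worker results suffice to recover W by polynomial interpolation.

```python
import numpy as np

p, q = 2, 2                       # row blocks of X, column blocks of Y
n_workers = 6                     # anything above the threshold p*q = 4
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3))
Y = rng.standard_normal((3, 4))
X_blocks = np.split(X, p, axis=0)            # X_0, X_1 (row blocks)
Y_blocks = np.split(Y, q, axis=1)            # Y_0, Y_1 (column blocks)
pts = np.arange(1.0, n_workers + 1)          # one evaluation point per worker

def worker_task(x):
    """Each worker multiplies small encoded blocks: one evaluation of a
    degree-(p*q - 1) matrix polynomial whose coefficients are X_i @ Y_j."""
    Xe = sum(Xi * x**i for i, Xi in enumerate(X_blocks))
    Ye = sum(Yj * x**(p * j) for j, Yj in enumerate(Y_blocks))
    return Xe @ Ye

results = {k: worker_task(pts[k]) for k in range(n_workers)}

# Straggler tolerance: the master decodes from the first p*q responses.
fastest = sorted(results)[: p * q]
V = np.vander(pts[fastest], p * q, increasing=True)
stacked = np.stack([results[k] for k in fastest])
coeffs = np.tensordot(np.linalg.inv(V), stacked, axes=1)  # interpolate

# The coefficient of x**(i + p*j) is the block X_i @ Y_j; reassemble W.
W_hat = np.block([[coeffs[i + p * j] for j in range(q)] for i in range(p)])
assert np.allclose(W_hat, X @ Y)
```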

Keywords: coded distributed computation, private information retrieval, secret sharing, stragglers

Procedia PDF Downloads 94
214 Clustering-Based Computational Workload Minimization in Ontology Matching

Authors: Mansir Abubakar, Hazlina Hamdan, Norwati Mustapha, Teh Noranis Mohd Aris

Abstract:

In order to build a matching pattern for each class correspondence between ontologies, it is necessary to specify a set of attribute correspondences across the two corresponding classes by clustering. Clustering reduces the size of the potential attribute correspondences considered in the matching activity, which significantly reduces the computational workload; otherwise, all attributes of a class would have to be compared with all attributes of the corresponding class. Most existing ontology matching approaches lack scalable attribute discovery methods, such as cluster-based attribute searching. This problem makes the ontology matching activity computationally expensive. It is therefore vital in ontology matching to design a scalable element or attribute correspondence discovery method that reduces the size of the potential element correspondences during mapping, thereby reducing the computational workload of the matching process as a whole. The objectives of this work are 1) to design a clustering method for discovering similar attribute correspondences and relationships between ontologies, and 2) to discover element correspondences by classifying the elements of each class based on element value features using the K-medoids clustering technique. Discovering attribute correspondences is essential for comparing instances when matching two ontologies. During the matching process, any two instances across two different data sets should be compared on their attribute values, so that they can be judged to be the same or not. Intuitively, any two instances that come from classes across which there is a class correspondence are likely to be identical to each other. Besides, any two instances that hold more similar attribute values are more likely to be matched than ones with less similar attribute values. Most of the time, similar attribute values exist in two instances across which there is an attribute correspondence. This work will present how to classify the attributes of each class with K-medoids clustering and then map the clustered groups by their statistical value features. We will also show how to map the attributes of a clustered group to the attributes of the mapped clustered group, generating a set of potential attribute correspondences that would be applied to generate a matching pattern. The K-medoids clustering phase would largely reduce the number of non-corresponding attribute pairs considered when comparing instances, as only attribute pairs whose coverage probability reaches 100% and attributes above the specified threshold can be considered as potential attributes for a matching. Using clustering will reduce the size of the potential element correspondences to be considered during the mapping activity, which will in turn reduce the computational workload significantly; otherwise, all elements of a class in the source ontology would have to be compared with all elements of the corresponding classes in the target ontology. K-medoids can ably cluster the attributes of each class, so that a proportion of attribute pairs that are not corresponding would not be considered when constructing the matching pattern.
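
A minimal sketch of the clustering step, assuming simple per-attribute statistical value features (invented here) and a hand-rolled PAM-style K-medoids; a library implementation could equally be substituted.

```python
import numpy as np

def k_medoids(D, k, n_iter=100, seed=0):
    """Basic PAM-style alternation on a precomputed distance matrix D."""
    rng = np.random.default_rng(seed)
    medoids = rng.choice(len(D), size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)   # assign to nearest medoid
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.flatnonzero(labels == c)
            within = D[np.ix_(members, members)].sum(axis=1)
            new_medoids[c] = members[np.argmin(within)]  # most central member
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids, labels

# Hypothetical value features per attribute, e.g. [mean value length,
# share of numeric values, share of distinct values].
feats = np.array([[4.2, 0.9, 0.80],
                  [4.0, 1.0, 0.70],
                  [12.5, 0.0, 0.95],
                  [11.8, 0.1, 0.90]])
D = np.linalg.norm(feats[:, None] - feats[None, :], axis=-1)
medoids, labels = k_medoids(D, k=2)    # only attributes sharing a cluster
print(labels)                          # are then compared across ontologies
```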

Keywords: attribute correspondence, clustering, computational workload, k-medoids clustering, ontology matching

Procedia PDF Downloads 224
213 Seismic Fragility Assessment of Continuous Integral Bridge Frames with Variable Expansion Joint Clearances

Authors: P. Mounnarath, U. Schmitz, Ch. Zhang

Abstract:

Fragility analysis has become an effective tool for the seismic vulnerability assessment of civil structures over the last several years. The design of expansion joints is largely inconsistent across the various bridge design codes, and only a few studies have focused on this problem so far. In this study, the influence of the expansion joint clearances between the girder ends and the abutment backwalls on the seismic fragility assessment of continuous integral bridge frames is investigated. The gaps (60 mm, 150 mm, 250 mm and 350 mm) are designed following two different bridge design code specifications, namely, Caltrans and Eurocode 8-2. Five bridge models are analyzed and compared. The first bridge model serves as a reference. This model uses three-dimensional reinforced concrete fiber beam-column elements with simplified supports at both ends of the girder. The other four models also employ reinforced concrete fiber beam-column elements but include the abutment backfill stiffness and the four different gap values. Nonlinear time history analysis is performed. The artificial ground motion sets, with peak ground accelerations (PGAs) ranging from 0.1 g to 1.0 g in increments of 0.05 g, are taken as input. The soil-structure interaction and the P-Δ effects are also included in the analysis. The component fragility curves in terms of the curvature ductility demand-to-capacity ratio of the piers and the displacement demand-to-capacity ratio of the abutment sliding bearings are established and compared. The system fragility curves are then obtained by combining the component fragility curves. Our results show that in the component fragility analysis, the reference bridge model exhibits more severe vulnerability than the other, more sophisticated bridge models for all damage states. In the system fragility analysis, the reference curves show a smaller damage probability in the lower PGA ranges for the first three damage states; they then show higher fragility than the other curves at larger PGA levels. In the fourth damage state, the reference curve has the smallest vulnerability. In both the component and the system fragility analysis, the same trend is found: the bridge models with smaller clearances exhibit lower fragility than those with larger openings. However, the bridge model with the maximum clearance still induces the minimum pounding force effect.
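
To illustrate how a fragility curve of this kind can be estimated, the sketch below fits a lognormal fragility function by maximum likelihood to exceedance counts; the demand data, damage-state limit and parameter values are invented, not those of the five bridge models above.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

# PGA levels as in the study's input set: 0.1 g to 1.0 g in 0.05 g steps.
pga = np.arange(0.1, 1.0 + 1e-9, 0.05)

# Invented data: 20 artificial motions per level; count of runs where the
# demand/capacity ratio exceeds a hypothetical damage-state limit.
rng = np.random.default_rng(1)
n_runs = 20
n_exceed = rng.binomial(n_runs, stats.norm.cdf(np.log(pga / 0.45) / 0.5))

def neg_log_lik(params):
    theta, beta = params   # median capacity (g), lognormal dispersion
    p = np.clip(stats.norm.cdf(np.log(pga / theta) / beta), 1e-9, 1 - 1e-9)
    return -stats.binom.logpmf(n_exceed, n_runs, p).sum()

res = minimize(neg_log_lik, x0=[0.5, 0.4], method="Nelder-Mead")
theta_hat, beta_hat = res.x
# Fitted fragility for this damage state: P(damage | PGA) = Phi(ln(PGA/theta)/beta)
print(theta_hat, beta_hat)
```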

Keywords: expansion joint clearance, fiber beam-column element, fragility assessment, time history analysis

Procedia PDF Downloads 414
212 Position of the Constitutional Court of the Russian Federation on the Matter of Restricting Constitutional Rights of Citizens Concerning Banking Secrecy

Authors: A. V. Shashkova

Abstract:

The aim of the present article is to analyze the position of the Constitutional Court of the Russian Federation on the matter of restricting the constitutional rights of citizens to inviolability of professional and banking secrecy in effecting controlling activities. The methodological basis of the present article is the dialectic scientific method applied to socio-political, legal and organizational processes, with the principles of development, integrity, and consistency. The consistency analysis method is used while researching the object of the analysis. Some specific research methods are also used: the formal-logical method and the comparative legal method are used to compare different understandings of the ‘secrecy’ concept. The article further seeks to find the root of the problem and to give recommendations for its solution. The result of the present research is the author’s conclusion on the necessity of political will to improve Russian legislation with the aim of compliance with the provisions of the Constitution. It is also necessary to establish a clear balance between the constitutional rights of the individual and the limits of these rights when various control activities are carried out by public authorities. Attempts by the banks to "overdo" anti-money laundering law under threat of severe sanctions by the regulators have actually led to failures in the execution of normal economic activity. Therefore, individuals face huge problems with payments on the basis of clearing, in addition to problems with cash withdrawals. The Bank of Russia sets the requirements for banks to execute Federal Law No. 115-FZ too high, and political will is needed here. As well, recent changes in Russian legislation, e.g. allowing banks to unilaterally refuse to open accounts, have simplified banking activities in the country. The article focuses on different theoretical approaches towards the concept of “secrecy”. The author gives an overview of the practices of Spain, Switzerland and the United States of America on the matter of restricting the constitutional rights of citizens to inviolability of professional and banking secrecy in effecting controlling activities. The Constitutional Court of the Russian Federation, basing itself on the Constitution of the Russian Federation, has its own understanding of the issue, which should be supported by further legislative development in the Russian Federation.

Keywords: constitutional court, restriction of constitutional rights, bank secrecy, control measures, money laundering, financial control, banking information

Procedia PDF Downloads 162
211 The Regulation of Alternative Dispute Resolution Institutions in Consumer Redress and Enforcement: A South African Perspective

Authors: Jacolien Barnard, Corlia Van Heerden

Abstract:

Effective and accessible consensual dispute resolution, and in particular alternative dispute resolution, is central to consumer protection legislation. In this regard, the Consumer Protection Act 68 of 2008 (CPA) of South Africa is no exception. Due to the nature of consumer disputes, alternative dispute resolution is (in theory) an effective vehicle for the adjudication of disputes in a timely manner, avoiding overburdening of the courts. The CPA sets down as one of its core purposes the provision of ‘an accessible, consistent, harmonized, effective and efficient system of redress for consumers’ (section 3(1)(h) of the CPA). Section 69 of the Act provides for the enforcement of consumer rights and establishes the National Consumer Commission as the Central Authority which streamlines, adjudicates and channels disputes to the appropriate forums, which include alternative dispute resolution agents (ADR-agents). The purpose of this paper is to analyze the regulation of these enforcement and redress mechanisms, with particular focus on the Central Authority as well as the ADR-agents and their crucial role in the successful and efficient adjudication of disputes in South Africa. The South African position will be discussed comparatively with the European Union (EU) position. In this regard, the EU Directive on Alternative Dispute Resolution for Consumer Disputes (2013/11/EU, the ADR Directive) will be discussed. The aim of the ADR Directive is to resolve contractual disputes between consumers and traders (suppliers or businesses) regardless of whether the agreement was concluded offline or online, and whether or not the trader is situated in another member state (Recitals 4-6). The ADR Directive provides a set of quality requirements that an ADR body or entity tasked with resolving consumer disputes should adhere to in member states, including regulatory mechanisms for control. Transparency, effectiveness, fairness, liberty and legality are all requirements for a successful ADR body and are discussed in Chapter II of the Directive. Chapters III and IV govern the importance of information and co-operation. This includes information between ADR bodies and the European Commission (EC), but also between ADR bodies or entities and the national authorities enforcing legal acts on consumer protection and traders (in South Africa, the National Consumer Tribunal, Provincial Consumer Protectors and industry ombuds come to mind), all of which have a responsibility to keep consumers informed. Ultimately, the paper aims to provide recommendations as to the success of the current South African position in light of the comparative position in Europe, and to highlight the importance of proper regulation of these redress and enforcement institutions.

Keywords: alternative dispute resolution, consumer protection law, enforcement, redress

Procedia PDF Downloads 195
210 An Effort at Improving Reliability of Laboratory Data in Titrimetric Analysis for Zinc Sulphate Tablets Using Validated Spreadsheet Calculators

Authors: M. A. Okezue, K. L. Clase, S. R. Byrn

Abstract:

The requirement for maintaining data integrity in laboratory operations is critical for regulatory compliance. Automation of procedures reduces the incidence of human errors. Quality control laboratories located in low-income economies may face some barriers in attempts to automate their processes. Since data from quality control tests on pharmaceutical products are used in making regulatory decisions, it is important that laboratory reports are accurate and reliable. Zinc sulphate (ZnSO4) tablets are used in the treatment of diarrhea in the pediatric population, and as an adjunct therapy in COVID-19 regimens. Unfortunately, the zinc content in these formulations is determined titrimetrically, a manual analytical procedure. The assay for ZnSO4 tablets involves time-consuming steps that contain mathematical formulae prone to calculation errors. To achieve consistency, save costs, and improve data integrity, validated spreadsheets were developed to simplify the two critical steps in the analysis of ZnSO4 tablets: the standardization of 0.1 M sodium edetate (EDTA) solution, and the complexometric titration assay procedure. The assay method in the United States Pharmacopoeia was used to create a process flow for ZnSO4 tablets. For each step in the process, different formulae were input into two spreadsheets to automate calculations. Further checks were created within the automated system to ensure the validity of replicate analysis in titrimetric procedures. Validations were conducted using five data sets of manually computed assay results. The acceptance criteria set for the protocol were met. Significant p-values (p < 0.05, α = 0.05, at 95% confidence interval) were obtained from Student's t-test evaluation of the mean values for manually calculated and spreadsheet results at all levels of the analysis flow. Right-first-time analysis and principles of data integrity were enhanced by the use of the validated spreadsheet calculators in titrimetric evaluations of ZnSO4 tablets. Human errors in calculations were minimized when procedures were automated in quality control laboratories. The assay procedure for the formulation was achieved in a time-efficient manner with a greater level of accuracy. This project is expected to promote cost savings for laboratory business models.
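
A minimal Python analogue of the kind of calculations such spreadsheets encode, assuming a 1:1 Zn-EDTA complexometric stoichiometry; the molecular weights and the replicate acceptance limit below are illustrative placeholders, not the USP monograph values validated in the study.

```python
import statistics

MW_ZN = 65.38   # g/mol, atomic weight of zinc

def edta_molarity(zinc_mass_g, titre_ml):
    """Standardize EDTA against a zinc primary standard (1:1 complex)."""
    return (zinc_mass_g / MW_ZN) / (titre_ml / 1000.0)

def assay_mg(titre_ml, m_edta, mw_analyte=287.56):
    """mg of analyte found by a 1:1 EDTA titration
    (287.56 g/mol here is ZnSO4.7H2O, purely as an example)."""
    return titre_ml / 1000.0 * m_edta * mw_analyte * 1000.0

def replicates_valid(values, max_rsd_pct=2.0):
    """Mirror of a spreadsheet replicate check: relative SD within limit."""
    rsd = 100.0 * statistics.stdev(values) / statistics.mean(values)
    return rsd <= max_rsd_pct

m = edta_molarity(zinc_mass_g=0.6538, titre_ml=100.0)   # -> 0.1000 M
print(m, assay_mg(titre_ml=20.0, m_edta=m), replicates_valid([99.1, 99.4, 98.9]))
```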

Keywords: data integrity, spreadsheets, titrimetry, validation, zinc sulphate tablets

Procedia PDF Downloads 150
209 Biofilm Text Classifiers Developed Using Natural Language Processing and Unsupervised Learning Approach

Authors: Kanika Gupta, Ashok Kumar

Abstract:

Biofilms are dense, highly hydrated cell clusters that are irreversibly attached to a substratum, to an interface or to each other, and are embedded in a self-produced gelatinous matrix composed of extracellular polymeric substances. Research in the biofilm field has become very significant, as biofilms show high mechanical resilience and resistance to antibiotic treatment and constitute a significant problem in both healthcare and other industries involving microorganisms. The massive amount of information, both stated and hidden, in the biofilm literature is growing exponentially; it is therefore not possible for researchers and practitioners to manually extract and relate information from different written resources. So, the current work proposes and discusses the use of text mining techniques for the extraction of information from a biofilm literature corpus containing 34,306 documents. It is very difficult and expensive to obtain annotated material for biomedical literature, as the literature is unstructured, i.e., free text. Therefore, we considered an unsupervised approach, where no annotated training data are necessary, and using this approach we developed a system that classifies the text on the basis of growth and development, drug effects, radiation effects, classification, and physiology of biofilms. For this, a two-step structure was used, where the first step is to extract keywords from the biofilm literature using a metathesaurus and standard natural language processing tools like Rapid Miner_v5.3, and the second step is to discover relations between the genes extracted from the whole set of biofilm literature using pubmed.mineR_v1.0.11. We applied this unsupervised approach, which is the machine learning task of inferring a function to describe hidden structure from 'unlabeled' data, to the extracted datasets to develop classifiers using the WinPython-64bit_v3.5.4.0Qt5 and RStudio_v0.99.467 packages, which automatically classify the text using the mentioned sets. The developed classifiers were tested on a large data set of biofilm literature, which showed that the proposed unsupervised approach is promising as well as suited for semi-automatic labeling of the extracted relations. The entire body of information was stored in a relational database hosted locally on the server. The generated biofilm vocabulary and gene relations will be significant for researchers dealing with biofilm research, making their search easy and efficient, as the keywords and genes can be directly mapped to the documents used for database development.
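
In the same unsupervised spirit (though the study used RapidMiner and R rather than the code below), here is a minimal sketch of clustering abstracts on TF-IDF features and labelling each cluster by its top terms; the six-document corpus and the choice of three clusters are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Invented miniature corpus standing in for the 34,306-document collection.
docs = [
    "biofilm growth and development on catheter surfaces",
    "nutrient limitation slows biofilm growth and maturation",
    "antibiotic drug effects on biofilm eradication",
    "sub-inhibitory drug concentrations induce biofilm tolerance",
    "uv radiation effects on biofilm viability",
    "gamma radiation alters biofilm matrix physiology",
]
vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(docs)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for c in range(3):   # label each cluster by its five heaviest terms
    top = km.cluster_centers_[c].argsort()[::-1][:5]
    print(c, [terms[i] for i in top])
```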

Keywords: biofilms literature, classifiers development, text mining, unsupervised learning approach, unstructured data, relational database

Procedia PDF Downloads 144
208 Comparing Xbar Charts: Conventional versus Reweighted Robust Estimation Methods for Univariate Data Sets

Authors: Ece Cigdem Mutlu, Burak Alakent

Abstract:

Maintaining the quality of manufactured products at a desired level depends on the stability of the process dispersion and location parameters and on the detection of perturbations in these parameters as promptly as possible. The Shewhart control chart is the most widely used technique in statistical process monitoring to monitor the quality of products and control the process mean and variability. In the application of Xbar control charts, the sample standard deviation and sample mean are known to be the most efficient conventional estimators of process dispersion and location, respectively, based on the assumption of independent and normally distributed data. On the other hand, there is no guarantee that real-world data are normally distributed. When process parameters are estimated from Phase I data clouded with outliers, the efficiency of traditional estimators is significantly reduced, and the performance of Xbar charts is undesirably low; e.g., occasional outliers in the rational subgroups of the Phase I data set may considerably affect the sample mean and standard deviation, resulting in a serious delay in the detection of inferior products in Phase II. For more efficient application of control charts, it is necessary to use estimators that are robust against the contaminations which may exist in Phase I. In the current study, we present a simple approach to construct robust Xbar control charts using the average distance to the median, the Qn estimator of scale, and the M-estimator of scale with logistic psi-function in the estimation of the process dispersion parameter, and the Harrell-Davis qth quantile estimator, the Hodges-Lehmann estimator and the M-estimator of location with Huber and logistic psi-functions in the estimation of the process location parameter. The Phase I efficiency of the proposed estimators and the Phase II performance of Xbar charts constructed from these estimators are compared with the conventional mean and standard deviation statistics, both under normality and against diffuse-localized and symmetric-asymmetric contaminations, using 50,000 Monte Carlo simulations in MATLAB. Consequently, it is found that robust estimators yield parameter estimates with higher efficiency against all types of contaminations, and that Xbar charts constructed using robust estimators have higher power in detecting disturbances, compared to conventional methods. Additionally, utilizing individuals charts to screen outlier subgroups and employing different combinations of dispersion and location estimators on subgroups and individual observations are found to improve the performance of Xbar charts.
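
A small sketch of building robust Phase I estimates with two of the estimators named above (the Hodges-Lehmann estimator for location and the average distance to the median for scale); the data are simulated, and the normal-theory constants used for the limits are uncalibrated placeholders rather than the bias-corrected values a production chart would require.

```python
import numpy as np
from itertools import combinations

def hodges_lehmann(x):
    """Median of all pairwise averages (robust location)."""
    return np.median([(a + b) / 2 for a, b in combinations(x, 2)])

def avg_dist_to_median(x):
    """Average absolute distance to the subgroup median (robust scale)."""
    return np.mean(np.abs(x - np.median(x)))

rng = np.random.default_rng(0)
n, m = 5, 25
phase1 = rng.normal(10, 1, size=(m, n))   # 25 rational subgroups of size 5
phase1[3] += 6                            # one contaminated subgroup

centers = np.array([hodges_lehmann(g) for g in phase1])
scales = np.array([avg_dist_to_median(g) for g in phase1])
mu_hat = np.median(centers)
sigma_hat = np.median(scales) / 0.7979    # E|X - median| = sigma*sqrt(2/pi)

ucl = mu_hat + 3 * sigma_hat / np.sqrt(n) # Phase II Xbar limits
lcl = mu_hat - 3 * sigma_hat / np.sqrt(n)
print(round(lcl, 3), round(ucl, 3))       # barely moved by the outliers
```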

Keywords: average run length, M-estimators, quality control, robust estimators

Procedia PDF Downloads 170
207 Glutamine Supplementation and Resistance Training on Anthropometric Indices, Immunoglobulins, and Cortisol Levels

Authors: Alireza Barari, Saeed Shirali, Ahmad Abdi

Abstract:

Introduction: Exercise has contradictory effects on the immune system. Glutamine supplementation may increase the resistance of the immune system in athletes. Glutamine is one of the most recognized immune nutrients, serving as a fuel source and a substrate in the synthesis of nucleotides and amino acids, and it is also known to be part of the antioxidant defense. Several studies have shown that improving glutamine levels in plasma and tissues can have beneficial effects on the function of immune cells such as lymphocytes and neutrophils. This study aimed to investigate the effects of resistance training, and of training combined with glutamine supplementation, on the levels of cortisol and immunoglobulins in untrained young men. The research shows that physical training can increase cytokines in the athlete’s body; glutamine may, however, counteract the negative effects of resistance training on immune function and the stability of the mast cell membrane. Materials and methods: This semi-experimental study was conducted on 30 male non-athletes. They were randomly divided into three groups: control (no exercise), resistance training, and resistance training with glutamine supplementation. Resistance training was applied for 4 weeks, with glutamine supplementation of 0.3 g/kg/day after practice. The resistance-training program consisted of nine exercises (leg press, lat pull, chest press, squat, seated row, abdominal crunch, shoulder press, biceps curl and triceps press down) four times per week. Participants performed 3 sets of 10 repetitions at 60–75% of 1-RM. Anthropometric indices (weight, body mass index, and body fat percentage), maximal oxygen uptake (VO2max), cortisol levels and immunoglobulins (IgA, IgG, IgM) were evaluated pre- and post-test. Results: Results showed that four weeks of resistance training, with and without glutamine, caused a significant increase in body weight and BMI, and a significant decrease (P < 0.001) in BF. VO2max also increased in both the exercise group (P < 0.05) and the exercise-with-glutamine group (P < 0.001), and in both groups a significant reduction in IgG (P < 0.05) was observed. However, no significant differences were observed in the levels of cortisol, IgA or IgM in any of the groups, no significant change was observed in any parameter in the control group, and no significant difference was observed between the groups. Discussion: The alterations in hormonal and immunological parameters can be used to assess the effect of overload on the body, whether acute or chronic. The plasma concentration of glutamine has been associated with the functionality of the immune system in individuals submitted to intense physical training. Resistance training has destructive effects on the immune system, and glutamine supplementation cannot neutralize the damaging effects of power exercise on the immune system.

Keywords: glutamine, resistance training, immunoglobulins, cortisol

Procedia PDF Downloads 453
206 Study of Open Spaces in Urban Residential Clusters in India

Authors: Renuka G. Oka

Abstract:

From chowks to streets to verandahs to courtyards, residential open spaces are very significantly placed in the traditional urban neighborhoods of India. At various levels of intersection, the open spaces, with their attributes like juxtaposition with the built fabric, scale, climate sensitivity and response, and multi-functionality, reflect and respond to the patterns of human interactions. Also, these spaces tend to be quite well utilized. On the other hand, it is a common sight to see an imbalanced utilization of open spaces in newly/recently planned residential clusters. This may be due to a lack of activity generators around them, wrong locations, excess provision, or improper incorporation of the aforementioned design attributes. These casual observations suggest the necessity for a systematic study of current residential open spaces. The exploratory study thus attempts to draw lessons through a structured inspection of residential open spaces to understand the effective environment as revealed through their use patterns. Here, residential open spaces are considered in a wider sense, to incorporate all the un-built fabric around; they thus include both use spaces and access spaces. For the study, open spaces in ten exemplary housing clusters/societies built during the last ten years across India are studied. A threefold inquiry is attempted in this direction. The first relates to identifying and determining the effects of various physical functions like space organization, size, hierarchy, and thermal and optical comfort on the performance of residential open spaces. The second part sets out to understand socio-cultural variations in values, lifestyles, and beliefs which determine the activity choices and behavioral preferences of users for the respective residential open spaces. The third inquiry further observes the application of these research findings to the design process to derive meaningful and qualitative design advice. However, the study also emphasizes developing a suitable framework of analysis and carving out appropriate methods and approaches to probe these aspects of the inquiry. Given this emphasis, a considerable portion of the research details the conceptual framework for the study. This framework is supported by an in-depth search of the available literature. The findings are worked into design solutions which integrate open space systems with the overall design process for residential clusters. The open spaces in residential areas present great complexities, both in terms of their use patterns and the determinants of their functional responses. The broad aim of the study is, therefore, to arrive at a reconsideration of the standards and qualitative parameters used by designers, on the basis of a more substantial inquiry into the use patterns of open spaces in residential areas.

Keywords: open spaces, physical and social determinants, residential clusters, use patterns

Procedia PDF Downloads 119
205 Tracing Sources of Sediment in an Arid River, Southern Iran

Authors: Hesam Gholami

Abstract:

Elevated suspended sediment loads in riverine systems, resulting from accelerated erosion due to human activities, are a serious threat to the sustainable management of watersheds and the ecosystem services therein worldwide. Therefore, mitigation of deleterious sediment effects as a distributed or non-point pollution source in catchments requires reliable provenance information. Sediment tracing or sediment fingerprinting, as a combined process consisting of sampling, laboratory measurements, different statistical tests, and the application of mixing or unmixing models, is a useful technique for discriminating the sources of sediments. From 1996 to the present, different aspects of this technique, such as grouping the sources (spatial and individual sources), discriminating the potential sources by different statistical techniques, and modification of mixing and unmixing models, have been introduced and modified by many researchers worldwide, and have been applied to identify the provenance of fine materials in agricultural, rural, mountainous, and coastal catchments, and in large catchments with numerous lakes and reservoirs. In the last two decades, efforts exploring the uncertainties associated with sediment fingerprinting results have attracted increasing attention. The frameworks used to quantify the uncertainty associated with fingerprinting estimates can be divided into three groups comprising Monte Carlo simulation, Bayesian approaches and generalized likelihood uncertainty estimation (GLUE). Given the above background, the primary goal of this study was to apply geochemical fingerprinting within the GLUE framework to the estimation of sub-basin spatial sediment source contributions in the arid Mehran River catchment in southern Iran, which drains into the Persian Gulf. The accuracy of GLUE predictions generated using four different sets of statistical tests for discriminating three sub-basin spatial sources was evaluated using 10 virtual sediment (VS) samples with known source contributions, based on the root mean square error (RMSE) and mean absolute error (MAE). Based on the results, the contributions modeled by GLUE for the western, central and eastern sub-basins are 1-42% (overall mean 20%), 0.5-30% (overall mean 12%) and 55-84% (overall mean 68%), respectively. According to the mean absolute fit (MAF; ≥ 95% for all target sediment samples) and goodness-of-fit (GOF; ≥ 99% for all samples), our suggested modeling approach is an accurate technique to quantify the sources of sediments in catchments. Overall, the estimated source proportions can help watershed engineers plan the targeting of conservation programs for soil and water resources.
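
A toy version of the GLUE unmixing step described above: sample candidate source proportions on the simplex, score each against the target sediment's tracer signature, and keep the 'behavioural' sets. The tracer concentrations below are invented, not the Mehran catchment geochemistry.

```python
import numpy as np

rng = np.random.default_rng(0)
# Rows: hypothetical mean tracer signatures of the three sub-basin sources.
sources = np.array([[12.0, 3.1, 45.0],    # western
                    [9.5, 4.0, 52.0],     # central
                    [15.2, 2.2, 38.0]])   # eastern
target = np.array([13.9, 2.6, 41.0])      # downstream sediment sample

n = 100_000
props = rng.dirichlet(np.ones(3), size=n)      # candidate contribution sets
pred = props @ sources                         # predicted mixture signatures

# Mean-absolute-fit style score; retain sets above a behavioural threshold.
maf = 1.0 - np.abs(pred - target).sum(axis=1) / np.abs(target).sum()
behavioural = props[maf > 0.95]
print(behavioural.mean(axis=0))                # mean contribution per source
print(len(behavioural), "behavioural sets retained")
```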

Keywords: sediment source tracing, generalized likelihood uncertainty estimation, virtual sediment mixtures, Iran

Procedia PDF Downloads 47
204 Impact of Climate Change on Flow Regime in Himalayan Basins, Nepal

Authors: Tirtha Raj Adhikari, Lochan Prasad Devkota

Abstract:

This research studied the hydrological regime of three glacierized river basins in the Khumbu, Langtang and Annapurna regions of Nepal using the Hydrologiska Byråns Vattenbalansavdelning (HBV) model, HBV-light 3.0. Future discharge scenarios were also studied using downscaled climate data derived from a statistical downscaling method. General Circulation Models (GCMs) successfully simulate future climate variability and climate change on a global scale; however, poor spatial resolution constrains their application for impact studies at a regional or local level. The dynamically downscaled precipitation and temperature data from the Coupled Global Circulation Model 3 (CGCM3) were used for the climate projection, under the A2 and A1B SRES scenarios. In addition, observed historical temperature, precipitation and discharge data were collected from 14 different hydro-meteorological locations for the implementation of this study, which included watershed and hydro-meteorological characterization, trend analysis and water balance computation. The simulated precipitation and temperature were corrected for bias before being used in the HBV-light 3.0 conceptual rainfall-runoff model to predict the flow regime, in which the Genetic Algorithm and Powell (GAP) optimization approach and subsequent calibration were used to obtain several parameter sets that finally reproduced the observed streamflow. Except in summer, the analysis showed increasing trends in annual as well as seasonal precipitation during the period 2001 - 2060 for both the A2 and A1B scenarios over the three basins under investigation. In these river basins, the model projected warmer days in every season of the entire period from 2001 to 2060 for both the A1B and A2 scenarios. These warming trends are higher in maximum than in minimum temperatures throughout the year, indicating an increasing trend in the daily temperature range due to the recent global warming phenomenon. Furthermore, summer discharge shows a decreasing trend in the Langtang Khola (Langtang region) but an increasing trend in the Modi Khola (Annapurna region) as well as the Dudh Koshi (Khumbu region) river basin. The changes in the flow regime are more pronounced during the later parts of the future decades than during the earlier parts in all basins. Annual water surpluses of 1419 mm, 177 mm and 49 mm are observed in the Annapurna, Langtang and Khumbu regions, respectively.
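
For readers unfamiliar with HBV-type models, the sketch below shows a stripped-down degree-day snow routine of the kind at the core of such conceptual rainfall-runoff models (simplified: HBV-light also handles refreezing and liquid-water retention in the snowpack). The parameter values are illustrative, not the calibrated sets from this study.

```python
def snow_step(swe, temp, precip, tt=0.0, cfmax=3.5):
    """One daily step of a degree-day snow routine.
    swe: snow water equivalent (mm); tt: threshold temperature (deg C);
    cfmax: degree-day melt factor (mm per deg C per day).
    Returns (new swe, water reaching the soil routine in mm)."""
    if temp < tt:                         # precipitation falls as snow
        return swe + precip, 0.0
    melt = min(swe, cfmax * (temp - tt))  # melt is capped by the snowpack
    return swe - melt, precip + melt      # rain plus melt becomes input

# Tiny usage example over four hypothetical days:
swe = 100.0
for t, p in [(-5.0, 10.0), (-1.0, 5.0), (2.0, 0.0), (6.0, 8.0)]:
    swe, runoff_input = snow_step(swe, t, p)
    print(round(swe, 1), round(runoff_input, 1))
```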

Keywords: temperature, precipitation, water discharge, water balance, global warming

Procedia PDF Downloads 318
203 Antenatal Monitoring of Pre-Eclampsia in a Low Resource Setting

Authors: Alina Rahim, Joanne Moffatt, Jessica Taylor, Joseph Hartland, Tamer Abdelrazik

Abstract:

Background: In 2011, 15% of maternal deaths in Uganda were due to hypertensive disorders (pre-eclampsia and eclampsia). The majority of these deaths are avoidable with optimum antenatal care. The aim of the study was to evaluate how antenatal monitoring of pre-eclampsia was carried out in a low resource setting and to identify barriers to best practice as recommended by the World Health Organisation (WHO), as part of a 4th-year medical student External Student Selected Component field trip. Method: Women admitted to hospital with pre-eclampsia in rural Uganda (Villa Maria and Kitovu Hospitals) over a year-long period were identified using the maternity register and antenatal record book. It was not possible to obtain notes for all cases identified on the maternity register; therefore, a total of thirty sets of notes were reviewed. The management was recorded and compared to the Ugandan National Guidelines and WHO recommendations. Additional qualitative information on routine practice was established by interviewing staff members from the obstetric and midwifery teams. Results: From the records available, all patients in this sample were managed according to WHO recommendations during labour. The rate of Caesarean section as a mode of delivery was noted to be high in this group of patients: 56% at Villa Maria and 46% at Kitovu. Antenatally, two WHO recommendations were not routinely met: aspirin prophylaxis and calcium supplementation. This was due to a lack of resources, and to poor attendance at antenatal clinic leading to poor detection of high-risk patients. Medical management of pre-eclampsia varied between individual patients; overall, 93.3% complied with the Ugandan national guidelines. Two patients were treated with diuretics, which is against WHO guidance. Discussion: Antenatal monitoring of pre-eclampsia is important in reducing severe morbidity, long-term disability and mortality amongst mothers and their babies. Poor attendance at antenatal clinic is a barrier to healthcare in low-income countries, and increasing awareness among women of the importance of these visits should be encouraged. The majority of cases reviewed in this sample were treated according to the Ugandan National Guidelines. It is recommended to commence aspirin prophylaxis for women at high risk of developing pre-eclampsia and to create detailed guidelines for Uganda, which would allow for standardisation of care country-wide.

Keywords: antenatal monitoring, low resource setting, pre-eclampsia, Uganda

Procedia PDF Downloads 204
202 Collaboration versus Cooperation: Grassroots Activism in Divided Cities and Communication Networks

Authors: R. Barbour

Abstract:

Peace-building organisations act as a network of information for communities. Through fieldwork, it was highlighted that grassroots organisations and activists may cooperate with each other in their actions of peace-building; however, they would not collaborate. Within two divided societies, Nicosia in Cyprus and Jerusalem in Israel, organisations and activists distinguish between activities that are more ‘cooperative’ than ‘collaborative’. This theme became apparent during informal conversations and semi-structured interviews with various members of the activist communities. The idea needs further exploration, as these distinctions could impact the efficiency of peacebuilding activities within divided societies. Civil societies within divided landscapes, both physical and social, play an important role in conflict resolution. How organisations and activists interact with each other can be very influential with regard to peacebuilding activities, and working together sets a positive example for divided communities. Cooperation may be considered a primary level of interaction between CSOs. Therefore, at the beginning of a working relationship, organisations cooperate over basic agendas, parallel power structures and a shared focus, which lead to the same objective. Over time, in some instances, due to varying factors such as funding and greater trust and understanding within the relationship, processes progressed to more collaborative ways of working. It is evident that NGOs and activist groups are highly independent and focus on their own agendas before coming together over shared issues. At this time, there appears to be more collaboration among CSOs and activists in Nicosia than in Jerusalem. The aims and objectives of agendas also influence how organisations work together. In recent years, Nicosia, and Cyprus in general, have perhaps shifted their focus from peace-building initiatives to environmental issues, which have become new-age reconciliation topics. Civil society does not automatically consist of like-minded organisations; however, solidarity within social groups can create ties that bring people and resources together. In unequal societies, such as those in Nicosia and Jerusalem, it is these ties that cut across groups and are essential for social cohesion. Societies are collections of social groups: individuals who have come together over common beliefs. These groups in turn shape identities and determine the values and structures within societies. At many different levels and stages, social groups work together through cooperation and collaboration. These structures in turn have the capability to open up networks to less powerful or excluded groups, with the aim of producing social cohesion, which may contribute to social stability and economic welfare over an extended period.

Keywords: collaboration, cooperation, grassroots activism, networks of communication

Procedia PDF Downloads 131
201 The Essential but Uncertain Role of the Vietnamese Association of Cities of Vietnam in Promoting Community-Based Housing Upgrading

Authors: T. Nguyen, H. Rennie, S. Vallance, M. Mackay

Abstract:

Municipal associations, also called unions, leagues or federations of municipalities, have been established worldwide to represent the interests and needs of urban governments in the face of increasing urban issues. In 2008, the Association of Cities of Vietnam (ACVN) joined the Asian Coalition for Community Action (ACCA) program and introduced the community-based upgrading approach to help Vietnamese cities address urban upgrading issues. While this community-based upgrading approach has only been implemented in a small number of Vietnamese cities and its replication has faced certain challenges, it is worth exploring how the Association of Cities of Vietnam played its role in implementing some reportedly successful projects. This paper responds to this inquiry and presents results extracted from the author’s PhD study, which set out with the general objective of critically examining how social capital dimensions (i.e., bonding, bridging and linking) were formed, mobilized and maintained in a local collective and community-based upgrading process. Methodologically, the study utilized the general categorization of bonding, bridging and linking capitals to explore and confirm how social capital operated in the real context of a community-based upgrading process, particularly in the context of Vietnam. To do this, the study conducted two exploratory and qualitative case studies of housing projects in the Friendship neighbourhood (Vinh city) and the Binh Dong neighbourhood (Tan An city). This paper presents the findings of the Friendship neighbourhood case study, focusing on the role of the Vietnamese municipal association in forming, mobilizing and maintaining bonding, bridging and linking capital for a community-based upgrading process. The findings highlight the essential but uncertain role of ACVN, an organization with a hybrid legitimacy status, in such a process. The results improve our understanding both practically and theoretically. Practically, the results offer insights into the performance of a municipal association operating in the transitioning socio-political context of Vietnam. Theoretically, the paper questions the necessity of categorizing social capital dimensions (i.e., bonding, bridging and linking) by suggesting a holistic approach to looking at social capital for urban governance issues within the Vietnamese context and perhaps elsewhere.

Keywords: bonding capital, bridging capital, municipal association, linking capital, social capital, housing upgrading

Procedia PDF Downloads 123
200 Development of Earthquake and Typhoon Loss Models for Japan, Specifically Designed for Underwriting and Enterprise Risk Management Cycles

Authors: Nozar Kishi, Babak Kamrani, Filmon Habte

Abstract:

Natural hazards such as earthquakes and tropical storms are very frequent and highly destructive in Japan. Japan experiences, every year on average, more than 10 tropical cyclones that come within damaging reach, and earthquakes of moment magnitude 6 or greater. We have developed stochastic catastrophe models to address the risk associated with the entire suite of damaging events in Japan, for use by insurance, reinsurance, NGOs and governmental institutions. KCC’s (Karen Clark & Company) catastrophe models are procedures composed of four modular segments: 1) stochastic event sets that represent the statistics of past events, 2) hazard attenuation functions that model the local intensity, 3) vulnerability functions that address the repair need for local buildings exposed to the hazard, and 4) a financial module addressing the policy conditions, which estimates the losses incurred as a result. The events module is comprised of events (faults or tracks) with different intensities and corresponding probabilities, based on the same statistics as observed in the historical catalog. The hazard module delivers the hazard intensity (ground motion or wind speed) at the location of each building. The vulnerability module provides a library of damage functions that relate the hazard intensity to the repair need as a percentage of the replacement value. The financial module reports the expected loss, given the payoff policies and regulations. We have divided Japan into regions with similar typhoon climatology, and into earthquake micro-zones, within each of which the characteristics of events are similar enough for stochastic modeling. For each region, then, a set of stochastic events is developed that results in events with intensities corresponding to annual occurrence probabilities that are of interest to financial communities, such as 0.01, 0.004, etc. The intensities corresponding to these probabilities (called Characteristic Events, CEs) are selected through a super-stratified sampling approach that is based on the primary uncertainty. Region-specific hazard intensity attenuation functions followed by vulnerability models lead to the estimation of repair costs. An extensive economic exposure model addresses all local construction and occupancy types, such as post-and-lintel Shinkabe and Okabe wood construction, as well as concrete confined in steel, SRC (steel-reinforced concrete), and high-rise buildings.
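
To make the financial module's output concrete, here is a hedged sketch of turning a stochastic event set into an exceedance-probability (EP) curve and an average annual loss (AAL), under a Poisson occurrence assumption; the event losses and annual rates are invented placeholders.

```python
import numpy as np

# Invented event set: loss per event and its annual occurrence rate.
losses = np.array([5e6, 2e7, 8e7, 3e8, 1.2e9])
rates = np.array([0.2, 0.05, 0.01, 0.004, 0.001])

order = np.argsort(losses)[::-1]            # sort events by loss, descending
exceed_rate = np.cumsum(rates[order])       # annual rate of exceeding each loss
ep = 1.0 - np.exp(-exceed_rate)             # annual exceedance probability

aal = float(np.sum(losses * rates))         # average annual loss
for loss, prob in zip(losses[order], ep):
    print(f"loss >= {loss:.2e} with annual probability {prob:.4f}")
print(f"AAL = {aal:.2e}")
```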

Keywords: typhoon, earthquake, Japan, catastrophe modelling, stochastic modeling, stratified sampling, loss model, ERM

Procedia PDF Downloads 239
199 A Pilot Study of Umbilical Cord Mini-Clamp

Authors: Seng Sing Tan

Abstract:

Clamping of the umbilical cord after birth is widely practiced as a part of labor management. Further improvements were proposed to produce a smaller, lighter and more comfortable clamp while still maintaining current standards of clamping. A detachable holder was also developed to facilitate the clamping process. This pilot study on the efficacy of the mini-clamp was conducted to evaluate the tightness of the seal and the firmness of the clamp's grip on the umbilical cord. The study was carried out at National University Hospital, using 5 sets of placental cord, from which 18 samples of approximately 10 cm each were harvested. The test results showed that the mini-clamp was able to stop the flow through the cord after clamping without rupturing the cord. All slip tests passed with a load of 0.2 kg. In the pressure testing, saline at 30 kPa was injected into the umbilical veins. Although there was no physical sign of fluid leaking through the end secured by the mini-clamp, the results showed that the samples were not able to sustain the pressure set during the tests. 12 of the 18 test samples showed a pressure drop of more than 7% in 30 seconds. During the pressure leak test, it was observed in several samples that, when pressurized, small droplets of saline formed on the outer surface of the cord lining membrane. It was thus hypothesized that the pressure drop was likely caused by the perfusion of the injected saline through the Wharton’s jelly and the cord lining membrane. The average pressure in the umbilical vein is roughly 2.67 kPa (20 mmHg), less than 10% of the 30 kPa (~225 mmHg) set for the pressure testing. As such, the pressure set could be over-specified, leading to undesirable outcomes. The development of the mini-clamp was an attempt to increase the comfort of newborn babies while maintaining the usability and efficacy of hospital-grade umbilical cord clamps. It would be unfair to attribute the pressure leak in this study entirely to the design and efficacy of the mini-clamp. Considering the unexpected leakage of saline through the umbilical membrane due to the over-specified pressure exerted on the umbilical veins, improvements can definitely be made to the existing experimental setup to obtain a more accurate and conclusive outcome. If proven conclusive and effective, the mini-clamp with a detachable holder could be a smaller and potentially cheaper alternative to existing umbilical cord clamps. In addition, future clinical trials could be conducted to determine the user-friendliness of the mini-clamp and evaluate its practicality in the clinical setting by labor ward clinicians. A further potential improvement could be proposed on the sustainability of the mini-clamp: a biodegradable clamp would revolutionise the industry in an increasingly environmentally sustainable world.

Keywords: leak test, mini-clamp, slip test, umbilical cord

Procedia PDF Downloads 108
198 The Position of Islamic Jurisprudence in UAE Private Law: Analytical Study

Authors: Iyad Jadalhaq, Mohammed El Hadi El Maknouzi

Abstract:

The place of Islamic law in the legal system of the UAE is best understood by introducing a differentiation between its role as a formal source of law and its influence as a material source of law. What this differentiation helps clarify is that the corpus of Islamic law constitutes a much deeper influence on adjudication, law-making and the legal profession in the UAE than might appear at first sight from its formal position in the division of labor between courts, or in legislative lists of sources of law. This paper aims to examine the role of Shariah in the UAE private law system by assessing the comprehensiveness of Shariah in the legal system as a whole, rather than in the limited sense of its role as a source of law under Article 1 of the Civil Transactions Law. Turning to the role of Shariah as a formal source of law, it is useful to start from Article 1 of the UAE Civil Code. This provision lays out the formal hierarchy of sources of UAE private law, these being legislation, Islamic law, and custom. Hence, when deciding a civil dispute, a judge should first refer to positive legislation in force in the UAE. Lacking a rule to cover the case before him/her, the judge ought then to refer directly to Islamic law. Only if the matter lacks regulation in Islamic law may the judge appeal to custom. Accordingly, in connection with civil transactions, Shariah is formally presented here as the second source of law. Still, Shariah addresses many other issues beyond civil transactions, including matters of morals, worship, and belief. In Article 1 of the UAE Civil Code, however, the reference to Islamic law ought to be understood as limited to the rules it lays out for civil transactions. There are four main sets of courts in the judicial systems of the UAE, whose competence is based on whether a dispute touches upon civil and commercial transactions, criminal offenses, personal status, or labor relations. This sectorial and multi-tiered organization of courts as a whole constitutes an institutional development compatible with the long-standing affirmation in the Shariah of the legitimacy of the judiciary. Indeed, Islamic law authorizes the governing authorities to organize the judiciary, including by allocating specific types of cases to particular kinds of judges depending on the value of the case, or by assigning judges to a specific place in which they are to exercise their jurisdictional function. In view of this, the contemporary organization of courts in the UAE can be regarded as an organic adaptation, aligned with Shariah rules on the assignment of jurisdictional authority, to the growing complexity of modern society. We can therefore conclude that Shariah plays a comprehensive role in the entire legal system of the United Arab Emirates, spanning legislation, the judicial system, and institutional and administrative work.

Keywords: Islamic jurisprudence, Shariah, UAE civil code, UAE private law

Procedia PDF Downloads 99
197 An Application of Quantile Regression to Large-Scale Disaster Research

Authors: Katarzyna Wyka, Dana Sylvan, JoAnn Difede

Abstract:

Background and significance: Following a disaster, population-based screening programs are routinely established to assess the physical and psychological consequences of exposure. These data sets are highly skewed, as only a small percentage of trauma-exposed individuals develop health issues. Commonly used statistical methodology in post-disaster mental health generally involves population-averaged models. Such models aim to capture the overall response to the disaster and its aftermath; however, they may not be sensitive enough to accommodate population heterogeneity in symptomatology, such as post-traumatic stress or depressive symptoms. Methods: We use an archival longitudinal data set from the Weill-Cornell 9/11 Mental Health Screening Program established following the World Trade Center (WTC) terrorist attacks in New York in 2001. Participants are rescue and recovery workers who participated in the site cleanup and restoration (n=2960). The main outcome is the post-traumatic stress disorder (PTSD) symptom severity score assessed via clinician interviews (CAPS). For a detailed understanding of the response to the disaster and its aftermath, we adapt quantile regression methodology, with particular focus on predictors of extreme distress and of resilience to trauma. Results: The response variable was defined as the quantile of the CAPS score for each individual under two different scenarios, specifying the unconditional quantiles based on 1) clinically meaningful CAPS cutoff values and 2) the CAPS distribution in the population. We present graphical summaries of the differential effects. For instance, we found that the WTC exposures of seeing bodies and feeling that one's life was in danger during rescue/recovery work were associated with very high PTSD symptoms. A similar effect was apparent in individuals with a prior psychiatric history. Differential effects were also present for age and education level. Conclusion: We evaluate the utility of quantile regression in disaster research, in contrast to the commonly used population-averaged models, focusing on how the effects of risk factors for post-traumatic stress symptoms vary across quantiles. This innovative approach provides a comprehensive understanding of the relationship between dependent and independent variables and could be used for developing tailored training programs and response plans for different vulnerability groups.
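For readers unfamiliar with the technique, the following sketch shows how such a model can be fitted with statsmodels on simulated data; the variable names (caps, exposure, prior_psych) are illustrative assumptions, not the actual fields of the Weill-Cornell dataset.

```python
# Hedged sketch of the quantile-regression approach described above, on
# simulated skewed data; compares effects across conditional quantiles.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "exposure": rng.integers(0, 2, n),     # e.g. saw bodies / life in danger
    "prior_psych": rng.integers(0, 2, n),  # prior psychiatric history
    "age": rng.normal(40, 10, n),
})
# Skewed outcome: most scores low, effects concentrated in the upper tail
df["caps"] = (5 + 3 * df.exposure + 2 * df.prior_psych
              + rng.gamma(2, 5, n) * (1 + df.exposure))

# Fit the same model at several conditional quantiles; contrast with OLS,
# which plays the role of the population-averaged model
for q in (0.25, 0.50, 0.90):
    fit = smf.quantreg("caps ~ exposure + prior_psych + age", df).fit(q=q)
    print(f"q={q}: exposure effect = {fit.params['exposure']:.2f}")
print("OLS:", smf.ols("caps ~ exposure + prior_psych + age", df).fit()
      .params["exposure"].round(2))
```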

Keywords: disaster workers, post traumatic stress, PTSD, quantile regression

Procedia PDF Downloads 262
196 Increases in Serum Erythropoietin Hormone in Recreational Breath-Hold Divers Following a Series of Repeated Apnoeas: Apnoea beyond Freediving

Authors: Antonis Elia, Theo Loizou, Gladys Onambele-Pearson, Matthew Barlow, Georgina Stebbings

Abstract:

Hypoxic conditions have been reported to enhance red blood cell production in both acclimatised lowlanders and altitude-adapted populations. This process is mediated by the hormone erythropoietin, which is released predominantly by the hypoxic kidney. A higher haemoglobin concentration was previously reported in elite breath-hold divers than in elite skiers and untrained individuals. The present study therefore aimed to investigate whether apnoea-induced hypoxia could produce a significant increase in serum erythropoietin concentration in recreational breath-hold divers, which would offer an explanation for the higher haemoglobin levels observed in elite breath-hold divers. Establishing that apnoea-induced hypoxia significantly increases serum erythropoietin might suggest that apnoea can be used as an alternative acclimatisation method to high-altitude exposure. Seven healthy, recreational male breath-hold divers performed two sets of five 180-second breath-holds, with a ten-minute supine rest between sets and a two-minute seated rest between apnoeas. During each breath-hold, participants' heart rate and peripheral oxygen saturation were recorded every 10 seconds until the end of the 180-second breath-hold. After each 180-second breath-hold, a capillary blood sample was collected from the finger to determine circulating haemoglobin levels. Following completion of the apnoeic protocol, three blood samples were collected at 30, 90 and 180 minutes to measure circulating erythropoietin levels. A significant interaction between erythropoietin and time was observed (F(3,18) = 4.72, p < 0.001), with significant increases in erythropoietin evident at 30 (t(6) = -5.035, p < 0.05), 90 (t(6) = -6.162, p < 0.05) and 180 (t(6) = -7.232, p < 0.001) minutes after the last apnoea when compared to baseline. The corresponding average increases over baseline were 16% at 30, 23% at 90 and 40% at 180 minutes after the last apnoea. A significant interaction between haemoglobin and time was observed (F(78,84) = 20.814, p < 0.001), with significant increases in haemoglobin evident at the fifth (t(29) = -1.124, p < 0.001), ninth (t(29) = -1.357, p < 0.001) and tenth (t(29) = -1.211, p < 0.05) apnoeas when compared to baseline. A significant interaction between peripheral oxygen saturation and time was also observed (F(10,60) = 408.23, p < 0.001). The present study demonstrates that a series of ten 180-second breath-holds is sufficient to induce a significant increase in the circulating erythropoietin concentration of recreational breath-hold divers. These observations suggest that apnoea-induced hypoxia may serve as an alternative acclimatisation method to high-altitude exposure.
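The within-subject comparisons reported above (each post-apnoea time point against baseline) correspond to paired t-tests, which the following sketch runs on simulated values; the numbers mimic the reported ~16/23/40% rises and are not the study's data.

```python
# Minimal sketch of the paired baseline-vs-post comparisons described above,
# on simulated serum EPO values for seven divers (units illustrative).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 7                                    # seven divers
baseline = rng.normal(8.0, 1.5, n)       # EPO at baseline, mIU/mL (invented)
for label, rise in (("30 min", 1.16), ("90 min", 1.23), ("180 min", 1.40)):
    post = baseline * rise + rng.normal(0, 0.4, n)   # simulated post-apnoea EPO
    t, p = stats.ttest_rel(baseline, post)           # paired t-test, df = n-1
    print(f"{label}: t({n-1}) = {t:.3f}, p = {p:.4f}")
```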

Keywords: apnoea, breath-holding, diving reflex, erythropoietin, haemoglobin

Procedia PDF Downloads 159
195 Euthanasia Reconsidered: Voting and Multicriteria Decision-Making in Medical Ethics

Authors: J. Hakula

Abstract:

Discussion on euthanasia is an ongoing process. Euthanasia is defined as 'deliberately ending a patient's life by administering life-ending drugs at the patient's explicit request'. With few exceptions, human societies worldwide have not been able to agree on some fundamental issues concerning ultimate decisions of life and death. Outranking methods from voting-oriented social choice theory and multicriteria decision-making (MCDM) can be applied to such issues in medical ethics. There is a wide range of voting methods, and using different methods the same group of voters can end up with different outcomes. In the MCDM context, decision alternatives take the place of candidates, and criteria that of voters. The view chosen here is that of a single decision-maker. Initially, three alternatives and three criteria are chosen. Pairwise comparisons and basic positional voting rules - plurality, anti-plurality and the Borda count - are applied. In the MCDM solution, criteria are weighted by giving them the more 'votes' the more important the decision-maker ranks them. A hypothetical example of evaluating properties of euthanasia consists of three alternatives A, B, and C, which are ranked according to three criteria - the patient's willingness to cooperate, general action orientation (active/passive), and cost-effectiveness - with the criteria carrying weights 7, 5, and 4, respectively. Using the plurality rule and the weights given to the criteria, A is the best alternative, with B and C thereafter. In pairwise comparisons, both B and C defeat A by a weight score of 9 to 7. On the other hand, B is defeated by C with weights 11 to 5. Thus, C (the so-called Condorcet winner) defeats both A and B. The best alternative under the plurality principle is not necessarily the best in the pairwise sense, and the conflict remains unsolved with or without additional weights. Positional rules are sensitive to variations in the alternative set. In the example above, the plurality rule gives the ranking ABC. If we leave out C, the plurality ranking between A and B becomes BA. Withdrawing B or A, the ranking is CA or CB, respectively. In pairwise comparisons, an analogous problem emerges when the number of criteria is varied. Cyclic preferences may lead to a total tie, in which no (rational) choice between the alternatives can be made. In conclusion, the choice of the best commitment to re-evaluate euthanasia, with the criteria left unchanged, depends entirely on the evaluation method used. The right strategies matter, too. Future studies might address the problem of abstention - a situation where voters who do not vote may still see their best candidate win, or, vice versa, where actively giving the ballot to their first-ranked choice might lead to a total loss. In MCDM terms, a decision might occur in which some central criteria are not actively involved in the best choice made.
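The worked example can be reproduced in a few lines. The per-criterion rankings below are an assumption, reverse-engineered so that they match every tally reported in the abstract (plurality A > B > C; B and C each beat A 9-7; C beats B 11-5).

```python
# Reconstruction of the weighted voting example above. Rankings are assumed,
# chosen to reproduce the abstract's reported tallies.
weights = {"patient_cooperation": 7, "action_orientation": 5, "cost_effectiveness": 4}
rankings = {
    "patient_cooperation": ["A", "C", "B"],
    "action_orientation":  ["B", "C", "A"],
    "cost_effectiveness":  ["C", "B", "A"],
}
alts = ["A", "B", "C"]

# Plurality: each criterion gives its full weight to its top-ranked alternative
plurality = {a: sum(w for c, w in weights.items() if rankings[c][0] == a)
             for a in alts}
print("plurality:", plurality)          # {'A': 7, 'B': 5, 'C': 4} -> A wins

# Pairwise (Condorcet) comparisons: sum the weights of criteria preferring x to y
for x, y in [("A", "B"), ("A", "C"), ("B", "C")]:
    x_score = sum(w for c, w in weights.items()
                  if rankings[c].index(x) < rankings[c].index(y))
    print(f"{x} vs {y}: {x_score}-{sum(weights.values()) - x_score}")

# Borda count: 2 points for first place, 1 for second, 0 for third, weighted
borda = {a: sum(w * (2 - rankings[c].index(a)) for c, w in weights.items())
         for a in alts}
print("Borda:", borda)                  # A: 14, B: 14, C: 20 -> C wins
```

Running it shows the conflict the abstract describes: A wins under plurality, while C is the Condorcet winner (and, under these assumed rankings, also the Borda winner).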

Keywords: medical ethics, euthanasia, voting methods, multicriteria decision-making

Procedia PDF Downloads 128
194 Flow Links Curiosity and Creativity: The Mediating Role of Flow

Authors: Nicola S. Schutte, John M. Malouff

Abstract:

Introduction: Curiosity is a positive emotion and motivational state that consists of the desire to know. Curiosity comprises several related dimensions, including a desire for exploration, deprivation sensitivity, and stress tolerance. Creativity involves generating novel and valuable ideas or products. How curiosity may prompt greater creativity remains to be investigated. The phenomenon of flow may link curiosity and creativity. Flow is characterized by intense concentration and absorption and gives rise to optimal performance. Objective of Study: The objective of the present study was to investigate whether the phenomenon of flow may link curiosity with creativity. Methods and Design: Fifty-seven individuals from Australia (45 women and 12 men, mean age 35.33, SD=9.4) participated. Participants were asked to design a program encouraging residents in a local community to conserve water and to record the elements of their program in writing. Participants were then asked to rate their experience as they developed and wrote about their program. Participants rated their experience on the Dimensional Curiosity Measure sub-scales assessing the exploration, deprivation sensitivity, and stress tolerance facets of curiosity, and on the Flow Short Scale. Reliability of the measures, as assessed by Cronbach's alpha, was as follows: Exploration Curiosity = .92, Deprivation Sensitivity Curiosity = .66, Stress Tolerance Curiosity = .93, and Flow = .96. Two raters independently coded each participant's water conservation program description for creativity. The mixed-model intraclass correlation coefficient for the two sets of ratings was .73. The mean of the two ratings produced the final creativity score for each participant. Results: During the experience of designing the program, all three types of curiosity were significantly associated with flow. Pearson r correlations were as follows: Exploration Curiosity and flow, r = .68 (higher Exploration Curiosity was associated with more flow); Deprivation Sensitivity Curiosity and flow, r = .39 (higher Deprivation Sensitivity Curiosity was associated with more flow); and Stress Tolerance Curiosity and flow, r = .44 (more stress tolerance in relation to novelty and exploration was associated with more flow). Greater experience of flow was significantly associated with greater creativity in designing the water conservation program, r = .39. The direct associations between the dimensions of curiosity and creativity did not reach significance. Even so, the indirect relationships between the dimensions of curiosity and creativity, through the mediating effect of the experience of flow, were significant. Mediation analysis using PROCESS showed that flow linked Exploration Curiosity with creativity, standardized beta = .23, 95% CI [.02, .25] for the indirect effect; Deprivation Sensitivity Curiosity with creativity, standardized beta = .14, 95% CI [.04, .29] for the indirect effect; and Stress Tolerance Curiosity with creativity, standardized beta = .13, 95% CI [.02, .27] for the indirect effect. Conclusions: When engaging in an activity, higher levels of curiosity are associated with greater flow. More flow is associated with higher levels of creativity. Programs intended to increase flow or creativity might build on these findings and also explore causal relationships.
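An indirect-effect test of this kind can be sketched with a simple percentile bootstrap of the a*b product, which mirrors the logic of the PROCESS macro without reproducing the study; the data below are simulated and the effect sizes are illustrative assumptions.

```python
# Sketch of a mediation (indirect-effect) test: curiosity -> flow -> creativity,
# with a percentile bootstrap CI for a*b. Simulated data, not the study's.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 57
curiosity = rng.normal(size=n)
flow = 0.6 * curiosity + rng.normal(scale=0.8, size=n)    # a path (assumed)
creativity = 0.4 * flow + rng.normal(scale=0.9, size=n)   # b path (assumed)

def indirect_effect(x, m, y):
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]                 # x -> m
    b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]  # m -> y | x
    return a * b

boot = []
for _ in range(5000):
    idx = rng.integers(0, n, n)          # resample participants with replacement
    boot.append(indirect_effect(curiosity[idx], flow[idx], creativity[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(curiosity, flow, creativity):.3f}, "
      f"95% CI [{lo:.3f}, {hi:.3f}]")
```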

Keywords: creativity, curiosity, flow, motivation

Procedia PDF Downloads 161
193 Elasto-Plastic Analysis of Structures Using Adaptive Gaussian Springs Based Applied Element Method

Authors: Mai Abdul Latif, Yuntian Feng

Abstract:

The Applied Element Method (AEM) was developed to aid in the analysis of structural collapse. Currently available methods cannot deal with structural collapse accurately; AEM, however, can simulate the behavior of a structure from an initial unloaded state up to collapse. The elements in AEM are connected by sets of normal and shear springs along the edges of the elements, which represent the stresses and strains of the element in that region. The elements are rigid, and the material properties are introduced through the spring stiffness. Nonlinear dynamic analysis of progressive collapse has been widely modelled using the finite element method; however, difficulties arise in the presence of excessively deformed elements with cracking or crushing, the computational cost is high, and choosing appropriate material models is not straightforward. Here, the Applied Element Method is developed and coded to significantly improve the accuracy and reduce the computational cost of the analysis. The scheme works for both linear elastic and nonlinear cases, including elasto-plastic materials. This paper focuses on elastic and elasto-plastic material behaviour, where the number of springs required for an accurate analysis is tested. A steel cantilever beam is used as the structural element for the analysis. The first modification of the method uses Gaussian quadrature to distribute the springs. Usually, the springs are equally distributed along the face of the element, but it was found that with Gaussian springs only 2 springs were required for perfectly elastic cases, while with equally spaced springs at least 5 were required. The method runs on a Newton-Raphson iteration scheme, and quadratic convergence was obtained. The second modification adapts the number of springs required depending on the elasticity of the material. After the first Newton-Raphson iteration, the Von Mises stress condition is used to calculate the stresses in the springs, and the springs are classified as elastic or plastic. Transition springs, located exactly between the elastic and plastic regions, are then interpolated between regions to strictly identify the elastic and plastic regions in the cross-section. Since a rectangular cross-section was analyzed, there were two plastic regions (top and bottom) and one elastic region (middle). The results of the present study show that elasto-plastic cases require only 2 springs for the elastic region and 2 springs for each plastic region, reducing the minimum number of springs in elasto-plastic cases to only 6 and thereby improving the computational cost. All the work is done in MATLAB, and the results will be compared to finite element models of structural elements in ANSYS.
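The Gaussian spring placement can be illustrated with a short sketch: spring locations along an element edge are taken at Gauss-Legendre quadrature points, with the quadrature weights fixing each spring's tributary length. This is a sketch of the placement idea only (in Python rather than the paper's MATLAB); the stiffness expression is one common AEM form, not necessarily the authors' exact formulation.

```python
# Sketch of Gaussian spring placement along an AEM element edge.
import numpy as np

def gaussian_springs(edge_length, n_springs):
    """Positions (from edge start) and tributary lengths of n Gauss springs."""
    xi, w = np.polynomial.legendre.leggauss(n_springs)   # nodes/weights on [-1, 1]
    positions = 0.5 * edge_length * (xi + 1.0)           # map nodes to [0, L]
    tributary = 0.5 * edge_length * w                    # weights -> length shares
    return positions, tributary

# 2 Gauss springs suffice in the elastic case (vs >= 5 equally spaced ones)
pos, trib = gaussian_springs(edge_length=0.2, n_springs=2)
print(pos, trib)        # tributary lengths sum to the full edge length, 0.2

def normal_spring_stiffness(E, thickness, tributary_length, d):
    """One common AEM form: E * tributary area / distance between element centroids."""
    return E * thickness * tributary_length / d
```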

Keywords: applied element method, elasto-plastic, Gaussian springs, nonlinear

Procedia PDF Downloads 204
192 Alkali Activation of Fly Ash, Metakaolin and Slag Blends: Fresh and Hardened Properties

Authors: Weiliang Gong, Lissa Gomes, Lucile Raymond, Hui Xu, Werner Lutze, Ian L. Pegg

Abstract:

Alkali-activated materials, particularly geopolymers, have attracted much interest in academia, and commercial applications are on the rise as well. Geopolymers are typically produced by the reaction of one or two aluminosilicates with an alkaline solution at room temperature. Fly ash is an important aluminosilicate source. However, low-Ca fly ash, the byproduct of burning hard (black) coal, reacts and sets slowly at room temperature, and the development of mechanical durability, e.g., compressive strength, is slow as well. The use of fly ashes with relatively high contents (> 6%) of unburned carbon, i.e., high loss on ignition (LOI), is particularly disadvantageous. This paper will show to what extent these impediments can be mitigated by mixing the fly ash with one or two additional aluminosilicate sources. The fly ash used here is generated at the Orlando power plant (Florida, USA); it is low in Ca (< 1.5% CaO) and has a high LOI of > 6%. The additional aluminosilicate sources are metakaolin and blast furnace slag. Binary fly ash-metakaolin and ternary fly ash-metakaolin-slag geopolymers were prepared, and properties of the geopolymer pastes before and after setting were measured. Fresh mixtures of aluminosilicates with an alkaline solution were studied by Vicat needle penetration, rheology, and isothermal calorimetry up to initial setting and beyond. The hardened geopolymers were investigated by SEM/EDS, and the compressive strength was measured. Initial setting (the fluid-to-solid transition) was indicated by a rapid increase in yield stress and plastic viscosity. The rheological times of setting were always smaller than the Vicat times of setting. Both times of setting decreased with increasing replacement of fly ash by blast furnace slag in the ternary fly ash-metakaolin-slag system. As expected, setting with only Orlando fly ash was the slowest. Replacing 20% of the fly ash with metakaolin shortened the set time, and replacing increasing fractions of fly ash in the binary system by blast furnace slag (up to 30%) shortened the time of setting even further. The 28-day compressive strength increased drastically, from < 20 MPa to 90 MPa. The most interesting finding relates to the calorimetric measurements: the use of two or three aluminosilicates generated significantly more heat (20 to 65%) than calculated from the weighted sum of the individual aluminosilicates. This synergetic heat contributes to, and may be responsible for, most of the increase in compressive strength of our binary and ternary geopolymers. The synergetic heat effect may also be related to increased incorporation of calcium in sodium aluminosilicate hydrate to form a hybrid (N,C)-A-S-H gel. The time of setting will be correlated with heat release and maximum heat flow.
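The synergetic-heat comparison reduces to a simple baseline calculation, sketched below; the heats and mix fractions are placeholder values, not the paper's calorimetric data.

```python
# Toy calculation of the 'synergetic heat' described above: the excess of a
# blend's measured heat over the weighted sum of its components' heats.
def synergetic_heat(measured_J_per_g, fractions, component_heats):
    """Excess heat (%) over the weighted sum of the individual aluminosilicates."""
    weighted_sum = sum(f * h for f, h in zip(fractions, component_heats))
    return 100.0 * (measured_J_per_g - weighted_sum) / weighted_sum

# e.g. a 50% fly ash / 20% metakaolin / 30% slag blend (all numbers invented)
print(synergetic_heat(260.0, [0.5, 0.2, 0.3], [120.0, 350.0, 280.0]))  # ~21.5%
```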

Keywords: alkali-activated materials, binary and ternary geopolymers, blends of fly ash, metakaolin and blast furnace slag, rheology, synergetic heats

Procedia PDF Downloads 96
191 Sweepline Algorithm for Voronoi Diagram of Polygonal Sites

Authors: Dmitry A. Koptelov, Leonid M. Mestetskiy

Abstract:

The Voronoi Diagram (VD) of a finite set of disjoint simple polygons, called sites, is a partition of the plane into loci (one locus per site): regions consisting of the points that are closer to a given site than to all others. A set of polygons is a universal model for many applications in engineering, geoinformatics, design, computer vision, and graphics. Construction of the VD of polygons is usually done by reduction to the task of constructing the VD of segments, for which there are effective O(n log n) algorithms for n segments. The reduction also includes preprocessing (constructing segments from the polygons' sides) and postprocessing (constructing each polygon's locus by merging the loci of its sides). This approach does not take into account two specific properties of the resulting segment sites. Firstly, all these segments are connected in pairs at the vertices of the polygons. Secondly, the interior of the polygon lies on one side of each segment, and the polygon is obviously included in its own locus. Using these properties in the VD construction algorithm is a resource for reducing computations. This article proposes an algorithm for the direct construction of the VD of polygonal sites. The algorithm is based on the sweepline paradigm, which allows these properties to be exploited effectively. The solution is again based on a reduction. Preprocessing constructs the set of sites from the vertices and edges of the polygons; each site is given an orientation such that the interior of the polygon lies to its left. The proposed algorithm constructs the VD for this set of oriented sites with the sweepline paradigm. Postprocessing selects the edges of this VD formed by the centers of empty circles touching different polygons. The improved efficiency of the proposed sweepline algorithm over the general Fortune algorithm is achieved through the following fundamental solutions: 1. The algorithm constructs only those VD edges which lie outside the polygons; the concept of oriented sites makes it possible to avoid constructing VD edges located inside the polygons. 2. The list of events in the sweepline algorithm has a special property: the majority of events are connected with 'medium' polygon vertices, where one incident polygon side lies behind the sweepline and the other in front of it. The proposed algorithm processes such events in constant time, rather than in logarithmic time as in the general Fortune algorithm. The proposed algorithm is fully implemented and tested on a large number of examples. Its high reliability and efficiency are also confirmed by computational experiments with complex sets of several thousand polygons. It should be noted that, despite the considerable time that has passed since the publication of Fortune's algorithm in 1986, no full-scale implementation of that algorithm for an arbitrary set of segment sites has been made. The proposed algorithm fills this gap for an important special case - a set of sites formed by polygons.
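The preprocessing step (turning each polygon into oriented vertex and edge sites so that the interior lies to the left of every edge) can be sketched as follows; this shows only the site construction, not the authors' sweepline event handling.

```python
# Sketch of the oriented-site preprocessing described above: each polygon
# yields vertex sites and edge sites with the interior on the left of each edge.
def signed_area(poly):
    """Shoelace formula: positive if the vertex sequence is counter-clockwise."""
    return 0.5 * sum(x1 * y2 - x2 * y1
                     for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]))

def oriented_sites(poly):
    """Vertex and edge sites with interior-on-the-left orientation."""
    if signed_area(poly) < 0:          # enforce CCW so the interior is to the left
        poly = poly[::-1]
    sites = [("vertex", p) for p in poly]
    sites += [("edge", a, b) for a, b in zip(poly, poly[1:] + poly[:1])]
    return sites

square = [(0, 0), (0, 2), (2, 2), (2, 0)]  # clockwise input gets reversed
for s in oriented_sites(square):
    print(s)
```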

Keywords: voronoi diagram, sweepline, polygon sites, Fortune's algorithm, segment sites

Procedia PDF Downloads 151
190 Long-Term Variabilities and Tendencies in the Zonally Averaged TIMED-SABER Ozone and Temperature in the Middle Atmosphere over 10°N-15°N

Authors: Oindrila Nath, S. Sridharan

Abstract:

Long-term (2002-2012) temperature and ozone measurements by the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument onboard the Thermosphere Ionosphere Mesosphere Energetics and Dynamics (TIMED) satellite, zonally averaged over 10°N-15°N, are used to study long-term changes and the responses to the solar cycle, the quasi-biennial oscillation (QBO) and the El Niño Southern Oscillation (ENSO). This region is selected because it provides more accurate long-term trends and variabilities than were possible earlier with lidar measurements over Gadanki (13.5°N, 79.2°E), which are limited to cloud-free nights, whereas continuous data sets of SABER temperature and ozone are available. Regression analysis of temperature shows a cooling trend of 0.5 K/decade in the stratosphere and of 3 K/decade in the mesosphere. Ozone shows a statistically significant decreasing trend of 1.3 ppmv per decade in the mesosphere, although there is a small positive trend in the stratosphere at 25 km; otherwise no significant ozone trend is observed in the stratosphere. A negative ozone-QBO response (0.02 ppmv/QBO), a positive ozone-solar cycle response (0.91 ppmv/100 SFU) and a negative response to ENSO (0.51 ppmv/SOI) are found mainly in the mesosphere, whereas a positive ozone response to ENSO (0.23 ppmv/SOI) is pronounced in the stratosphere (20-30 km). The temperature response to the solar cycle is most positive (3.74 K/100 SFU) in the upper mesosphere; its response to ENSO is negative around 80 km and positive around 90-100 km, and its response to the QBO is insignificant at most heights. The composite monthly mean of the ozone volume mixing ratio shows maximum values, around 10 ppmv, during the pre-monsoon and post-monsoon seasons in the middle stratosphere (25-30 km) and in the upper mesosphere (85-95 km). The composite monthly mean of temperature shows a semi-annual variation, with large values (~250-260 K) in the equinox months and lower values in the solstice months in the upper stratosphere and lower mesosphere (40-55 km), whereas the semi-annual oscillation (SAO) becomes weaker above 55 km. The semi-annual variation appears again at 80-90 km, with large values in the spring equinox and winter months. In the upper mesosphere (90-100 km), lower temperatures (~170-190 K) prevail in all months except September, when the temperature is slightly higher. The height profiles of the amplitudes of the semi-annual and annual oscillations in ozone show maximum values of 6 ppmv and 2.5 ppmv, respectively, in the upper mesosphere (80-100 km), whereas the SAO and AO in temperature show maximum values of 5.8 K and 4.6 K in the lower and middle mesosphere around 60-85 km. The phase profiles of both the SAO and the AO show downward progressions. These results are being compared with long-term lidar temperature measurements over Gadanki (13.5°N, 79.2°E), and the results obtained will be presented at the meeting.
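The trend-plus-proxy regression used here has a standard multiple-linear-regression form, sketched below on synthetic series; the proxy indices and coefficients are placeholders, not the SABER data or the paper's fitted values.

```python
# Sketch of the regression analysis described above: a monthly ozone (or
# temperature) series regressed on a linear trend plus solar-flux, QBO and
# SOI proxy indices. All series here are synthetic placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
months = np.arange(132)                              # 2002-2012: 11 years
t_dec = months / 120.0                               # time in decades
f107 = 120 + 50 * np.sin(2 * np.pi * months / 132)   # solar-flux proxy (SFU)
qbo = 10 * np.sin(2 * np.pi * months / 28)           # ~28-month QBO proxy
soi = rng.normal(size=months.size)                   # ENSO (SOI) proxy

ozone = (6.0 - 1.3 * t_dec + 0.009 * f107 - 0.02 * qbo
         - 0.05 * soi + rng.normal(scale=0.3, size=months.size))

X = sm.add_constant(np.column_stack([t_dec, f107, qbo, soi]))
fit = sm.OLS(ozone, X).fit()
for name, b in zip(["const", "trend/decade", "per SFU", "per QBO", "per SOI"],
                   fit.params):
    print(f"{name:>13}: {b:+.4f}")
```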

Keywords: trends, QBO, solar cycle, ENSO, ozone, temperature

Procedia PDF Downloads 391
189 A Feature Clustering-Based Sequential Selection Approach for Color Texture Classification

Authors: Mohamed Alimoussa, Alice Porebski, Nicolas Vandenbroucke, Rachid Oulad Haj Thami, Sana El Fkihi

Abstract:

Color and texture are highly discriminant visual cues that provide essential information in many types of images. Color texture representation and classification is therefore one of the most challenging problems in computer vision and image processing applications. Color textures can be represented in different color spaces by using multiple image descriptors, which generate a high-dimensional set of texture features. In order to reduce the dimensionality of the feature set, feature selection techniques can be used. The goal of feature selection is to find a relevant subset of an original feature space that can improve the accuracy and efficiency of a classification algorithm. Traditionally, feature selection focuses on removing irrelevant features, neglecting the possible redundancy among the relevant ones. This is why some feature selection approaches use feature clustering analysis to aid and guide the search. These techniques can be divided into two categories: i) feature clustering-based ranking algorithms, which use feature clustering as an analysis step before feature ranking (after dividing the feature set into groups, these approaches perform feature ranking to select the most discriminant feature of each group); and ii) feature clustering-based subset search algorithms, which can use feature clustering following one of three strategies: as an initial step before the search, bound up and combined with the search, or as an alternative to and replacement for the search. In this paper, we propose a new feature clustering-based sequential selection approach for color texture representation and classification. Our approach is a three-step algorithm. First, irrelevant features are removed from the feature set using a class-correlation measure. Then, using a new automatic feature clustering algorithm, the feature set is divided into several feature clusters. Finally, a sequential search algorithm, based on a filter model and a separability measure, builds a relevant and non-redundant feature subset: at each step, a feature is selected, and the features of the same cluster are removed and thus not considered thereafter. This significantly speeds up the selection process, since a large number of redundant features are eliminated at each step. The proposed algorithm uses the clustering algorithm bound up and combined with the search. Experiments using a combination of two well-known texture descriptors, namely Haralick features extracted from Reduced Size Chromatic Co-occurrence Matrices (RSCCMs) and features extracted from Local Binary Pattern (LBP) image histograms, on five color texture data sets (Outex, NewBarktex, Parquet, Stex and USPtex) demonstrate the efficiency of our method compared to seven state-of-the-art methods in terms of accuracy and computation time.
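The three-step structure can be sketched generically as below; the correlation threshold, the use of k-means for the feature clustering, and the cross-validated LDA score standing in for the separability measure are all placeholder choices, not the authors' specific components.

```python
# Sketch of the three-step scheme described above: (1) drop features weakly
# correlated with the class, (2) cluster the survivors, (3) greedy forward
# search that discards a chosen feature's cluster mates at each step.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def select_features(X, y, n_clusters=5, corr_min=0.05, n_select=3):
    # Step 1: remove irrelevant features via a class-correlation measure
    corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
    keep = np.where(corr >= corr_min)[0]
    # Step 2: cluster the remaining features (each feature is a clustering sample)
    n_clusters = min(n_clusters, len(keep))
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X[:, keep].T)
    # Step 3: sequential forward search; selecting a feature removes its cluster
    selected, candidates = [], list(range(len(keep)))
    while candidates and len(selected) < n_select:
        def score(j):
            cols = keep[selected + [j]]
            return cross_val_score(LinearDiscriminantAnalysis(),
                                   X[:, cols], y, cv=3).mean()
        best = max(candidates, key=score)
        selected.append(best)
        candidates = [j for j in candidates
                      if j != best and labels[j] != labels[best]]
    return keep[selected]

X = np.random.default_rng(4).normal(size=(90, 20))
y = np.repeat([0, 1, 2], 30)
print(select_features(X, y))
```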

Keywords: feature selection, color texture classification, feature clustering, color LBP, chromatic co-occurrence matrix

Procedia PDF Downloads 105