Search results for: measures for improvement
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7685

965 Selection of Qualitative Research Strategy for Bullying and Harassment in Sport

Authors: J. Vveinhardt, V. B. Fominiene, L. Jeseviciute-Ufartiene

Abstract:

Relevance of Research: Qualitative research is still regarded as highly subjective and not sufficiently scientific for achieving objective research results. However, it is agreed that a qualitative study allows the hidden motives of research participants to be revealed, new theories to be created, and the problem field to be highlighted. Enough research has been done to reveal these aspects of qualitative research. However, each research area has its own specificity, and sport is unique due to the image of its participants, who are understood as strong and invincible. A sport participant might therefore have personal difficulty recognizing himself as a victim in the context of bullying and harassment. Accordingly, the researcher faces a dilemma in getting a victim in sport to speak at all, so the ethical aspects of qualitative research become relevant. The multitude of sport fields also makes determining the sample size of the research problematic. Thus, the corresponding problem of this research is which qualitative research strategies are the most suitable for revealing the phenomenon of bullying and harassment in sport, and why. The object of the research is the qualitative research strategy for bullying and harassment in sport. The purpose of the research is to analyze strategies of qualitative research and select a suitable one for bullying and harassment in sport. The method of research was a scientific analysis of the application of qualitative research to bullying and harassment. Research Results: Four main strategies are applied in qualitative research: inductive, deductive, retroductive, and abductive. Inductive and deductive strategies are commonly used in researching bullying and harassment in sport. The inductive strategy is applied as quantitative research in order to reveal and describe the prevalence of bullying and harassment in sport. The deductive strategy is used through qualitative methods in order to explain the causes of bullying and harassment and to predict the actions of the participants of bullying and harassment in sport and the possible consequences of these actions. The most commonly used qualitative method for the research of bullying and harassment in sport is the semi-structured interview, in spoken or written form. However, these methods may restrict the openness of the participants in the study, whether through recording on a dictaphone or through collecting incomplete answers when the survey participant responds in writing, because it is not possible to refine the answers. Qualitative research is becoming more prevalent in terms of technology-mediated research data. For example, focus group research in a closed forum allows participants to interact freely with each other because of the confidentiality of the selected participants in the study. The moderator can purposefully formulate and submit problem-solving questions to the participants. Hence, the application of intelligent technology through in-depth qualitative research can help discover new and specific information on bullying and harassment in sport. Acknowledgement: This research is funded by the European Social Fund according to the activity ‘Improvement of researchers’ qualification by implementing world-class R&D projects’ of Measure No. 09.3.3-LMT-K-712.

Keywords: bullying, focus group, harassment, narrative, sport, qualitative research

Procedia PDF Downloads 182
964 Coupling Random Demand and Route Selection in the Transportation Network Design Problem

Authors: Shabnam Najafi, Metin Turkay

Abstract:

The network design problem (NDP) is used to determine the set of optimal values for certain pre-specified decision variables, such as capacity expansion of nodes and links, by optimizing various system performance measures including safety, congestion, and accessibility. The designed transportation network should improve the objective functions defined for the system while considering the route choice behaviors of network users at the same time. NDP studies have mostly investigated the random demand and route selection constraints separately due to computational challenges. In this work, we consider both random demand and route selection constraints simultaneously. This work presents a nonlinear stochastic model for the land use and road network design problem to address the development of different functional zones in urban areas by considering both a cost function and air pollution. The model minimizes the cost function and air pollution simultaneously, with random demand and a stochastic route selection constraint, and aims to optimize network performance via road capacity expansion. The Bureau of Public Roads (BPR) link impedance function is used to determine the travel time on each link. We consider a city with origin and destination nodes, which can be residential, employment, or both, and a set of existing paths between origin-destination (O-D) pairs. The case of an increasing employed population is analyzed to determine the amount of roads and origin zones simultaneously. Minimizing the travel and expansion cost of routes and origin zones on one side, and minimizing CO emission on the other side, are considered in this analysis at the same time. In this work, demand between O-D pairs is random, and the network flow pattern is subject to stochastic user equilibrium, specifically a logit route choice model. Treating both demand and route choice as random makes the model more applicable to the design of urban networks. The epsilon-constraint method is one way to solve both linear and nonlinear multi-objective problems, and it is used to solve the problem in this work. The problem was solved by keeping the first objective (the cost function) as the objective function of the problem and turning the second objective into a constraint that should be less than an epsilon, where epsilon is an upper bound of the emission function. The value of epsilon is varied from the worst to the best value of the emission function to generate the family of solutions representing the Pareto set. A numerical example with 2 origin zones, 2 destination zones, and 7 links is solved with GAMS, and the set of Pareto points is obtained. There are 15 efficient solutions; as the cost function value increases, the emission function value decreases, and vice versa.
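
As a rough illustration of the two building blocks named above, the sketch below (not from the paper) combines the standard BPR impedance function with an epsilon-constraint sweep on a toy one-link, one-variable problem; all numbers and objective forms are illustrative assumptions.

```python
# Toy epsilon-constraint sweep: minimize cost subject to emission <= eps,
# sweeping eps from the worst to the best emission value (Pareto set).
import numpy as np
from scipy.optimize import minimize

def bpr_travel_time(flow, t0=1.0, capacity=100.0, alpha=0.15, beta=4.0):
    """Standard BPR link impedance: t = t0 * (1 + alpha * (flow/capacity)^beta)."""
    return t0 * (1.0 + alpha * (flow / capacity) ** beta)

flow = 150.0  # fixed demand on a single toy link; x[0] is capacity expansion
cost = lambda x: bpr_travel_time(flow, capacity=100.0 + x[0]) * flow + 2.0 * x[0]
emission = lambda x: 0.5 * bpr_travel_time(flow, capacity=100.0 + x[0]) * flow

pareto = []
for eps in np.linspace(130.0, 80.0, 11):  # emission upper bound, worst -> best
    res = minimize(cost, x0=[10.0], bounds=[(0.0, 200.0)],
                   constraints=[{"type": "ineq", "fun": lambda x, e=eps: e - emission(x)}])
    if res.success:
        pareto.append((res.fun, float(emission(res.x))))

for c, e in pareto:
    print(f"cost={c:.1f}  emission={e:.1f}")  # cost rises as emission falls
```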

Keywords: epsilon-constraint, multi-objective, network design, stochastic

Procedia PDF Downloads 648
963 The Role of Community Beliefs and Practices on the Spread of Ebola in Uganda, September 2022

Authors: Helen Nelly Naiga, Jane Frances Zalwango, Saudah N. Kizito, Brian Agaba, Brenda N. Simbwa, Maria Goretti Zalwango, Richard Migisha, Benon Kwesiga, Daniel Kadobera, Alex Ario Riolexus, Sarah Paige, Julie R. Harris

Abstract:

Background: Traditional community beliefs and practices can facilitate the spread of Ebola virus during outbreaks. On September 20, 2022, Uganda declared a Sudan Virus Disease (SVD) outbreak after a case was confirmed in Mubende District. During September–November 2022, the outbreak spread to eight additional districts. We investigated the role of community beliefs and practices in the spread of Sudan virus (SUDV) in Uganda in 2022. Methods: A qualitative study was conducted in Mubende, Kassanda, and Kyegegwa districts in February 2023. We conducted nine focus group discussions (FGDs) and six key informant interviews (KIIs). FGDs included SVD survivors, household members of SVD patients, traditional healers, religious leaders, and community leaders. Key informants included community, political, and religious leaders, traditional healers, and health workers. We asked about community beliefs and practices to understand if and how they contributed to the spread of SUDV. Interviews were recorded, translated, transcribed, and analyzed thematically. Results: Frequently reported themes included beliefs that the community deaths, later found to be due to SVD, were the result of witchcraft or poisoning. Key informants reported that SVD patients frequently first consulted traditional healers or spiritual leaders before seeking formal healthcare, and noted that traditional healers treated patients with signs and symptoms of SVD without protective measures. Additional themes included religious leaders conducting laying-on-of-hands prayers for SVD patients and symptomatic contacts, SVD patients and their symptomatic contacts hiding in friends’ homes, and exhumation of SVD patients originally buried in safe and dignified burials to enable traditional burials. Conclusion: Multiple community beliefs and practices likely promoted the spread of the 2022 SVD outbreak in Uganda. Engaging traditional and spiritual healers early during similar outbreaks through risk communication and community engagement efforts could facilitate outbreak control. Targeted community messaging, including clear biological explanations for clusters of deaths and information on the dangers of exhuming bodies of SVD patients, could similarly facilitate improved control in future outbreaks in Uganda.

Keywords: Ebola, Sudan virus, outbreak, beliefs, traditional

Procedia PDF Downloads 55
962 Detailed Quantum Circuit Design and Evaluation of Grover's Algorithm for the Bounded Degree Traveling Salesman Problem Using the Q# Language

Authors: Wenjun Hou, Marek Perkowski

Abstract:

The Traveling Salesman Problem is famous in computing and graph theory. In short, it asks for the Hamiltonian cycle of the least total weight in a given graph with N nodes. All variations on this problem, such as those with K-bounded-degree nodes, are classified as NP-complete in classical computing. Although several papers propose theoretical high-level designs of quantum algorithms for the Traveling Salesman Problem, to the best of our knowledge no quantum circuit implementation of these algorithms has been created. In contrast to previous papers, the goal of this paper is not to optimize some abstract complexity measure based on the number of oracle iterations, but to evaluate the real circuit and time costs on a quantum computer. Using the emerging quantum programming language Q# developed by Microsoft, which runs quantum circuits in a quantum computer simulation, an implementation of the bounded-degree problem and its respective quantum circuit were created. To apply Grover’s algorithm to this problem, a quantum oracle was designed that evaluates the cost of a particular set of edges in the graph as well as its validity as a Hamiltonian cycle. Repeating Grover’s algorithm with an oracle that finds a successively lower cost each time transforms the decision problem into an optimization problem, finding the minimum cost of Hamiltonian cycles. N log₂ K qubits are put into a uniform superposition by applying the Hadamard gate to each qubit. Within these N log₂ K qubits, the method uses an encoding in which every node is mapped to a set of its encoded edges. The oracle consists of several blocks of circuits: a custom-written edge weight adder, node index calculator, uniqueness checker, and comparator, which were all created using only quantum Toffoli gates, including their special forms, the Feynman (CNOT) and Pauli X gates. The oracle begins by using the edge encodings specified by the qubits to calculate each node that the path visits, adding up the edge weights along the way. Next, the oracle uses the calculated nodes from the previous step and checks that all the nodes are unique. Finally, the oracle checks that the calculated cost is less than the previously calculated cost. By performing the oracle an optimal number of times, a correct answer can be generated with very high probability. The oracle of Grover’s algorithm is then modified using the recalculated minimum cost value, and this procedure is repeated until the cost cannot be further reduced. This algorithm and circuit design have been verified, using several datasets, to generate correct outputs.
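
For orientation, here is a plain-Python sketch (not Q#) of the resource arithmetic described above: the N log₂ K-qubit register and the usual Grover iteration count ⌊(π/4)√(S/M)⌋, where S is the size of the search space and M the number of marked states. The example values of N, K, and M are assumptions, not figures from the paper.

```python
import math

def grover_resources(n_nodes, k_degree, n_marked=1):
    """Qubit count for the N*log2(K) edge encoding and the standard
    Grover iteration count floor((pi/4) * sqrt(S / M))."""
    qubits = n_nodes * math.ceil(math.log2(k_degree))  # edge-encoding register
    space = 2 ** qubits                                # size of the superposition
    iterations = math.floor((math.pi / 4) * math.sqrt(space / n_marked))
    return qubits, iterations

qubits, iters = grover_resources(n_nodes=8, k_degree=4, n_marked=1)
print(f"register: {qubits} qubits, oracle calls per Grover run: {iters}")
```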

Keywords: quantum computing, quantum circuit optimization, quantum algorithms, hybrid quantum algorithms, quantum programming, Grover’s algorithm, traveling salesman problem, bounded-degree TSP, minimal cost, Q# language

Procedia PDF Downloads 192
961 Different Processing Methods to Obtain a Carbon Composite Element for Cycling

Authors: Maria Fonseca, Ana Branco, Joao Graca, Rui Mendes, Pedro Mimoso

Abstract:

The present work is focused on the production of a carbon composite element for cycling through different techniques, namely blow-molding and high-pressure resin transfer molding (HP-RTM). The main objective of this work is to compare both processes for producing carbon composite elements for the cycling industry. It is well known that carbon composite components for cycling are produced mainly through blow-molding; however, this technique depends strongly on manual labour, resulting in a time-consuming production process. Comparatively, HP-RTM offers a more automated process, which should lead to higher production rates. Nevertheless, the elements produced through both techniques must be compared in order to assess whether the final products comply with the required standards of the industry. The main difference between these techniques lies in the material used. Blow-molding uses carbon prepreg (carbon fibres pre-impregnated with a resin system), and the material is laid up by hand, piece by piece, on a mould or on a hard male tool. After that, the material is cured at a high temperature. In the HP-RTM technique, by contrast, dry carbon fibres are placed in a mould, and then resin is injected at high pressure. After some research regarding the best material systems (prepregs and braids) and suppliers, an element was designed (similar to a handlebar) to be constructed. The next step was to perform FEM simulations in order to determine the best layup of the composite material. The simulations were done for the prepreg material, and the obtained layup was transposed to the braids. The selected material for the blow-molding technique was a prepreg with T700 carbon fibre (24K) and an epoxy resin system. For HP-RTM, carbon fibre elastic UD tubes and ±45° braids were used, with both 3K and 6K filaments per tow, and the resin system was an epoxy as well. After the simulations for the prepreg material, the optimized layup was [45°, -45°, 45°, -45°, 0°, 0°]. For HP-RTM, the transposed layup was [±45° (6K); 0° (6K); partial ±45° (6K); partial ±45° (6K); ±45° (3K); ±45° (3K)]. The mechanical tests showed that both elements can withstand the maximum load (in this case, 1000 N); however, the one produced through blow-molding can support higher loads (≈1300 N against 1100 N for HP-RTM). Concerning the fibre volume fraction (FVF), the HP-RTM element has a slightly higher value (>61%, compared to 59% for the blow-molding technique). Optical microscopy has shown that both elements have a low void content. In conclusion, the elements produced using HP-RTM compare well with the ones produced through blow-molding, both in mechanical testing and in visual aspect. Nevertheless, there is still room for improvement in the HP-RTM elements, since the layup of the braids and UD tubes could be optimized.

Keywords: HP-RTM, carbon composites, cycling, FEM

Procedia PDF Downloads 134
960 An Adaptive Oversampling Technique for Imbalanced Datasets

Authors: Shaukat Ali Shahee, Usha Ananthakumar

Abstract:

A data set exhibits the class imbalance problem when one class has very few examples compared to the other class; this is also referred to as between-class imbalance. Traditional classifiers fail to classify the minority class examples correctly due to their bias towards the majority class. Apart from between-class imbalance, imbalance within classes, where classes are composed of different numbers of sub-clusters and these sub-clusters contain different numbers of examples, also deteriorates the performance of the classifier. Many methods have previously been proposed for handling the imbalanced dataset problem. These methods can be classified into four categories: data preprocessing, algorithmic methods, cost-based methods, and ensembles of classifiers. Data preprocessing techniques have shown great potential, as they attempt to improve the data distribution rather than the classifier. A data preprocessing technique handles class imbalance either by increasing the minority class examples or by decreasing the majority class examples. Decreasing the majority class examples leads to loss of information, and when the minority class has an absolute rarity, removing majority class examples is generally not recommended. Existing methods for handling class imbalance do not address both between-class imbalance and within-class imbalance simultaneously. In this paper, we propose a method that handles between-class imbalance and within-class imbalance simultaneously for the binary classification problem. Removing between-class and within-class imbalance simultaneously eliminates the bias of the classifier towards bigger sub-clusters by minimizing the error domination of bigger sub-clusters in the total error. The proposed method uses model-based clustering to find the presence of sub-clusters or sub-concepts in the dataset. The number of examples oversampled among the sub-clusters is determined based on the complexity of the sub-clusters. The method also takes into consideration the scatter of the data in the feature space and adaptively copes with unseen test data using the Lowner-John ellipsoid, increasing the accuracy of the classifier. In this study, a neural network is used as the classifier, since it is one in which the total error is minimized, and removing between-class and within-class imbalance simultaneously helps the classifier give equal weight to all the sub-clusters irrespective of the classes. The proposed method is validated on 9 publicly available data sets and compared with three existing oversampling techniques that rely on the spatial location of minority class examples in the Euclidean feature space. The experimental results show the proposed method to be statistically significantly superior to the other methods in terms of various accuracy measures. Thus the proposed method can serve as a good alternative for handling various problem domains, such as credit scoring, customer churn prediction, and financial distress, that typically involve imbalanced data sets.
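
A simplified sketch of the general idea follows: minority-class sub-clusters are found with model-based clustering, and synthetic examples are interpolated within each sub-cluster, with smaller sub-clusters receiving proportionally more. The paper's complexity-based allocation and Lowner-John ellipsoid step are not reproduced here; this is an illustrative approximation only.

```python
# Model-based clustering of the minority class, then SMOTE-style
# interpolation within each sub-cluster (illustrative approximation).
import numpy as np
from sklearn.mixture import GaussianMixture

def cluster_oversample(X_min, n_new, n_components=3, seed=0):
    rng = np.random.default_rng(seed)
    gmm = GaussianMixture(n_components=n_components, random_state=seed).fit(X_min)
    labels = gmm.predict(X_min)
    synthetic = []
    for c in range(n_components):
        members = X_min[labels == c]
        if len(members) < 2:
            continue
        # give smaller sub-clusters proportionally more synthetic points
        share = int(round(n_new * (1.0 - len(members) / len(X_min)) / (n_components - 1)))
        for _ in range(share):
            a, b = members[rng.choice(len(members), 2, replace=False)]
            synthetic.append(a + rng.random() * (b - a))  # interpolate between members
    return np.array(synthetic)

X_min = np.vstack([np.random.randn(40, 2), np.random.randn(8, 2) + 5.0])
print(cluster_oversample(X_min, n_new=30).shape)
```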

Keywords: classification, imbalanced dataset, Lowner-John ellipsoid, model based clustering, oversampling

Procedia PDF Downloads 418
959 On-Ice Force-Velocity Modeling Technical Considerations

Authors: Dan Geneau, Mary Claire Geneau, Seth Lenetsky, Ming-Chang Tsai, Marc Klimstra

Abstract:

Introduction: Horizontal force-velocity profiling (HFVP) involves modeling an athlete's linear sprint kinematics to estimate valuable maximum force and velocity metrics. This approach to performance modeling has been used in field-based team sports and has recently been introduced to ice hockey as a forward skating performance assessment. While preliminary data have been collected on ice, the distance constraints of the on-ice test restrict the ability of athletes to reach their maximal velocity, which limits the model's ability to effectively estimate athlete performance. This is especially true of more elite athletes. This report explores whether athletes on ice are able to reach a velocity plateau similar to what has been seen in overground trials. Fourteen male Major Junior ice hockey players (BW = 83.87 ± 7.30 kg, height = 188 ± 3.4 cm, age = 18 ± 1.2 years, n = 14) were recruited. For on-ice sprints, participants completed a standardized warm-up consisting of skating and dynamic stretching and a progression of three skating efforts from 50% to 95%. Following the warm-up, participants completed three on-ice 45 m sprints, with three minutes of rest between each trial. For overground sprints, participants completed a dynamic warm-up similar to that of the on-ice trials, followed by three 40 m overground sprint trials. For each trial (on-ice and overground), radar (Stalker ATS II, Texas, USA) aimed at the participant's waist was used to collect instantaneous velocity. Sprint velocities were modeled using a custom Python (version 3.2) script with a mono-exponential function, similar to previous work. To determine whether on-ice trials achieved a maximum velocity (plateau), the minimum acceleration values of the modeled data at the end of the sprint were compared (using a paired t-test) between on-ice and overground trials. Significant differences (p < 0.001) between overground and on-ice minimum accelerations were observed. On-ice trials consistently reported higher final acceleration values, indicating that a maximum maintained velocity (plateau) had not been reached. Based on these preliminary findings, it is suggested that reliable HFVP metrics cannot yet be collected from all ice hockey populations using current methods. Elite male populations were not able to achieve a velocity plateau similar to what has been seen in overground trials, indicating the absence of a maximum velocity measure. With current velocity and acceleration modeling techniques, which depend on a velocity plateau, these results indicate the potential for error in on-ice HFVP measures. These findings therefore suggest that a greater on-ice sprint distance may be required, or that other velocity modeling techniques are needed in which maximal velocity is not required for a complete profile.
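
A minimal sketch of the mono-exponential velocity model mentioned above, v(t) = vmax·(1 − e^(−t/τ)), fitted with scipy on a synthetic radar trace. The exact parameterization used by the authors is an assumption here, and the final-acceleration value mirrors the plateau check described in the abstract.

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, vmax, tau):
    """Mono-exponential sprint velocity: v(t) = vmax * (1 - exp(-t / tau))."""
    return vmax * (1.0 - np.exp(-t / tau))

# synthetic radar trace standing in for a 45 m on-ice sprint
t = np.linspace(0.0, 6.0, 120)
v = mono_exp(t, vmax=9.0, tau=1.4) + np.random.normal(0.0, 0.15, t.size)

(vmax_hat, tau_hat), _ = curve_fit(mono_exp, t, v, p0=[8.0, 1.0])
accel = vmax_hat / tau_hat * np.exp(-t / tau_hat)  # analytic dv/dt of the model
print(f"vmax={vmax_hat:.2f} m/s, tau={tau_hat:.2f} s, "
      f"final acceleration={accel[-1]:.3f} m/s^2")  # near zero => plateau reached
```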

Keywords: ice-hockey, sprint, skating, power

Procedia PDF Downloads 101
958 Modeling Curriculum for High School Students to Learn about Electric Circuits

Authors: Meng-Fei Cheng, Wei-Lun Chen, Han-Chang Ma, Chi-Che Tsai

Abstract:

The recent K–12 Taiwan Science Education Curriculum Guidelines emphasize the essential role of modeling curricula in science learning; however, few modeling curricula have been designed and adopted in current science teaching. Therefore, this study aims to develop a modeling curriculum on electric circuits, to investigate any learning difficulties students have with the modeling curriculum, and to further enhance modeling teaching. This study was conducted with 44 10th-grade students in central Taiwan. Data collection included a Students' Understanding of Models in Science (SUMS) survey, which explored the students' epistemology of scientific models and modeling, and a complex circuit problem to investigate the students' modeling abilities. Data analysis included the following: (1) Paired sample t-tests were used to examine the improvement of students' modeling abilities and conceptual understanding before and after the curriculum was taught. (2) Paired sample t-tests were also utilized to determine the students' modeling abilities before and after the modeling activities, and a Pearson correlation was used to understand the relationship between students' modeling abilities during the activities and on the posttest. (3) ANOVA analysis was used during different stages of the modeling curriculum to investigate the differences between students who developed microscopic models and those who developed macroscopic models after the modeling curriculum was taught. (4) Independent sample t-tests were employed to determine whether the students who changed their models had significantly different understandings of scientific models than the students who did not. The results revealed the following: (1) After the modeling curriculum was taught, the students had made significant progress in both their understanding of the science concepts and their modeling abilities. In terms of science concepts, the modeling curriculum helped the students overcome the misconception that electric current is reduced after flowing through light bulbs. In terms of modeling abilities, the curriculum helped students employ macroscopic or microscopic models to explain the phenomena they observed. (2) Encouraging the students to explain scientific phenomena in different context prompts during the modeling process allowed them to convert their models to microscopic models, but it did not help them continuously employ microscopic models throughout the whole curriculum. The students consistently employed microscopic models only when they had help visualizing them. (3) During the modeling process, the students who revised their own models understood better that models can be changed than the students who did not. Also, the students who revised their models to explain different scientific phenomena tended to regard models as explanatory tools. In short, this study explored different strategies to facilitate students' modeling processes as well as their difficulties with the modeling process. The findings can be used to design and teach modeling curricula and help students enhance their modeling abilities.
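
For concreteness, a small sketch of the first two analyses listed above (a paired t-test on pre/post scores and a Pearson correlation), using scipy on made-up scores; the numbers are illustrative, not the study's data.

```python
from scipy import stats

pre  = [3, 4, 2, 5, 3, 4, 2, 3]   # hypothetical pre-test modeling scores
post = [5, 6, 4, 6, 5, 6, 4, 5]   # hypothetical post-test modeling scores

t_stat, p_val = stats.ttest_rel(pre, post)                    # paired t-test
r, p_corr = stats.pearsonr(post, [4, 6, 3, 6, 5, 5, 4, 5])    # vs. activity scores
print(f"paired t-test: t={t_stat:.2f}, p={p_val:.4f}; Pearson r={r:.2f}")
```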

Keywords: electric circuits, modeling curriculum, science learning, scientific model

Procedia PDF Downloads 460
957 Prediction of Sepsis Illness from Patients Vital Signs Using Long Short-Term Memory Network and Dynamic Analysis

Authors: Marcio Freire Cruz, Naoaki Ono, Shigehiko Kanaya, Carlos Arthur Mattos Teixeira Cavalcante

Abstract:

The systems that record patient care information, known as Electronic Medical Records (EMRs), and those that monitor patients' vital signs, such as heart rate, body temperature, and blood pressure, have been extremely valuable for the effectiveness of patient treatment. Several lines of research have used data from EMRs and patients' vital signs to predict illnesses. Among them, we highlight those that intend to predict, classify, or at least identify patterns of sepsis in patients under vital signs monitoring. Sepsis is an organ dysfunction caused by a dysregulated patient response to an infection, and it affects millions of people worldwide. Early detection of sepsis is expected to provide a significant improvement in its treatment. Preceding works usually combined medical, statistical, mathematical, and computational models to develop detection methods for early prediction, seeking higher accuracy with the smallest number of variables. Among other techniques, studies using survival analysis, specialist systems, machine learning, and deep learning have reached great results. In our research, patients are modeled as points moving each hour in an n-dimensional space, where n is the number of vital signs (variables). These points can reach a sepsis target point after some time. For now, the sepsis target point is calculated using the median of all patients' variables at sepsis onset. From these points, we calculate for each hour the position vector, the first derivative (velocity vector), and the second derivative (acceleration vector) of the variables to evaluate their behavior, and we construct a prediction model based on a Long Short-Term Memory (LSTM) network that includes these derivatives as explanatory variables. The accuracy of prediction 6 hours before the time of sepsis reached 83.24% considering only the vital signs; by including the position, velocity, and acceleration vectors, we obtained 94.96%. The data are collected from the Medical Information Mart for Intensive Care (MIMIC) database, a public database that contains vital signs, laboratory test results, observations, notes, and so on, from more than 60,000 patients.
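
An illustrative sketch of the feature construction and model shape described above: hourly vital signs plus their first and second finite-difference derivatives feeding an LSTM binary classifier. The layer sizes, window length, and data are assumptions, not the authors' configuration.

```python
import numpy as np
import tensorflow as tf

n_patients, window, n_vitals = 256, 24, 6          # 24 hourly steps, 6 vital signs
vitals = np.random.rand(n_patients, window, n_vitals).astype("float32")

velocity = np.gradient(vitals, axis=1)             # first derivative per hour
acceleration = np.gradient(velocity, axis=1)       # second derivative per hour
X = np.concatenate([vitals, velocity, acceleration], axis=-1)  # 18 features/step
y = np.random.randint(0, 2, size=(n_patients, 1))  # sepsis-in-6h label (dummy)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, 3 * n_vitals)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```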

Keywords: dynamic analysis, long short-term memory, prediction, sepsis

Procedia PDF Downloads 126
956 A Methodology Based on Image Processing and Deep Learning for Automatic Characterization of Graphene Oxide

Authors: Rafael do Amaral Teodoro, Leandro Augusto da Silva

Abstract:

Originating from graphite, graphene is a two-dimensional (2D) material that promises to revolutionize technology in many different areas, such as energy, telecommunications, civil construction, aviation, textiles, and medicine. This is possible because its structure, formed by carbon bonds, provides desirable optical, thermal, and mechanical characteristics that are interesting to multiple areas of the market. Thus, several research and development centers are studying different manufacturing methods and material applications of graphene, which are often compromised by the scarcity of agile and accurate methodologies to characterize the material, that is, to determine its composition, shape, size, and the number of layers and crystals. To address this need, this study proposes a computational methodology that applies deep learning to identify graphene oxide crystals in order to characterize samples by crystal size. To achieve this, a fully convolutional neural network called U-net was trained to segment SEM images of graphene oxide. The segmentation generated by the U-net is fine-tuned with a per-class standard deviation technique, which allows crystals to be distinguished with different labels through an object delimitation algorithm. As a next step, the position, area, perimeter, and lateral measures of each detected crystal are extracted from the images. This information generates a database with the dimensions of the crystals that compose the samples. Finally, graphs are automatically created showing the frequency distributions of the crystals by area size and perimeter. This methodological process resulted in a high capacity for segmentation of graphene oxide crystals, with accuracy and F-score equal to 95% and 94%, respectively, over the test set. Such performance demonstrates a high generalization capacity of the method in crystal segmentation, since its performance holds under significant changes in image extraction quality. The measurement of non-overlapping crystals presented an average error of 6% across the different measurement metrics, suggesting that the model provides high-quality measurements for non-overlapping segmentations. For overlapping crystals, however, a limitation of the model was identified. To overcome this limitation, it is important to ensure that the samples to be analyzed are properly prepared, which will minimize crystal overlap during SEM image acquisition and guarantee a lower measurement error without greater data-handling effort. All in all, the method developed is a significant time saver with high measurement value, considering that it is capable of measuring hundreds of graphene oxide crystals in seconds, saving weeks of manual work.
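
A sketch of the post-segmentation measurement step described above: given a binary mask such as a U-net would produce, connected crystals are labeled and their position, area, and perimeter extracted with scikit-image. The mask here is synthetic, and the U-net itself is not reproduced.

```python
import numpy as np
from skimage import measure

mask = np.zeros((128, 128), dtype=bool)
mask[10:40, 10:50] = True      # stand-in "crystal" 1
mask[70:110, 60:90] = True     # stand-in "crystal" 2

labels = measure.label(mask)                     # object delimitation
for region in measure.regionprops(labels):       # per-crystal measurements
    cy, cx = region.centroid
    print(f"crystal {region.label}: centroid=({cy:.0f},{cx:.0f}), "
          f"area={region.area}, perimeter={region.perimeter:.1f}")
```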

Keywords: characterization, graphene oxide, nanomaterials, U-net, deep learning

Procedia PDF Downloads 160
955 Social Problems and Gender Wage Gap Faced by Working Women in Readymade Garment Sector of Pakistan

Authors: Narjis Kahtoon

Abstract:

The issue of wage discrimination on the basis of gender, together with the social problems faced by working women, has been a significant research topic for several decades. While many studies have explored reasons for the persistence of inequality in the wages of men and women, none has successfully explained away the entire differential. Gender wage discrimination and the social problems of working women are global issues: despite differences in the political, economic, and social make-up of countries all over the world, gender wage discrimination and social constraints are present. The aim of the research is to examine gender wage discrimination and social constraints from an international perspective and to determine whether any pattern exists between the cultural dimensions of a country and the male-female remuneration gap in the readymade garment sector of Pakistan. The population growth rate is a significant indicator used to explain changes in population and plays a crucial role in the economic development of a country. In Pakistan, the readymade garment sector consists of small, medium, and large firms, with an estimated 30 percent of the textile-garment workforce being female. The readymade garment industry is labor intensive, relies on the skills of individual workers, and provides the highest value addition in the textile sector. In the garment sector, female workers are concentrated in poorly paid, labor-intensive downstream production (readymade garments, linen, towels, etc.), while male workers dominate capital-intensive processes (ginning, spinning, and weaving). Gender wage discrimination and social constraints are a reality in Pakistan's labor market. This research allows us not only to properly measure the size of gender wage discrimination and social constraints but also to fully understand their consequences in the readymade garment sector of Pakistan. Furthermore, the research evaluates this measure for three main clusters: Lahore, Karachi, and Faisalabad. The data contain complete details of male and female workers and supervisors in the readymade garment sector of Pakistan, and these sources of information provide a unique opportunity to reanalyze previous findings in the literature. The regression analysis focuses on the standard 'Mincerian' earnings equation and estimates it separately by gender; the research also applies the cultural dimensions developed by Hofstede (2001) to profile a country's cultural status and compares those cultural dimensions to the wage inequalities. The readymade garment sector is one of Pakistan's important sectors, since its products are in huge demand at home and abroad. This research will have a major influence on the measures undertaken to design public policy regarding wage discrimination and social constraints in the readymade garment sector of Pakistan.
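
A minimal sketch of the standard Mincerian earnings equation mentioned above, ln(wage) = β₀ + β₁·schooling + β₂·experience + β₃·experience², estimated separately by gender with statsmodels on made-up data; the coefficients and sample are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "schooling": rng.integers(0, 14, n),
    "experience": rng.integers(0, 30, n),
    "female": rng.integers(0, 2, n),
})
# synthetic log wages with a small gender gap baked in for illustration
df["log_wage"] = (8.0 + 0.07 * df.schooling + 0.04 * df.experience
                  - 0.0006 * df.experience**2 - 0.15 * df.female
                  + rng.normal(0, 0.3, n))

for sex, group in df.groupby("female"):          # estimate separately by gender
    fit = smf.ols("log_wage ~ schooling + experience + I(experience**2)",
                  data=group).fit()
    print("female" if sex else "male", fit.params.round(4).to_dict())
```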

Keywords: gender wage differentials, decomposition, garment, cultural

Procedia PDF Downloads 210
954 Validating the Cerebral Palsy Quality of Life for Children (CPQOL-Child) Questionnaire for Use in Sri Lanka

Authors: Shyamani Hettiarachchi, Gopi Kitnasamy

Abstract:

Background: The potentially high level of physical need and dependency experienced by children with cerebral palsy can affect the quality of life (QOL) of the child, the caregiver, and his/her family. Poor QOL in children with cerebral palsy is associated with the parent-child relationship, limited opportunities for social participation, limited access to healthcare services, psychological well-being, and the child's physical functioning. Given that children experiencing disabilities have little access to remedial support, with inequitable services across districts in Sri Lanka, and given the impact of culture and societal stigma, there may be differing viewpoints across respondents. Objectives: The aim of this study was to evaluate the psychometric properties of the Tamil version of the Cerebral Palsy Quality of Life for Children (CPQOL-Child) Questionnaire. Design: An instrument development and validation study. Methods: Forward and backward translations of the CPQOL-Child were undertaken by a team composed of a physiotherapist, a speech and language therapist, and two linguists, for both the primary caregiver form and the child self-report form. As part of a pilot phase, the Tamil version of the CPQOL was completed by 45 primary caregivers of children with cerebral palsy and by 15 children with cerebral palsy (GMFCS levels 3-4). In addition, the primary caregivers commented on the process of filling in the questionnaire. The psychometric properties of test-retest reliability, internal consistency, and construct validity were assessed. Results: The test-retest reliability and internal consistency were high. A significant association (p < 0.001) was found between limited motor skills and poor QOL. Cronbach's alpha for the whole questionnaire was 0.95. Similarities and divergences were found between the two groups of respondents. The child respondents identified limited motor skills as associated with physical well-being and autonomy. Akin to this, the primary caregivers associated the severity of motor function with limitations of physical well-being and autonomy. The trend observed among the child respondents was that QOL was not related to the level of impairment but was connected to environmental factors. In addition, the primary caregivers' main concerns about the child's future and the child's lack of independence were not fully captured by the QOL questionnaire employed. Conclusions: Although the initial results show high test-retest reliability and internal consistency of the CPQOL questionnaire, it does not fully reflect the socio-cultural realities and primary concerns of the caregivers. The current findings highlight the need to take child and caregiver perceptions of QOL into account in clinical practice and research, and strongly indicate the need for culture-specific measures of QOL.
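
For reference, a small sketch of the internal-consistency statistic reported above: Cronbach's alpha for k items is α = k/(k−1)·(1 − Σ item variances / variance of total score). The response matrix below is made up, not the study's data.

```python
import numpy as np

def cronbach_alpha(items):
    """items: respondents x questions matrix of Likert responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()      # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of total scores
    return k / (k - 1) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(45, 1))                          # shared trait
responses = np.clip(base + rng.integers(-1, 2, size=(45, 10)), 1, 5)
print(f"alpha = {cronbach_alpha(responses):.2f}")
```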

Keywords: cerebral palsy, CPQOL, culture, quality of life

Procedia PDF Downloads 344
953 A Comparative Human Rights Analysis of the Securitization of Migration in the Fight against Terrorism in Europe: An Evaluation of Belgium

Authors: Louise Reyntjens

Abstract:

The last quarter of the twentieth century was characterized by the emergence of a new kind of terrorism: religiously inspired terrorism. Islam finds itself at the heart of this new wave, considering the number of international attacks committed by Islamic-inspired perpetrators. With religiously inspired terrorism as an operating framework, governments increasingly rely on immigration law to counter such terrorism. Immigration law seems particularly useful because its core task consists of keeping 'unwanted' people out. Islamic terrorists more often than not have an immigrant background and will be subject to immigration law. As a result, immigration law becomes more and more 'securitized'. The European migration crisis has reinforced this trend. The research explores the human rights consequences of immigration law's securitization in Europe. For this, the author selected four European countries for a comparative study: Belgium, France, the United Kingdom, and Sweden. All these countries face similar social and security issues but respond very differently to them. The United Kingdom positions itself on the repressive side of the spectrum. Sweden, on the other hand, has also introduced restrictions to its immigration policy but remains on the tolerant side of the spectrum. Belgium and France are situated in between. This contribution evaluates the situation in Belgium. Through a series of legislative changes, the Belgian parliament (i) greatly expanded the possibilities of expelling foreign nationals for (vaguely defined) reasons of 'national security'; (ii) abolished almost all procedural protection associated with this decision; and (iii) broadened, as an extra security measure, the possibility of depriving individuals convicted of terrorism of their Belgian nationality. Measures such as these are obviously problematic from a human rights perspective; they jeopardize the principle of legality, the presumption of innocence, the right to protection of private and family life, and the prohibition on torture. Moreover, this contribution also raises questions about immigration law's suitability and efficacy as a counterterrorism instrument. Is it a legitimate step, considering the type of terrorism we face today? Or is it merely a strategic move, considering the broader maneuvering space immigration law offers and the lack of political resistance governments receive when infringing the rights of foreigners? Even more so, figures demonstrate that today's terrorist threat does not necessarily stem from outside our borders. Does immigration law then still absorb - if it has ever done so (completely) - the threat? The study's goal is to critically assess, from a human rights perspective, the counterterrorism strategies European governments have adopted. As most governments adopt variations of the same core concepts, the study's findings will hold true even beyond the four countries addressed.

Keywords: Belgium, counterterrorism strategies, human rights, immigration law

Procedia PDF Downloads 106
952 Household Climate-Resilience Index Development for the Health Sector in Tanzania: Use of Demographic and Health Surveys Data Linked with Remote Sensing

Authors: Heribert R. Kaijage, Samuel N. A. Codjoe, Simon H. D. Mamuya, Mangi J. Ezekiel

Abstract:

There is strong evidence that the climate has changed significantly, affecting various sectors including public health. The recommended feasible solution is adopting development trajectories that combine both mitigation and adaptation measures to improve resilience pathways. This approach demands consideration of the complex interactions between climate and social-ecological systems. While other sectors such as agriculture and water have developed climate resilience indices, the public health sector in Tanzania is still lagging behind. The aim of this study was to find out how Demographic and Health Surveys (DHS), linked with remote sensing (RS) technology and meteorological information, can be used as tools to inform climate change resilient development and evaluation for the health sector. A methodological review was conducted whereby a number of studies were content-analyzed to find appropriate indicators and indices for household climate resilience and their integration approach. These indicators were critically reviewed, listed, and filtered, and their sources determined. Preliminary identification and ranking of indicators were conducted using a participatory approach of pairwise weighting by selected national stakeholders from meetings/conferences on human health and climate change sciences in Tanzania. DHS datasets were retrieved from the MEASURE Evaluation project, processed, and critically analyzed for possible climate change indicators. Other sources of indicators of climate change exposure were also identified. For the purpose of preliminary reporting, the operationalization of selected indicators was discussed to produce a methodological approach to be used in a comparative resilience analysis study. It was found that the household climate resilience index depends on the combination of three indices, namely Household Adaptive and Mitigation Capacity (HC), Household Health Sensitivity (HHS), and Household Exposure Status (HES). It was also found that DHS alone cannot support resilience evaluation unless integrated with other data sources, notably flooding data as a measure of vulnerability, remote sensing imagery for the Normalized Difference Vegetation Index (NDVI), and meteorological data (deviation from rainfall patterns). It can be concluded that if these indices retrieved from DHS datasets are computed and scientifically integrated, they can produce a single climate resilience index, and resilience maps could be generated at different spatial and temporal scales to enhance targeted interventions for climate resilient development and evaluation. However, further studies are needed to test the sensitivity of the index in comparative resilience analysis among selected regions.
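
An illustrative sketch of how the three sub-indices named above (HC, HHS, HES) might be combined into a single household resilience index. The min-max scaling, the reversal of sensitivity and exposure, and the equal weights are all assumptions for illustration, since the paper's integration rule is still under development.

```python
import numpy as np

def minmax(x):
    """Scale a vector to [0, 1]."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

hc  = np.array([0.4, 0.7, 0.2, 0.9])   # adaptive/mitigation capacity
hhs = np.array([0.6, 0.3, 0.8, 0.2])   # health sensitivity (higher = worse)
hes = np.array([0.5, 0.4, 0.9, 0.1])   # exposure status (higher = worse)

weights = np.array([1/3, 1/3, 1/3])    # assumed equal stakeholder weights
components = np.vstack([minmax(hc), 1 - minmax(hhs), 1 - minmax(hes)])
resilience = weights @ components
print(resilience.round(2))             # one index value per household
```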

Keywords: climate change, resilience, remote sensing, demographic and health surveys

Procedia PDF Downloads 166
951 Factors Contributing to Adverse Maternal and Fetal Outcome in Patients with Eclampsia

Authors: T. Pradhan, P. Rijal, M. C. Regmi

Abstract:

Background: Eclampsia is a multisystem disorder that involves vital organs, and failure of these may lead to deterioration of the maternal condition and to hypoxia and acidosis of the fetus, resulting in high maternal and perinatal mortality and morbidity. Thus, evaluation of the contributing factors for this condition and its complications leading to maternal deaths should be a priority, and formulating plans and protocols to decrease these losses should be our goal. Aims and Objectives: To evaluate the risk factors associated with adverse maternal and fetal outcomes in patients with eclampsia, and to correlate these risk factors with maternal and fetal morbidity and mortality. Methods: All patients with eclampsia admitted to the Department of Obstetrics and Gynecology, B. P. Koirala Institute of Health Sciences, were enrolled after informed consent from February 2013 to February 2014. Patients and attendants were asked questions as per a proforma, covering antenatal clinic visits, parity, number of episodes of seizures, and the duration from seizure onset to magnesium sulfate administration. The patients were followed as per the hospital protocol, and the mode of delivery, outcome of the baby, and postpartum maternal condition, such as maternal Intensive Care Unit admission, neurological impairment, and mortality, were noted before discharge. Statistical analysis was done using the Statistical Package for the Social Sciences (SPSS 11). Means and percentages were calculated for demographic variables. Pearson's correlation test and the chi-square test were applied to find the relation between the risk factors and the outcomes; a p-value less than 0.05 was considered significant. Results: There were 10,000 antenatal deliveries during the study period. Fifty-two patients with eclampsia were admitted, none of whom were booked at our institute. Thirty-nine patients had antepartum eclampsia. Thirty-one patients required mechanical ventilator support. Twenty-four patients were delivered by emergency c-section, 21 babies were of low birth weight, and there were 9 stillbirths. There was one maternal death; 45 patients were discharged with improvement, but 3 patients had neurological impairment. Mortality was significantly related to the number of seizure episodes and the time interval between seizure onset and administration of magnesium sulphate. Conclusion: Early detection and management of hypertensive disorders complicating pregnancy during antenatal check-ups, together with early hospitalization and management of eclampsia with magnesium sulphate, can help minimize adverse maternal and fetal outcomes.
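
As a small illustration of the chi-square test mentioned above, the sketch below applies it to a hypothetical contingency table of seizure episodes against adverse outcome; the counts are illustrative assumptions, not the study's data.

```python
from scipy.stats import chi2_contingency

#        [no adverse outcome, adverse outcome]
table = [[20, 5],    # <= 2 seizure episodes (hypothetical counts)
         [10, 17]]   # > 2 seizure episodes
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")
```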

Keywords: eclampsia, maternal mortality, perinatal mortality, risk factors

Procedia PDF Downloads 170
950 Endotracheal Intubation Self-Confidence: Report of a Realistic Simulation Training

Authors: Cleto J. Sauer Jr., Rita C. Sauer, Chaider G. Andrade, Doris F. Rabelo

Abstract:

Introduction: Endotracheal intubation (ETI) is a procedure for the clinical management of patients with severe presentations of COVID-19 disease. Realistic simulation (RS) is an active learning methodology utilized for clinical skills improvement. To improve the ETI skills of public health network physicians from the Recôncavo da Bahia region in Brazil during the COVID-19 outbreak, RS training was planned and carried out. The training scenario included the Nasco Lifeform realistic simulator, and three actions were simulated: the ETI procedure, sedative drug management, and bougie guide utilization. The training intervention occurred between May and June 2020, as an interinstitutional cooperation between the Health Department of Bahia State and the Federal University of Recôncavo da Bahia. Objective: The main objective is to report the effects of RS-based training on participants' perceived self-confidence for the ETI procedure. Methods: This is a descriptive study, with secondary data extracted from questionnaires applied throughout the RS training. Priority workplace, time since last intubation, and knowledge of the bougie guide were reported on a pre-participation questionnaire. Additionally, participants completed pre- and post-training qualitative self-assessments (10-point Likert scale) of their self-confidence in performing each of the simulated actions. Distribution analysis for qualitative data was performed with the Wilcoxon signed-rank test, and the analysis of self-confidence increases in frequency contingency tables with Fisher's exact test. Results: 36 physicians participated in the training, of whom 25 (69%) work in a primary care setting, 25 (69%) had performed ETI more than a year earlier, and only 4 (11%) had previous knowledge of bougie guide utilization. There was an increase in self-confidence medians for all three simulated actions. The medians (range) for self-confidence before and after training for each simulated action were as follows: ETI, 5 (1-9) vs. 8 (6-10) (p < 0.0001); sedative drug management, 5 (1-9) vs. 8 (4-10) (p < 0.0001); bougie guide utilization, 2.5 (1-7) vs. 8 (4-10) (p < 0.0001). Among those who had performed ETI more than a year earlier (n = 25), an increase in self-confidence greater than 3 points was reported by 23 vs. 2 physicians (p = 0.0002) for ETI, and by 21 vs. 4 (p = 0.03) for sedative drug management. Conclusions: RS training contributed to an increase in self-confidence in performing ETI. Among participants who had performed ETI more than a year earlier, there was a significant association between RS training and an increase of more than 3 points in self-confidence, both for ETI and for sedative drug management. Training with the RS methodology is suitable for enhancing ETI confidence during the COVID-19 outbreak.
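
A sketch of the two tests named above using scipy: a Wilcoxon signed-rank test on paired pre/post Likert ratings (made up here) and Fisher's exact test on the reported 23 vs. 2 contingency, under the assumption that the comparison group is the remaining 11 physicians.

```python
from scipy.stats import wilcoxon, fisher_exact

pre  = [5, 4, 6, 5, 3, 5, 6, 4, 5, 5]   # hypothetical pre-training ratings
post = [8, 8, 9, 7, 6, 8, 9, 7, 8, 8]   # hypothetical post-training ratings
stat, p_w = wilcoxon(pre, post)          # paired signed-rank test

#             [>3-point increase, <=3-point increase]
table = [[23, 2],    # last ETI over a year earlier (n = 25)
         [ 2, 9]]    # last ETI within the year      (n = 11, assumed split)
odds, p_f = fisher_exact(table)
print(f"Wilcoxon p={p_w:.4f}; Fisher exact p={p_f:.4f}")
```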

Keywords: confidence, COVID-19, endotracheal intubation, realistic simulation

Procedia PDF Downloads 141
949 Durham Region: How to Achieve Zero Waste in a Municipal Setting

Authors: Mirka Januszkiewicz

Abstract:

The Regional Municipality of Durham is the upper level of a two-tier municipal and regional structure comprised of eight lower-tier municipalities. With a population of 655,000 in both urban and rural settings, the Region covers approximately 2,537 square kilometers and neighbors the City of Toronto, Ontario, Canada, to the east. The Region has been focused on diverting waste from disposal since the development of its Long Term Waste Management Strategy Plan for 2000-2020. With a 54 percent solid waste diversion rate, the focus now is on achieving 70 percent diversion on the path to zero waste, using local waste management options whenever feasible. The Region has an integrated waste management system that consists of weekly curbside collection of recyclable printed paper and packaging and source-separated organics; seasonal collection of leaf and yard waste; bi-weekly collection of residual garbage; and twice-annual collection of intact, sealed household batteries. The Region also maintains three waste management facilities for residential drop-off of household hazardous waste, polystyrene, construction and demolition debris, and electronics. Special collection events are scheduled in the spring, summer, and fall months for reusable items, household hazardous waste, and electronics. The Region is in the final commissioning stages of an energy-from-waste facility for residual waste disposal that will recover energy from non-recyclable wastes. This facility is state of the art and is equipped for the installation of carbon capture technology in the future. Despite all of these diversion programs and efforts, there is still room for improvement. Recent residential waste studies revealed that over 50% of the residual waste placed at the curb, destined for incineration, could be recycled. To move towards a zero waste community, the Region is looking to more advanced technologies for extracting the maximum recycling value from residential waste. Plans are underway to develop a pre-sort facility to remove organics and recyclables from the residual waste stream, including from the growing multi-residential sector. Organics would then be treated anaerobically to generate biogas and fertilizer products for beneficial use within the Region. This project could increase the Region's diversion rate beyond 70 percent and advance the Region's climate change mitigation goals. Zero waste is an ambitious goal in a changing regulatory and economic environment; decision makers must be willing to consider new and emerging technologies and embrace change to succeed.

Keywords: municipal waste, residential, waste diversion, zero waste

Procedia PDF Downloads 219
948 Transient Performance Evaluation and Control Measures for Oum Azza Pumping Station Case Study

Authors: Itissam Abuiziah

Abstract:

This work presents a case study of water hammer analysis and control for the Oum Azza pumping station project, which conveys water from the Sidi Mohamed Ben Abdellah (SMBA) dam to the coastal area between Rabat and Casablanca. This is a typical pumping system with a long penstock and is currently at the design and execution stages. Since there is no ideal location for the construction of protection devices, the protection devices were provisionally designed to protect the whole conveying pipeline. The simulation results for the transient conditions caused by a sudden pump stoppage, without any protection devices, show that there is negative pressure from approximately station 1300 m to station 5725 m, near the arrival reservoir; therefore, protection devices are needed to protect the conveying pipeline. To achieve the goal of the transient flow analysis, which is to protect the conveying pipeline system, four scenarios were investigated in this case study with two types of protection devices (a pressure relief valve and desurging tanks with automatic air control): (1) a pressure relief valve alone; (2) a pressure relief valve and one desurging tank with automatic air control; (3) a pressure relief valve and two desurging tanks with automatic air control; and (4) a pressure relief valve and three desurging tanks with automatic air control. The simulation results for the first scenario show that the overpressure corresponding to an instantaneous pump stoppage is reduced from 263 m to 240 m, but the minimum hydraulic grade line over the length from approximately station 1300 m to station 5725 m is still below the pipeline profile, which means that the pipe must be equipped with additional protective devices to smooth out the depressions. The simulation results for the second scenario show that the minimum and maximum pressure envelopes decrease, especially in the depression phase, but do not effectively protect the conduit in this case study: the minimum pressure increases from -77.7 m in the previous scenario to -65.9 m. The pipeline therefore still requires additional protective devices, and another desurging tank with automatic air control was installed at station 2575.84 m. The simulation results for the third scenario show that the minimum and maximum pressure envelopes decrease further but still do not effectively protect the conduit, since the depression still exists and varies from -0.6 m to -12 m. Therefore, another desurging tank with automatic air control was installed at station 5670.32 m. Examining the envelope curves of the minimum pressures for the fourth scenario, we noticed that the piezometric pressure along the pipe remains positive over the entire length of the pipe. We can therefore conclude that this scenario provides effective protection for the pipeline.
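
For orientation, here is a first-cut sketch of the classic Joukowsky estimate that motivates such surge analyses: for an instantaneous pump stoppage, the head change is Δh = a·Δv/g, with a the pressure-wave celerity. The values below are illustrative assumptions, not the Oum Azza design parameters.

```python
g = 9.81          # gravitational acceleration, m/s^2

def joukowsky_head_rise(wave_speed, delta_velocity):
    """Instantaneous surge head (m) for a velocity change delta_velocity (m/s)."""
    return wave_speed * delta_velocity / g

a = 1000.0        # assumed wave celerity, m/s (typical for steel pipe with water)
v0 = 2.0          # assumed steady velocity lost at pump trip, m/s
print(f"Joukowsky surge: {joukowsky_head_rise(a, v0):.0f} m of head")
```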

Keywords: analysis methods, protection devices, transient flow, water hammer

Procedia PDF Downloads 189
947 Spatial Architecture Impact in Mediation Open Circuit Voltage Control of Quantum Solar Cell Recovery Systems

Authors: Moustafa Osman Mohammed

Abstract:

Photocurrent generation is influencing ultra-high-efficiency solar cells based on self-assembled quantum dot (QD) nanostructures. Nanocrystal quantum dots (QDs) offer a great enhancement of solar cell efficiency through the use of quantum confinement to tune absorbance across the solar spectrum and enable multi-exciton generation. Based on theoretical predictions, QDs have the potential to improve system efficiency to greater than 50% under regular electron excitation intensities. In solar cell devices, an intermediate band is formed by the electron levels in the quantum dot system. The spatial architecture explores how a solar cell can produce not only a high open-circuit voltage (> 1.7 eV) but also large short-circuit currents, due to the efficient absorption of sub-bandgap photons. In the proposed QD system, the structure allows the barrier material to absorb wavelengths below 700 nm, while multi-photon processes in the quantum dots absorb wavelengths up to 2 µm. The assembly of the electronic model is flexible enough to represent the atomic and molecular structure and material properties, so that the energy bandgaps of the barrier and the quantum dots can be tuned to their respective optimum values. In terms of virtual energy conversion, the efficiency and cost of the unified electronic structure outperform those of a pair of multi-junction solar cells, as obtained in a rigorous test to quantify the errors. The milestone toward achieving the claimed high-efficiency solar cell device is controlling the edges of the energy bandgap between the barrier material and the quantum dot system according to the design limits of the medium. Despite this remarkable potential for high photocurrent generation, the achievable open-circuit voltage (Voc) is fundamentally limited by non-radiative recombination processes in QD solar cells. The orientation of the voltage recovery system is compared theoretically with the experimental Voc variation, with the upper limit obtained from one-diode modeling of cells with different bandgaps (Eg), as classified in the proposed spatial architecture. The opportunity for improving Voc by using smaller QDs through QD solar cell recovery systems is valued at approximately greater than 1 V, as confined to other micro- and nano-operation states.

Keywords: nanotechnology, photovoltaic solar cell, quantum systems, renewable energy, environmental modeling

Procedia PDF Downloads 157
946 Examining the European Central Bank's Marginal Attention to Human Rights Concerns during the Eurozone Crisis through the Lens of Organizational Culture

Authors: Hila Levi

Abstract:

Respect for human rights is a fundamental element of the European Union's (EU) identity and law. Surprisingly, however, the protection of human rights has been significantly restricted in the austerity programs ordered by the International Monetary Fund (IMF), the European Central Bank (ECB) and the European Commission (EC) (often labeled 'the Troika') in return for financial aid to the crisis-hit countries. This paper focuses on the role of the ECB in the management of the crisis. While other international financial institutions, such as the IMF or the World Bank, may opt to neglect human rights obligations, one might expect greater respect for human rights from the ECB, which is bound by the EU Charter of Fundamental Rights. However, this paper argues that ECB officials made no significant effort to protect human rights or to strike an adequate balance between competing financial and human rights needs while coping with the crisis. ECB officials were preoccupied with the need to stabilize the economy and prevent a collapse of the Eurozone, and paid only marginal attention to human rights concerns in the design and implementation of the Troika's programs. This paper explores the role of Organizational Culture (OC) in explaining this marginalization. While International Relations (IR) research on the behavior of Intergovernmental Organizations (IGOs) has traditionally focused on the external interests of powerful member states, and on national and economic considerations, this study focuses on particular institutions' internal factors and independent processes. OC characteristics have been identified in the OC literature as an important determinant of organizational behavior. This paper suggests that cultural characteristics are also vital for the examination of IGOs, and particularly for understanding the ECB's behavior during the crisis. In order to assess the OC of the ECB and the impact it had on its policies and decisions during the Eurozone crisis, the paper uses the results of numerous qualitative interviews conducted with high-ranking officials and staff members of the ECB involved in the crisis management. It further reviews primary sources of the ECB (such as ECB statutes, and the Memoranda of Understanding signed between the crisis countries and the Troika), and secondary sources (such as the report of the UN High Commissioner for Human Rights on austerity measures and economic, social, and cultural rights). It thus analyzes the interaction between the ECB's culture and the almost complete absence of human rights considerations in the Eurozone crisis resolution scheme. This paper highlights the importance and influence of internal ideational factors on IGOs' behavior. From a more practical perspective, this paper may contribute to understanding one of the obstacles in the process of human rights implementation in international organizations, and provide instruments for better protection of social and economic rights.

Keywords: European central bank, eurozone crisis, intergovernmental organizations, organizational culture

Procedia PDF Downloads 155
945 Whey Protein: A Novel Protective Agent against Oto-Toxicity Induced by Cis-Platin in Male Rat

Authors: Eitedal Daoud, Reda M. Daoud, Khaled Abdel-Wahhab, Maha M. Saber, Lobna Saber

Abstract:

Background: Cis-platin is a widely used chemotherapeutic drug for treating many malignant disorders, including head and neck malignancies. Oto-nephrotoxicity is an important, dose-limiting side effect of cis-platin therapy. Nowadays, more attention is being paid to the ototoxicity caused by cis-platin. Aim of the Work: This study was designed to investigate the potential protective effect of whey protein (WP) against cis-platin-induced ototoxicity compared to the effect of N-acetylcysteine (NAC) in rats. Methodology: Male albino rats were randomly divided into 6 groups: untreated rats (control); rats orally treated with whey protein (1 g/kg b.w./day) for seven consecutive days; rats treated orally with N-acetylcysteine (500 mg/kg b.w./day) for seven consecutive days; rats intoxicated intraperitoneally (ip) with cis-platin (10 mg/kg b.w., once); rats treated with whey protein (1 g/kg b.w./day) for seven consecutive days, followed by one ip injection of cis-platin (10 mg/kg b.w.) one hour after the last oral administration of whey protein; and rats treated with N-acetylcysteine for seven consecutive days, followed by one ip injection of cis-platin (10 mg/kg b.w.) one hour after the last oral administration of N-acetylcysteine. The organ of Corti, the stria vascularis, and the spiral ganglia were visualized by light microscopy at different magnifications. Results: Cis-platin-intoxicated animals showed a significant decrease in the serum level of total antioxidant capacity (TAC), with inhibition of the activity of serum glutathione-S-transferase (GST) and paraoxonase-1 (PON-1) in comparison with the control. The groups treated with either NAC or WP along with cis-platin showed a significant elevation in the activity of both GST and PON-1, with an increased serum level of TAC, when compared with cis-platin-intoxicated rats. Animals treated with NAC or WP along with cis-platin, compared to those treated with cis-platin alone, showed a marked degree of improvement towards the control rats, as there was a significant drop in the serum levels of corticosterone, nitric oxide (NO), and malondialdehyde (MDA). Histopathologically, in the NAC-pretreated group there were no changes in the stria vascularis or spiral ganglia. In the group pretreated with WP, no histopathological alteration was detected in the organ of Corti or Reissner's membrane, but oedema and haemorrhage were found in the stria vascularis in a small focal manner. Conclusion: Our findings showed that whey protein is a natural dietary supplement that proved able to protect the antioxidant system and the cochlea against cis-platin-induced ototoxicity.

Keywords: anti-oxidant, cis-platin, N-acetylcysteine, ototoxicity, whey protein

Procedia PDF Downloads 524
944 Urban Rehabilitation Assessment: Buildings' Integrity and Embodied Energy

Authors: Joana Mourão

Abstract:

Transition to a low carbon economy requires changes in consumption and production patterns, including the improvement of existing buildings' environmental performance. Urban rehabilitation is a top policy priority in Europe, creating an opportunity to increase this performance. However, urban rehabilitation comprises different typologies of interventions with distinct levels of consideration for cultural urban heritage values and for environmental values, and thus with different impacts. Cities rely on both material and non-material forms of heritage that are deep-rooted and resilient. One of the most relevant parts of that urban heritage is the historical pre-industrial housing stock, with an extensive presence in many European cities, such as Lisbon. This stock is rehabilitated and transformed within the framework of urban management and local governance traditions, as well as that of the global economy, and in that context it faces opportunities and threats that need evaluation and control. The scope of this article is to define methodological bases and research lines for the assessment of the impacts that urban rehabilitation initiatives have on the vulnerable, historical pre-industrial urban housing stock, considering it an irreplaceable environmental and cultural material value and resource. As a framework, this article reviews the concepts of urban regeneration, urban renewal, current building conservation and refurbishment, and energy refurbishment of buildings, seeking to define key typologies of urban rehabilitation that represent different approaches to the urban fabric in terms of scope, actors, and priorities. Moreover, the main types of interventions - based on a case study in an eighteenth-century neighborhood in Lisbon - are defined and analyzed in terms of the elements lost in each type of intervention, relating those to the urbanistic, architectonic and constructive values of urban heritage, as well as to environmental and energy efficiency. Further, the article reviews environmental cultural heritage assessment and life-cycle assessment tools, selecting relevant and feasible impact assessment criteria for the regulation of urban building rehabilitation, focusing on multi-level urban heritage integrity. Urbanistic, architectonic, constructive and energetic integrity are studied as criteria for impact assessment, and specific indicators are proposed. The role of these criteria in sustainable urban management is discussed. Throughout this article, the key challenges for urban rehabilitation planning and management, concerning urban built heritage as a resource for sustainability, are discussed and clarified.
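
Where the article proposes energetic integrity indicators, one simple way to operationalize such an indicator is to sum the embodied energy of the building elements each intervention type removes. The sketch below does this with hypothetical masses and embodied-energy coefficients; the coefficients, intervention names, and quantities are illustrative assumptions, not values from the Lisbon case study.

```python
# Minimal sketch, under assumed values, of an embodied-energy-lost
# indicator for comparing intervention types on the same building.

# Typical-order embodied energy coefficients, MJ per kg (illustrative only)
EMBODIED_MJ_PER_KG = {"stone masonry": 0.9, "timber": 2.5, "lime plaster": 1.1}

def embodied_energy_lost(removed_masses_kg: dict) -> float:
    """Sum the embodied energy (MJ) of the elements an intervention removes."""
    return sum(EMBODIED_MJ_PER_KG[m] * kg for m, kg in removed_masses_kg.items())

# Hypothetical comparison: light refurbishment vs facade-only retention
light_refurbishment = {"lime plaster": 2_000}                        # kg removed
facadism = {"stone masonry": 150_000, "timber": 8_000, "lime plaster": 2_000}

print(embodied_energy_lost(light_refurbishment) / 1e3, "GJ lost")   # ~2.2 GJ
print(embodied_energy_lost(facadism) / 1e3, "GJ lost")              # ~157 GJ
```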

Keywords: urban rehabilitation, impact assessment criteria, buildings integrity, embodied energy

Procedia PDF Downloads 196
943 Assessment of Bisphenol A and 17 α-Ethinyl Estradiol Bioavailability in Soils Treated with Biosolids

Authors: I. Ahumada, L. Ascar, C. Pedraza, J. Montecino

Abstract:

It has been found that the addition of biosolids to soil is beneficial to soil health, enriching the soil with essential nutrient elements. Although this sludge has properties that allow for the improvement of the physical features and productivity of agricultural and forest soils and the recovery of degraded soils, it also contains trace elements, organic trace compounds and pathogens that can damage the environment. The application of these biosolids to land without total reclamation, as well as of the treated wastewater, can transfer these compounds into terrestrial and aquatic environments, giving rise to potential accumulation in plants. The general aim of this study was to evaluate the bioavailability of bisphenol A (BPA) and 17 α-ethinyl estradiol (EE2) in a soil-biosolid system using wheat (Triticum aestivum) plant assays and a predictive extraction method using a solution of hydroxypropyl-β-cyclodextrin (HPCD), to determine whether the latter is a reliable surrogate for the bioassay. Two soils were obtained from the central region of Chile (Lo Prado and Chicauma). Biosolids were obtained from a regional wastewater treatment plant. The soils were amended with biosolids at 90 Mg ha-1. Soils treated with biosolids and spiked with 10 mg kg-1 of EE2, and with 15 mg kg-1 and 30 mg kg-1 of BPA, were also included. The BPA and EE2 concentrations were determined in biosolids, soils and plant samples through ultrasound-assisted extraction, solid-phase extraction (SPE) and gas chromatography coupled to mass spectrometry (GC/MS). The bioavailable fraction found in each of the soils cultivated with wheat plants was compared with the results obtained through the cyclodextrin biosimulator method. The total concentrations found in the biosolid from a treatment plant were 0.150 ± 0.064 mg kg-1 and 12.8 ± 2.9 mg kg-1 for EE2 and BPA, respectively. BPA and EE2 bioavailability is affected by the organic matter content and the physical and chemical properties of the soil. The bioavailability response of both compounds in the two soils varied with the EE2 and BPA concentrations. In the case of EE2, higher concentrations were found in the roots than in the shoots of the wheat plants, and the concentration of EE2 increased with an increasing biosolids rate. For BPA, on the other hand, a higher concentration was found in the shoots than in the roots of the plants. The predictive capability of the HPCD extraction was assessed using a simple linear correlation test for both compounds in wheat plants. The correlation coefficient between the EE2 values obtained from the HPCD extraction and those obtained from the wheat plants was r = 0.99 (p ≤ 0.05). In the case of BPA, on the other hand, no correlation was found. Therefore, the methodology was validated with respect to the wheat plant bioassays only in the case of EE2. Acknowledgments: The authors thank FONDECYT 1150502.
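
A minimal sketch of the validation step described above, correlating HPCD-extracted concentrations with plant-measured ones; the paired values are hypothetical placeholders, not the study's data.

```python
# Minimal sketch: simple linear correlation test between a chemical
# biosimulator (HPCD extraction) and a plant bioassay.
from scipy.stats import pearsonr

# Hypothetical EE2 concentrations (mg kg-1) for paired soil treatments
hpcd_extracted = [0.42, 0.61, 1.10, 1.52, 2.05]
wheat_measured = [0.40, 0.65, 1.05, 1.58, 2.00]

r, p = pearsonr(hpcd_extracted, wheat_measured)
print(f"r = {r:.2f}, p = {p:.4f}")
# A high r with p <= 0.05 (the authors report r = 0.99 for EE2) supports
# using the HPCD extraction as a surrogate for the plant bioassay.
```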

Keywords: emerging compounds, bioavailability, biosolids, endocrine disruptors

Procedia PDF Downloads 147
942 Correlation between Body Mass Index and Blood Sugar/Serum Lipid Levels in Fourth-Grade Boys in Japan

Authors: Kotomi Yamashita, Hiromi Kawasaki, Satoko Yamasaki, Susumu Fukita, Risako Sakai

Abstract:

Lifestyle-related diseases develop from the long-term accumulation of the health consequences of a poor lifestyle. Thus, schoolchildren, who have not yet accumulated long-term lifestyle habits, are believed to be at lower risk for lifestyle-related diseases. However, schoolchildren rarely receive blood tests unless they are under treatment for a serious disease; without such blood data, the impact of their early lifestyle cannot be known. Blood data combined with physical measurements can help in the implementation of more effective health education. Therefore, we examined the correlation between body mass index (BMI) and blood sugar/serum lipid (BS/SL) levels. From 2014 to 2016, we measured the blood data of fourth-grade students living in a city in Japan. The present study reports the results of the 281 fourth-grade boys only (80.3% of the total). We analyzed their BS/SL levels by comparing the blood data against the criteria of the National Center for Child Health and Development in Japan. Next, we examined the correlation between BMI and BS/SL levels. IBM SPSS Statistics for Windows, Version 25, was used for the analysis. A total of 69 boys (24.6%) were within the normal range for BMI (18.5–24), whereas 193 (71.5%) and 8 boys (2.8%) had lower and higher BMI, respectively. Regarding BS levels, 280 boys were within the normal range (70–90 mg/dl); 1 boy reported a higher value. All the boys were within the normal range for glycated hemoglobin (HbA1c) (4.6–6.2%). Regarding SL levels, 271 boys were within the normal range (125–230 mg/dl) for total cholesterol (TC), whereas 5 boys (1.8%) had lower and 5 boys (1.8%) had higher levels. A total of 243 boys (92.7%) were within the normal range (36–138 mg/dl) for triglycerides (TG), whereas 19 boys (7.3%) had lower and 19 boys (7.3%) had higher levels. Regarding high-density lipoprotein cholesterol (HDL-C), 276 boys (98.2%) were within the normal range (≥ 40 mg/dl), whereas 5 boys (1.8%) reported lower values. All but one boy (280, 99.6%) were within the normal range (≤ 170 mg/dl) for low-density lipoprotein cholesterol (LDL-C); the exception (0.4%) had a higher level. BMI and BS did not show a correlation. BMI and HbA1c were moderately positively correlated (r = 0.139, p = 0.019). We also observed moderate positive correlations between BMI and TG (r = 0.328, p < 0.01), TC (r = 0.239, p < 0.01), and LDL-C (r = 0.324, p < 0.01). BMI and HDL-C showed a low negative correlation (r = -0.185, p = 0.002). Most of the boys were within the normal range for BS/SL levels. However, some boys exceeded the normal TG range. Fourth graders with high TG may develop a lifestyle-related disease in the future. Given its relation to TG, food habits should be improved in this group. Our findings suggest a positive correlation between BMI and BS/SL levels. Fourth-grade schoolboys with a high BMI may be at high risk of developing lifestyle-related diseases. Lifestyle improvement may be recommended to lower BS/SL levels in this group.
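
For readers who want to reproduce this kind of analysis outside SPSS, the sketch below computes Pearson's r from its definition; the BMI and triglyceride values are hypothetical placeholders, not the study's data.

```python
# Minimal sketch: Pearson correlation coefficient computed from its
# definition, r = cov(x, y) / (sd(x) * sd(y)), on illustrative data.
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

bmi = [14.2, 15.1, 16.0, 16.8, 17.5, 18.9, 20.3]   # kg/m^2, hypothetical
tg = [45.0, 52.0, 60.0, 66.0, 75.0, 90.0, 110.0]   # mg/dl, hypothetical

print(f"r = {pearson_r(bmi, tg):.3f}")  # positive, echoing the reported trend
```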

Keywords: blood sugar level, lifestyle-related diseases, school students, serum lipid level

Procedia PDF Downloads 139
941 A Corpus-Based Analysis of Japanese Learners' English Modal Auxiliary Verb Usage in Writing

Authors: S. Nakayama

Abstract:

For non-native English speakers, using English modal auxiliary verbs appropriately can be among the most challenging tasks. This research sought to identify differences in modal verb usage between Japanese non-native English speakers (JNNSs) and native speakers (NSs) from two different perspectives: frequency of use and distribution of the verb phrase structures (VPSs) in which modal verbs occur. This study can contribute to the identification of JNNSs' interlanguage with regard to modal verbs; the main aim is to make suggestions for the improvement of teaching materials as well as to help language teachers teach modal verbs in a way that is helpful for learners. To address the primary question in this study, usage of nine central modals ('can', 'could', 'may', 'might', 'shall', 'should', 'will', 'would', and 'must') by JNNSs was compared with that by NSs in the International Corpus Network of Asian Learners of English (ICNALE). This corpus is one of the largest freely available corpora focusing on Asian English learners' language use. The ICNALE corpus consists of four modules: 'Spoken Monologue', 'Spoken Dialogue', 'Written Essays', and 'Edited Essays'. Among these, this research adopted the 'Written Essays' module only, which is a set of 200–300-word essays containing approximately 1.3 million words in total. Frequency analysis revealed gaps as well as similarities in frequency order. Specifically, both JNNSs and NSs used 'can' most frequently, followed by 'should' and 'will'; however, the usage of all the other modals except 'shall' differed between the two groups. A log-likelihood test uncovered JNNSs' overuse of 'can' and 'must' as well as their underuse of 'will' and 'would'. VPS analysis revealed that JNNSs used modal verbs in a relatively narrow range of VPSs compared to NSs. Results showed that JNNSs used most of the modals with bare infinitives or the passive voice only, whereas NSs used the modals in a wide range of VPSs including the progressive construction and the perfect aspect, both of which were structures in which JNNSs rarely used the modals. The results of the frequency analysis suggest that language teachers and teaching materials should explain other modality items so that learners can avoid relying heavily on certain modals and have a wide range of lexical items with which to reflect their feelings more accurately. Besides, the underused modals should be stressed more in the classroom because they are epistemic modals, which allow us not only to interject our views into propositions but also to build a relationship with readers. As for VPSs, teaching materials should present more examples of the modals occurring in a wide range of VPSs to help learners express their opinions from a variety of viewpoints.
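
A minimal sketch of the log-likelihood (G2) test mentioned above, comparing the frequency of one modal across two corpora; the counts and corpus sizes are hypothetical placeholders, not ICNALE figures.

```python
# Minimal sketch: Dunning's log-likelihood (G2) for over-/underuse of a
# word between a learner corpus and a native-speaker corpus.
import math

def log_likelihood(a: int, b: int, c: int, d: int) -> float:
    """G2 for a word occurring a times in corpus 1 (c tokens total)
    and b times in corpus 2 (d tokens total)."""
    e1 = c * (a + b) / (c + d)   # expected count in corpus 1
    e2 = d * (a + b) / (c + d)   # expected count in corpus 2
    ll = 0.0
    if a > 0:
        ll += a * math.log(a / e1)
    if b > 0:
        ll += b * math.log(b / e2)
    return 2 * ll

# Hypothetical: 'can' used 950 times in 200k learner tokens vs 600 times
# in 200k native tokens; G2 > 3.84 indicates significance at p < 0.05.
g2 = log_likelihood(950, 600, 200_000, 200_000)
print(f"G2 = {g2:.2f}")  # large G2 with observed > expected -> overuse
```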

Keywords: corpus linguistics, Japanese learners of English, modal auxiliary verbs, International Corpus Network of Asian Learners of English

Procedia PDF Downloads 127
940 Towards Creative Movie Title Generation Using Deep Neural Models

Authors: Simon Espigolé, Igor Shalyminov, Helen Hastie

Abstract:

Deep machine learning techniques, including deep neural networks (DNNs), have been used to model language and dialogue for conversational agents to perform tasks such as giving technical support, and also for general chit-chat. They have been shown to be capable of generating long, diverse and coherent sentences in end-to-end dialogue systems and natural language generation. However, these systems tend to imitate the training data and will only generate the concepts and language within the scope of what they have been trained on. This work explores how deep neural networks can be used in a task that would normally require human creativity, whereby a human would read the movie description and/or watch the movie and come up with a compelling, interesting movie title. This task differs from simple summarization in that the movie title may not necessarily be derivable from the content or semantics of the movie description. Here, we train a type of DNN called a sequence-to-sequence model (seq2seq) that takes as input a short textual movie description and some information such as the genre of the movie; it then learns to output a movie title. The idea is that the DNN will learn certain techniques and approaches that the human movie titler may deploy and that may not be immediately obvious to the human eye. To give an example of a generated movie title, for the movie synopsis 'A hitman concludes his legacy with one more job, only to discover he may be the one getting hit.', the original, true title is 'The Driver' and the one generated by the model is 'The Masquerade'. A human evaluation was conducted in which the DNN output was compared to the true human-generated title, as well as to a number of baselines, on three 5-point Likert scales: 'creativity', 'naturalness' and 'suitability'. Subjects were also asked which of the two systems they preferred. The scores of the DNN model were comparable to the scores of the human-generated movie title, with means of 3.11 and 3.12, respectively. There is room for improvement in these models, as they were rated significantly less 'natural' and 'suitable' than the human title. In addition, the human-generated title was preferred overall 58% of the time when pitted against the DNN model. These results, however, are encouraging given the comparison with a highly considered, well-crafted human-generated movie title. Movie titles go through a rigorous process of assessment by experts and focus groups who have watched the movie. This process is in place due to the large amount of money at stake and the importance of creating an effective title that captures the audience's attention. Our work shows progress towards automating this process, which in turn may lead to a better understanding of creativity itself.
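
A minimal sketch of the seq2seq idea described above, assuming a toy vocabulary and random tensors in place of real tokenized descriptions and titles; this is an illustrative encoder-decoder, not the authors' actual architecture.

```python
# Minimal sketch: a GRU encoder reads a tokenized movie description and a
# GRU decoder emits a title token by token (teacher forcing at training time).
import torch
import torch.nn as nn

class TitleSeq2Seq(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int = 64, hid_dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, src: torch.Tensor, tgt_in: torch.Tensor) -> torch.Tensor:
        _, hidden = self.encoder(self.embed(src))       # summary of description
        dec_out, _ = self.decoder(self.embed(tgt_in), hidden)
        return self.out(dec_out)                        # per-step vocab logits

vocab = 1000
model = TitleSeq2Seq(vocab)
src = torch.randint(0, vocab, (2, 10))   # 2 toy tokenized descriptions, len 10
tgt = torch.randint(0, vocab, (2, 5))    # 2 toy tokenized titles, len 5
logits = model(src, tgt[:, :-1])         # feed all but last title token
loss = nn.CrossEntropyLoss()(logits.reshape(-1, vocab), tgt[:, 1:].reshape(-1))
loss.backward()                          # gradients for one training step
```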

Keywords: creativity, deep machine learning, natural language generation, movies

Procedia PDF Downloads 327
939 Simulation and Characterization of Stretching and Folding in Microchannel Electrokinetic Flows

Authors: Justo Rodriguez, Daming Chen, Amador M. Guzman

Abstract:

The detection, treatment, and control of rapidly propagating, deadly viruses such as COVID-19 require the development of inexpensive, fast, and accurate devices to address the urgent needs of the population. Microfluidics-based sensors are among the detection methods and techniques that are easiest to use. A micro-analyzer is defined as a microfluidics-based sensor composed of a network of microchannels with varying functions. Given their size, portability, and accuracy, micro-analyzers are proving to be more effective and convenient than other solutions. A micro-analyzer based on the concept of a 'Lab on a Chip' presents advantages over non-micro devices due to its smaller size and its better ratio between useful area and volume. The integration of multiple processes in a single microdevice reduces both the number of necessary samples and the analysis time, leading to the next generation of analyzers for the health sciences. In some applications, the flow of solution within the microchannels is driven by a pressure gradient, which can produce adverse effects on biological samples. A more efficient and less harmful way of controlling the flow in a microchannel-based analyzer is to apply an electric field to induce the fluid motion and either enhance or suppress the mixing process. Electrokinetic flows are characterized by no fewer than two non-dimensional parameters: the electric Rayleigh number and the geometrical aspect ratio. In this research, stable and unstable flows have been studied numerically (and, when possible, will be studied experimentally) in a T-shaped microchannel. Additionally, unstable electrokinetic flows for Rayleigh numbers higher than the critical value have been characterized. The flow mixing enhancement was quantified in relation to the stretching and folding that fluid particles undergo when they are subjected to supercritical electrokinetic flows. Computational simulations were carried out using a finite element-based program while working with the flow mixing concepts developed by Gollub and collaborators. Hundreds of seeded massless particles were tracked along the microchannel from entrance to exit for both stable and unstable flows. After post-processing, their trajectories and the folding and stretching values for the different flows were found. Numerical results show that for supercritical electrokinetic flows, the enhancement effects of the folding and stretching processes become more apparent. Consequently, there is an improvement in the mixing process, ultimately leading to a more homogeneous mixture.
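
A minimal sketch of the stretching diagnostic described above: advect two initially close massless tracers and measure the growth of their separation. The time-periodic double-gyre field used here is a standard test flow standing in for the electrokinetic velocity field, which is an assumption for illustration only.

```python
# Minimal sketch: stretching factor for a pair of nearby massless tracers
# advected through a model 2D unsteady velocity field (double gyre).
import numpy as np

A, EPS, OMEGA = 0.1, 0.25, 2 * np.pi / 10  # double-gyre parameters

def velocity(x, y, t):
    s = EPS * np.sin(OMEGA * t)
    f = s * x**2 + (1 - 2 * s) * x
    dfdx = 2 * s * x + (1 - 2 * s)
    u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
    v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx
    return u, v

def advect(p, t_end=20.0, dt=0.01):
    t = 0.0
    while t < t_end:  # simple forward-Euler time stepping
        u, v = velocity(p[0], p[1], t)
        p = (p[0] + dt * u, p[1] + dt * v)
        t += dt
    return p

d0 = 1e-6                                   # initial separation
p1 = advect((0.3, 0.4))
q1 = advect((0.3 + d0, 0.4))
d1 = np.hypot(p1[0] - q1[0], p1[1] - q1[1])
print(f"stretching factor: {d1 / d0:.1f}")  # >> 1 signals strong mixing
```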

Keywords: microchannel, stretching and folding, electro kinetic flow mixing, micro-analyzer

Procedia PDF Downloads 127
938 The Awareness of Cardiovascular Diseases among General Population in Western Regions of Saudi Arabia

Authors: Ali Saeed Alghamdi, Basel Mazen Alsolami, Basel Saeed Alghamdi, Muhanad Saleh Alzahrani Alamri, Salman Anwar Thabet, Abdulhalim J. Kinsara

Abstract:

Objectives: This study measures knowledge of cardiovascular disease among the general population in the western regions of Saudi Arabia. It also aimed to increase the level of awareness of cardiovascular diseases by providing an awareness lecture that included information about the risk factors, major symptoms, and prevention of cardiovascular diseases; the lecture was attached at the end of the questionnaire. Setting: This study was conducted through an online questionnaire, covering our aim and main objectives, that targeted the general population in the western regions of Saudi Arabia (Makkah and Madinah regions). Participants: A total of 460 participants were recruited through the online questionnaire. Methods: All Saudi citizens and residents aged 18 years and above who live in the western region of Saudi Arabia were invited to participate voluntarily. A pre-structured questionnaire was designed to collect data on age, gender, marital status, education level, occupation, lifestyle habits, and history of heart disease, with sections on cardiac symptoms and risk factors. Results: The majority of respondents were female (74.8%) and Saudi. Knowledge of cardiovascular disease risk factors was weak: only 18.5% scored an excellent response regarding risk factor awareness. Lack of exercise, stress, and obesity were the best-known risk factors. Regarding cardiovascular disease symptoms, chest pain scored highest (87.6%) among symptoms such as dyspnea, syncope, and excessive sweating; overall, participants also revealed poor awareness of cardiovascular disease symptoms (0.9%). However, preventive factors for cardiovascular disease were better known than the other categories in this study (60% fell into the excellent-knowledge range). Smoking cessation, a normal cholesterol level, and normal blood pressure scored highest among the prevention methods (92.2%, 88.6%, and 78.7%, respectively). Of the participants, 83.7% attended the awareness lecture, and 99 of the attendees reported that the lecture increased their knowledge of cardiovascular disease. Conclusion: This study discussed the level of community awareness of cardiovascular disease in terms of symptoms, risk factors, and protective factors. We found a substantial lack of knowledge among participants about the disease and how to prevent it. Moreover, we measured the prevalence of comorbidities among our participants (diabetes, hypertension, hypercholesterolemia/hypertriglyceridemia) and the extent of their adherence to their medication. In conclusion, this study not only assesses awareness of cardiovascular disease risk factors, symptoms, and management, and the association between each domain, but also provides educational material. Further educational material and campaigns are required to increase awareness and knowledge of cardiovascular diseases.

Keywords: awareness, cardiovascular diseases, education, prevention, risk factors

Procedia PDF Downloads 131
937 Implementation of European Court of Human Rights Judgments and State Sovereignty

Authors: Valentina Tereshkova

Abstract:

The paper shows how the relationship between international law and national sovereignty is viewed through the implementation of European Court of Human Rights (ECtHR) judgments. Methodology: Conclusions are based on a survey of representatives of the legislative authorities and judges of the Krasnoyarsk, Rostov, Sverdlovsk, and Tver regions. The paper assesses the activities of the Russian Constitutional Court from 1998 to 2015 related to the establishment of the implementation mechanism, and the Russian Constitutional Court judgments of 14.07.2015, No. 21-P, and of 19.04.2016, No. 12-P, in which the Constitutional Court stated the impossibility of executing ECtHR judgments. I. Implementation of ECtHR judgments by courts and other authorities. Despite the publication of the report of the RF Ministry of Justice on implementation, we could not find any formal information on the Russian policy of ECtHR judgment implementation. Using the results of the survey, the paper shows the effect of ECtHR judgments on law and legal practice in Russia. II. Implementation of ECtHR judgments by the Russian Constitutional Court. The Russian Constitutional Court had implemented ECtHR judgments. However, on July 14, 2015, the Court asserted its competence to consider the question of the implementation of ECtHR judgments. Then, on April 19, 2016, it stated that the execution of the judgment [Anchugov and Gladkov case] was impossible because the Russian Constitution has the highest legal force. Recently, the CE Committee of Ministers asked Russia to provide 'without further delay' a compensation plan for the Yukos case. On November 11, 2016, the Constitutional Court accepted a request from the Ministry of Justice to consider the possibility of executing the ECtHR judgment in the Yukos case. Such a request has been made possible by the lack of an implementation mechanism. Conclusion: ECtHR judgments are an effective tool for solving the structural problems of a legal system. However, Russian experts consider the ECHR a tool for the protection of individual rights. The paper shows a link between the survey results and the absence of an implementation mechanism. The new Article 104 par. 2 and Article 106 par. 2 of the Federal Law on the Constitutional Court are in conflict with international obligations under the 1969 Vienna Convention on the Law of Treaties and Article 46 ECHR. Nevertheless, a dialogue may be possible between the Constitutional Court and the ECtHR. In its judgment of 19.04.2016, the Constitutional Court determined that general measures to ensure fairness, proportionality, and differentiation of the restrictions of voting rights were possible in judicial practice. It also stated that the federal legislator had the power 'to optimize the system of Russian criminal penalties'. Although the Constitutional Court presented the Görgülü case [Görgülü v Germany] as an example of non-execution of an ECtHR judgment, the paper proposes to draw on the experience of the German Constitutional Court, which, in the Görgülü case, on the one hand stressed national sovereignty and, on the other hand, took advantage of this sovereignty to resolve the issue in accordance with the ECHR.

Keywords: implementation of ECtHR judgments, sovereignty, supranational jurisdictions, principle of subsidiarity

Procedia PDF Downloads 194
936 A Study on Characteristics of Runoff Analysis Methods at the Time of Rainfall in Rural Area, Okinawa Prefecture Part 2: A Case of Kohatu River in the South Central Part of Okinawa Prefecture

Authors: Kazuki Kohama, Hiroko Ono

Abstract:

The rainfall in Japan is gradually increasing every year, according to the Japan Meteorological Agency and the Intergovernmental Panel on Climate Change Fifth Assessment Report. This means that the difference in rainfall between the rainy season and the rest of the year is increasing. In addition, an increasing trend of strong rain over short periods clearly appears. In recent years, natural disasters have caused enormous human injury in various parts of Japan. Regarding water disasters, local heavy rain and floods of large rivers occur frequently, and a policy was adopted to promote both structural ('hard') and non-structural ('soft') emergency disaster prevention measures under a vision of socially rebuilding awareness of water disaster prevention. Okinawa Prefecture, in a subtropical region, experiences torrential rain and water disasters such as river floods several times a year; these floods are caused in specific rivers among all 97 rivers. In addition, a shortage of capacity and narrow widths are characteristic of rivers in Okinawa and easily cause river floods in heavy rain. This study focuses on the Kohatu River, one of these specific rivers. In fact, its water level rises well above the river levee almost once a year, usually without damage to the surrounding buildings; in some cases, however, the water level has reached the ground-floor height of houses, which has happened nine times to date. The purpose of this research is to clarify the relationship between precipitation, surface outflow, and the total treated water quantity of the Kohatu River. For this purpose, we perform a hydrological analysis that, although complicated and requiring specific details and data, mainly uses Geographic Information System (GIS) software and an outflow analysis system. First, we extract the watershed and then divide it into 23 catchment areas to understand how much surface outflow reaches the runoff point in each 10-minute interval. Second, we create a unit hydrograph indicating the surface outflow as a function of flow area and time. This index shows the maximum amount of surface outflow at 2400 to 3000 seconds. Lastly, we compare the value estimated from the unit hydrograph to a measured value. We found that the measured value is usually lower than the estimated value because of evaporation and transpiration. In this study, hydrograph analysis was performed using GIS software and an outflow analysis system. Based on these, we could clarify the flood time and the amount of surface outflow.
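
As a minimal sketch of the unit-hydrograph step described here, the fragment below convolves effective rainfall per 10-minute interval with unit hydrograph ordinates to obtain the direct-runoff hydrograph; all of the numbers are hypothetical placeholders, not Kohatu River data.

```python
# Minimal sketch: direct-runoff hydrograph as the convolution of effective
# rainfall with unit hydrograph ordinates (10-minute time steps).
import numpy as np

# Unit hydrograph ordinates (m3/s per mm of effective rain), per 10-min step;
# the peak near 2400 s loosely echoes the timing reported in the abstract.
unit_hydrograph = np.array([0.0, 0.4, 1.2, 2.0, 2.6, 2.1, 1.3, 0.6, 0.2, 0.0])

# Effective rainfall depths (mm) per 10-minute interval during a storm
rainfall = np.array([0.0, 3.0, 8.0, 5.0, 1.0])

runoff = np.convolve(rainfall, unit_hydrograph)  # discharge per 10-min step
peak_step = int(runoff.argmax())
print(f"peak discharge {runoff.max():.1f} m3/s at {peak_step * 600} s")
```

In practice, losses such as evaporation and transpiration mean the measured hydrograph sits below this estimate, which is consistent with the comparison reported above.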

Keywords: disaster prevention, water disaster, river flood, GIS software

Procedia PDF Downloads 139