Search results for: task replication

303 Rapid Fetal MRI Using SSFSE, FIESTA and FSPGR Techniques

Authors: Chen-Chang Lee, Po-Chou Chen, Jo-Chi Jao, Chun-Chung Lui, Leung-Chit Tsang, Lain-Chyr Hwang

Abstract:

Fetal Magnetic Resonance Imaging (MRI) is a challenging task because fetal movements can cause motion artifacts in MR images. The remedy to overcome this problem is to use fast scanning pulse sequences. The Single-Shot Fast Spin-Echo (SSFSE) T2-weighted imaging technique is routinely performed and often used as a gold standard in clinical examinations. Fast spoiled gradient-echo (FSPGR) T1-Weighted Imaging (T1WI) is often used to identify fat, calcification and hemorrhage. Fast Imaging Employing Steady-State Acquisition (FIESTA) is commonly used to identify fetal structures as well as the heart and vessels. The contrast of FIESTA images is related to T1/T2 and differs from that of SSFSE. The advantages and disadvantages of these two scanning sequences for fetal imaging have not been clearly demonstrated yet. This study aimed to compare three rapid MRI techniques (SSFSE, FIESTA, and FSPGR) for fetal MRI examinations. The image qualities and influencing factors among the three techniques were explored. A 1.5T GE Discovery 450 clinical MR scanner with an eight-channel high-resolution abdominal coil was used in this study. Twenty-five pregnant women were recruited and underwent fetal MRI examinations with SSFSE, FIESTA and FSPGR scanning. Multi-oriented and multi-slice images were acquired. Afterwards, the MR images were interpreted and scored by two senior radiologists. The results showed that both SSFSE and T2W-FIESTA provide good image quality among the three rapid imaging techniques. Vessel signals on FIESTA images are higher than those on SSFSE images. The Specific Absorption Rate (SAR) of FIESTA is lower than that of the other two techniques, but it is prone to banding artifacts. FSPGR-T1WI yields a lower Signal-to-Noise Ratio (SNR) because it suffers severely from maternal and fetal movements. The scan times for the three scanning sequences were 25 sec (T2W-SSFSE), 20 sec (FIESTA) and 18 sec (FSPGR). In conclusion, all three rapid MR scanning sequences can produce high-contrast, high-spatial-resolution images. The scan time can be shortened by incorporating parallel imaging techniques, so that motion artifacts caused by fetal movements are reduced. A good understanding of the characteristics of these three rapid MRI techniques helps technologists obtain reproducible, high-quality fetal anatomy images for prenatal diagnosis.

Keywords: fetal MRI, FIESTA, FSPGR, motion artifact, SSFSE

Procedia PDF Downloads 531
302 Renewable Energy Storage Capacity Rating: A Forecast of Selected Load and Resource Scenario in Nigeria

Authors: Yakubu Adamu, Baba Alfa, Salahudeen Adamu Gene

Abstract:

As the drive towards clean, renewable and sustainable energy generation is gradually being reshaped by renewable penetration over time, energy storage has become an optimal solution for utilities looking to reduce transmission and capacity costs. Capacity resources therefore need to be adjusted accordingly, so that renewable energy storage may have the opportunity to substitute for retiring conventional energy systems with higher capacity factors. In the Nigerian scenario, where over 80% of current primary energy consumption is met by petroleum, electricity demand is set to more than double by mid-century relative to 2025 levels. With renewable energy penetration rapidly increasing, in particular biomass, hydro power, solar and wind energy, renewables are expected to account for the largest share of power output in the coming decades. Despite this rapid growth, the imbalance between load and resources has hindered the development of energy storage capacity; forecasting energy storage capacity will therefore play an important role in maintaining the balance between load and resources, including supply and demand. The degree to which this might occur, its timing and, more importantly, its sustainability are the subject matter of the current research. Here, we forecast the future energy storage capacity rating and evaluate the load and resource scenario in Nigeria. In doing so, we use the scenario-based International Energy Agency models, and the projected energy demand and supply structure of the country through 2030 is presented and analysed. Overall, this shows that in high renewable (solar) penetration scenarios in Nigeria, energy storage with 4-6 h duration can obtain over 86% capacity rating, with storage comprising about 24% of peak load capacity. The general takeaway from the current study is that most power systems currently in use have the potential to support fairly large penetrations of 4-6 hour storage as capacity resources prior to a substantial reduction in capacity ratings. The data presented in this paper are a crucial eye-opener for relevant government agencies towards developing these energy resources to tackle the present energy crisis in Nigeria. However, if the transformation of the Nigerian power system continues primarily through expansion of renewable generation, then longer-duration energy storage will be needed to qualify as a capacity resource. Hence, the analysis in the current survey will help to determine whether and when long-duration storage becomes an integral component of the capacity mix expected in Nigeria by 2030.
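
As an illustration of how a storage capacity rating of the kind quoted above can be estimated, the hedged sketch below computes the fraction of a storage unit's rated power that can firmly shave a daily peak, given its duration. The load profile, the 100 MW rating and the bisection routine are illustrative assumptions only; they are not the IEA scenario models used in the study.

```python
import numpy as np

def shaving_energy(load, target_peak, power):
    """Energy (MWh) needed to cap the hourly load at target_peak,
    with discharge limited to `power` MW in any hour."""
    excess = np.clip(load - target_peak, 0.0, power)
    return excess.sum(), np.all(load - excess <= target_peak)

def capacity_rating(load, power, duration_h, tol=1e-3):
    """Fraction of rated power the storage can offer as firm peak-reduction
    capacity, found by bisecting on the achievable peak reduction."""
    energy = power * duration_h
    lo, hi = 0.0, power                    # candidate peak reduction (MW)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        needed, feasible = shaving_energy(load, load.max() - mid, power)
        if feasible and needed <= energy:
            lo = mid                       # reduction achievable, try more
        else:
            hi = mid
    return lo / power

# Hypothetical evening-peaked daily load profile (MW), for illustration only.
hours = np.arange(24)
load = 800 + 250 * np.exp(-0.5 * ((hours - 19) / 3.5) ** 2)
for d in (4, 6):
    print(f"{d} h storage, 100 MW: capacity rating =",
          round(capacity_rating(load, 100.0, d), 2))
```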

Keywords: capacity, energy, power system, storage

Procedia PDF Downloads 38
301 The Reflexive Interaction in Group Formal Practices: The Question of Criteria and Instruments for the Character-Skills Evaluation

Authors: Sara Nosari

Abstract:

In the research field on adult education, the learning development project has followed different itineraries: recently it has promoted adult transformation through practices focused on reflexively oriented interaction. This perspective, which connects life stories and life-based methods, characterizes a transformative space between formal and informal education. Within this framework, a formal reflexive path on the professional identity of care work has been discussed and realized through group practices in the Nursing Degree Courses of Turin University. This path confronted the future care professionals with possible experiences staged by texts used as pre-tests: these texts, setting up real or believable professional situations, had the task of starting a reflection on the different 'elements' of professional life in care work (the relationship, the educational character of the relationship, the relationship between different care roles; or even human identity, the aims and ultimate aim of care, …). The transformative learning aspect of this kind of experience-test is that it is impossible to anticipate the process or the conclusion of the reflection, because they depend on two main conditions: personal sensitivity and the specific situation. The narrated experience is not a device and includes no trick for anticipating the answer in advance; the text is not aimed at deepening knowledge but at being an active and creative force which leads the group to engage with problematic figures. In fact, the experience-text does not have the purpose of explaining but of problematizing: it creates a space of suspension in which to question, discuss, research and decide. It creates an 'open' and 'connected' space where each participant, in comparison with the others, has the possibility to build his or her position. In this space, everyone has the possibility to set out his or her own arguments and to become aware of the points of view that emerge from others, with the aim of researching and finding a personal position. However, to define this position, it is necessary to learn to exercise character skills (conscientiousness, motivation, creativity, critical thinking, …): while these non-cognitive skills are of undisputed importance, how to evaluate them is less evident. The paper will reflect on the epistemological limits of, and possibilities for, 'measuring' character skills, suggesting some evaluation criteria.

Keywords: transformative learning, educational role, formal/informal education, character-skills

Procedia PDF Downloads 195
300 Quantum Graph Approach for Energy and Information Transfer through Networks of Cables

Authors: Mubarack Ahmed, Gabriele Gradoni, Stephen C. Creagh, Gregor Tanner

Abstract:

High-frequency cables commonly connect modern devices and sensors. Interestingly, the proportion of electric components is rising fast in the attempt to achieve lighter and greener devices. Modelling the propagation of signals through these cable networks in the presence of parameter uncertainty is a daunting task. In this work, we study the response of high-frequency cable networks using both Transmission Line (TL) and Quantum Graph (QG) theories. We have successfully compared the two theories in terms of reflection spectra using measurements on real, lossy cables. We have derived a generalisation of the vertex scattering matrix to include non-uniform networks – networks of cables with different characteristic impedances and propagation constants. The QG model implicitly takes into account the pseudo-chaotic behaviour, at the vertices, of the propagating electric signal. We have successfully compared the asymptotic growth of the eigenvalues of the Laplacian with the predictions of Weyl's law. We investigate the nearest-neighbour level-spacing distribution of the resonances and compare our results with the predictions of Random Matrix Theory (RMT). To achieve this, we compare our graphs with the generalisation of the Wigner distribution for open systems. The problem of scattering from networks of cables can also provide an analogue model for wireless communication in highly reverberant environments. In this context, we provide a preliminary analysis of the statistics of communication capacity for communication across cable networks, whose eventual aim is to enable detailed laboratory testing of information transfer rates using software-defined radio. We specialise this analysis in particular to the case of MIMO (Multiple-Input Multiple-Output) protocols. We have successfully validated our QG model against both the TL model and laboratory measurements. The growth of the eigenvalues compares well with Weyl's law, and the level-spacing distribution agrees well with the RMT predictions. The results we achieved in the MIMO application compare favourably with the predictions of parallel ongoing research (sponsored by NEMF21).
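
For readers unfamiliar with the RMT comparison mentioned in this abstract, the sketch below shows how a nearest-neighbour level-spacing distribution can be computed (with a simple mean-spacing unfolding) and set against the GOE Wigner surmise P(s) = (pi/2) s exp(-pi s^2 / 4). The eigenvalues of a random GOE matrix stand in for the measured cable-network resonances; this is an illustrative assumption, not the paper's data.

```python
import numpy as np

def spacing_distribution(levels):
    """Nearest-neighbour spacings normalised to unit mean (simple unfolding)."""
    levels = np.sort(np.asarray(levels, dtype=float))
    s = np.diff(levels)
    return s / s.mean()

def wigner_surmise(s):
    """GOE Wigner surmise for nearest-neighbour level spacings."""
    return 0.5 * np.pi * s * np.exp(-0.25 * np.pi * s ** 2)

# Synthetic stand-in spectrum: eigenvalues of a GOE random matrix.
rng = np.random.default_rng(0)
n = 2000
a = rng.normal(size=(n, n))
goe = (a + a.T) / np.sqrt(2 * n)
levels = np.linalg.eigvalsh(goe)
bulk = levels[np.abs(levels) < 1.0]   # keep the bulk, where density is roughly constant

s = spacing_distribution(bulk)
hist, edges = np.histogram(s, bins=30, range=(0, 3), density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
for c, h in zip(centres, hist):
    print(f"s={c:4.2f}  empirical={h:5.3f}  Wigner={wigner_surmise(c):5.3f}")
```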

Keywords: eigenvalues, multiple-input multiple-output, quantum graph, random matrix theory, transmission line

Procedia PDF Downloads 174
299 Nanomaterials for Archaeological Stone Conservation: Re-Assembly of Archaeological Heavy Stones Using Epoxy Resin Modified with Clay Nanoparticles

Authors: Sayed Mansour, Mohammad Aldoasri, Nagib Elmarzugi, Nadia A. Al-Mouallimi

Abstract:

The archaeological large stones used in the construction of ancient Pharaonic tombs, temples, obelisks and other sculptures are always subject to physico-mechanical deterioration and destructive forces, which can leave them partially or totally broken. The task of reassembling this type of artifact represents a big challenge for conservators. Recently, researchers have been turning to new technologies to improve the properties of the traditional adhesive materials and techniques used in the re-assembly of broken large stones. Epoxy resins are used extensively in stone conservation and in the re-assembly of broken stone because of their outstanding mechanical properties. The introduction of nanoparticles into polymeric adhesives at low percentages may lead to substantial improvements of their mechanical performance in structural joints and large objects. The aim of this study is to evaluate the effectiveness of clay nanoparticles in enhancing the performance of epoxy adhesives used in the re-assembly of archaeological massive stone by adding proper amounts of those nanoparticles. The nanoparticle-reinforced epoxy nanocomposite was prepared by direct melt mixing at a nanoparticle content of 3% (w/v), moulded into rectangular samples, and used as an adhesive for experimental stone samples. Scanning electron microscopy (SEM) was employed to investigate the morphology of the prepared nanocomposites and the distribution of nanoparticles inside the composites. The stability and efficiency of the prepared epoxy nanocomposites and of the stone block assemblies made with the newly formulated adhesives were tested by artificially aging the samples under different environmental conditions. The effect of incorporating clay nanoparticles on the mechanical properties of the epoxy adhesives was evaluated comparatively before and after aging by tensile, compressive, and elongation tests. The morphological studies revealed that the mixing of epoxy and nanoparticles was successful: a relatively homogeneous morphology with good dispersion was obtained at low nanoparticle loadings in the epoxy matrix. The results show that the epoxy-clay nanocomposites exhibited superior tensile, compressive, and elongation strength. Moreover, the mechanical properties of the stone joints improved markedly in all states when nano-clay was added to the epoxy, in comparison with pure epoxy resin.

Keywords: epoxy resins, nanocomposites, clay nanoparticles, re-assembly, archaeological massive stones, mechanical properties

Procedia PDF Downloads 113
298 The Influence of Cognitive Load in the Acquisition of Words through Sentence or Essay Writing

Authors: Breno Barrreto Silva, Agnieszka Otwinowska, Katarzyna Kutylowska

Abstract:

Research comparing lexical learning following the writing of sentences and of longer texts with keywords is limited and contradictory. One possibility is that the recursivity of writing may enhance processing and increase lexical learning; another is that the higher cognitive load of complex-text writing (e.g., essays), at least when timed, may hinder the learning of words. In our study, we selected two sets of 10 academic keywords matched for part of speech, length (number of characters), frequency (SUBTLEXus), and concreteness, and we asked 90 L1-Polish advanced-level English majors to use the keywords when writing sentences, timed (60-minute) essays, or untimed essays. First, all participants wrote a timed control essay (60 minutes) without keywords. Then different groups produced Timed essays (60 minutes; n=33), Untimed essays (n=24), or Sentences (n=33) using the two sets of glossed keywords (counterbalanced). The comparability of the participants in the three groups was ensured by matching them for proficiency in English (LexTALE) and for several measures derived from the control essay: VocD (assessing productive lexical diversity), normed errors (assessing productive accuracy), words per minute (assessing productive written fluency), and holistic scores (assessing overall quality of production). We measured lexical learning (depth and breadth) via an adapted Vocabulary Knowledge Scale (VKS) and a free association test. Cognitive load was measured in the three essays (Control, Timed, Untimed) using the normed number of errors and holistic scores (TOEFL criteria). The number of errors and the essay scores were obtained from two raters (interrater reliability Pearson's r=.78-.91). Generalized linear mixed models showed no difference in the breadth and depth of keyword knowledge after writing Sentences, Timed essays, and Untimed essays. The task-based measurements showed that Control and Timed essays had similar holistic scores, but that Untimed essays were of better quality than Timed essays. Also, Untimed essays were the most accurate, and Timed essays the most error-prone. In conclusion, using keywords in Timed, but not Untimed, essays increased cognitive load, leading to more errors and lower quality. Still, writing sentences and essays yielded similar lexical learning, and differences in cognitive load between Timed and Untimed essays did not affect lexical acquisition.
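
To make the modelling step concrete, the sketch below shows how a mixed-effects model of the general kind reported here could be fitted in Python. The data frame, the column names and the use of a linear (rather than generalized) mixed model are simplifying assumptions for illustration; they do not reproduce the authors' actual models or data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant x keyword, with a
# VKS-style knowledge score and the writing condition of that participant.
rng = np.random.default_rng(1)
n_subj, n_words = 90, 20
conditions = rng.permutation(["sentences", "timed_essay", "untimed_essay"] * 30)
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_words),
    "keyword": np.tile(np.arange(n_words), n_subj),
    "condition": np.repeat(conditions, n_words),
})
df["vks"] = 3 + rng.normal(0, 1, len(df))   # placeholder scores, noise only

# Linear mixed model: fixed effect of condition, random intercept per subject.
model = smf.mixedlm("vks ~ condition", df, groups=df["subject"])
result = model.fit()
print(result.summary())
```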

Keywords: learning academic words, writing essays, cognitive load, English as an L2

Procedia PDF Downloads 73
297 "IS Cybernetics": An Idea to Base the International System Theory upon the General System Theory and Cybernetics

Authors: Petra Suchovska

Abstract:

The spirit of post-modernity remains chaotic and obscure. Geopolitical rivalries are raging at ever more extreme levels, and the ability of the intellectual community to explain the entropy of global affairs has been diminishing. The Western-led idea of globalisation imposed upon the world no longer seems to promise a bright future for human progress, and its architects are losing much of their global control as strong non-Western cultural entities develop new forms of post-modern establishment. The growing cultural misunderstanding and mistrust are expressions of political impotence in dealing with the inner contradictions of the contemporary phenomena (capitalism, economic globalisation) that embrace global society. The drivers and effects of global restructuring must be understood in the context of systems and principles that reflect the true complexity of society. The purpose of this paper is to set out some ideas about how cybernetics can contribute to understanding the structure of the international system and to analysing possible world futures. 'IS Cybernetics' would apply systems thinking and cybernetic principles to IR in order to analyse and handle the complexity of social phenomena from a global perspective. 'IS cybernetics' would be, for now, a subfield of IR, concerned with applying theories and methodologies from cybernetics and the systems sciences by offering concepts and tools for addressing problems holistically. It would bring order to the complex relations between the disciplines that IR touches upon. One of its tasks would be to map, measure, tackle and find the principles of the dynamics and structure of the social forces that influence human behaviour and consequently cause political, technological and economic structural reordering, forming and reforming the international system. The task of 'IS cyberneticists' would be to understand the control mechanisms that govern the operation of international society (and of its sub-systems in their interconnection) and only then to suggest better ways of operating these mechanisms at sub-levels such as the cultural, political, technological and religious. 'IS cybernetics' would also strive to capture the mechanism of social-structural change in time, which would open space for syntheses between IR and historical sociology. With the cybernetic distinction between first-order studies of observed systems and second-order studies of observing systems, IS cybernetics would also provide a unifying epistemological, methodological and conceptual framework for multilateralism and multiple modernities theory.

Keywords: cybernetics, historical sociology, international system, systems theory

Procedia PDF Downloads 232
296 Use and Effects of Kanban Board from the Aspects of Brothers Furniture Limited

Authors: Kazi Rizvan, Yamin Rekhu

Abstract:

Due to high competitiveness in industries throughout the world, every industry is trying hard to utilize all its resources to keep productivity as high as possible. Many tools are used to ensure a smoother flow of operations, to balance tasks, to maintain proper schedules and sequences for tasks, and to reduce unproductive time; all of them are used to augment productivity within an industry. The Kanban board is one of them and one of the many important tools of the lean production system. A Kanban board is a visual depiction of the status of tasks: it shows the actual status of each task and conveys its progress and issues as well. Using a Kanban board, tasks can be distributed among workers and operation targets can be represented to them visually. This paper takes the example of the Kanban board at Brothers Furniture Limited and describes how the Kanban board system was implemented, how the board was designed, and how it was made easily perceivable to less literate or illiterate workers. The Kanban board was designed for the packing section of Brothers Furniture Limited. It was implemented to represent the flow of tasks to the workers and to reduce the time wasted while workers wondered which task to start after finishing one. The Kanban board comprised seven columns, including a column for comments on any problem that occurred while working on the tasks. The board was helpful to the workers because it showed the urgency of the tasks. It was also helpful to the store section, which could see which products, and how many of them, could be delivered to the store at any given time. The Kanban board centralized all the information, which paced up the workflow and minimized idle time. Although many workers were illiterate or less literate, the Kanban board was still intelligible to them because the Kanban cards were colored. Since the significance of colors is conveniently interpretable, the colored cards helped a great deal in that matter. Even when the workers were not told the significance of the colored cards, they could develop a feeling for their meaning, as colors readily trigger the mind to perceive a situation. As a result, the board made clear to the workers what it required them to do, when to do it, and what to do next. The Kanban board reduced excessive time between tasks by setting a day plan for the targeted tasks, and it also reduced time during tasks, as the workers were aware of the forthcoming tasks for the day. Being very specific to the tasks, the Kanban board helped the workers become more focused on their tasks and do their jobs with more precision. As a result, the Kanban board helped achieve an 8.75% increase in productivity over the level before the Kanban board was implemented.
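
As an illustration of the board structure described above, here is a small, hypothetical sketch of a seven-column Kanban board with color-coded cards and a comments field. The column names, colors and tasks are invented for the example and are not taken from the Brothers Furniture board.

```python
from dataclasses import dataclass, field

# Card colors signal urgency, so less literate workers can read the board at a glance.
URGENCY_COLORS = {"low": "green", "normal": "yellow", "high": "red"}

@dataclass
class Card:
    task: str
    urgency: str = "normal"
    comment: str = ""                      # problems noted while working on the task

    @property
    def color(self) -> str:
        return URGENCY_COLORS[self.urgency]

@dataclass
class KanbanBoard:
    # Hypothetical seven columns; the original column names are not given in the paper.
    columns: dict = field(default_factory=lambda: {name: [] for name in [
        "Backlog", "Day Plan", "Cutting", "Assembly", "Packing", "To Store", "Comments"]})

    def add(self, column: str, card: Card) -> None:
        self.columns[column].append(card)

    def move(self, task: str, src: str, dst: str) -> None:
        card = next(c for c in self.columns[src] if c.task == task)
        self.columns[src].remove(card)
        self.columns[dst].append(card)

board = KanbanBoard()
board.add("Day Plan", Card("Pack wardrobe order 12", urgency="high"))
board.move("Pack wardrobe order 12", "Day Plan", "Packing")
print([(c.task, c.color) for c in board.columns["Packing"]])
```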

Keywords: color, Kanban Board, Lean Tool, literacy, packing, productivity

Procedia PDF Downloads 233
295 Constitutional Courts as Positive Legislators: The Role of Indonesian Constitutional Court in Interpreting and Applying the Constitution

Authors: Masnur Marzuki

Abstract:

As in other democratic countries, the Constitutional Court of Indonesia has the role of interpreting and applying the Constitution in order to preserve its supremacy by testing the constitutionality of statutes. With its strong power to enforce and guard the Constitution, the court is now challenged to give people an opportunity to understand their constitutional rights close up. At the same time, the court has built up an enviable reputation among constitutional courts in new democracies for the technical quality of its legitimacy in the legal sense. Since its establishment in 2003, the Constitutional Court of Indonesia has decided more than 190 statutes in judicial review cases. It has been remarkably successful in making a credible start on its work of guarding the Constitution. Unsurprisingly, many argue that the Court has elevated Indonesia's democracy to a whole new level. In accomplishing its role in judicial review, the basic principle that can be identified is that the Constitutional Court must always be subordinated to the Constitution. It is not allowed to invade the field of the legislator. Accordingly, the court has no discretionary political basis on which to create legal norms or provisions that cannot be deduced from the Constitution itself. When interpreting a statute 'in accordance with the constitution', the court recognizes and reasserts that it is strictly forbidden to extend the scope of a legal provision in such a way as to create a general norm not established by the law-maker. This paper aims to identify and assess the latest role of the Indonesian Constitutional Court in interpreting and applying the Constitution. In particular, it questions 1) the role of the Constitutional Court in judicial review; and 2) the role of the court in assisting the legislators in the accomplishment of their functions in order to preserve the supremacy of the Constitution by testing the constitutionality of statutes. Concerning the positive legislator, jurisprudential and judicial-review theories will be drawn upon. The empirical part will include qualitative and comparative research. The main questions to be addressed are: Can the Constitutional Court function as a positive legislator? What are the criteria for constitutional courts acting as positive legislators, and how can this role be accepted? Concerning the subordination of constitutional courts to the Constitution and judicial review, both qualitative and quantitative methods will be used, and differences between the Indonesian and German Constitutional Courts will be observed. Other questions to be addressed are: Can constitutional courts have any discretionary political basis on which to create legal norms or provisions that cannot be deduced from the Constitution itself? Should the Constitutional Court always act as a negative legislator? In practice, however, the Constitutional Court in Indonesia has played the role of a positive legislator, which has created dynamics in Indonesian legal development. In performing the task of reviewing the constitutionality of statutes, the Constitutional Court has created legal norms or provisions that could be deduced from the Constitution itself.

Keywords: constitution, court, law, rights

Procedia PDF Downloads 425
294 Intersubjectivity of Forensic Handwriting Analysis

Authors: Marta Nawrocka

Abstract:

In every legal proceeding in which expert evidence is presented, a major concern is the assessment of the evidential value of expert reports. Judicial institutions, when making decisions, rely heavily on expert reports, because they usually do not possess the 'special knowledge' of particular fields of science, which makes it impossible for them to verify the results presented. In handwriting studies, standards of analysis have been developed. They unify the procedures used by experts in comparing signs and in constructing expert reports. However, the methods used by experts are usually of a qualitative nature. They rely on the application of the expert's knowledge and experience and, in effect, leave a significant margin in the assessment. Moreover, the standards used by experts are still not very precise, and the process of reaching conclusions is poorly understood. These circumstances indicate that expert opinions in the field of handwriting analysis may, for many reasons, not be sufficiently reliable. It is assumed that this state of affairs has its source in a very low level of intersubjectivity of the measuring scales and analysis procedures which constitute elements of this kind of analysis. Intersubjectivity is a feature of cognition which (in relation to methods) indicates the degree of consistency of the results that different people obtain using the same method. The higher the level of intersubjectivity, the more reliable and credible the method can be considered. The aim of the conducted research was to determine the degree of intersubjectivity of the methods used by experts in handwriting analysis. Thirty experts took part in the study, and each of them received two signatures, with varying degrees of readability, for analysis. Their task was to distinguish graphic characteristics in the signature, estimate the evidential value of the characteristics found, and estimate the evidential value of the signature. The obtained results were compared with each other using Krippendorff's alpha statistic, which numerically determines the degree of agreement of the results (assessments) that different people obtain under the same conditions using the same method. Estimating the degree of agreement of the experts' results for each of these tasks made it possible to determine the degree of intersubjectivity of the studied method. The study showed that, during the analysis, the experts identified different signature characteristics and attributed different evidential value to them. In this respect, intersubjectivity turned out to be low. In addition, it turned out that the experts named and described the same characteristics in various ways, and the language used was often inconsistent and imprecise. Thus, significant differences were noted in the language and nomenclature applied. On the other hand, the experts attributed a similar evidential value to the entire signature (the set of characteristics), which indicates that in this respect they were relatively consistent.
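
For readers unfamiliar with the agreement statistic used in this study, the sketch below implements Krippendorff's alpha for nominal ratings via the standard coincidence-matrix formulation. The toy ratings matrix is invented for illustration and does not reproduce the study's data.

```python
import numpy as np

def krippendorff_alpha_nominal(ratings):
    """Krippendorff's alpha for nominal data.

    `ratings` is a raters x units array; np.nan marks a missing rating.
    Units rated by fewer than two raters are ignored.
    """
    ratings = np.asarray(ratings, dtype=float)
    values = np.unique(ratings[~np.isnan(ratings)])
    index = {v: i for i, v in enumerate(values)}
    k = len(values)
    coincidence = np.zeros((k, k))

    for unit in ratings.T:                       # iterate over units
        unit = unit[~np.isnan(unit)]
        m = len(unit)
        if m < 2:
            continue
        for a in range(m):                       # ordered pairs of different raters
            for b in range(m):
                if a != b:
                    coincidence[index[unit[a]], index[unit[b]]] += 1.0 / (m - 1)

    n_c = coincidence.sum(axis=1)
    n = n_c.sum()
    observed_agreement = np.trace(coincidence) / n
    expected_agreement = (n_c * (n_c - 1)).sum() / (n * (n - 1))
    return (observed_agreement - expected_agreement) / (1 - expected_agreement)

# Toy example: 3 experts rating the evidential value of 6 characteristics
# on a nominal scale (1 = low, 2 = medium, 3 = high); purely illustrative.
ratings = np.array([
    [1, 2, 3, 3, 2, 1],
    [1, 2, 3, 2, 2, 1],
    [2, 2, 3, 3, 2, np.nan],
])
print(round(krippendorff_alpha_nominal(ratings), 3))
```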

Keywords: forensic sciences experts, handwriting analysis, inter-rater reliability, reliability of methods

Procedia PDF Downloads 149
293 Evaluation of Health Services after Emergency Decrees in Turkey

Authors: Sengul Celik, Alper Ketenci

Abstract:

Article 56 of the Turkish Constitution addresses health care: everyone has the right to live in a healthy and balanced environment. It is the duty of the state and of citizens to improve the environment, protect environmental health, and prevent environmental pollution. The state ensures that everyone lives in physical and mental health; it organizes the planning and delivery of health services from a single source in order to realize cooperation by increasing savings and efficiency in human and material resources. The state fulfills this task by utilizing and supervising health and social institutions in the public and private sectors. General health insurance may be established by law for the widespread delivery of health services. Access to health care is one of the basic rights of patients. After the coup attempt in July 2016, the Government of Turkey announced a state of emergency and issued numerous emergency decrees. Through these emergency decrees, many people were dismissed from their jobs and lost some of their basic social rights. Violations occurred in social life, and one of the most common observations is discrimination by the government in the health care system. This study aims to document the violation of human rights in the health care system in Turkey experienced by people placed in a discriminated position by an emergency decree. The study is a case study based on nine interviews with people, or relatives of people, who lost their jobs through an emergency decree in Turkey. No personally identifiable information was obtained, for the safety of the individuals, and no questions were asked regarding their identity. The interviews were conducted through internet call applications. The data were analyzed against the requirements of the regular health care system in Turkey. The interviews show that these people or their relatives lost their right to regular health care. They have to pay extra amounts both for clinical services and for medication. The patient's right to quality medical care without prejudice is violated. It was assessed that the people affected by an emergency decree and their relatives are discriminated against by the government and deprived of regular medical care and supervision. Although international legal arrangements and the legal responsibilities of the state are set out in Article 56, they are violated in practice. To prevent such violations, measures should be taken against deprivation in the health care system, especially for people discriminated against by an emergency decree.

Keywords: emergency decree in Turkey, health care, discriminated people, patients' rights

Procedia PDF Downloads 111
292 The Role of Situational Attribution Training in Reducing Automatic In-Group Stereotyping in Females

Authors: Olga Mironiuk, Małgorzata Kossowska

Abstract:

The aim of the present study was to investigate the influence of Situational Attribution Training on reducing automatic in-group stereotyping in females. The experiment was conducted controlling for age and level of prejudice. Ninety female participants were randomly assigned to two conditions: an experimental and a control group (each group was also divided into younger- and older-aged conditions). Participants in the experimental condition were subjected to more extensive training. In the first part of the experiment, the experimental group took part in the first session of Situational Attribution Training while the control group participated in the Grammatical Training Control. In the second part of the research, both groups took part in the Situational Attribution Training (which was the second training session for the experimental group and the first for the control condition). The training procedure was based on descriptions of ambiguous situations which could be explained using situational or dispositional attributions. The participant's task was to choose the situational explanation from two alternatives, of which the second presented an explanation based on traits that were neutral or stereotypically associated with women. Moreover, the experimental group took part in a third training session after a two-day delay, in order to check the persistence of the training effect. The main hypothesis stated that among participants taking part in the more extensive training, automatic in-group stereotyping would be less frequent after the training sessions were completed. The effectiveness of the training was tested by measuring response time and the correctness of answers: longer response times for examples in which one of the two possible answers was based on a stereotype trait, together with higher correctness of answers, were considered proof of the training's effectiveness. As the participants' level of prejudice was controlled (using the Ambivalent Sexism Inventory), it was also assumed that the training effect would be weaker for participants revealing a higher level of prejudice. The obtained results did not confirm the hypothesis based on response time: participants from the experimental group responded faster in situations where one of the possible explanations was based on a stereotype trait. However, an interesting observation was made during the analysis of the correctness of answers: regardless of condition and age group, participants made more mistakes when choosing the situational explanation if the alternative was based on a stereotypical trait associated with the dimension of warmth. What is more, for the experimental group the correctness of answers was higher in the third training session when the alternative to the situational explanation was based on a stereotype trait associated with the dimension of competence. The obtained results partially confirm the effectiveness of the training.

Keywords: female, in-group stereotyping, prejudice, situational attribution training

Procedia PDF Downloads 190
291 The Emergence of Memory at the Nanoscale

Authors: Victor Lopez-Richard, Rafael Schio Wengenroth Silva, Fabian Hartmann

Abstract:

Memcomputing is a computational paradigm that combines information processing and storage on the same physical platform. Key elements of this topic are devices with an inherent memory, such as memristors, memcapacitors, and meminductors. Despite the widespread emergence of memory effects in various solid-state systems, a clear understanding of the basic microscopic mechanisms that trigger them remains a puzzling task. We report basic ingredients of the theory of solid-state transport, intrinsic to a wide range of mechanisms, as sufficient conditions for a memristive response that points to the natural emergence of memory. This emergence should be discernible under an adequate set of driving inputs, as highlighted by our theoretical prediction; general common trends can thus be listed that become the rule and not the exception, with contrasting signatures according to symmetry constraints, either built-in or induced by external factors at the microscopic level. Explicit analytical figures of merit for the memory modulation of the conductance are presented, unveiling concise and accessible correlations between general intrinsic microscopic parameters such as relaxation times, activation energies, and efficiencies (encountered throughout various fields of physics) and external drives: voltage pulses, temperature, illumination, etc. These building blocks of memory can be extended to a vast universe of materials and devices, with combinations of parallel and independent transport channels, providing an efficient and unified physical explanation for a wide class of resistive memory devices that have emerged in recent years. Its simplicity and practicality have also allowed a direct correlation with reported experimental observations, with the potential of pointing out the optimal driving configurations. The main methodological tools used combine three quantum transport approaches – a Drude-like model, the Landauer-Büttiker formalism, and field-effect transistor emulators – with the microscopic characterization of nonequilibrium dynamics. Both qualitative and quantitative agreement with available experimental responses is provided to validate the main hypothesis. This analysis also sheds light on the basic universality of the complex natural impedances of systems out of equilibrium and might help pave the way for new trends in the area of memory formation as well as in its technological applications.
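
As a toy illustration of the kind of memory emergence discussed here, the sketch below integrates a generic memristive model in which an internal state variable with a finite relaxation time modulates the conductance; a sinusoidal drive with a period comparable to the relaxation time then produces the pinched hysteresis loop characteristic of a memristive response. All parameter values are arbitrary assumptions and are not taken from the paper.

```python
import numpy as np

# Generic memristive system (illustrative parameters, not from the paper):
#   I = G(x) * V,   dx/dt = -x / tau + k * V
# x is an internal state (e.g. trapped charge) with relaxation time tau.
tau, k = 1.0e-3, 5.0e2           # relaxation time (s), drive coupling
G0, a = 1.0e-3, 0.8              # base conductance (S), modulation strength

f = 200.0                        # drive frequency (Hz), comparable to 1/tau
dt = 1.0e-6
t = np.arange(0.0, 3.0 / f, dt)  # three drive periods
V = np.sin(2 * np.pi * f * t)    # sinusoidal voltage drive (V)

x = np.zeros_like(t)
for i in range(len(t) - 1):      # explicit Euler integration of the state
    x[i + 1] = x[i] + dt * (-x[i] / tau + k * V[i])

G = G0 * (1.0 + a * np.tanh(x))  # bounded conductance modulation
I = G * V

# Memory shows up as different conductance at the same voltage on the rising
# and falling parts of the drive (a pinched hysteresis loop in the I-V plane);
# the loop closes when the drive is much slower or much faster than 1/tau.
dVdt = np.gradient(V, dt)
near = np.isclose(V, 0.5, atol=2e-3)
print("G at V=0.5 V, rising :", G[near & (dVdt > 0)].mean())
print("G at V=0.5 V, falling:", G[near & (dVdt < 0)].mean())
```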

Keywords: memories, memdevices, memristors, nonequilibrium states

Procedia PDF Downloads 99
290 A Service-Learning Experience in the Subject of Adult Nursing

Authors: Eva de Mingo-Fernández, Lourdes Rubio Rico, Carmen Ortega-Segura, Montserrat Querol-García, Raúl González-Jauregui

Abstract:

Today, one of the great challenges that the university faces is to get closer to society and transfer knowledge. The competency-based training approach favours a continuous interaction between practice and theory, which is why it is essential to establish real experiences with reflection and debate and to contrast them with personal and professional knowledge. Service-learning (SL) consists of the integration of academic learning with service in the community, which enables teachers to transfer knowledge with social value and students to be trained on the basis of experience of real needs and problems, with the aim of solving them. SL combines research, teaching, and the transfer of knowledge with social value with the real social needs and problems of a community. Goal: The objective of this study was to design, implement, and evaluate a service-learning program in the subject of adult nursing for second-year nursing students. Methodology: After establishing collaboration with eight associations of people with different pathologies, the students were divided into eight groups, and each group was assigned an association. The groups were made up of 10-12 students. The participating associations were for the following conditions: diabetes, multiple sclerosis, cancer, inflammatory bowel disease, fibromyalgia, and heart, lung, and kidney diseases. The methodological design, consisting of five activities, was then applied. Three activities address personal and individual reflections: the student initially describes what they think it is like to live with a certain disease; they then express the reflections resulting from an interview conducted by peers, in person or online, with a person living with this particular condition; and, after sharing the results of their reflections with the rest of the group, they make an oral presentation in which they present their findings to the other students. This is followed by a service task in which the students collaborate in different activities of the association, and finally a third individual reflection is carried out in which the students express their experience of the collaboration. The evaluation of this activity is carried out by means of a rubric for both the reflections and the presentation. It should be noted that the oral presentation is evaluated both by the rest of the classmates and by the teachers. Results: The students rated the activity 7.80/10, commenting that the experience is positive and brings them closer to the reality of the people and the field.

Keywords: academic learning integration, knowledge transfer, service-learning, teaching methodology

Procedia PDF Downloads 72
289 The Impact of the Lexical Quality Hypothesis and the Self-Teaching Hypothesis on Reading Ability

Authors: Anastasios Ntousas

Abstract:

The purpose of the following paper is to analyze the relationship between the lexical quality hypothesis and the self-teaching hypothesis and their impact on reading ability. The following questions emerged: is there a correlation between the effective reading experience that the lexical quality hypothesis proposes and the self-teaching hypothesis; would the ability to read by analogy facilitate and create stable, synchronized representations across the four word attributes; and would morphological knowledge of words be a possible extension of the self-teaching hypothesis? The lexical quality hypothesis speculates that words include four representational attributes: phonology, orthography, morpho-syntax, and meaning. These four representations work together to make word reading an effective task. A possible lack of knowledge in one of the representations might disrupt reading comprehension. The degree to which the four features connect together yields high or low lexical quality representations of words: when the four representational attributes connect together effectively, readers have a high lexical quality of words; when they have hardly any strong connection with each other, readers have a low lexical quality of words. Furthermore, the self-teaching hypothesis proposes that phonological recoding enables printed word learning. Phonological knowledge and reading experience facilitate the acquisition and consolidation of specific word orthographies. Reading experience is related to strong reading comprehension: the more contact readers have with texts, the better readers they become. Therefore, their phonological knowledge, as the self-teaching hypothesis suggests, might have a facilitative impact on the consolidation of the orthographic, morpho-syntactic and meaning representations of unknown words. The phonology of known words might effectively activate the rest of the representational features of words. Readers use their existing phonological knowledge of similarly spelt words to pronounce unknown words; a possible transfer of this ability to read by analogy may appear with readers' morphological knowledge. Morphemes might facilitate readers' ability to pronounce and spell new unknown words to which they do not have lexical access. Readers will encounter unknown words with similar phonemes and morphemes but with different meanings. Knowledge of phonology and morphology might support and increase reading comprehension. The paper is based on a careful selection and discussion of theoretical material and a comparison of the two existing theories. Evidence shows that morphological knowledge improves reading ability and comprehension, so morphological knowledge might be a possible extension of the self-teaching hypothesis; the fundamental skill of reading by analogy can be applied to the consolidation of word-specific orthographies via readers' morphological knowledge; and there is a positive correlation between effective reading experience and the self-teaching hypothesis.

Keywords: morphology, orthography, reading ability, reading comprehension

Procedia PDF Downloads 129
288 The Use of Artificial Intelligence in Digital Forensics and Incident Response in a Constrained Environment

Authors: Dipo Dunsin, Mohamed C. Ghanem, Karim Ouazzane

Abstract:

Digital investigators often have a hard time spotting evidence in digital information. It has become hard to determine which source of proof relates to a specific investigation. A growing concern is that the various processes, technologies, and specific procedures used in the digital investigation are not keeping up with criminal developments; criminals are taking advantage of these weaknesses to commit further crimes. In digital forensics investigations, artificial intelligence (AI) is invaluable in identifying crime. It has been observed that algorithms based on AI are highly effective in detecting risks, preventing criminal activity, and forecasting illegal activity. Providing objective data and conducting an assessment is the goal of digital forensics and digital investigation, which assists in developing a plausible theory that can be presented as evidence in court. Researchers and other authorities have used the available data as evidence in court to convict persons. This research paper aims to develop a multiagent framework for digital investigations using specific intelligent software agents (ISAs). The agents communicate to address particular tasks jointly and keep the same objectives in mind during each task. The rules and knowledge contained within each agent depend on the investigation type. A criminal investigation is classified quickly and efficiently using the case-based reasoning (CBR) technique. MADIK is implemented using the Java Agent Development Framework, with Eclipse, a Postgres repository, and a rule engine for agent reasoning. The proposed framework was tested using the Lone Wolf image files and datasets. Experiments were conducted using various sets of ISAs and VMs. There was a significant reduction in the time taken for the Hash Set Agent to execute. As a result of loading the agents, 5 percent of the time was lost, as the File Path Agent recommended deleting 1,510 items, while the Timeline Agent found multiple executable files. In comparison, the integrity check carried out on the Lone Wolf image file using a digital forensic toolkit took approximately 48 minutes (2,880 s), whereas the MADIK framework accomplished this in 16 minutes (960 s). The framework is integrated with Python, allowing for further integration of other digital forensic tools, such as AccessData Forensic Toolkit (FTK), Wireshark, Volatility, and Scapy.
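
To give a flavour of what one of the simpler agents does, the sketch below implements a stand-alone "hash set agent" that hashes every file in an evidence directory and flags matches against a set of known hashes. It is a hypothetical illustration in Python, not the JADE-based MADIK implementation, and the directory path and hash list are invented.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large evidence files fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def hash_set_agent(evidence_dir: str, known_hashes: set) -> list:
    """Walk the evidence directory and report files whose SHA-256 digest
    appears in the known-hash set (e.g. known-malware or contraband lists)."""
    findings = []
    for path in Path(evidence_dir).rglob("*"):
        if path.is_file():
            digest = sha256_of(path)
            if digest in known_hashes:
                findings.append({"file": str(path), "sha256": digest})
    return findings

if __name__ == "__main__":
    # Hypothetical inputs: a mounted image directory and a tiny hash list
    # (the hash shown is the well-known SHA-256 of an empty file).
    known = {"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"}
    for hit in hash_set_agent("/mnt/lone_wolf_image", known):
        print("match:", hit["file"], hit["sha256"])
```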

Keywords: artificial intelligence, computer science, criminal investigation, digital forensics

Procedia PDF Downloads 213
287 Model-Based Diagnostics of Multiple Tooth Cracks in Spur Gears

Authors: Ahmed Saeed Mohamed, Sadok Sassi, Mohammad Roshun Paurobally

Abstract:

Gears are important machine components that are widely used to transmit power and change speed in many rotating machines. Any breakdown of these vital components may cause severe disturbance to production and incur heavy financial losses. One of the most common causes of gear failure is the tooth fatigue crack. Early detection of tooth cracks is still a challenging task for engineers and maintenance personnel. So far, different approaches to analyzing the vibration behavior of gears have been tried, based on theoretical developments, numerical simulations, or experimental investigations. The objective of this study was to develop a numerical model that can be used to simulate the effect of tooth cracks on the resulting vibrations and hence permit early fault detection for gear transmission systems. Unlike the majority of published papers, where only a single crack has been considered, this work is more realistic, since it incorporates the possibility of multiple simultaneous cracks with different lengths. As cracks significantly alter the gear mesh stiffness, we performed a finite element analysis using SolidWorks software to determine the stiffness variation with respect to the angular position for different combinations of crack lengths. A simplified six-degrees-of-freedom non-linear lumped parameter model of a one-stage gear system is proposed to study the vibration of a pair of spur gears, with and without tooth cracks. The model takes several physical properties into account, including the variable gear mesh stiffness and the effect of friction, but ignores lubrication. The vibration simulation results for the gearbox were obtained via Matlab and Simulink and were found to be consistent with previously published work. The effect of a single crack at different severity levels was studied; the resulting changes in the total mesh stiffness and the vibration response were observed and found to be very similar to what has been reported in previous studies. The effect of the crack length on various statistical time-domain parameters was considered, and the results show that these parameters are not equally sensitive to the crack percentage. Finally, multiple cracks were introduced at different locations, and the vibration response and the statistical parameters were obtained.
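
The modelling idea can be illustrated in a deliberately reduced form: the sketch below integrates a single-degree-of-freedom gear-mesh model whose periodic mesh stiffness is locally reduced while a cracked tooth is engaged, and compares simple time-domain statistics of the healthy and cracked responses. The stiffness values, the crack-induced reduction and the other parameters are illustrative assumptions, not the six-degrees-of-freedom model or the FE-derived stiffness of this study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Reduced 1-DOF model along the line of action:
#   m * x'' + c * x' + k(t) * x = F
# k(t) is the periodic gear-mesh stiffness; a crack lowers it during the
# fraction of each shaft revolution when the cracked tooth is engaged.
m, c, F = 0.5, 80.0, 200.0                 # kg, N*s/m, N (illustrative)
k_mean, k_ripple = 2.0e8, 0.3e8            # N/m
f_mesh = 600.0                             # gear-mesh frequency (Hz)
crack_drop = 0.25                          # 25% stiffness loss at the cracked tooth
crack_window = 1.0 / 20.0                  # cracked tooth engaged 1/20 of a revolution
f_shaft = f_mesh / 20.0                    # 20 teeth assumed on the cracked gear

def mesh_stiffness(t, cracked):
    k = k_mean + k_ripple * np.cos(2 * np.pi * f_mesh * t)
    phase = (t * f_shaft) % 1.0            # position within one shaft revolution
    if cracked and phase < crack_window:
        k *= (1.0 - crack_drop)
    return k

def rhs(t, y, cracked):
    x, v = y
    return [v, (F - c * v - mesh_stiffness(t, cracked) * x) / m]

t_eval = np.linspace(0, 0.2, 20000)
for cracked in (False, True):
    sol = solve_ivp(rhs, (0, 0.2), [0.0, 0.0], args=(cracked,),
                    t_eval=t_eval, max_step=1e-5)
    x = sol.y[0][len(t_eval) // 2:]        # discard the start-up transient
    kurt = np.mean((x - x.mean()) ** 4) / np.var(x) ** 2
    print("cracked" if cracked else "healthy",
          " std of displacement:", np.std(x), " kurtosis:", kurt)
```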

Keywords: dynamic simulation, gear mesh stiffness, simultaneous tooth cracks, spur gear, vibration-based fault detection

Procedia PDF Downloads 212
286 The Effect of Seated Distance on Muscle Activation and Joint Kinematics during Seated Strengthening in Patients with Stroke with Extensor Synergy Pattern in the Lower Limbs

Authors: Y. H. Chen, P. Y. Chiang, T. Sugiarto, I. Karsuna, Y. J. Lin, C. C. Chang, W. C. Hsu

Abstract:

Task-specific training with intense practice of functional tasks has been emphasized in approaches to the motor rehabilitation of patients with hemiplegic stroke. Although reciprocal actions may increase the demands on motor control during seated stepping exercise, motor control is not explicitly trained when the emphasis and instruction focus on traditional strengthening. Apart from cycling and treadmill training, various forms of seated exercisers are becoming available for lower extremity exercise. The reported benefits of seated exercisers have focused on effects on the cardiopulmonary system. Thus, the aim of the current study was to investigate the effect of seated distance on muscle activation during seated strengthening in patients with stroke with an extensor synergy pattern in the lower extremities. Electrodes were placed on the surface of the lower limb muscles, including the rectus femoris (RF), vastus lateralis (VL), biceps femoris (BF) and gastrocnemius (GT) of both sides. Maximal voluntary contractions (MVC) of the muscles were obtained to normalize the EMG amplitude obtained during the dynamic trials; the analog raw data were digitized at a sampling frequency of 2000 Hz, fully rectified, and linear-enveloped. The movement cycle was separated into two phases: pushing (PP) and return (RP). Integrated EMG (iEMG) was then used to quantify the level of activation during each phase. Subjects performed strengthening with moderate resistance at a speed of 60 rpm at two different seated distances, short (D1) and long (D2). The results showed greater iEMG in RF and smaller iEMG in VL and BF, with a clearly increased range of hip flexion, in the D1 condition. In contrast, in the D2 condition no significant involvement of RF was found during PP, while a greater level of muscular activation in VL and BF was found during RP. In addition, greater hip internal rotation was observed in the D2 condition. In patients with stroke whose abnormal tone is revealed by extensor synergy in the lower extremities, a shorter seated distance is suggested to facilitate hip flexor muscle activation while avoiding the induction of hyper-extensor tone, which may prevent a smooth repetitive motion. Repetitive contraction exercise of the hip flexors may be helpful for further gait training, as it may assist hip flexion during the swing phase of walking.
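
The EMG processing chain described above (full-wave rectification, linear envelope, normalisation to MVC, and phase-wise integration) can be summarised in a short sketch. The filter cut-off, the synthetic signals and the phase boundaries are illustrative assumptions rather than the study's exact settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 2000.0  # sampling frequency (Hz), as in the study

def linear_envelope(emg, fs=FS, cutoff=6.0):
    """Full-wave rectify and low-pass filter (zero-lag Butterworth) the raw EMG."""
    b, a = butter(4, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, np.abs(emg))

def iemg_percent_mvc(emg, mvc_envelope_peak, phase_mask, fs=FS):
    """Integrate the %MVC-normalised envelope over the samples in one phase."""
    env = linear_envelope(emg) / mvc_envelope_peak * 100.0   # %MVC
    return np.trapz(env[phase_mask], dx=1.0 / fs)            # %MVC * s

# Synthetic one-cycle example (illustrative): stronger EMG burst during pushing.
t = np.arange(0, 2.0, 1.0 / FS)
rng = np.random.default_rng(0)
emg = rng.normal(0, 0.05, t.size)
emg[t < 1.0] += rng.normal(0, 0.4, (t < 1.0).sum())

mvc_peak = linear_envelope(rng.normal(0, 1.0, t.size)).max()  # stand-in MVC trial
pp_mask, rp_mask = t < 1.0, t >= 1.0                          # pushing vs return phase
print("iEMG PP:", iemg_percent_mvc(emg, mvc_peak, pp_mask))
print("iEMG RP:", iemg_percent_mvc(emg, mvc_peak, rp_mask))
```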

Keywords: seated strengthening, patients with stroke, electromyography, synergy pattern

Procedia PDF Downloads 215
285 A Study Investigating Word Association Behaviour in People with Acquired Language and Communication Disorders

Authors: Angela Maria Fenu

Abstract:

The aim of this study was to better characterize the nature of word association responses in people with aphasia. The participants selected for the experimental group were four individuals with mild Broca's aphasia. The control group consisted of 51 cognitively intact age- and gender-matched individuals. The participants were asked to perform a word association task in which they had to say the first word they thought of when hearing each cue. The cue words (n=16) were Italian translations of the set of English cue words from a published study. The participants in the experimental group were administered the word association test every two weeks for a period of two months, during which they received speech-language therapy. A combination of analytical approaches was used to measure the data. To analyse the different patterns of word association responses in the two groups, the nature of the relationship between the cue and the response was examined: responses were divided into five categories of association. To investigate the similarity between aphasic and non-aphasic subjects, the stereotypy of responses was examined. While certain stimulus words (nouns, adjectives) elicited responses from Broca's aphasics that tended to resemble those made by non-aphasic subjects, others (adverbs, verbs) tended to elicit responses different from those given by normal subjects. This suggests that some mechanisms underlying certain types of associations are degraded in aphasic individuals, while others display little evidence of disruption. The high number of paradigmatic associations given in response to a noun or an adjective might imply that the mechanisms, largely semantic, underlying paradigmatic associations are relatively preserved in Broca's aphasia, but it might also mean that some words are more easily processed depending on their grammatical class (nouns, adjectives). The most significant variation was noticed when the grammatical class of the cue word was an adverb. Unlike the normal individuals, the experimental subjects gave the most idiosyncratic associations, which are often produced when the attempt to give a paradigmatic response fails. In turn, the failure to retrieve paradigmatic responses when the cue is an adverb might suggest that Broca's aphasics are more sensitive to this grammatical class. The findings from this study suggest that research on word associations in people with aphasia can yield important data concerning the specific lexical retrieval impairments that characterize the different types of aphasia and the various treatments that might positively influence the kinds of word association responses affected by language disruption.

Keywords: aphasia therapy, clinical linguistics, word-association behaviour, mental lexicon

Procedia PDF Downloads 90
284 Analysis of the Statistical Characterization of Significant Wave Data Exceedances for Designing Offshore Structures

Authors: Rui Teixeira, Alan O’Connor, Maria Nogal

Abstract:

The statistical theory of extreme events is a topic of growing interest in all fields of science and engineering. The economic and environmental changes currently experienced by the world have emphasized the importance of dealing with extreme occurrences with improved accuracy. When it comes to the design of offshore structures, particularly offshore wind turbines, efficiently characterizing extreme events is of major relevance. Extreme events are commonly characterized by extreme value theory. As an alternative, accurate modeling of the tails of statistical distributions and characterization of low-occurrence events can be achieved by applying the Peak-Over-Threshold (POT) methodology. The POT methodology allows for a more refined fit of the statistical distribution by truncating the data at a minimum value given by a predefined threshold u. For mathematically approximating the tail of the empirical statistical distribution, the Generalised Pareto distribution is widely used, although in the case of exceedances of significant wave data (H_s) the two-parameter Weibull and the Exponential distribution, the latter being a specific case of the Generalised Pareto distribution, are frequently used as alternatives. The Generalised Pareto, despite the existence of practical cases where it is applied, is not completely recognized as the adequate solution for modeling exceedances over a certain threshold u, and references that treat the Generalised Pareto distribution as a secondary solution in the case of significant wave data can be found in the literature. In this framework, the current study tackles the discussion of the application of statistical models to characterize exceedances of wave data. Comparisons of the Generalised Pareto, the two-parameter Weibull and the Exponential distribution are presented for different values of the threshold u. Real wave data obtained from four buoys along the Irish coast were used in the comparative analysis. Results show that the application of statistical distributions to characterize significant wave data needs to be addressed carefully: in each particular case, one of the statistical models mentioned fits the data better than the others, and different results are obtained depending on the value of the threshold u. Other variables of the fit, such as the number of points and the estimation of the model parameters, were analyzed and the respective conclusions drawn. Some guidelines on the application of the POT method are presented. Modeling the tail of the distributions proves to be, for the present case, a highly non-linear task and, given its growing importance, should be addressed carefully for an efficient estimation of very-low-occurrence events.
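
A minimal version of the comparison described above can be sketched as follows: exceedances of H_s over a threshold u are extracted and fitted with the Generalised Pareto, two-parameter Weibull and Exponential distributions, which can then be compared (here via log-likelihood). The synthetic H_s series and the threshold choice are placeholders for the buoy data.

```python
import numpy as np
from scipy import stats

# Placeholder significant wave height series; a real analysis would use buoy records.
rng = np.random.default_rng(42)
hs = stats.weibull_min.rvs(1.5, scale=2.0, size=20000, random_state=rng)

u = np.quantile(hs, 0.95)              # POT threshold (placeholder choice)
exceedances = hs[hs > u] - u           # peaks over threshold, shifted to start at 0

candidates = {
    "Generalised Pareto": stats.genpareto,
    "2-parameter Weibull": stats.weibull_min,
    "Exponential": stats.expon,
}
for name, dist in candidates.items():
    params = dist.fit(exceedances, floc=0)        # location fixed at the threshold
    loglik = np.sum(dist.logpdf(exceedances, *params))
    print(f"{name:20s} params={np.round(params, 3)}  log-likelihood={loglik:.1f}")

# Example tail estimate from the GPD fit: level exceeded by 1% of the exceedances.
gpd = stats.genpareto.fit(exceedances, floc=0)
print("GPD 99th-percentile exceedance + u:", u + stats.genpareto.ppf(0.99, *gpd))
```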

Keywords: extreme events, offshore structures, peak-over-threshold, significant wave data

Procedia PDF Downloads 274
283 A Geometric Based Hybrid Approach for Facial Feature Localization

Authors: Priya Saha, Sourav Dey Roy Jr., Debotosh Bhattacharjee, Mita Nasipuri, Barin Kumar De, Mrinal Kanti Bhowmik

Abstract:

Biometric face recognition technology (FRT) has gained a lot of attention due to its extensive variety of applications from both security and non-security perspectives. It has come into view as a secure solution for the identification and verification of personal identity. Although other biometric methods such as fingerprint and iris scans are available, FRT is regarded as an efficient technology because of its user-friendliness and contact-free operation. Accurate facial feature localization plays an important role in many facial analysis applications, including biometrics and emotion recognition, but certain factors make facial feature localization a challenging task. On the human face, expressions arise from subtle movements of facial muscles and are influenced by internal emotional states. These non-rigid facial movements cause noticeable alterations in the locations and usual shapes of facial landmarks and sometimes create occlusions in facial feature areas, making face recognition a difficult problem. The paper proposes a new hybrid technique for automatic landmark detection in both neutral and expressive frontal and near-frontal face images. The method uses thresholding, sequential searching and other image processing techniques for locating the landmark points on the face. In addition, Graphical User Interface (GUI) based software is designed that can automatically detect 16 landmark points around the eyes, nose and mouth that are most affected by changes in facial muscles. The proposed system has been tested on the widely used JAFFE and Cohn-Kanade databases. The system is also tested on the DeitY-TU face database, which was created in the Biometrics Laboratory of Tripura University under a research project funded by the Department of Electronics & Information Technology, Govt. of India. The performance of the proposed method has been evaluated in terms of error measure and accuracy. The method achieves a detection rate of 98.82% on the JAFFE database, 91.27% on the Cohn-Kanade database and 93.05% on the DeitY-TU database. We have also carried out a comparative study of the proposed method against techniques developed by other researchers. In future work, the located features will be used for AU detection in emotion-oriented systems.
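
The authors' pipeline is not reproduced here; the sketch below only illustrates, under assumed settings, the kind of thresholding and sequential search step the abstract describes, applied to a hypothetical grayscale face image with OpenCV. The file name and all tuning constants are placeholders.

```python
# Hedged sketch of a thresholding-based search for dark facial regions (eyes, brows, mouth),
# loosely following the "thresholding + sequential searching" idea described above.
# "face.jpg" and the constants below are hypothetical, not the authors' settings.
import cv2

img = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)
if img is None:
    raise FileNotFoundError("provide a frontal face image as face.jpg")

blur = cv2.GaussianBlur(img, (5, 5), 0)
# Otsu threshold, inverted so that dark regions become foreground.
_, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
candidates = []
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    if 50 < cv2.contourArea(c) < 0.05 * img.size:    # discard noise and very large blobs
        candidates.append((x + w // 2, y + h // 2))  # region centre as a crude landmark

# Sequential search: order candidate centres top-to-bottom (eyes before nose before mouth).
candidates.sort(key=lambda p: p[1])
print(candidates)
```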

Keywords: biometrics, face recognition, facial landmarks, image processing

Procedia PDF Downloads 413
282 Modeling Spatio-Temporal Variation in Rainfall Using a Hierarchical Bayesian Regression Model

Authors: Sabyasachi Mukhopadhyay, Joseph Ogutu, Gundula Bartzke, Hans-Peter Piepho

Abstract:

Rainfall is a critical component of climate governing vegetation growth and production, forage availability and quality for herbivores. However, reliable rainfall measurements are not always available, making it necessary to predict rainfall values for particular locations through time. Predicting rainfall in space and time can be a complex and challenging task, especially where the rain gauge network is sparse and measurements are not recorded consistently for all rain gauges, leading to many missing values. Here, we develop a flexible Bayesian model for predicting rainfall in space and time and apply it to Narok County, situated in southwestern Kenya, using data collected at 23 rain gauges from 1965 to 2015. Narok County encompasses the Maasai Mara ecosystem, the northernmost section of the Mara-Serengeti ecosystem, famous for its diverse and abundant large mammal populations and the spectacular migration of enormous herds of wildebeest, zebra and Thomson's gazelle. The model incorporates geographical and meteorological predictor variables, including elevation, distance to Lake Victoria and minimum temperature. We assess the efficiency of the model by comparing it empirically with the established Gaussian process, Kriging, simple linear and Bayesian linear models. We use the model to predict total monthly rainfall and its standard error for all 5 × 5 km grid cells in Narok County. Using the Monte Carlo integration method, we estimate seasonal and annual rainfall and their standard errors for 29 sub-regions in Narok. Finally, we use the predicted rainfall to predict large herbivore biomass in the Maasai Mara ecosystem on a 5 × 5 km grid for both the wet and dry seasons. We show that herbivore biomass increases with rainfall in both seasons. The model can handle data from a sparse network of observations with many missing values and performs at least as well as or better than four established and widely used models on the Narok data set. The model produces rainfall predictions consistent with expectation and in good agreement with the blended station and satellite rainfall values. The predictions are precise enough for most practical purposes. The model is very general and applicable to other variables besides rainfall.
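
The hierarchical spatio-temporal model itself is not reproduced here; the sketch below illustrates the basic Bayesian regression idea on synthetic data, using the predictors named in the abstract, a zero-mean Gaussian prior and an assumed known noise variance.

```python
# Hedged sketch: conjugate Bayesian linear regression of monthly rainfall on elevation,
# distance to Lake Victoria and minimum temperature. This is NOT the paper's hierarchical
# spatio-temporal model; the data are synthetic and the prior settings are assumptions.
import numpy as np

rng = np.random.default_rng(42)
n = 200
X = np.column_stack([
    np.ones(n),                      # intercept
    rng.uniform(1500, 2500, n),      # elevation (m)
    rng.uniform(10, 200, n),         # distance to Lake Victoria (km)
    rng.uniform(8, 18, n),           # minimum temperature (deg C)
])
true_beta = np.array([50.0, 0.02, -0.1, 1.5])
y = X @ true_beta + rng.normal(0, 10, n)           # synthetic monthly rainfall (mm)

sigma2 = 10.0 ** 2                                 # assumed known noise variance
tau2 = 100.0 ** 2                                  # prior variance of the coefficients
prior_prec = np.eye(X.shape[1]) / tau2

# Posterior beta ~ N(mu_post, Sigma_post) under the zero-mean Gaussian prior.
Sigma_post = np.linalg.inv(prior_prec + X.T @ X / sigma2)
mu_post = Sigma_post @ (X.T @ y / sigma2)

# Predictive mean and standard error for a new (hypothetical) grid cell.
x_new = np.array([1.0, 1900.0, 60.0, 12.0])
pred_mean = x_new @ mu_post
pred_sd = np.sqrt(x_new @ Sigma_post @ x_new + sigma2)
print(f"posterior coefficients: {np.round(mu_post, 3)}")
print(f"predicted rainfall: {pred_mean:.1f} +/- {pred_sd:.1f} mm")
```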

Keywords: non-stationary covariance function, gaussian process, ungulate biomass, MCMC, maasai mara ecosystem

Procedia PDF Downloads 297
281 Implementing Peer Mediated Interventions with Visual Supports for Social Skills Development in a School-Based Work Setting with Secondary Students with Autism

Authors: Karen Eastman

Abstract:

More youths and young adults with autism spectrum disorder (ASD) have been entering the workforce in recent years. Historically, students with ASD struggle after leaving high school and experience lower rates of employment, with social skills continuing to be the most problematic area of concern. Special education teachers may find it challenging to identify effective combinations of evidence-based practices (EBPs) and supports to best guide these students. One EBP, Peer Mediated Instruction and Intervention (PMII), has been well documented in the literature as being effective for younger students with autism but has not been researched as much with older students and adults, particularly in work settings. A need to combine PMII with other EBPs has been identified as a way to achieve a greater positive impact than any single practice alone. A multiple baseline across skills design was used in this research project with two participants in different settings. PMII was combined with Visual Supports, with typical peers being trained in both practices. PMII is an evidence-based practice used to address social concerns by training peers without disabilities in how they can provide feedback to and support the student with ASD in social interactions in structured settings. The peers without disabilities were the instructors, while the adults facilitated the social situations and provided support to both the peers and the students with ASD when needed. Because many individuals with ASD learn best with visual input, rather than using only the spoken word (verbal directions and feedback), Visual Supports were used in conjunction with PMII. Visual Supports can include written words, pictures, symbols, videos, or objects. In this project, the Visual Supports used were written social scripts, videos, Stop and Think signs, written reminder cards, a school map, and a pictorial task analysis of work tasks. Variables that may have affected intervention outcomes in this project included attendance at school and school-based work settings for both the students with ASD and the peers without disabilities, as well as behaviors and responses from others in the settings. Qualitative data were also collected from observations and surveys with peers about the process and their role. Data indicated that the students with ASD responded more positively to redirection and support from their peers than from teachers and staff and showed an increase in positive interactions with others. Those surveyed indicated a positive attitude toward and response to the use of peer interventions with visual supports.

Keywords: autism, social skills, vocational training, peer interventions

Procedia PDF Downloads 43
280 Term Creation in Specialized Fields: An Evaluation of Shona Phonetics and Phonology Terminology at Great Zimbabwe University

Authors: Peniah Mabaso-Shamano

Abstract:

The paper evaluates Shona terms that were created to teach Phonetics and Phonology courses at Great Zimbabwe University (GZU). The phonetics and phonology terms to be discussed in this paper were created using different processes and strategies such as translation, borrowing, neologising, compounding, transliteration and circumlocution, among many others. Most phonetics and phonology terms are alien to Shona and, as a result, there are no suitable Shona equivalents. The lecturers and students for these courses have a mammoth task of creating terminology for the different modules offered in Shona and other Zimbabwean indigenous languages. Most linguistic reference books are written in English. As such, lecturers and students translate information from English to Shona, a task which is proving to be very difficult for them. A term creation workshop was held at GZU to try to address the problem of the lack of terminology in indigenous languages. Different indigenous language practitioners from different tertiary institutions convened for a two-day workshop at GZU. Due to the 'specialized' nature of phonetics and phonology, it was very difficult to come up with 'proper' indigenous terms. The researcher will consult lecturers at tertiary institutions who teach linguistics courses, as well as linguistics students, to get their views on the created terms. The people consulted will not be the ones who took part in the term creation workshop held at GZU. The selected participants will be asked to evaluate and back-translate some of the terms. In instances where they feel the terms created are not suitable or user-friendly, they will be asked to suggest other terms. Since the researcher is also a linguistics lecturer, her observations and views will be important. From her experience in using some of the terms in teaching phonetics and phonology courses to undergraduate students, the researcher noted that most of the terms created have shortcomings since they are not user-friendly. These shortcomings include terms that are longer than the English originals, as some terms are translated into Shona through a whole statement. Most of these terms are neologisms, compound neologisms, transliterations, circumlocutions, and blends. The paper will show that there is overuse of transliterated terms due to the lack of Shona equivalents for English terms. Most single English words were translated into compound neologisms or phrases after attempts to reduce them to one-word terms failed. In other instances, circumlocution led to terms longer than the originals and, as a result, the terms are not user-friendly. The paper will discuss and evaluate the different phonetics and phonology terms created and the different strategies and processes used in creating them.

Keywords: blending, circumlocution, term creation, translation

Procedia PDF Downloads 148
279 An Observational Study Assessing the Baseline Communication Behaviors among Healthcare Professionals in an Inpatient Setting in Singapore

Authors: Pin Yu Chen, Puay Chuan Lee, Yu Jen Loo, Ju Xia Zhang, Deborah Teo, Jack Wei Chieh Tan, Biauw Chi Ong

Abstract:

Background: Synchronous communication, such as telephone calls, remains the standard communication method between nurses and other healthcare professionals in Singapore public hospitals despite advances in asynchronous technological platforms, such as instant messaging. Although miscommunication is one of the most common causes of lapses in patient care, there is a scarcity of research characterizing baseline inter-professional healthcare communications in a hospital setting due to logistic difficulties. Objective: This study aims to characterize the frequency and patterns of communication behaviours among healthcare professionals. Methods: The one-week observational study was conducted from Monday through Sunday at the nursing station of a cardiovascular medicine and cardiothoracic surgery inpatient ward at the National Heart Centre Singapore. Subjects were shadowed by two physicians for sixteen hours, corresponding to consecutive morning and afternoon nursing shifts. Communications were logged and characterized by type, duration, caller, and recipient. Results: A total of 1,023 communication events involving the attempted use of the common telephones at the nursing station were logged over a period of one week, corresponding to a frequency of one event every 5.45 minutes (SD 6.98, range 0-56 minutes). Nurses initiated the highest proportion of outbound calls (38.7%) via the nursing station common phone. A total of 179 face-to-face communications (17.5%), 362 inbound calls (35.39%), 481 outbound calls (47.02%), and 1 emergency alert (0.10%) were captured. The average response time for task-oriented communications was 159 minutes (SD 387.6, range 86-231). Approximately 1 in 3 communications captured aimed to clarify patient-related information. The total duration of time spent on synchronous communication events over one week, calculated from total inbound and outbound calls, was estimated at 7 hours. Conclusion: The results of our study show that a significant amount of time is spent on inter-professional healthcare communications via synchronous channels. Integration of patient-related information and use of asynchronous communication channels may help to reduce the redundancy of communications and clarifications. Future studies should explore the use of asynchronous mobile platforms to address the inefficiencies observed in healthcare communications.

Keywords: healthcare communication, healthcare management, nursing, qualitative observational study

Procedia PDF Downloads 211
278 Relativity in Toddlers' Understanding of the Physical World as Key to Misconceptions in the Science Classroom

Authors: Michael Hast

Abstract:

Within their first year, infants can differentiate between objects based on their weight. By at least 5 years of age, children hold consistent weight-related misconceptions about the physical world, such as that heavy things fall faster than lighter ones because of their weight. Such misconceptions are seen as a challenge for science education since they are often highly resistant to change through instruction. Understanding the time point of emergence of such ideas could, therefore, be crucial for early science pedagogy. The paper thus discusses two studies that jointly address the issue by examining young children’s search behaviour in hidden displacement tasks under consideration of relative object weight. In both studies, children were tested with a heavy or a light ball, and they either had information about one of the balls only or about both. In Study 1, 88 toddlers aged 2 to 3½ years watched a ball being dropped into a curved tube and were then allowed to search for the ball in three locations – one straight beneath the tube entrance, one where the curved tube led to, and one that corresponded to neither of the previous outcomes. Success and failure at the task were not impacted by the weight of the balls alone in any particular way. However, from around 3 years onwards, relative lightness, gained through having tactile experience of both balls beforehand, enhanced search success. Conversely, relative heaviness increased search errors such that children increasingly searched in the location immediately beneath the tube entry – known as the gravity bias. In Study 2, 60 toddlers aged 2, 2½ and 3 years watched a ball roll down a ramp and behind a screen with four doors, with a barrier placed along the ramp after one of the four doors. Toddlers were allowed to open the doors to find the ball. While search accuracy generally increased with age, relative weight did not play a role in 2-year-olds’ search behaviour. Relative lightness improved 2½-year-olds’ searches. At 3 years, both relative lightness and relative heaviness had a significant impact, with the former improving search accuracy and the latter reducing it. Taken together, both studies suggest that between 2 and 3 years of age, relative object weight is increasingly taken into consideration in navigating naïve physical concepts. In particular, it appears to contribute to the early emergence of misconceptions relating to object weight.

Keywords: conceptual development, early science education, intuitive physics, misconceptions, object weight

Procedia PDF Downloads 190
277 Mechanisms Underlying Comprehension of Visualized Personal Health Information: An Eye Tracking Study

Authors: Da Tao, Mingfu Qin, Wenkai Li, Tieyan Wang

Abstract:

While the use of electronic personal health portals has gained increasing popularity in the healthcare industry, users usually experience difficulty in comprehending and correctly responding to personal health information, partly due to inappropriate or poor presentation of the information. The way personal health information is visualized may affect how users perceive and assess their personal health information. This study was conducted to examine the effects of information visualization format and visualization mode on the comprehension and perceptions of personal health information among personal health information users, using eye tracking techniques. A two-factor within-subjects experimental design was employed, in which participants were instructed to complete a series of personal health information comprehension tasks under two visualization modes (i.e., whether the information visualization is static or dynamic) and three visualization formats (i.e., bar graph, instrument-like graph, and text-only format). Data on a set of measures, including comprehension performance, perceptions, and eye movement indicators, were collected during task completion in the experiment. Repeated-measures analyses of variance (RM-ANOVAs) were used for data analysis. The results showed that while the visualization format yielded no effects on comprehension performance, it significantly affected users’ perceptions (such as perceived ease of use and satisfaction). The two graphic visualizations yielded significantly more favorable scores on subjective evaluations than the text format. While visualization mode showed no effects on users’ perception measures, it significantly affected users' comprehension performance in that dynamic visualization significantly reduced users' information search time. Both visualization format and visualization mode had significant main effects on eye movement behaviors, and their interaction effects were also significant. While the bar graph format and the text format had similar times to first fixation across dynamic and static visualizations, the instrument-like graph format had a longer time to first fixation for dynamic visualization than for static visualization. The two graphic visualization formats yielded shorter total fixation durations compared with the text-only format, indicating their ability to improve information comprehension efficiency. The results suggest that dynamic visualization can improve efficiency in comprehending important health information and that graphic visualization formats were favored more by users. The findings help to clarify the underlying comprehension mechanisms of visualized personal health information and provide important implications for the optimal design and visualization of personal health information.
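
As a hedged illustration of the analysis described, the sketch below runs a two-factor within-subjects RM-ANOVA on synthetic search-time data using statsmodels; the column names, factor levels and effect sizes are assumptions, not the study's data.

```python
# Hedged sketch of a two-factor within-subjects RM-ANOVA (visualization format x mode).
# The synthetic effects below are assumptions chosen only to mimic the reported pattern
# (dynamic faster, text-only slower); they are not the study's measurements.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
rows = []
for subject in range(1, 31):
    for fmt in ["bar", "instrument", "text"]:
        for mode in ["static", "dynamic"]:
            base = 12 + (3 if fmt == "text" else 0) - (2 if mode == "dynamic" else 0)
            rows.append({"subject": subject, "format": fmt, "mode": mode,
                         "search_time": base + rng.normal(0, 2)})
df = pd.DataFrame(rows)

# One observation per subject and cell, as required by AnovaRM.
res = AnovaRM(df, depvar="search_time", subject="subject",
              within=["format", "mode"]).fit()
print(res)
```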

Keywords: eye tracking, information comprehension, personal health information, visualization

Procedia PDF Downloads 109
276 Private Coded Computation of Matrix Multiplication

Authors: Malihe Aliasgari, Yousef Nejatbakhsh

Abstract:

The era of Big Data and the immensity of real-life datasets compel computation tasks to be performed in a distributed fashion, where the data is dispersed among many servers that operate in parallel. However, massive parallelization leads to computational bottlenecks due to faulty servers and stragglers. Stragglers refer to a few slow or delay-prone processors that can bottleneck the entire computation because one has to wait for all the parallel nodes to finish. The problem of straggling processors has been well studied in the context of distributed computing. Recently, it has been pointed out that, for the important case of linear functions, it is possible to improve over repetition strategies in terms of the tradeoff between performance and latency by carrying out linear precoding of the data prior to processing. The key idea is that, by employing suitable linear codes operating over fractions of the original data, a function may be completed as soon as a sufficient number of processors, determined by the minimum distance of the code, have completed their operations. Matrix-matrix multiplication over practically large data sets faces computational and memory-related difficulties, which makes it necessary to carry out such operations on distributed computing platforms. In this work, we study the problem of distributed matrix-matrix multiplication W = XY under storage constraints, i.e., when each server is allowed to store a fixed fraction of each of the matrices X and Y; this operation is a fundamental building block of many science and engineering fields such as machine learning, image and signal processing, wireless communication, and optimization. Non-secure and secure matrix multiplication are studied. We consider the setup in which the identity of the matrix of interest should be kept private from the workers, and we obtain the recovery threshold of the colluding model, that is, the number of workers that need to complete their tasks before the master server can recover the product W. We also study the problem of secure and private distributed matrix multiplication W = XY in which the matrix X is confidential, while the matrix Y is selected in a private manner from a library of public matrices. We present the best currently known trade-off between communication load and recovery threshold. In other words, we design an achievable PSGPD scheme for any arbitrary privacy level by trivially concatenating a robust PIR scheme for arbitrary colluding workers and private databases with the proposed SGPD code, which provides a smaller computational complexity at the workers.
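
The PSGPD scheme itself is not reproduced here; the sketch below only illustrates, with a generic Vandermonde (MDS) code and without the privacy layer, how coded row blocks of X allow the master to recover W = XY from any k of n workers despite stragglers.

```python
# Hedged sketch of straggler-tolerant coded matrix multiplication (not the paper's PSGPD
# scheme, and with no privacy guarantees): row blocks of X are encoded with a Vandermonde
# (MDS) generator so that W = XY is recoverable from ANY k of the n worker results.
import numpy as np

rng = np.random.default_rng(0)
k, n = 4, 6                          # k data blocks, n workers (tolerates n - k stragglers)
m, p, q = 8, 5, 3                    # X is (k*m) x p, Y is p x q
X = rng.standard_normal((k * m, p))
Y = rng.standard_normal((p, q))

blocks = X.reshape(k, m, p)                        # split X into k row blocks
evals = np.arange(1, n + 1, dtype=float)           # distinct evaluation points
G = np.vander(evals, k, increasing=True)           # n x k Vandermonde generator matrix
coded = np.einsum("nk,kmp->nmp", G, blocks)        # worker i receives sum_j G[i, j] * block_j

worker_results = [coded[i] @ Y for i in range(n)]  # each worker multiplies its coded block by Y

fast = [0, 2, 3, 5]                                # suppose only these workers respond in time
G_sub = G[fast]                                    # k x k submatrix, invertible (MDS property)
stacked = np.stack([worker_results[i] for i in fast])
decoded = np.einsum("kj,jmq->kmq", np.linalg.inv(G_sub), stacked)
W_hat = decoded.reshape(k * m, q)

assert np.allclose(W_hat, X @ Y)
print(f"recovered W = XY from {len(fast)} of {n} workers")
```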

Keywords: coded distributed computation, private information retrieval, secret sharing, stragglers

Procedia PDF Downloads 125
275 TARF: Web Toolkit for Annotating RNA-Related Genomic Features

Authors: Jialin Ma, Jia Meng

Abstract:

Genomic features, i.e., genome-based coordinates, are commonly used for the representation of biological features such as genes, RNA transcripts and transcription factor binding sites. For the analysis of RNA-related genomic features, such as RNA modification sites, a common task is to correlate these features with transcript components (5'UTR, CDS, 3'UTR) to explore their distribution characteristics in terms of transcriptomic coordinates, e.g., to examine whether a specific type of biological feature is enriched near transcription start sites. Existing approaches for performing these tasks involve the manipulation of a gene database, conversion from genome-based to transcript-based coordinates, and visualization methods that are capable of showing RNA transcript components and the distribution of the features. These steps are complicated and time-consuming, especially for researchers who are not familiar with the relevant tools. To overcome this obstacle, we developed a dedicated web app, TARF, a web toolkit for annotating RNA-related genomic features. The TARF web tool is intended to provide a web-based way to easily annotate and visualize RNA-related genomic features. Once a user has uploaded the features in BED format and specified a built-in transcript database or uploaded a customized gene database in GTF format, the tool fulfills its three main functions. First, it adds annotation on gene and RNA transcript components. For every feature provided by the user, overlaps with RNA transcript components are identified, and the information is combined in one table that is available for copying and download. Summary statistics about ambiguous assignments are also provided. Second, the tool provides a convenient visualization method for the features at the single gene/transcript level. For a selected gene, the tool shows the features together with the gene model in a genome-based view, and also maps the features to transcript-based coordinates to show their distribution along a single spliced RNA transcript. Third, a global transcriptomic view of the genomic features is generated utilizing the Guitar R/Bioconductor package. The distribution of features on RNA transcripts is normalized with respect to RNA transcript landmarks, and the enrichment of the features on different RNA transcript components is demonstrated. We tested the newly developed TARF toolkit with 3 different types of genomic features related to chromatin H3K4me3, RNA N6-methyladenosine (m6A) and RNA 5-methylcytosine (m5C), which were obtained from ChIP-Seq, MeRIP-Seq and RNA BS-Seq data, respectively. TARF successfully revealed their respective distribution characteristics, i.e., H3K4me3, m6A and m5C are enriched near transcription start sites, stop codons and 5’UTRs, respectively. Overall, TARF is a useful web toolkit for the annotation and visualization of RNA-related genomic features, and should help simplify the analysis of various RNA-related genomic features, especially those related to RNA modifications.
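
TARF's own implementation is not shown here; the sketch below illustrates the core annotation step on hypothetical in-memory tables, assigning each BED-like feature to the transcript component it overlaps, while ignoring strand and splicing for brevity.

```python
# Hedged sketch of the annotation step described above: each feature is assigned to the
# transcript component (5'UTR / CDS / 3'UTR) it overlaps. Coordinates and tables are
# hypothetical; TARF itself works from BED/GTF uploads and a transcript database.
import pandas as pd

# BED-like features (e.g., m6A peaks), 0-based half-open coordinates.
features = pd.DataFrame({
    "chrom": ["chr1", "chr1", "chr1"],
    "start": [120, 480, 950],
    "end":   [140, 500, 970],
})

# GTF-like transcript components for one transcript (hypothetical coordinates).
components = pd.DataFrame({
    "chrom":     ["chr1", "chr1", "chr1"],
    "component": ["5UTR", "CDS", "3UTR"],
    "start":     [100, 200, 900],
    "end":       [200, 900, 1000],
})

def annotate(feature, comps):
    """Return every component overlapping the feature (assignments may be ambiguous)."""
    hits = comps[(comps.chrom == feature.chrom) &
                 (comps.start < feature.end) &
                 (comps.end > feature.start)]
    return ",".join(hits.component) if len(hits) else "intergenic"

features["annotation"] = features.apply(lambda f: annotate(f, components), axis=1)
print(features)
```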

Keywords: RNA-related genomic features, annotation, visualization, web server

Procedia PDF Downloads 209
274 Time of Week Intensity Estimation from Interval Censored Data with Application to Police Patrol Planning

Authors: Jiahao Tian, Michael D. Porter

Abstract:

Law enforcement agencies are tasked with crime prevention and crime reduction under limited resources. Having an accurate temporal estimate of the crime rate would be valuable for achieving such a goal. However, estimation is usually complicated by the interval-censored nature of crime data. We cast the problem of intensity estimation as a Poisson regression, using an EM algorithm to estimate the parameters. Two special penalties are added that provide smoothness over the time of day and the day of the week. The approach presented here provides accurate intensity estimates and can also uncover day-of-week clusters that share the same intensity patterns. Anticipating where and when crimes might occur is a key element of successful policing strategies. However, this task is complicated by the presence of interval-censored data. Censored data refers to data for which the event time is only known to lie within an interval instead of being observed exactly. This type of data is prevalent in the field of criminology because of the absence of victims for certain types of crime. Despite its importance, research in the temporal analysis of crime has lagged behind the spatial component. Inspired by the success of solving crime-related problems with a statistical approach, we propose a statistical model for the temporal intensity estimation of crime with censored data. The model is built on Poisson regression and has special penalty terms added to the likelihood. An EM algorithm was derived to obtain maximum likelihood estimates, and the resulting model shows superior performance to the competing model. Our research is in line with the Smart Policing Initiative (SPI) proposed by the Bureau of Justice Assistance (BJA) as an effort to support law enforcement agencies in building evidence-based, data-driven law enforcement tactics. The goal is to identify strategic approaches that are effective in crime prevention and reduction. In our case, we allow agencies to deploy their resources for a relatively short period of time to achieve the maximum level of crime reduction. By analyzing a particular area within cities where data are available, our proposed approach can provide not only an accurate estimate of intensities for the time unit considered but also a time-varying crime incidence pattern. Both will be helpful in the allocation of limited resources, either by improving the existing patrol plan based on the discovered day-of-week clusters or by supporting decisions on where extra resources are needed.
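
The penalized Poisson regression is not reproduced here; the sketch below illustrates only the EM allocation idea on hypothetical interval-censored incidents, using a plain unpenalized hourly histogram over the week as the intensity model.

```python
# Hedged sketch of the EM idea for interval-censored event times. Each incident is known
# only to lie within [lo, hi) hours of the week; the E-step spreads it over hourly bins in
# proportion to the current intensity, the M-step re-estimates the per-hour rates from the
# expected counts. The paper's model adds Poisson regression with smoothness penalties over
# time of day and day of week; this histogram version is for illustration only.
import numpy as np

HOURS = 168                                        # hours in a week
rng = np.random.default_rng(7)

# Hypothetical incidents: (lo, hi) hour-of-week bounds of the reporting interval.
intervals = []
for _ in range(500):
    lo = int(rng.integers(0, 150))
    intervals.append((lo, lo + int(rng.integers(1, 18))))

lam = np.full(HOURS, 1.0 / HOURS)                  # initial intensity (uniform over the week)
for _ in range(200):
    expected = np.zeros(HOURS)
    for lo, hi in intervals:
        w = lam[lo:hi]
        expected[lo:hi] += w / w.sum()             # E-step: allocate one event across its bins
    lam = expected / len(intervals)                # M-step: normalized per-hour rate
    lam = np.maximum(lam, 1e-12)                   # keep every bin strictly positive

print("estimated peak hour of the week:", int(np.argmax(lam)))
```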

Keywords: cluster detection, EM algorithm, interval censoring, intensity estimation

Procedia PDF Downloads 66