Search results for: open access tools
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9615

1395 Modern Agriculture and Employment Generation in Nigeria: A Recursive Model Approach

Authors: Ese Urhie, Olabisi Popoola, Obindah Gershon

Abstract:

Several policies and programs initiated to address the challenge of unemployment in Nigeria seem to be inadequate. The desired structural transformation, which is expected to absorb the excess labour in the economy, is yet to be achieved. The agricultural sector accounts for almost half of the labour force with very low productivity. This could partly explain why the much anticipated structural transformation has not been achieved. A major reason for the low productivity is the fact that the production process is predominantly based on the use of traditional tools. In view of the underdeveloped nature of the agricultural sector, Nigeria still has huge potential for productivity enhancement through modern technology. Aside from productivity enhancement, modern agriculture also stimulates both backward and forward linkages that promote investment and thus generate employment. Contrary to the apprehension usually expressed by many stakeholders about the adoption of modern technology by labour-abundant less-developed countries, this study showed that although there will be job losses initially, the reverse will be the case in the long run. The outcome of this study will enhance the understanding of all stakeholders in the sector and also encourage them to adopt modern techniques of farming. It will also aid policy formulation at both sectoral and national levels. The recursive model and analysis adopted in the study are useful because they exhibit a unilateral cause-and-effect relationship which most simultaneous equation models do not. They enable the structural equations to be ordered in such a way that the first equation includes only predetermined variables on the right-hand side, while the solution for the final endogenous variable is completely determined by all equations of the system. The study examines the transmission channels and effect of modern agriculture on agricultural productivity and employment growth in Nigeria, via its forward and backward linkages. Using time series data spanning 1980 to 2014, the results of the analyses show: (i) a significant and positive relationship between agricultural productivity growth and modern agriculture; (ii) a significant and negative relationship between the export price index and agricultural productivity growth; (iii) a significant and positive relationship between exports and investment; and (iv) a significant and positive relationship between investment and employment growth. The unbalanced growth theory would be a good strategy for developing countries such as Nigeria to adopt.
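A minimal sketch of how such a recursive (triangular) system can be estimated equation by equation with ordinary least squares is given below; the variable names, data file, and exact specification are illustrative assumptions, not the authors' actual model.

```python
# Sketch: recursive (triangular) system estimated equation by equation with OLS.
# The CSV file and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("nigeria_agriculture_1980_2014.csv")  # hypothetical dataset

def fit(y, X):
    """OLS with an intercept; returns the fitted results."""
    return sm.OLS(df[y], sm.add_constant(df[X])).fit()

# Eq. 1: productivity growth depends only on predetermined variables.
eq1 = fit("productivity_growth", ["modern_agric_index", "export_price_index"])
# Eq. 2: exports depend on the (now determined) productivity growth.
eq2 = fit("exports", ["productivity_growth"])
# Eq. 3: investment depends on exports.
eq3 = fit("investment", ["exports"])
# Eq. 4: employment growth is determined by all upstream equations via investment.
eq4 = fit("employment_growth", ["investment"])

for name, res in [("productivity", eq1), ("exports", eq2),
                  ("investment", eq3), ("employment", eq4)]:
    print(name, res.params.round(3).to_dict())
```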

Keywords: employment, modern agriculture, productivity, recursive model

Procedia PDF Downloads 262
1394 The Existential in a Practical Phenomenology Research: A Study on the Political Participation of Young Women

Authors: Amanda Aliende da Matta, Maria del Pilar Fogueiras Bertomeu, Valeria de Ormaechea Otalora, Maria Paz Sandin Esteban, Miriam Comet Donoso

Abstract:

This communication presents proposed questions about the existential in research on the political participation of young women. The study follows a qualitative methodology, in particular, the applied hermeneutic phenomenological (AHP) method, and the general objective of the research is to give an account of the experience of political participation as a young woman. The study participants are women aged 18 to 35 who have experience in political participation. The techniques of data collection are the descriptive story and the phenomenological interview. Hermeneutic phenomenology as a research approach is based on phenomenological philosophy and applied hermeneutics. Its ultimate objective is to gain access to the meaning structures of lived experience by appropriating them, clarifying them, and reflectively making them explicit. Human experiences are always lived through existentials: fundamental themes that are useful in exploring meaningful aspects of our life worlds. Everyone experiences the world through the existentials of lived relationships, the lived body, lived space, lived time, and lived things. Phenomenological research, then, also tacitly asks about the existentials. Existentials are universal themes useful for exploring significant aspects of our life world and of the particular phenomena under study. Four main existentials prove especially helpful as guides for reflection in the research process: relationship, body, space, and time. For example, in our case, we may ask ourselves how the existentials of relationship, body, space, and time can guide us in exploring the structures of meaning in the lived experience of political participation as a woman and a young person. The study is not yet finished, as we are currently conducting a phenomenological thematic analysis of the collected stories of lived experience. Yet, we have already identified some fragments of texts that show the existentials in their experiences, which we transcribe below. 1) Relationality - The experienced I-Other. It regards how relationships are experienced in our narratives about political participation as young women. One example would be: “As we had known each other for a long time, we understood each other with our eyes; we were all a little bit on the same page, thinking the same thing.” 2) Corporeality - The lived body. It regards how the lived body is experienced in activities of political participation as a young woman. Examples would be: “My blood was boiling, but it was not the time to throw anything in their face, we had to look for solutions”; “I had a lump in my throat and I wanted to cry.” 3) Spatiality - The lived space. It regards how one experiences the lived space in political participation activities as a young woman. One example would be: “And the feeling I got when I saw [it] it's like watching everybody going into a mousetrap.” 4) Temporality - Lived time. It regards how one experiences the lived time in political participation activities as a young woman. One example would be: “Then, there were also meetings that went on forever…”

Keywords: applied hermeneutic phenomenology, existentials, hermeneutics, phenomenology, political participation

Procedia PDF Downloads 82
1393 Spanish Language Violence Corpus: An Analysis of Offensive Language in Twitter

Authors: Beatriz Botella-Gil, Patricio Martínez-Barco, Lea Canales

Abstract:

The Internet and ICT are an integral and omnipresent element of our daily lives. Technologies have changed the way we see the world and relate to it. The number of companies in the ICT sector is increasing every year, and there has also been an increase in the work that occurs online, from sending e-mails to the way companies promote themselves. In social life, ICTs have gained momentum. Social networks are useful for keeping in contact with family or friends that live far away. This change in how we manage our relationships using electronic devices and social media has been experienced differently depending on the age of the person. According to currently available data, people are increasingly connected to social media and other forms of online communication. Therefore, it is no surprise that violent content has also made its way to digital media. One of the important reasons for this is the anonymity provided by social media, which creates a sense of impunity. Moreover, it is not uncommon to find derogatory comments attacking a person’s physical appearance, hobbies, or beliefs. This is why it is necessary to develop artificial intelligence tools that allow us to keep track of violent comments that relate to violent events so that this type of violent online behavior can be deterred. The objective of our research is to create a guide for detecting and recording violent messages. Our annotation guide begins with a study of the problem of violent messages. First, we consider the characteristics that a message should contain for it to be categorized as violent. Second, we consider the possibility of establishing different levels of aggressiveness. To build the corpus, we chose the social network Twitter for the ease of obtaining messages freely. We chose two recent, highly visible violent cases that occurred in Spain. Both received a high degree of social media coverage and user comments. Our corpus has a total of 633 messages, manually tagged according to the characteristics we considered important, such as, for example, the verbs used, the presence of exclamations or insults, and the presence of negations. We consider it necessary to create lists of words that are present in violent messages as indicators of violence, such as lists of negative verbs, insults, and negative phrases. As a final step, we will use automatic learning systems to check the data obtained and the effectiveness of our guide.
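A minimal sketch of how such wordlist-based indicators could be computed over a tweet is shown below; the Spanish word lists and the flagging threshold are illustrative assumptions, not the project's actual resources.

```python
# Sketch: count wordlist-based violence indicators (insults, negative verbs,
# negations, exclamations) in a tweet and flag it with a crude threshold rule.
import re

INSULTS = {"idiota", "imbécil", "estúpido"}      # hypothetical Spanish insults
NEGATIVE_VERBS = {"odiar", "matar", "destruir"}  # hypothetical negative verbs
NEGATIONS = {"no", "nunca", "jamás"}

def violence_indicators(tweet: str) -> dict:
    tokens = re.findall(r"\w+", tweet.lower(), flags=re.UNICODE)
    return {
        "insults": sum(t in INSULTS for t in tokens),
        "negative_verbs": sum(t in NEGATIVE_VERBS for t in tokens),
        "negations": sum(t in NEGATIONS for t in tokens),
        "exclamations": tweet.count("!"),
    }

def is_potentially_violent(tweet: str, threshold: int = 2) -> bool:
    # Flag a message when enough indicators are present (threshold is assumed).
    return sum(violence_indicators(tweet).values()) >= threshold

print(is_potentially_violent("¡No te soporto, idiota!"))
```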

Keywords: human language technologies, language modelling, offensive language detection, violent online content

Procedia PDF Downloads 126
1392 Application of Artificial Intelligence in Market and Sales Network Management: Opportunities, Benefits, and Challenges

Authors: Mohamad Mahdi Namdari

Abstract:

In today's rapidly evolving and highly competitive business environment, companies and organizations require advanced and efficient tools to manage their markets and sales networks. Big data analysis, quick response in competitive markets, process and operations optimization, and forecasting customer behavior are among the concerns of executive managers. Artificial intelligence, as one of the emerging technologies, has provided extensive capabilities in this regard. The use of artificial intelligence in market and sales network management can lead to improved efficiency, increased decision-making accuracy, and enhanced customer satisfaction. Specifically, AI algorithms can analyze vast amounts of data, identify complex patterns, and offer strategic suggestions to improve sales performance. However, many companies are still far from effectively leveraging this technology, and those that do face challenges in fully exploiting AI's potential in market and sales network management. It appears that a lack of knowledge of this technology among the general public, and even among the managerial and academic communities, has caused managerial structures to lag behind the progress and development of artificial intelligence. Additionally, high costs, fear of change and employee resistance, lack of quality data production processes, the need for updating structures and processes, implementation issues, the need for specialized skills and technical equipment, and ethical and privacy concerns are among the factors preventing widespread use of this technology in organizations. Clarifying and explaining this technology, especially to the academic, managerial, and elite communities, can pave the way for a transformative beginning. The aim of this research is to elucidate the capacities of artificial intelligence in market and sales network management, identify its opportunities and benefits, and examine the existing challenges and obstacles. This research aims to leverage AI capabilities to provide managers with a framework for enhancing market and sales network performance. The results of this research can help managers and decision-makers adopt more effective strategies for business growth and development by better understanding the capabilities and limitations of artificial intelligence.

Keywords: artificial intelligence, market management, sales network, big data analysis, decision-making, digital marketing

Procedia PDF Downloads 35
1391 Education for Sustainable Development Pedagogies: Examining the Influences of Context on South African Natural Sciences and Technology Teaching and Learning

Authors: A. U. Ugwu

Abstract:

The post-Apartheid South African education system has witnessed waves of curriculum reform. Accordingly, there has been evidence of responsiveness towards local and global challenges of sustainable development over the past decade. In other words, the curriculum shows sensitivity towards issues of Sustainable Development (SD). Moreover, the paradigm of the Sustainable Development Goals (SDGs) was introduced by UNESCO in 2015. The SDGs paradigm is essentially a vision towards actualizing sustainability in all aspects of the global society. Education for Sustainable Development (ESD), in this light, entails teaching and learning to actualize the intended UNESCO 2030 SDGs. This paper explores how the teaching and learning of ESD can be improved by drawing from the local context of the South African schooling system. Preservice natural sciences and technology teachers in their 2nd to 4th years of study at a university’s college of education in South Africa were contacted as participants of the study. Using a qualitative case study research design, the study drew from the views and experiences of five (5) purposively selected participants from a broader study, aiming to understand closely how ESD is implemented pedagogically in teaching and learning. The inquiry employed questionnaires and a focus group discussion as qualitative data generation tools. A qualitative analysis of the generated data was carried out using content and thematic analysis, underpinned by the interpretive paradigm. The results of the analyzed data suggest that ESD pedagogy at the location where this research was conducted is largely influenced by contextual factors. Furthermore, the results of the study show that there is a critical need to employ or adopt local experiences and occurrences while teaching sustainable development. Certain pedagogical approaches, such as the use of videos relative to the local context, should also be considered in order to achieve a more realistic application. The paper recommends that educational institutions, through teaching and learning, should implement ESD by drawing on local contexts and problems, thereby foregrounding constructivism and appreciating and fostering students' prior knowledge and lived experiences.

Keywords: context, education for sustainable development, natural sciences and technology preservice teachers, qualitative research, sustainable development goals

Procedia PDF Downloads 165
1390 Prosodic Realization of Focus in the Public Speeches Delivered by Spanish Learners of English and English Native Speakers

Authors: Raúl Jiménez Vilches

Abstract:

Native (L1) speakers can prosodically mark one part of an utterance and make it more relevant as opposed to the rest of the constituents. Conversely, non-native (L2) speakers encounter problems when it comes to marking information structure prosodically in English. In fact, the L2 speaker’s choice for the prosodic realization of focus is often not clear and obscures the intended pragmatic meaning and the communicative value in general. This paper reports some of the findings obtained in an L2 prosodic training course for Spanish learners of English within the context of public speaking. More specifically, it analyses the effects of the course experiment in relation to the non-native production of the tonic syllable to mark focus and compares it with the public speeches delivered by native English speakers. The whole experimental training was executed throughout eighteen input sessions (1,440 minutes total time), and all the sessions took place in the classroom. In particular, the first part of the course provided explicit instruction on the recognition and production of the tonic syllable and how the tonic syllable is used to express focus. The non-native and native oral presentations were acoustically analyzed using the Praat software for speech analysis (7,356 words in total). The investigation adopted mixed and embedded methodologies. Quantitative information is needed when measuring acoustically the phonetic realization of focus. Qualitative data such as questionnaires, interviews, and observations were also used to interpret the quantitative data. The embedded experiment design was implemented through the analysis of the public speeches before and after the intervention. Results indicate that, even after the L2 prosodic training course, Spanish learners of English still show some major inconsistencies in marking focus effectively. Although there was occasional improvement regarding the choice of location and word classes, Spanish learners were, in general, far from achieving results similar to those obtained by the English native speakers in the two types of focus. The prosodic realization of focus seems to be one of the hardest areas of the English prosodic system for Spanish learners to master. A funded research project is in the process of moving the present classroom-based experiment to an online environment (a mobile app) and determining whether focus is used more effectively through CAPT (Computer-Assisted Pronunciation Training) tools.
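A minimal sketch of the kind of acoustic measurement involved is given below, using parselmouth (a Python interface to Praat) to compare F0 and intensity inside a candidate tonic-syllable interval against the rest of the utterance; the file name, syllable boundaries, and measures are illustrative assumptions, not the authors' actual Praat procedure.

```python
# Sketch: mean F0 and intensity over a time interval, as a rough prominence cue.
import numpy as np
import parselmouth

snd = parselmouth.Sound("presentation_excerpt.wav")  # hypothetical recording
pitch = snd.to_pitch()
intensity = snd.to_intensity()

def prominence(t_start: float, t_end: float) -> dict:
    """Mean F0 (voiced frames only) and mean intensity inside an interval."""
    t = pitch.xs()
    f0 = pitch.selected_array["frequency"]
    vmask = (t >= t_start) & (t <= t_end) & (f0 > 0)
    ti = intensity.xs()
    db = intensity.values[0]
    imask = (ti >= t_start) & (ti <= t_end)
    return {
        "mean_f0_hz": float(np.mean(f0[vmask])) if vmask.any() else float("nan"),
        "mean_db": float(np.mean(db[imask])) if imask.any() else float("nan"),
    }

# Compare a candidate tonic syllable against the whole utterance.
print(prominence(1.20, 1.45))   # hypothetical syllable boundaries
print(prominence(0.00, 3.00))
```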

Keywords: focus, prosody, public speaking, Spanish learners of English

Procedia PDF Downloads 95
1389 Evidence of a Negativity Bias in the Keywords of Scientific Papers

Authors: Kseniia Zviagintseva, Brett Buttliere

Abstract:

Science is fundamentally a problem-solving enterprise, and scientists pay more attention to negative things that cause them dissonance and negative affective states of uncertainty or contradiction. While this is agreed upon by philosophers of science, there are few empirical demonstrations. Here we examine the keywords from the papers published by PLOS in 2014 and show with several sentiment analyzers that negative keywords are studied more than positive keywords. Our dataset is the 927,406 keywords of 32,870 scientific articles in all fields published in 2014 by the journal PLOS ONE (collected from Altmetric.com). Counting how often the 47,415 unique keywords are used, we can examine whether negative topics are studied more than positive ones. In order to find the sentiment of the keywords, we utilized two sentiment analysis tools, Hu and Liu (2004) and SentiStrength (2014). The results below are for Hu and Liu, as these are the less convincing results. The average keyword was utilized 19.56 times, with half of the keywords being utilized only 1 time and the maximum number of uses being 18,589 times. The keywords identified as negative were utilized 37.39 times on average, with the positive keywords being utilized 14.72 times and the neutral keywords 19.29 times on average. This difference is only marginally significant, with an F value of 2.82 and a p of .05, but one must keep in mind that more than half of the keywords are utilized only 1 time, artificially increasing the variance and driving the effect size down. To examine more closely, we looked at the top 25 most utilized keywords that have a sentiment. Among the top 25, there are only two positive words, ‘care’ and ‘dynamics’, in positions 5 and 13 respectively, with all the rest being identified as negative. ‘Diseases’ is the most studied keyword with 8,790 uses, with ‘cancer’ and ‘infectious’ being the second and fourth most utilized sentiment-laden keywords. The sentiment analysis is not perfect though, as the words ‘diseases’ and ‘disease’ are counted separately, taking the 1st and 3rd positions. Combining them, they remain the most common sentiment-laden keyword, being utilized 13,236 times. Beyond the split words, the sentiment analyzer logs ‘regression’ and ‘rat’ as negative, and these should probably be considered false positives. Despite these potential problems, the effect is apparent, as even positive keywords like ‘care’ could or should be considered negative, since this word is most commonly utilized as part of ‘health care’, ‘critical care’ or ‘quality of care’ and is generally associated with how to improve it. All in all, the results suggest that negative concepts are studied more, also providing support for the notion that science is most generally a problem-solving enterprise. The results also provide evidence that negativity and contradiction are related to greater productivity and positive outcomes.
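A minimal sketch of the counting reported above is shown below: average uses per keyword grouped by sentiment class, plus the top 25 sentiment-laden keywords. The input file and column names are illustrative assumptions, not the actual PLOS ONE/Altmetric dataset.

```python
# Sketch: per-sentiment usage statistics over a keyword table.
import pandas as pd

# One row per unique keyword: the keyword, how many papers used it, and a
# sentiment label ("positive", "negative", "neutral") from a lexicon such as
# Hu & Liu (2004). File name and columns are hypothetical.
kw = pd.read_csv("plos_2014_keywords.csv")

summary = kw.groupby("sentiment")["uses"].agg(["mean", "median", "count"])
print(summary)

# Top 25 most-used sentiment-laden keywords, as inspected in the abstract.
top = kw[kw["sentiment"] != "neutral"].nlargest(25, "uses")
print(top[["keyword", "uses", "sentiment"]])
```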

Keywords: bibliometrics, keywords analysis, negativity bias, positive and negative words, scientific papers, scientometrics

Procedia PDF Downloads 184
1388 Investigating the Sloshing Characteristics of a Liquid by Using an Image Processing Method

Authors: Ufuk Tosun, Reza Aghazadeh, Mehmet Bülent Özer

Abstract:

This study puts forward a method to analyze the sloshing characteristics of a liquid in a tuned sloshing absorber system by using image processing tools. Tuned sloshing vibration absorbers have recently attracted researchers’ attention as seismic load dampers in construction due to their practical and logistical convenience. The absorber is a liquid which sloshes and applies a force in opposite phase to the motion of the structure. Experimental characterization of the sloshing behavior can be utilized as a means of verifying the results of numerical analysis. It can also be used to identify the accuracy of assumptions related to the motion of the liquid. There are extensive theoretical and experimental studies in the literature related to the dynamical and structural behavior of tuned sloshing dampers. In most of these works, there are efforts to estimate the sloshing behavior of the liquid, such as the free surface motion and the total force applied by the liquid to the wall of the container. For these purposes, the use of sensors such as load cells and ultrasonic sensors is prevalent in experimental works. Load cells are only capable of measuring the force and require conducting tests both with and without liquid to obtain the pure sloshing force. Ultrasonic level sensors give point-wise measurements and hence are not applicable for measuring the whole free surface motion. Furthermore, in the case of liquid splashing they may give incorrect data. In this work, a method for evaluating the sloshing wave height by using camera records and image processing techniques is presented. In this method, the motion of the liquid and its container, made of a transparent material, is recorded by a high-speed camera which is aligned with the free surface of the liquid. The video captured by the camera is processed frame by frame using the MATLAB Image Processing Toolbox. The process starts with cropping the desired region. By recognizing the regions containing liquid and eliminating noise and liquid splashing, the final picture depicting the free surface of the liquid is achieved. This picture is then used to obtain the height of the liquid along the length of the container. The process is verified by ultrasonic sensors that measure the fluid height on the surface of the liquid.
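A minimal sketch of such a per-frame pipeline (crop, threshold the liquid region, clean up splashing, read the free-surface height along the tank) is shown below. The original work used the MATLAB Image Processing Toolbox; this OpenCV version only illustrates the idea, and the region of interest, thresholds, and calibration are assumptions.

```python
# Sketch: frame-by-frame free-surface height extraction from a sloshing video.
import cv2
import numpy as np

cap = cv2.VideoCapture("sloshing_test.avi")   # hypothetical recording
x0, y0, w, h = 100, 50, 640, 480              # hypothetical crop (ROI)
px_per_mm = 2.0                               # hypothetical calibration

while True:
    ok, frame = cap.read()
    if not ok:
        break
    roi = frame[y0:y0 + h, x0:x0 + w]
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    # Liquid is assumed darker than the background; Otsu picks the threshold.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Morphological opening removes splash droplets and small noise.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    # Free-surface height per column: first liquid pixel from the top of the ROI.
    first_liquid_row = np.argmax(mask > 0, axis=0)
    height_mm = (h - first_liquid_row) / px_per_mm
    # height_mm is the surface profile along the container for this frame.
cap.release()
```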

Keywords: fluid structure interaction, image processing, sloshing, tuned liquid damper

Procedia PDF Downloads 343
1387 The Use of Videos: Effects on Children's Language and Literacy Skills

Authors: Rahimah Saimin

Abstract:

Previous research has shown that young children can learn from educational television programmes, videos, or other technological media. However, the blending of any of these with traditional print-based text appears to be omitted. Repeated viewing is an important factor in children's ability to comprehend the content or plot. The present study, which combined videos with traditional print-based text and required repeated viewing, is original and distinctive. The first study was a pilot study to explore whether the intervention is implementable in ordinary classrooms. The second study explored whether the curricular embedding is important, or whether the video with curricular embedding is effective. The third study explored the effect of “dosage”, i.e. whether a longer or more intense intervention has a proportionately greater effect on outcomes. Both measured outcomes (comprehension, word sounds, and early word recognition) and unmeasured outcomes (engagement with reading traditional print-based texts and/or multimodal texts) were obtained from this study. Observation indicated the degree of engagement in reading. The theoretical framework was multimodality theory combined with Piaget’s and Vygotsky’s learning theories. An experimental design was used with 4-5-year-old children in nursery schools and primary schools. Six links to video clips exploring non-fiction science content were provided to teachers. The first session is whole-class and subsequent sessions are small-group. The teacher then engaged the children in dialogue using supplementary materials. About half of each class was selected randomly for pre-post assessments. Two assessments were used: the British Picture Vocabulary Scale (BPVS-III) and the York Assessment of Reading for Comprehension (YARC): Early Reading. Different programme fidelity measures were deployed: observations, teacher self-reports, attendance logs, and post-delivery interviews. Data collection is in progress and results will be available shortly. If this multiphase study shows effectiveness in one or other application, then teachers will have other tools which they can use to enhance vocabulary, letter knowledge, and word reading. This would be a valuable addition to their repertoire.

Keywords: language skills, literacy skills, multimodality, video

Procedia PDF Downloads 334
1386 The Ideal Memory Substitute for Computer Memory Hierarchy

Authors: Kayode A. Olaniyi, Olabanji F. Omotoye, Adeola A. Ogunleye

Abstract:

Computer system components such as the CPU, the controllers, and the operating system work together as a team, and storage or memory is an essential part of this team apart from the processor. The memory and storage system, including processor caches, main memory, and storage, forms the basic storage component of a computer system. The characteristics of the different types of storage are inherent in the design and the technology employed in the manufacturing. These memory characteristics define the speed, compatibility, cost, volatility, and density of the various storage types. Most computers rely on a hierarchy of storage devices for performance. The effective and efficient use of the memory hierarchy of the computer system is therefore the single most important aspect of computer system design and use. The memory hierarchy is becoming a fundamental performance and energy bottleneck, due to the widening gap between the increasing demands of modern computer applications and the limited performance and energy efficiency provided by traditional memory technologies. With the dramatic development in computer systems, computer storage has had a difficult time keeping up with processor speed. Computer architects are therefore facing constant challenges in developing high-speed, high-performance computer storage which is energy-efficient, cost-effective, and reliable, to intercept processor requests. It is very clear that substantial advancements in redesigning the existing memory physical and logical structures to meet the latest processor potential are crucial. This research work investigates the importance of the computer memory (storage) hierarchy in the design of computer systems. The constituent storage types of the hierarchy today were investigated, looking at the design technologies and how the technologies affect memory characteristics: speed, density, stability, and cost. The investigation considered how these characteristics could best be harnessed for overall efficiency of the computer system. The research revealed that the best single type of storage, which we refer to as ideal memory, is a logical single physical memory which would combine the best attributes of each memory type that makes up the memory hierarchy. It is a single memory with access speed as high as that found in CPU registers, combined with the highest storage capacity, offering excellent stability in the presence or absence of power as found in magnetic and optical disks (as against volatile DRAM), and yet with a cost-effectiveness far removed from that of expensive SRAM. The research work suggests that overcoming these barriers may mean that memory manufacturing will take a total deviation from the present technologies and adopt one that overcomes the challenges associated with the traditional memory technologies.

Keywords: cache, memory-hierarchy, memory, registers, storage

Procedia PDF Downloads 158
1385 Deregulation of Thorium for Room Temperature Superconductivity

Authors: Dong Zhao

Abstract:

Extensive research on obtaining applicable room temperature superconductors has met a major barrier, and the record Tc of 135 K achieved via cuprates has been idling for decades. Even though higher Tc than the cuprates has been accomplished by pressurizing certain compounds composed of light elements, such as LaH10 and metallic hydrogen, room temperature superconductivity under ambient pressure is still the preferred approach and is believed to be the ultimate solution for many applications. While racing to find a breakthrough method to achieve this room temperature Tc milestone in superconducting research, a report stated the discovery of a possible high-temperature superconductor, i.e., the thorium sulfide ThS. Apparently, ThS’s Tc can be at room temperature or even higher. This is because ThS revealed an unusual property: the ‘coexistence of high electrical conductivity and diamagnetism’. Notice that this property of coexistence of high electrical conductivity and diamagnetism is in line with superconductors, meaning ThS is also in its superconducting state. Surprisingly, ThS possesses the property of superconductivity at least at room temperature and under atmospheric pressure. Further study of ThS’s electrical and magnetic properties in comparison with thorium di-iodide ThI2 concluded its molecular configuration to be [Th4+(e-)2]S. This means the ThS cation is composed of a [Th4+(e-)2]2+ cation core. It is noticed that this cation core is built from an oxidation state +4 of the thorium atom plus an electron pair on this thorium atom, resulting in an oxidation state +2 of this [Th4+(e-)2]2+ cation core. This special construction of the [Th4+(e-)2]2+ cation core may lead to ThS’s room temperature superconductivity because of this characteristic electron lone pair residing on the thorium atom. Since the study of thorium chemistry was carried out in the period before the 1970s, the exploration of ThS’s possible room temperature superconductivity would require resynthesizing ThS. This re-preparation of ThS will provide the sample and enable professionals to verify ThS’s room temperature superconductivity. Regrettably, the current regulation prevents almost everyone from getting access to thorium metal or thorium compounds due to the radioactive nature of thorium-232 (Th-232), even though the radioactivity level of Th-232 is extremely low, with its half-life of 14.05 billion years. Consequently, further confirmation of ThS’s high-temperature superconductivity through experiments will be impossible unless the use of the corresponding thorium metal and related thorium compounds can be deregulated. This deregulation would allow researchers to obtain the necessary starting materials for the study of ThS. Hopefully, the confirmation of ThS’s room temperature superconductivity can not only establish a method to obtain applicable superconductors but also pave the way for fully understanding the mechanism of superconductivity.

Keywords: co-existence of high electrical conductivity and diamagnetism, electron pairing and electron lone pair, room temperature superconductivity, the special molecular configuration of thorium sulfide ThS

Procedia PDF Downloads 44
1384 Applying the Underwriting Technique to Analyze and Mitigate the Credit Risks in Construction Project Management

Authors: Hai Chien Pham, Thi Phuong Anh Vo, Chansik Park

Abstract:

Risk management in construction projects is important to ensure the feasibility of the projects; financial risks are of most concern, since construction projects always run on a credit basis. Credit risks, therefore, require unique and technical tools to be well managed. The underwriting technique in credit risk, in its most basic sense, refers to the process of evaluating the risks and the potential exposure to losses. Risk analysis and underwriting are applied as a must in banks and financial institutions, which support construction projects when required. Recently, construction organizations, especially contractors, have recognized a significant increase in credit risks, which has caused negative impacts on project performance and the profit of construction firms. Despite the successful application of underwriting in banks and financial institutions for many years, there are few contractors who apply this technique to analyze and mitigate the credit risks of their potential owners before signing contracts with them for delivering their services. Thus, contractors have taken on credit risks during project implementation, risks which may materialize through bankruptcy and/or protracted default by their owners. With this regard, this study proposes a model using the underwriting technique for contractors to analyze and assess the credit risks of their owners before making final decisions on potential construction contracts. Contractors' underwriters are able to analyze and evaluate subjects such as the owner, country, sector, payment terms, financial figures, and related concerns of the credit limit requests in detail, based on reliable information sources, and then input them into the proposed model to obtain the Overall Assessment Score (OAS). The OAS is a benchmark for decision makers to grant the proper limits for the project. The proposed underwriting model was validated on 30 subjects in the Asia-Pacific region over 5 years to obtain their OAS, and the output OAS was then compared with their actual performance in order to evaluate the potential of the underwriting model for analyzing and assessing credit risks. The results revealed that underwriting would be a powerful method to assist contractors in making precise decisions. The contribution of this research is to allow contractors, first, to develop their own credit risk management model for proactively preventing the credit risks of construction projects, and then to continuously improve and enhance the performance of this function during project implementation.
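A minimal sketch of an Overall Assessment Score computed as a weighted combination of the underwriting subjects listed above is given below; the factors, weights, scoring scale, and approval threshold are illustrative assumptions, not the model actually proposed in the paper.

```python
# Sketch: weighted OAS over underwriting factors, with a crude limit-granting rule.
from dataclasses import dataclass

WEIGHTS = {          # hypothetical weights, summing to 1.0
    "owner": 0.30, "country": 0.15, "sector": 0.15,
    "payment_terms": 0.20, "financials": 0.20,
}

@dataclass
class CreditRequest:
    owner: float            # each factor scored 0-100 by the underwriter
    country: float
    sector: float
    payment_terms: float
    financials: float

def overall_assessment_score(req: CreditRequest) -> float:
    return sum(WEIGHTS[k] * getattr(req, k) for k in WEIGHTS)

def grant_limit(requested: float, oas: float, cutoff: float = 60.0) -> float:
    """Scale the granted credit limit by the OAS; refuse below the cutoff."""
    return 0.0 if oas < cutoff else requested * min(oas / 100.0, 1.0)

req = CreditRequest(owner=70, country=80, sector=65, payment_terms=55, financials=75)
oas = overall_assessment_score(req)
print(round(oas, 1), grant_limit(1_000_000, oas))
```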

Keywords: underwriting technique, credit risk, risk management, construction project

Procedia PDF Downloads 206
1383 Exploring Legal Liabilities of Mining Companies for Human Rights Abuses: Case Study of Mongolian Mine

Authors: Azzaya Enkhjargal

Abstract:

Context: The mining industry has a long history of human rights abuses, including forced labor, environmental pollution, and displacement of communities. In recent years, there has been growing international pressure to hold mining companies accountable for these abuses. Research Aim: This study explores the legal liabilities of mining companies for human rights abuses. The study specifically examines the case of Erdenet Mining Corporation (EMC), a large mining company in Mongolia that has been accused of human rights abuses. Methodology: The study used a mixed-methods approach, which included a review of legal literature, interviews with community members and NGOs, and a case study of EMC. Findings: The study found that mining companies can be held liable for human rights abuses under a variety of regulatory frameworks, including soft law and self-regulatory instruments in the mining industry, international law, national law, and corporate law. The study also found that there are a number of challenges to holding mining companies accountable for human rights abuses, including the lack of effective enforcement mechanisms and the difficulty of proving causation. Theoretical Importance: The study contributes to the growing body of literature on the legal liabilities of mining companies for human rights abuses. The study also provides insights into the challenges of holding mining companies accountable for human rights abuses. Data Collection: The data for the study was collected through a variety of methods, including a review of legal literature, interviews with community members and NGOs, and a case study of EMC. Analysis Procedures: The data was analyzed using a variety of methods, including content analysis, thematic analysis, and case study analysis. Conclusion: The study concludes that mining companies can be held liable for human rights abuses under a variety of legal and regulatory frameworks. There are positive developments in ensuring greater accountability and protection of affected communities and the environment in countries with a strong economy. Regrettably, access to avenues of redress is reasonably low in less developed countries, where the governments have not implemented a robust mechanism to enforce liability requirements in the mining industry. The study recommends that governments and mining companies take more ambitious steps to enhance corporate accountability.

Keywords: human rights, human rights abuses, ESG, litigation, Erdenet Mining Corporation, corporate social responsibility, soft law, self-regulation, mining industry, parent company liability, sustainability, environment, UN

Procedia PDF Downloads 77
1382 High Performance Computing Enhancement of Agent-Based Economic Models

Authors: Amit Gill, Lalith Wijerathne, Sebastian Poledna

Abstract:

This research presents the details of the implementation of a high performance computing (HPC) extension of agent-based economic models (ABEMs) to simulate hundreds of millions of heterogeneous agents. ABEMs study the economy as a dynamic system of interacting heterogeneous agents and are gaining popularity as an alternative to standard economic models. Over the last decade, ABEMs have been increasingly applied to study various problems related to monetary policy, bank regulations, etc. When it comes to predicting the effects of local economic disruptions, like major disasters, changes in policies, exogenous shocks, etc., on the economy of a country or region, it is pertinent to study how the disruptions cascade through every single economic entity, affecting its decisions and interactions, and eventually affect the macroeconomic parameters. However, such simulations with hundreds of millions of agents are hindered by the lack of HPC-enhanced ABEMs. In order to address this, a scalable Distributed Memory Parallel (DMP) implementation of ABEMs has been developed using the Message Passing Interface (MPI). A balanced distribution of the computational load among MPI processes (i.e. CPU cores) of computer clusters, while taking all the interactions among agents into account, is a major challenge for scalable DMP implementations. Economic agents interact on several random graphs, some of which are centralized (e.g. credit networks) whereas others are dense with random links (e.g. consumption markets). The agents are partitioned into mutually exclusive subsets based on a representative employer-employee interaction graph, while the remaining graphs are made available at a minimum communication cost. To minimize the number of communications among MPI processes, real-life solutions like the introduction of recruitment agencies, sales outlets, local banks, and local branches of government in each MPI process are adopted. Efficient communication among MPI processes is achieved by combining MPI derived data types with the new features of the latest MPI functions. Most of the communications are overlapped with computations, thereby significantly reducing the communication overhead. The current implementation is capable of simulating a small open economy. As an example, a single time step of a 1:1 scale model of Austria (i.e. about 9 million inhabitants and 600,000 businesses) can be simulated in 15 seconds. The implementation is being further enhanced to simulate a 1:1 model of the Euro-zone (i.e. 322 million agents).
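A minimal sketch of the communication-computation overlap described above is given below, using mpi4py: each rank owns a partition of agents, posts non-blocking exchanges of boundary information, computes on purely local agents while messages are in flight, and then completes the exchange. The agent contents, neighbour lists, and message sizes are illustrative assumptions, not the authors' implementation.

```python
# Sketch: non-blocking MPI exchange overlapped with local agent updates.
# Run with e.g.: mpiexec -n 4 python this_script.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 1_000_000 // size                 # hypothetical agents per rank
local_state = np.random.rand(n_local)       # stand-in for agent decisions
neighbours = [(rank - 1) % size, (rank + 1) % size]  # assumed interaction graph

send_buf = {n: np.ascontiguousarray(local_state[:100]) for n in neighbours}
recv_buf = {n: np.empty(100) for n in neighbours}

# Post non-blocking sends/receives of boundary data (e.g. market offers).
reqs = []
for n in neighbours:
    reqs.append(comm.Isend(send_buf[n], dest=n, tag=0))
    reqs.append(comm.Irecv(recv_buf[n], source=n, tag=0))

# Overlap: update purely local agents while the messages are in flight.
local_state *= 0.99

MPI.Request.Waitall(reqs)                   # complete the exchange
# ... incorporate recv_buf into the next decision step ...
```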

Keywords: agent-based economic model, high performance computing, MPI-communication, MPI-process

Procedia PDF Downloads 124
1381 The Principal-Agent Model with Moral Hazard in the Brazilian Innovation System: The Case of 'Lei do Bem'

Authors: Felippe Clemente, Evaldo Henrique da Silva

Abstract:

The need to adopt some type of industrial and innovation policy in Brazil is a recurring theme in the discussion of public interventions aimed at boosting economic growth. For many years, the country has adopted various policies to change its productive structure in order to increase the participation of sectors that would have the greatest potential to generate innovation and economic growth. Only in the 2000s were tax incentives adopted in Brazil as a policy to support industrial and technological innovation, a phenomenon associated with rates of productivity growth and economic development. In this context, in late 2004 and 2005, Brazil reformulated its institutional apparatus for innovation in order to approach the OECD conventions and the Frascati Manual. The Innovation Law (2004) and the 'Lei do Bem' (2005) reduced some institutional barriers to innovation, provided incentives for university-business cooperation, and modified access to tax incentives for innovation. Chapter III of the 'Lei do Bem' (no. 11,196/05) is currently the most comprehensive fiscal incentive to stimulate innovation. It complies with the requirements which stipulate that the Union should encourage innovation in the company or industry by granting tax incentives. With its introduction, the bureaucratic procedure was simplified by not requiring pre-approval of projects or participation in bidding documents. However, preliminary analysis suggests that this instrument has not yet been able to stimulate the sectoral diversification of these investments in Brazil, since its benefits are mostly captured by sectors that had already developed this activity, thus showing problems with moral hazard. It is necessary, then, to analyze the 'Lei do Bem' to determine whether there is indeed a need for change, investigating which changes should be implemented in Brazilian innovation policy. This work, therefore, is a first effort to analyze a current national problem, evaluating the effectiveness of the 'Lei do Bem' and suggesting public policies that help and direct the State in the elaboration of legislation capable of encouraging agents to follow what it describes. As a preliminary result, it is known that 130 firms used fiscal incentives for innovation in 2006, 320 in 2007, and 552 in 2008. Although this number is on the rise, it is still small, considering that there are around 6 thousand firms that perform Research and Development (R&D) activities in Brazil. Moreover, another obstacle to the 'Lei do Bem' is the percentages of tax incentives provided to companies. These percentages reveal a significant sectoral correlation between the R&D expenditures of large companies and the R&D expenses of companies that accessed the 'Lei do Bem', reaching a correlation of 95.8% in 2008. With these results, it becomes relevant to investigate the law's ability to stimulate private investment in R&D.

Keywords: brazilian innovation system, moral hazard, R&D, Lei do Bem

Procedia PDF Downloads 334
1380 Tumor Size and Lymph Node Metastasis Detection in Colon Cancer Patients Using MR Images

Authors: Mohammadreza Hedyehzadeh, Mahdi Yousefi

Abstract:

Colon cancer is one of the most common cancers, and its prevalence is predicted to increase due to poor eating habits. Nowadays, due to the busyness of people, the consumption of fast food is increasing, and therefore the diagnosis and treatment of this disease are of particular importance. To determine the best treatment approach for each specific colon cancer patient, the oncologist should know the stage of the tumor. The most common method to determine the tumor stage is the TNM staging system. In this system, M indicates the presence of metastasis, N indicates the extent of spread to the lymph nodes, and T indicates the size of the tumor. It is clear that in order to determine all three of these parameters, an imaging method must be used, and the gold standard imaging protocols for this purpose are CT and PET/CT. In CT imaging, due to the use of X-rays, the risk of cancer and the absorbed dose of the patient are high, while in the PET/CT method, there is a lack of access to the device due to its high cost. Therefore, in this study, we aimed to estimate the tumor size and the extent of its spread to the lymph nodes using MR images. More than 1,300 MR images were collected from the TCIA portal, and in the first step (pre-processing), histogram equalization to improve image quality and resizing to obtain the same image size were performed. Two expert radiologists, who have worked for more than 21 years on colon cancer cases, segmented the images and extracted the tumor region from the images. The next step is feature extraction from the segmented images and then classifying the data into three classes: T0N0, T3N1, and T3N2. In this article, the VGG-16 convolutional neural network has been used to perform both of the above-mentioned tasks, i.e., feature extraction and classification. This network has 13 convolution layers for feature extraction and three fully connected layers with the softmax activation function for classification. In order to validate the proposed method, 10-fold cross validation was used in such a way that the data was randomly divided into three parts: training (70% of the data), validation (10% of the data), and the rest for testing. This was repeated 10 times; each time, the accuracy, sensitivity, and specificity of the model were calculated, and the average of the ten repetitions is reported as the result. The accuracy, specificity, and sensitivity of the proposed method on the testing dataset were 89.09%, 95.8%, and 96.4%, respectively. Compared to previous studies, the use of a safe imaging technique (MRI) and the avoidance of predefined hand-crafted imaging features to determine the stage of colon cancer patients are among the advantages of this study.
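A minimal sketch of the classification stage is shown below: a VGG-16 backbone (13 convolution layers) with three fully connected layers ending in a 3-way softmax for the classes T0N0, T3N1, and T3N2. The input shape, optimizer, and training call are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch: VGG-16 trained from scratch for 3-class tumor staging from MR slices.
import tensorflow as tf

model = tf.keras.applications.VGG16(
    weights=None,              # trained from scratch on the MR images
    include_top=True,          # keeps the three fully connected layers
    input_shape=(224, 224, 3), # assumed resized input
    classes=3,                 # T0N0, T3N1, T3N2
)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# x_train: pre-processed (histogram-equalized, resized) MR slices,
# y_train: integer stage labels in {0, 1, 2}; both hypothetical placeholders.
# model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=50)
```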

Keywords: colon cancer, VGG-16, magnetic resonance imaging, tumor size, lymph node metastasis

Procedia PDF Downloads 54
1379 Digital Transformation and Digitalization of Public Administration

Authors: Govind Kumar

Abstract:

The concept of ‘e-governance’ that was brought about by the new wave of reforms, namely ‘LPG’, in the early 1990s has been enabling governments across the globe to digitally transform themselves. Digital transformation provides governments with qualitative decisions, optimization in the rational use of resources, facilitation of cost-benefit analyses, and elimination of redundancy and corruption with the help of ICT-based application interfaces. ICT-based applications and technologies have enormous potential for impacting positive change in the social lives of the global citizenry. Supercomputers test and analyze millions of drug molecules for developing candidate vaccines to combat the global pandemic. Further, e-commerce portals help distribute and supply household items and medicines, while videoconferencing tools provide a visual interface between clients and hosts. Besides, crop yields are being maximized with the help of drones and machine learning, whereas satellite data, artificial intelligence, and cloud computing help governments with the detection of illegal mining, tackling deforestation, and managing freshwater resources. Such e-applications have the potential to take governance an extra mile by achieving the 5 Es (effective, efficient, easy, empower, and equity) of e-governance and the six Rs (reduce, reuse, recycle, recover, redesign, and remanufacture) of sustainable development. If such digital transformation gains traction within the government framework, it will replace traditional administration with the digitalization of public administration. On the other hand, it has brought a new set of challenges, like the digital divide, e-illiteracy, the technological divide, etc., and problems like handling e-waste, technological obsolescence, cyber terrorism, e-fraud, hacking, and phishing, before the governments. Therefore, it is essential to bring in a rightful mixture of technological and humanistic interventions to address the above issues. This is because technology lacks an emotional quotient, and the administration does not work like technology. Both are self-effacing unless a blend of technology and a humane face is brought into the administration. The paper empirically analyzes the significance of the technological framework of digital transformation within the government setup for the digitalization of public administration, on the basis of a synthesis of two case studies undertaken from two diverse fields of administration, and presents a future framework of the study.

Keywords: digital transformation, electronic governance, public administration, knowledge framework

Procedia PDF Downloads 96
1378 Developing Pedagogy for Argumentation and Teacher Agency: An Educational Design Study in the UK

Authors: Zeynep Guler

Abstract:

Argumentation and the production of scientific arguments are essential components for helping students become scientifically literate by engaging them in constructing and critiquing ideas. Incorporating argumentation into science classrooms is challenging and can be a long-term process for both students and teachers. Students have difficulty engaging in tasks that require them to craft arguments, evaluate them to seek weaknesses, and revise them. Teachers also struggle with facilitating argumentation when they have underdeveloped science practices, underdeveloped pedagogical knowledge for teaching science as argumentation, or underdeveloped teaching practice with argumentation (or a combination of all three). Thus, there is a need to support teachers in developing pedagogy for teaching science as argumentation, in planning and implementing teaching practice for facilitating argumentation, and also in becoming more agentic in this regard. Looking specifically at the experience of agency within education, it is arguable that agency is necessary for teachers’ renegotiation of professional purposes and practices in the light of changing educational practices. This study investigated how science teachers develop pedagogy for argumentation both individually and with their colleagues, and also how teachers become more agentic (or not) through the active engagement of their contexts-for-action, an engagement referred to here as an ecological understanding of agency, in order to positively influence or change their practice and their students' engagement with argumentation over two academic years. Through an educational design study, this research was conducted with three secondary science teachers (Key Stage 3, Year 7 students aged 11-12) in the UK to find out whether similar or different patterns of developing pedagogy for argumentation and of becoming more agentic emerge as they engage in planning and implementing a cycle of activities during the practice of teaching science with argumentation. Data from video and audio recordings of classroom practice and open-ended interviews with the science teachers were analysed using content analysis. The findings indicated that all the science teachers perceived strong agency in their opportunities to develop and apply pedagogical practices within the classroom. The teachers were proactively shaping their practices and classroom contexts in ways that were over and above the amendments to their pedagogy. They demonstrated some outcomes in developing pedagogy for argumentation and becoming more agentic in their teaching in this regard as a result of the collaboration with their colleagues and the researcher; some appeared more agentic than others. The role of collaboration with colleagues was seen as crucial for the teachers’ practice in the schools: close collaboration and support from other teachers in planning and implementing new educational innovations were seen as crucial for the development of pedagogy and for becoming more agentic in practice. The teachers needed to understand the importance of scientific argumentation, but also how it can be planned and integrated into classroom practice. They also perceived constraints emerging from their lack of competence and knowledge in posing appropriate questions to help the students engage in argumentation and in providing support for the students' construction of oral and written arguments.

Keywords: argumentation, teacher professional development, teacher agency, students' construction of argument

Procedia PDF Downloads 128
1377 Modernization of Translation Studies Curriculum at Higher Education Level in Armenia

Authors: A. Vahanyan

Abstract:

The paper touches upon the problem of revision and modernization of the current curriculum on translation studies at Armenian Higher Education Institutions (HEIs). In the contemporary world, where the quality and speed of services provided are most valued, certain higher education centers in Armenia do not demonstrate enough flexibility in terms of the revision and amendment of the courses taught. This issue is present in various curricula at the university level, and in the Translation Studies curriculum in particular. Technological innovations that are of great help to translators have long since been smoothly implemented in the global translation industry. According to the European Master's in Translation (EMT) framework, translation service provision comprises linguistic, intercultural, information mining, thematic, and technological competencies. Therefore, to form the competencies mentioned above, the curriculum should be seriously restructured to meet modern education and job market requirements, and relevant courses should be proposed. New courses, in particular, should focus on the formation of technological competences. These suggestions have been made upon the author’s research of the problem across various HEIs in Armenia. The updated curricula should include courses aimed at familiarization with various computer-assisted translation (CAT) tools (MemoQ, Trados, OmegaT, Wordfast, etc.) in the translation process and at the creation of glossaries and termbases compatible with different platforms, which will ensure consistency in the translation of similar texts and speed up the translation process itself. Another aspect that may be strengthened via curriculum modification is the introduction of interdisciplinary and Project-Based Learning courses, which will enable information mining and thematic competences, which are of great importance as well. Of course, the amendment of the existing curriculum with the mentioned courses will require corresponding faculty development via training, workshops, and seminars. Finally, the provision of extensive internships with translation agencies is strongly recommended, as it will ensure the synthesis of the theoretical background and the practical skills highly required for this specific area. Summing up, the restructuring and modernization of the existing curricula on Translation Studies should focus on three major aspects, i.e., the introduction of new courses that meet global quality standards of education, professional development for faculty, and the integration of extensive internships supervised by experts in the field.

Keywords: competencies, curriculum, modernization, technical literacy, translation studies

Procedia PDF Downloads 128
1376 Molecular Characterization of Arginine Sensing Response in Unravelling Host-Pathogen Interactions in Leishmania

Authors: Evanka Madan, Madhu Puri, Dan Zilberstein, Rohini Muthuswami, Rentala Madhubala

Abstract:

The extensive interaction between the host and pathogen metabolic networks decidedly shapes the outcome of infection. Utilization of arginine by the host and pathogen is critical for determining the outcome of pathogenic infection. Infection with L. donovani, an intracellular parasite, leads to extensive competition for arginine between the host and the parasite. One of the major amino acid (AA) sensing signaling pathways in mammalian cells is the mammalian target of rapamycin complex 1 (mTORC1) pathway. mTORC1, as a nutrient sensor, controls numerous metabolic pathways. Arginine is critical for mTORC1 activation. SLC38A9 is the arginine sensor for mTORC1, being activated during arginine sufficiency. L. donovani transports arginine via a high-affinity transporter (LdAAP3) that is rapidly up-regulated by the arginine deficiency response (ADR) in intracellular amastigotes. This study, to the authors' best knowledge, investigates the interaction between two arginine sensing systems that act in the same compartment, the lysosome. One is important for macrophage defense, and the other is essential for pathogen virulence. We hypothesize that the latter modulates lysosomal arginine to prevent the host defense response. The work presented here identifies an upstream regulatory role of LdAAP3 in regulating the expression of the SLC38A9-mTORC1 pathway and, consequently, its function in L. donovani-infected THP-1 cells cultured in 0.1 mM and 1.5 mM arginine. It was found that at physiological levels of arginine (0.1 mM), infecting THP-1 with Leishmania leads to increased levels of SLC38A9 and mTORC1 via an increase in the expression of RagA. However, the reverse was observed with LdAAP3 mutants, reflecting the positive regulatory role of LdAAP3 on host SLC38A9. At the molecular level, upon infection, mTORC1 and RagA were found to be activated at the surface of phagolysosomes, where they form a complex with phagolysosome-localized SLC38A9. To reveal the relevance of SLC38A9 at physiological levels of arginine, endogenous SLC38A9 was depleted, and a substantial reduction in the expression of host mTORC1, its downstream active substrate p-P70S6K1, and parasite LdAAP3 was observed, thereby showing that silencing SLC38A9 suppresses the ADR. In brief, to the authors' best knowledge, these results reveal an upstream regulatory role of LdAAP3 in manipulating SLC38A9 arginine sensing in host macrophages. Our study indicates that the intra-macrophage survival of L. donovani depends on the availability and transport of extracellular arginine. An understanding of the sensing pathways of both parasite and host will open a new perspective on the molecular mechanism of host-parasite interaction and, consequently, on treatments for leishmaniasis.

Keywords: arginine sensing, LdAAP3, L. donovani, mTORC1, SLC38A9, THP-1

Procedia PDF Downloads 122
1375 Microplastics in the Seine River Catchment: Results and Lessons from a Pluriannual Research Programme

Authors: Bruno Tassin, Robin Treilles, Cleo Stratmann, Minh Trang Nguyen, Sam Azimi, Vincent Rocher, Rachid Dris, Johnny Gasperi

Abstract:

Microplastics (<5 mm) in the environment and in hydrosystems are one of the major environmental issues of the present day. Over the last five years, a research programme was conducted in order to assess the behavior of microplastics in the Seine River catchment in a Man-Land-Sea continuum approach. Results show that microplastic concentration varies at the seasonal scale, but also at much smaller scales, for instance during flood events and with tides in the estuary. Moreover, microplastic sampling and characterization issues emerged throughout this work. The Seine is a 750 km long river flowing through northwestern France. It crosses the Paris megacity (12 million inhabitants) and reaches the English Channel after a 170 km long estuary. This site is highly relevant for assessing the effect of anthropogenic pollution, as the mean river flow is low (around 350 m³/s) while human presence and activities are very intense. Monthly monitoring of the microplastic concentration took place over a 19-month period and showed significant temporal variations at all sampling stations but no significant upstream-downstream increase, indicating a possible major sink to the sediment. At the scale of a major flood event (winter and spring 2018), the microplastic concentration showed an evolution similar to that of the well-known suspended solids concentration, increasing during the rise of the flow and decreasing during its recession. Assessing the position of the concentration peak in relation to the flow peak was unfortunately impossible. In the estuary, concentrations vary over time in connection with tidal movements, and in the water column in relation to salinity and turbidity. Although major gains in knowledge of microplastic dynamics in the Seine River have been obtained over the last years, major gaps remain, mostly concerning the interaction with the dynamics of suspended solids, the settling processes in the water column, and resuspension by navigation or shear stress increases. Moreover, the development of efficient chemical characterization techniques during the five-year period of this pluriannual research programme led to the improvement of the sampling techniques in order to access smaller microplastics (>10 µm) as well as larger but rarer ones (>500 µm).

Keywords: microplastics, Paris megacity, Seine River, suspended solids

Procedia PDF Downloads 197
1374 Sphere in Cube Grid Approach to Modelling of Shale Gas Production Using Non-Linear Flow Mechanisms

Authors: Dhruvit S. Berawala, Jann R. Ursin, Obrad Slijepcevic

Abstract:

Shale gas is one of the most rapidly growing forms of natural gas. Unconventional natural gas deposits are difficult to characterize overall, but in general they are lower in resource concentration and dispersed over large areas. Moreover, gas is densely packed into the matrix through adsorption, which accounts for a large volume of gas reserves. Gas production from tight shale deposits is made possible by extensive and deep well fracturing, which contacts large fractions of the formation. Conventional reservoir modelling and production forecasting methods, which rely on fluid-flow processes dominated by viscous forces, have proved to be very pessimistic and inaccurate. This paper presents a new approach to forecasting shale gas production by detailed modelling of gas desorption, diffusion, and non-linear flow mechanisms, in combination with a statistical representation of these processes. The model represents the porous medium as a cube in which free gas is present, with a sphere inside it (SiC: Sphere in Cube model) where gas is adsorbed onto the kerogen or organic matter. Further, the sphere is considered to consist of many layers of adsorbed gas in an onion-like structure. With pressure decline, gas desorbs first from the outermost layer of the sphere, causing a decrease in its molecular concentration. The newly available surface area and the change in concentration trigger the diffusion of gas from the kerogen. The process continues until all the gas present internally diffuses out of the kerogen, gets adsorbed onto the available surface area, and then desorbs into the nanopores and micro-fractures in the cube. Each SiC idealizes a gas pathway and is characterized by the sphere diameter and the length of the cube. The diameter allows the modelling of gas storage, diffusion, and desorption; the cube length accounts for the flow pathway in nanopores and micro-fractures. Many of these representative but general cells of the reservoir are put together and linked to a well or hydraulic fracture. The paper quantitatively describes these processes as well as clarifies the geological conditions under which successful shale gas production could be expected. A numerical model has been derived and compiled in FORTRAN to develop a simulator for shale gas production by treating the spheres as a source term in each of the grid blocks. By applying SiC to field data, we demonstrate that the model provides an effective way to quickly assess gas production rates from shale formations. We also examine the effect of model input properties on gas production.
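
The desorption-diffusion-flow coupling described above can be illustrated with a toy calculation. The sketch below (written in Python rather than the authors' FORTRAN simulator) treats a single SiC cell: adsorbed gas in the sphere follows a Langmuir isotherm, and a first-order lag stands in for diffusion out of the kerogen; every parameter name and value is an assumption made for illustration only.

```python
import numpy as np

# Minimal sketch of one "sphere in cube" (SiC) cell, assuming Langmuir desorption
# and a first-order lag standing in for intra-kerogen diffusion. Parameter values
# and names are hypothetical; the paper's FORTRAN simulator is far more detailed.

V_L, p_L = 0.015, 6.0e6          # Langmuir volume (m3/kg) and Langmuir pressure (Pa)
rho_k = 2500.0                   # kerogen (sphere) density, kg/m3
d_sphere = 2.0e-7                # sphere diameter, m
vol_sphere = np.pi * d_sphere**3 / 6.0
k_diff = 0.05                    # first-order diffusion/desorption rate, 1/step

def langmuir(p):
    """Adsorbed gas content (m3 of gas per kg of kerogen) at pressure p."""
    return V_L * p / (p_L + p)

p = np.linspace(25e6, 5e6, 200)                   # imposed pressure decline, Pa
adsorbed = langmuir(p[0]) * rho_k * vol_sphere    # gas held in the sphere, m3
produced = np.zeros_like(p)

for i in range(1, len(p)):
    target = langmuir(p[i]) * rho_k * vol_sphere    # equilibrium content at new pressure
    release = k_diff * max(adsorbed - target, 0.0)  # diffusion-limited release this step
    adsorbed -= release
    produced[i] = produced[i - 1] + release         # gas delivered to the cube nanopores

print(f"cumulative desorbed gas per cell: {produced[-1]:.3e} m3")
```

As pressure declines, each step releases only a fraction of the gas that the new equilibrium would allow, so cumulative production lags the drawdown, which is the qualitative behavior the SiC source term is meant to capture.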

Keywords: adsorption, diffusion, non-linear flow, shale gas production

Procedia PDF Downloads 163
1373 Application of Compressed Sensing and Different Sampling Trajectories for Data Reduction of Small Animal Magnetic Resonance Image

Authors: Matheus Madureira Matos, Alexandre Rodrigues Farias

Abstract:

Magnetic Resonance Imaging (MRI) is a vital imaging technique used in both clinical and pre-clinical areas to obtain detailed anatomical and functional information. However, MRI scans can be expensive, time-consuming, and often require the use of anesthetics to keep animals still during the imaging process. Prolonged or repeated exposure to anesthetics can have adverse effects on animals, including physiological alterations and potential toxicity, so minimizing the duration and frequency of anesthesia is crucial for the well-being of research animals. In recent years, various sampling trajectories have been investigated to reduce the number of MRI measurements, leading to shorter scanning times and minimizing the duration of animal exposure to the effects of anesthetics. Compressed sensing (CS) and sampling trajectories such as Cartesian, spiral, and radial have emerged as powerful tools to reduce MRI data while preserving diagnostic quality. This work aims to apply CS with Cartesian, spiral, and radial sampling trajectories to the reconstruction of MRI of the abdomen of mice sub-sampled at levels below that defined by the Nyquist theorem. The methodology consists of using a fully sampled reference MRI of a female C57BL/6 mouse acquired experimentally in a 4.7 Tesla MRI scanner for small animals using spin echo pulse sequences. The image is down-sampled along Cartesian, radial, and spiral sampling paths and then reconstructed by CS. The quality of the reconstructed images is objectively assessed by three quality metrics: RMSE (root mean square error), PSNR (peak signal-to-noise ratio), and SSIM (structural similarity index measure). The utilization of optimized sampling trajectories and the CS technique has demonstrated the potential for a significant reduction of up to 70% in image data acquisition. This result translates into shorter scan times, minimizing the duration and frequency of anesthesia administration and reducing the potential risks associated with it.
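
As a rough illustration of the sub-sampling and quality-assessment workflow, the sketch below undersamples the k-space of a synthetic phantom along a Cartesian mask and evaluates a zero-filled reconstruction with RMSE and PSNR. A true CS reconstruction would instead solve a sparsity-regularized inverse problem; the mask density, phantom, and helper names here are all hypothetical.

```python
import numpy as np

# Minimal sketch of the undersampling + quality-metric pipeline, assuming a
# synthetic phantom and a Cartesian mask. Zero-filling is used only to
# illustrate the workflow; a CS solver would replace the last step.

def psnr(ref, test):
    """Peak signal-to-noise ratio in dB, with the reference maximum as the peak."""
    mse = np.mean((ref - test) ** 2)
    return 10 * np.log10(ref.max() ** 2 / mse)

rng = np.random.default_rng(0)
img = np.zeros((128, 128))
img[32:96, 32:96] = 1.0                      # toy "anatomy"

kspace = np.fft.fftshift(np.fft.fft2(img))   # fully sampled k-space

# Cartesian undersampling: keep ~30% of phase-encode lines, always keep the center
mask = np.zeros_like(img, dtype=bool)
mask[:, rng.random(128) < 0.3] = True
mask[:, 54:74] = True                        # fully sample low spatial frequencies

recon = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace * mask)))

rmse = np.sqrt(np.mean((img - recon) ** 2))
print(f"kept {mask.mean():.0%} of k-space, RMSE={rmse:.3f}, PSNR={psnr(img, recon):.1f} dB")
```

SSIM would be computed on the same reference/reconstruction pair (e.g., with an image-processing library), and the radial and spiral cases differ only in how the mask is defined.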

Keywords: compressed sensing, magnetic resonance, sampling trajectories, small animals

Procedia PDF Downloads 71
1372 The Relationship Between Weight Gain, Cyclicality of Diabetologic Education and the Experienced Stress: A Study Involving Pregnant Women

Authors: Agnieszka Rolinska, Marta Makara-Studzinska

Abstract:

Introduction: In recent years, there has been intensive development of research into the physiological relationships between experienced stress and obesity. Moreover, strong chronic stress leads to the disorganization of a person's activity on various levels of functioning, including the behavioral and cognitive spheres (also in one's diet). Aim: The present work addresses the following research questions: Is there a relationship between an increase in stress related to the disease and the need for cyclical diabetologic education in gestational diabetes? Are there any differences in the stress experienced during the last three months of pregnancy between patients with normal weight gains and those with abnormal weight gains, among women with gestational diabetes and in normal pregnancy? Are there any differences in stress coping styles between patients with normal weight gains and those with abnormal weight gains, among women with gestational diabetes and in normal pregnancy? Method: The study involved pregnant women with gestational diabetes (treated with diet, without insulin therapy) and women in normal pregnancy – 206 women in total. The following psychometric tools were employed: the Perceived Stress Scale (PSS; Cohen, Kamarck, Mermelstein), the Coping Inventory for Stressful Situations (CISS; Endler, Parker), and the authors' own questionnaire. Gestational diabetes mellitus was diagnosed on the basis of the results of the fasting oral glucose tolerance test (75 g OGTT). Body weight measurements were confirmed in a diagnostic interview, taking medical data into account. Regularities in weight gains in pregnancy were determined according to the recommendations of the Polish Gynecological Society and the American norms determined by the Institute of Medicine (IOM). Conclusions: The increase in stress related to the disease varies in patients with differing requirements for the cyclical nature of diabetologic education (i.e., education which is systematically repeated). There are no differences in recently experienced stress or stress coping styles between women with gestational diabetes and those in normal pregnancy. There is a relationship between weight gains in pregnancy and the stress experienced in life as well as stress coping styles – both in pregnancy complicated by diabetes and in physiological pregnancy. In discussing the obtained results, the authors refer to scientific reports from international English-language journals.

Keywords: diabetologic education, gestational diabetes, stress, weight gain in pregnancy

Procedia PDF Downloads 307
1371 Studying Second Language Learners' Language Behavior from Conversation Analysis Perspective

Authors: Yanyan Wang

Abstract:

This paper on second language teaching and learning uses a conversation analysis (CA) approach and focuses on how second language learners of Chinese do repair when making clarification requests. In order to demonstrate their behavior in interaction, a comparison was made between native speakers of Chinese and non-native speakers of Chinese. The significance of the research is to make second language teachers and learners aware of repair and of how to seek clarification. Utilizing the methodology of CA, the research involved two sets of naturally occurring recordings, one of native speaker students and the other of non-native speaker students. Both sets of recordings were telephone conversations between students and teachers. There were 50 native speaker students and 50 non-native speaker students. After repeated listening to the recordings, the parts with repairs for clarification were selected for analysis; these included the moments in the talk when students had problems in understanding or hearing the speaker and had to seek clarification. For example, 'Sorry, I do not understand' and 'Can you repeat the question?' were used as repairs to make clarification requests. In the data, there were 43 such cases from native speaker students and 88 cases from non-native speaker students. The non-native speaker students were more likely to use repair to seek clarification. The analysis of how the students make clarification requests during conversation was carried out by investigating how the students initiated problems and how the teachers repaired them. In CA terms, this is called other-initiated self-repair (OISR), which refers to student-initiated teacher-repair in this research. The findings show that, in initiating repair, native speaker students pay more attention to mutual understanding (inter-subjectivity), while non-native speaker students, due to their lack of language proficiency, pay more attention to their status of knowledge (epistemic switch). There are three major differences: 1) native Chinese students more often initiate closed-class OISR (seeking specific information in the request), such as repeating a word or phrase from the previous turn, while non-native students more frequently initiate open-class OISR (not specifying the clarification), such as 'Sorry, I don't understand'. 2) Native speakers' clarification requests are treated by the teacher as concerning understanding of the content, while non-native learners' clarification requests are treated by the teacher as a language proficiency problem. 3) Native speakers do not treat repair as a knowledge issue, and there is no third position in their repair sequences to close the repair, while non-native learners take the repair sequence as a time to adjust their knowledge; there is a clear third-position closing token, such as 'oh', to close the repair sequence so that the talk can return to the topic. In conclusion, this paper uses a conversation analysis approach to compare the differences between native Chinese speakers and non-native Chinese learners in their ways of conducting repair when making clarification requests. The findings are useful for future Chinese language teaching and learning, especially for teaching pragmatics such as requests.

Keywords: conversation analysis (CA), clarification request, second language (L2), teaching implication

Procedia PDF Downloads 252
1370 Sustainable Geographic Information System-Based Map for Suitable Landfill Sites in Aley and Chouf, Lebanon

Authors: Allaw Kamel, Bazzi Hasan

Abstract:

Municipal solid waste (MSW) generation is among the most significant threats to global environmental health. Solid waste management has been an important environmental problem in developing countries because of the difficulties in finding sustainable solutions for solid waste. Therefore, more effort is needed to overcome this problem. Lebanon suffered a severe solid waste management crisis in 2015, and a new landfill site was proposed to solve the existing problem. The study aims to identify and locate the most suitable area in which to construct a landfill, taking sustainable development into consideration in order to overcome the present situation and protect future demands. Throughout the article, a landfill site selection methodology is discussed using a Geographic Information System (GIS) and Multi-Criteria Decision Analysis (MCDA). Several environmental, economic, and social factors were taken as criteria for the selection of a landfill. Soil, geology, and LUC (Land Use and Land Cover) indices, together with a Sustainable Development Index, were the main inputs used to create the final map of Environmentally Sensitive Areas (ESA) for landfill siting. Different factors were determined to define each index. The input data for each factor were managed, visualized, and analyzed using GIS. GIS was used as an important tool to identify suitable areas for a landfill. Spatial Analysis (SA) and other analysis and management GIS tools were implemented to produce input maps capable of identifying suitable areas related to each index. Weights were assigned to the factors within each index, and main weights were assigned to each index used. The combination of the different index maps generates the final output map of the ESA. The output map was reclassified into three suitability classes of low, moderate, and high suitability. The results showed different locations suitable for the construction of a landfill. The results also reflected the importance of GIS and MCDA in helping decision makers find a solution to solid waste through a sanitary landfill.
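
A minimal sketch of the weighted-overlay step of such a GIS/MCDA workflow is given below, assuming each criterion raster has already been normalized to a 0-1 scale; the layer names, weights, and class breaks are hypothetical and are not the study's values.

```python
import numpy as np

# Minimal sketch of the weighted-overlay step of a GIS/MCDA suitability analysis.
# Each criterion is assumed to be rasterized and normalized to 0-1 beforehand;
# layer names, weights, and class breaks are illustrative only.

rng = np.random.default_rng(1)
shape = (100, 100)                       # toy raster grid
layers = {
    "soil": rng.random(shape),
    "geology": rng.random(shape),
    "land_use": rng.random(shape),
    "sustainable_dev": rng.random(shape),
}
weights = {"soil": 0.3, "geology": 0.3, "land_use": 0.2, "sustainable_dev": 0.2}

# Weighted sum of normalized criterion rasters -> composite suitability (0-1)
suitability = sum(weights[name] * raster for name, raster in layers.items())

# Reclassify into the three suitability classes used in the abstract
classes = np.digitize(suitability, bins=[1 / 3, 2 / 3])   # 0=low, 1=moderate, 2=high
for label, code in [("low", 0), ("moderate", 1), ("high", 2)]:
    print(f"{label:9s}: {(classes == code).mean():.1%} of cells")
```

In a real GIS workflow the same weighted sum and reclassification would be run on georeferenced rasters with a spatial analysis toolbox rather than on synthetic arrays.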

Keywords: sustainable development, landfill, municipal solid waste (MSW), geographic information system (GIS), multi criteria decision analysis (MCDA), environmentally sensitive area (ESA)

Procedia PDF Downloads 148
1369 Raman Tweezers Spectroscopy Study of Size Dependent Silver Nanoparticles Toxicity on Erythrocytes

Authors: Surekha Barkur, Aseefhali Bankapur, Santhosh Chidangil

Abstract:

The Raman tweezers technique has become prevalent in single-cell studies. This technique combines Raman spectroscopy, which gives information about molecular vibrations, with optical tweezers, which use a tightly focused laser beam to trap single cells. Raman tweezers have thus enabled researchers to analyze single cells and explore different applications. The applications of Raman tweezers include studying blood cells, monitoring blood-related disorders, silver nanoparticle-induced stress, etc. Interest in the toxic effects of nanoparticles has increased along with the growth in their applications. The interaction of these nanoparticles with cells may vary with their size. We have studied the effect of silver nanoparticles of sizes 10 nm, 40 nm, and 100 nm on erythrocytes using the Raman tweezers technique. Our aim was to investigate the size dependence of the nanoparticle effect on RBCs. We used a 785 nm laser (Starbright Diode Laser, Torsana Laser Tech, Denmark) for both trapping and Raman spectroscopic studies. A 100x oil immersion objective with a high numerical aperture (NA 1.3) was used to focus the laser beam into the sample cell. The back-scattered light was collected using the same microscope objective and focused into the spectrometer (Horiba Jobin Yvon iHR320 with a 1200 grooves/mm grating blazed at 750 nm). A liquid-nitrogen-cooled CCD (Symphony CCD-1024x256-OPEN-1LS) was used for signal detection. Blood was drawn from healthy volunteers in vacutainer tubes and centrifuged to separate the blood components. 1.5 ml of silver nanoparticle suspension was washed twice with distilled water, leaving 0.1 ml of silver nanoparticles at the bottom of the vial. The concentration of the silver nanoparticle suspension was 0.02 mg/ml, so 0.03 mg of nanoparticles was present in the 0.1 ml obtained. 25 µl of RBCs was diluted in 2 ml of PBS solution, treated with 50 µl (0.015 mg) of nanoparticles, and incubated in a CO2 incubator. Raman spectroscopic measurements were done after 24 hours and 48 hours of incubation. All spectra were recorded with 10 mW laser power (785 nm diode laser), 60 s of accumulation time, and 2 accumulations. Major changes were observed in the peaks at 565 cm-1, 1211 cm-1, 1224 cm-1, 1371 cm-1, and 1638 cm-1. A decrease in intensity at 565 cm-1, an increase at 1211 cm-1 with a reduction at 1224 cm-1, an increase in intensity at 1371 cm-1, and the disappearance of the peak at 1635 cm-1 indicate deoxygenation of hemoglobin. Nanoparticles of larger size showed the maximum spectral changes, while smaller changes were observed in the spectra of erythrocytes treated with 10 nm nanoparticles.

Keywords: erythrocytes, nanoparticle-induced toxicity, Raman tweezers, silver nanoparticles

Procedia PDF Downloads 288
1368 Advanced Compound Coating for Delaying Corrosion of Fast-Dissolving Alloy in High Temperature and Corrosive Environment

Authors: Lei Zhao, Yi Song, Tim Dunne, Jiaxiang (Jason) Ren, Wenhan Yue, Lei Yang, Li Wen, Yu Liu

Abstract:

Fast-dissolving magnesium (DM) alloy technology has contributed significantly to the “Shale Revolution” in the oil and gas industry. This application requires DM downhole tools to dissolve initially at a slow rate and then accelerate rapidly to a high rate after a certain period of operation time (typically 8 h to 2 days), a contradictory requirement that can hardly be addressed by traditional Mg alloying or processing alone. Premature disintegration of downhole DM tools has been broadly reported from field trials. To address this issue, “temporary” thin polymer coatings of various formulations are currently applied onto the DM surface to delay its initial dissolution. Due to conveying parts, harsh downhole conditions, and the high dissolution rate of the base material, current delay coatings relying on pure polymers are found to perform well only at low temperatures (typically < 100 ℃) and on parts without sharp edges or corners, as severe geometries prevent high-quality thin-film coatings from forming effectively. In this study, a coating technology combining Plasma Electrolytic Oxide (PEO) coatings with advanced thin-film deposition has been developed, which can delay the dissolution of complex DM parts (with sharp corners) in corrosive fluid at 150 ℃ for over 2 days. Synergistic effects between the hard porous PEO coating and the chemically inert elastic polymer sealing lead to the improved delay of dissolution, and strong chemical/physical bonding between these two layers has been found to play an essential role. The microstructure of this advanced coating and the compatibility between PEO and various polymer selections have been thoroughly investigated, and a model is also proposed to explain the delaying performance. This study could not only benefit the oil and gas industry in unlocking High Temperature High Pressure (HTHP) unconventional resources that were previously inaccessible, but also potentially provides a technical route for other industries (e.g., bio-medical, automobile, aerospace) where primer anti-corrosive protection of light Mg alloys is in high demand.

Keywords: dissolvable magnesium, coating, plasma electrolytic oxide, sealer

Procedia PDF Downloads 105
1367 Characterisation of Human Attitudes in Software Requirements Elicitation

Authors: Mauro Callejas-Cuervo, Andrea C. Alarcon-Aldana

Abstract:

It is evident that there has been progress in the development and innovation of tools, techniques, and methods for software development. Even so, there are few methodologies that include the human factor from the point of view of motivation, emotions, and impact on the work environment; aspects that, when mishandled or not taken into consideration, increase the iterations in the requirements elicitation phase. This generates a large number of changes in the characteristics of the system during its development process and an overinvestment of resources to obtain a final product that often does not live up to the expectations and needs of the client. Human factors such as emotions or personality traits are naturally associated with the process of developing software. However, most existing works are oriented towards the analysis of the final users of the software and do not take into consideration the emotions and motivations of the members of the development team. Given that, in industry, the strategies for selecting requirements engineers and/or analysts do not take said factors into account, it is important to identify and describe the relevant characteristics or personality traits in order to elicit requirements effectively. This research describes the main personality traits associated with requirements elicitation tasks through an analysis of the existing literature on the topic and a compilation of our experiences as software development project managers in the academic and productive sectors, allowing for the characterisation of a suitable profile for this job. Moreover, a psychometric test is used as an information gathering technique and is applied to the personnel of some local companies in the software development sector. This information supports a comparative analysis between the proposed profile and the degree of effectiveness with which their software development teams are formed. The results show that, of the software development companies studied, 53.58% have selected the personnel for the task of requirements elicitation adequately, 37.71% possess some of the characteristics needed to perform the task, and 10.71% are inadequate. From this information, it is possible to conclude that 46.42% of the requirements engineers selected by the companies could perform other roles more adequately; a change which could improve the performance and competitiveness of the work team and, indirectly, the quality of the product developed. Likewise, the research allowed for the validation of the pertinence and usefulness of the psychometric instrument, as well as the accuracy of the characteristics of the requirements engineer profile proposed as a reference.

Keywords: emotions, human attitudes, personality traits, psychometric tests, requirements engineering

Procedia PDF Downloads 262
1366 Analysis of Determinants of Growth of Small and Medium Enterprises in Kwara State, Nigeria

Authors: Hussaini Tunde Subairu

Abstract:

The Small and Medium Enterprises (SME) sector serves as a catalyst for employment generation, national growth, poverty reduction, and economic development in developing and developed countries. However, in Nigeria, despite the plethora of government policies and stimulus schemes directed at SMEs, the sector is still characterized by high rates of failure and discontinuity. This study therefore investigated owners'/managers' profiles, firm characteristics, and external factors as possible determinants of SME growth, using selected SMEs in Kwara State. Primary data were sourced from 200 SME respondents registered with the National Association of Small and Medium Enterprises (NASMES) in the Kwara State Central Senatorial District. Multiple Regression Analysis (MRA) was used to analyze the relationship between the dependent and independent variables, and pairwise correlation was employed to examine the relationships among the independent variables. Analysis of Variance (ANOVA) was employed to indicate the overall significance of the model. The findings revealed that the ANOVA put the F-statistic at 420.45 with a p-value of 0.000, indicating that the model was significant. The R2 and adjusted R2 values of 0.9643 and 0.9620, respectively, suggested that 96 percent of the variation in employment growth was explained by the explanatory variables. The level of technical and managerial education had a t-value of 24.14 and a p-value of 0.001; length of the managers'/owners' experience in a similar trade a t-value of 21.37 and a p-value of 0.001; age of managers/owners a t-value of 42.98 and a p-value of 0.001; firm age a t-value of 25.91 and a p-value of 0.001; number of firms in a cluster a t-value of 7.20 and a p-value of 0.001; access to formal finance a t-value of 5.56 and a p-value of 0.001; firm technology innovation a t-value of 25.32 and a p-value of 0.01; institutional support a t-value of 18.89 and a p-value of 0.01; globalization a t-value of 9.78 and a p-value of 0.01; and infrastructure a t-value of 10.75 and a p-value of 0.01. The results also indicated that initial size had a t-value of -1.71 and a p-value of 0.090, which is consistent with Gibrat's Law. The study concluded that owners'/managers' profiles, firm-specific characteristics, and external factors substantially influenced the employment growth of SMEs in the study area. Therefore, policy should enhance the human capital development of SME owners/managers and strengthen the fiscal policy thrust through the tariff regime to minimize the adverse effects of globalization. Governments at all levels must radically support SME growth, enhance institutional support for SME growth, and significantly upgrade key infrastructure such as roads, rail, telecommunications, water, and power.
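
For readers unfamiliar with the estimation setup, the sketch below shows a multiple regression of employment growth on a handful of explanatory variables using statsmodels; the data are synthetic and the variable names are hypothetical stand-ins for the study's indicators, so the resulting coefficients have no empirical meaning.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Minimal sketch of the multiple regression setup with synthetic data.
# Variable names and coefficients are hypothetical, not the study's estimates.
rng = np.random.default_rng(42)
n = 200                                        # sample size mirroring the 200 respondents
df = pd.DataFrame({
    "education": rng.integers(0, 4, n),        # owner/manager education level
    "experience": rng.integers(1, 30, n),      # years in a similar trade
    "firm_age": rng.integers(1, 25, n),
    "access_finance": rng.integers(0, 2, n),   # dummy: access to formal finance
    "initial_size": rng.integers(1, 50, n),
})
# synthetic employment growth as a linear function of the regressors plus noise
y = (0.4 * df["education"] + 0.2 * df["experience"] + 0.1 * df["firm_age"]
     + 0.5 * df["access_finance"] + rng.normal(0, 1, n))

X = sm.add_constant(df)
model = sm.OLS(y, X).fit()
print(model.summary())        # reports R2, the F-statistic, t-values, and p-values
```

The summary output reports the R2, F-statistic, t-values, and p-values analogous to those cited in the abstract.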

Keywords: external factors, firm specific characteristics, owners / manager profile, small and medium enterprises

Procedia PDF Downloads 240