Search results for: nanodrop technology
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7711

601 Predictive Semi-Empirical NOx Model for Diesel Engine

Authors: Saurabh Sharma, Yong Sun, Bruce Vernham

Abstract:

Accurate prediction of NOx emissions is a continuous challenge in diesel engine-out emission modeling. Performing experiments for every condition and scenario costs a significant amount of money and man-hours; therefore, a model-based development strategy has been implemented to address this issue. NOx formation is highly dependent on the burned-gas temperature and the O2 concentration inside the cylinder. Current empirical models are developed by calibrating parameters representing the engine operating conditions against measured NOx, which limits the predictions of purely empirical models to the region in which they were calibrated. An alternative solution is presented in this paper, which focuses on the utilization of in-cylinder combustion parameters to form a predictive semi-empirical NOx model. The result of this work is a fast and predictive NOx model built from physical parameters and empirical correlations. The model is developed from steady-state data collected over the entire operating region of the engine and from a predictive combustion model developed in Gamma Technologies (GT)-Power using the Direct Injection (DI)-Pulse combustion object. In this approach, the temperature in both the burned and unburned zones is considered over the combustion period, i.e., from Intake Valve Closing (IVC) to Exhaust Valve Opening (EVO). The oxygen concentration consumed in the burned zone and the trapped fuel mass are also considered in the reported model. Several statistical methods are used to construct the model, including individual machine learning methods and ensemble machine learning methods. A detailed validation of the model on multiple diesel engines is reported in this work. A substantial number of cases are tested for different engine configurations over a large span of speed and load points.
Different sweeps of operating conditions, such as Exhaust Gas Recirculation (EGR), injection timing, and Variable Valve Timing (VVT), are also considered for the validation. The model shows very good predictability and robustness at both sea-level and altitude conditions under different ambient conditions. Its advantages (high accuracy and robustness across operating conditions, low computational time, and the smaller number of data points required for calibration) establish a platform on which the model-based approach can be used for the engine calibration and development process. Moreover, this work aims to establish a framework for future model development for various other targets such as soot, Combustion Noise Level (CNL), and the NO2/NOx ratio.
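The abstract above describes fitting a fast semi-empirical NOx model from physics-derived in-cylinder features using individual and ensemble machine learning methods. The following is a minimal illustrative sketch of that idea only: the feature names follow the abstract, but the synthetic data, the thermal-NO-style target, and the two simple learners are assumptions, not the authors' dataset or method.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
T_burn = rng.uniform(1800.0, 2600.0, n)   # burned-zone temperature [K] (assumed range)
x_o2 = rng.uniform(0.05, 0.21, n)         # burned-zone O2 mole fraction (assumed)
m_fuel = rng.uniform(10.0, 60.0, n)       # trapped fuel mass [mg] (assumed)

# Synthetic "measured" NOx: exponential in temperature (thermal-NO style),
# proportional to O2 and fuel mass, plus multiplicative noise.
nox = 1e-3 * np.exp(T_burn / 400.0) * x_o2 * m_fuel \
      * (1.0 + 0.05 * rng.standard_normal(n))

# Fit in log space, where the exponential temperature dependence is linear.
X = np.column_stack([T_burn, np.log(x_o2), np.log(m_fuel), np.ones(n)])
y = np.log(nox)

# Learner 1: ordinary least squares on the log-linearised features.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict_linear(Xq):
    return np.exp(Xq @ coef)

# Learner 2: k-nearest-neighbour regression in standardised feature space.
def predict_knn(Xq, k=5):
    f, fq = X[:, :3], Xq[:, :3]
    sd = f.std(axis=0)
    d = np.linalg.norm((fq[:, None] - f[None, :]) / sd, axis=2)
    idx = np.argsort(d, axis=1)[:, :k]
    return np.exp(y[idx].mean(axis=1))

# Ensemble: simple average of the two learners' predictions.
def predict_ensemble(Xq):
    return 0.5 * (predict_linear(Xq) + predict_knn(Xq))

pred = predict_ensemble(X)
r2 = 1.0 - np.sum((nox - pred) ** 2) / np.sum((nox - nox.mean()) ** 2)
```

The averaging step is the simplest form of ensembling; the paper's actual learners and combination rule are not specified in the abstract.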

Keywords: diesel engine, machine learning, NOₓ emission, semi-empirical

Procedia PDF Downloads 114
600 The Invaluable Contributions of Radiography and Radiotherapy in Modern Medicine

Authors: Sahar Heidary

Abstract:

Radiography and radiotherapy have emerged as crucial pillars of modern medical practice, revolutionizing diagnostics and treatment for a myriad of health conditions. This abstract highlights the pivotal role of radiography and radiotherapy in healthcare and society. Radiography, a non-invasive imaging technique, has significantly advanced medical diagnostics by enabling the visualization of internal structures and abnormalities within the human body. With the advent of digital radiography, clinicians can obtain high-resolution images promptly, leading to faster diagnoses and informed treatment decisions. Radiography is central to detecting fractures, tumors, infections, and various other conditions, allowing for timely interventions and improved patient outcomes. Moreover, its widespread accessibility and cost-effectiveness make it an indispensable tool in healthcare settings worldwide. Radiotherapy, in turn, a branch of medical science that utilizes high-energy radiation, has become an integral component of cancer treatment and management. By precisely targeting and damaging cancerous cells, radiotherapy offers a potent strategy to control tumor growth and, in many cases, leads to cancer eradication. Additionally, radiotherapy is often used in combination with surgery and chemotherapy, providing a multifaceted approach to combating cancer comprehensively. Continuous advancements in radiotherapy techniques, such as intensity-modulated radiotherapy and stereotactic radiosurgery, have further improved treatment precision while minimizing damage to surrounding healthy tissues. Furthermore, radiography and radiotherapy have demonstrated their worth beyond oncology. Radiography is instrumental in guiding various medical procedures, including catheter placement, joint injections, and dental evaluations, reducing complications and enhancing procedural accuracy.
Radiotherapy, for its part, finds applications in non-cancerous conditions like benign tumors, vascular malformations, and certain neurological disorders, offering therapeutic options for patients who may not benefit from traditional surgical interventions. In conclusion, radiography and radiotherapy stand as indispensable tools in modern medicine, driving transformative improvements in patient care and treatment outcomes. Their ability to diagnose, treat, and manage a wide array of medical conditions underscores their value in medical practice. As technology continues to advance, radiography and radiotherapy will undoubtedly play an ever more significant role in shaping the future of healthcare, ultimately saving lives and enhancing the quality of life for countless individuals worldwide.

Keywords: radiology, radiotherapy, medical imaging, cancer treatment

Procedia PDF Downloads 69
599 The Relationship between Osteoporosis-Related Knowledge and Physical Activity among Women Age over 50 Years

Authors: P. Tardi, B. Szilagyi, A. Makai, P. Acs, M. Hock, M. Jaromi

Abstract:

Osteoporosis is becoming a major public health problem, particularly in postmenopausal women, as the incidence of the disease continues to rise; it is now one of the most common chronic musculoskeletal diseases. Osteoporosis-related knowledge is an important contributor to preventing and treating osteoporosis. The most important strategies to prevent or treat the disease are increasing the level of physical activity at all ages, cessation of smoking, reduction of alcohol consumption, and adequate dietary calcium and vitamin D intake. The aim of the study was to measure osteoporosis-related knowledge and physical activity among women aged over 50 years. For the measurements, we used the osteoporosis questionnaire (OPQ) to examine disease-specific knowledge and the Global Physical Activity Questionnaire (GPAQ) to measure the quantity and quality of physical activity. The OPQ is a self-administered 20-item questionnaire with five categories: general information, risk factors, investigations, consequences, and treatment. There are four choices per question (one of them being 'I do not know'). Respondents receive +1 point for a correct answer, -1 point for an incorrect answer, and 0 points for an 'I do not know' answer. We contacted 326 women (63.08 ± 9.36 years) to fill out the questionnaires. Descriptive analysis was carried out, and Spearman's correlation coefficient was calculated to examine the relationships between variables. Data were entered into Microsoft Excel, and all statistical analyses were performed using SPSS (Version 24). The participants of the study (n=326) scored 8.76 ± 6.94 points on the OPQ. Significant (p < 0.001) differences were found in the OPQ results according to the highest level of education.
It was observed that the score of participants with osteoporosis (10.07 ± 6.82 points) was significantly (p=0.003) higher than that of participants without osteoporosis (9.38 ± 6.66 points) and that of women (6.49 ± 6.97 points) who did not know whether osteoporosis existed in their case. The GPAQ results characterized the sample's physical activity across the dimensions of vigorous work (479.86 ± 684.02 min/week), moderate work (678.16 ± 804.5 min/week), travel (262.83 ± 380.27 min/week), vigorous recreation (77.71 ± 123.46 min/week), moderate recreation (115.15 ± 154.82 min/week), and total weekly physical activity (1645.99 ± 1432.88 min/week). Significant correlations were found between osteoporosis-related knowledge and physical activity in the travel (R=0.21; p < 0.001), vigorous recreation (R=0.35; p < 0.001), moderate recreation (R=0.35; p < 0.001), total vigorous minutes/week (R=0.15; p=0.001), and total moderate minutes/week (R=0.13; p=0.04) dimensions. According to these results, the highest level of education significantly determines osteoporosis-related knowledge. Physical activity is an important contributor to preventing and treating osteoporosis, and it showed a significant correlation with osteoporosis-related knowledge. Based on the results, developing osteoporosis-related knowledge may help to improve the level of physical activity, especially recreation. Acknowledgment: Supported by the ÚNKP-20-1 New National Excellence Program of the Ministry for Innovation and Technology from the Source of the National Research, Development and Innovation Fund.
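The abstract describes two concrete computations: the OPQ scoring rule (+1 per correct answer, -1 per incorrect answer, 0 for "I do not know") and Spearman's rank correlation between knowledge scores and activity minutes. A minimal sketch of both follows; the answer key, responses, and paired data are invented toy values, not the study's data.

```python
def opq_score(responses, answer_key):
    """OPQ scoring: +1 correct, -1 incorrect, 0 for 'I do not know' ('dk')."""
    score = 0
    for given, correct in zip(responses, answer_key):
        if given == "dk":
            continue               # "I do not know" scores 0
        score += 1 if given == correct else -1
    return score

def spearman_rho(x, y):
    """Spearman's rho without ties, via 1 - 6*sum(d^2)/(n*(n^2-1))."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

key = ["a", "b", "c", "d", "a"]
answers = ["a", "dk", "b", "d", "a"]   # 3 correct, 1 wrong, 1 "I do not know"
print(opq_score(answers, key))          # -> 2

scores = [2, 5, 8, 11, 14]              # toy OPQ scores
minutes = [300, 500, 800, 700, 1600]    # toy weekly activity minutes
print(round(spearman_rho(scores, minutes), 2))  # -> 0.9
```

The real instrument has 20 items and the study used SPSS, which additionally handles tied ranks; the shortcut formula above assumes no ties.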

Keywords: osteoporosis, osteoporosis-related knowledge, physical activity, prevention

Procedia PDF Downloads 112
598 Exploring the Use of Schoolgrounds for the Integration of Environmental and Sustainability Education in Natural and Social Sciences Pedagogy: A Case Study

Authors: Headman Hebe, Arnold Taringa

Abstract:

Background of the study: The benefits derived from Environmental and Sustainability Education (ESE) go beyond obtaining knowledge about the environment and the impact of human beings on it. Hence, it is sensible to expose learners to various resources that enable meaningful environment-inclined pedagogy. Schoolgrounds, where utilised to promote ESE, benefit holistic learner development. However, empirical evidence worldwide suggests that young children's contact with nature is declining due to urbanization, safety concerns of parents and guardians, and greater dependency on technology. Modern children spend much of their time on videogames and social media and very little in the natural environment. Furthermore, national education departments in numerous countries have made tangible efforts to embed environmental and place-based learning into their school curricula. South Africa is one of those countries whose national school curriculum advocates for ESE in pedagogy. Nevertheless, there is a paucity of research conducted in South Africa on schoolgrounds as potential enablers of ESE and as tools to foster a connection between youngsters and the natural environment. Accordingly, this study was essential, as it determines the extent to which environmental learning is accommodated in pedagogy. Significantly, it investigates efforts made to use schoolgrounds for pedagogical purposes to connect children with the natural environment. The study was therefore conducted to investigate the accessibility and use of schoolgrounds for environment-inclined pedagogy in Natural and Social Sciences in two schools located in the Mpumalanga Province of South Africa.
It addresses the question: To what extent are schoolgrounds used to promote environmental and sustainability education in the selected schools? The sub-questions are: How do teachers and learners perceive the use of schoolgrounds for environmental and sustainability education activities? How does the organization of schoolgrounds offer opportunities for environmental education activities and accessibility for learners? Research method: This qualitative-interpretive case study used purposive and convenient sampling for participant selection. Forty-six respondents participated in this study: 40 learners (twenty grade 7 learners per school), 2 school principals, and 4 grade 7 teachers. Data collection tools were observations, interviews, audio-visual recordings, and questionnaires, while data analysis was done thematically. Major findings: The findings of the study point to a lack of teacher training, a lack of infrastructure in the schoolgrounds, and no administrative support; unclear curriculum guidelines on the use of schoolgrounds for ESE; the availability of various elements in the schoolgrounds that could aid ESE activities; learners being denied access to certain parts of the schoolgrounds; and a lack of time, together with curriculum demands, constraining teachers from using schoolgrounds.

Keywords: affordances, environment and sustainability education, experiential learning, schoolgrounds

Procedia PDF Downloads 64
597 Nanostructured Pt/MnO2 Catalysts and Their Performance for Oxygen Reduction Reaction in Air Cathode Microbial Fuel Cell

Authors: Maksudur Rahman Khan, Kar Min Chan, Huei Ruey Ong, Chin Kui Cheng, Wasikur Rahman

Abstract:

Microbial fuel cells (MFCs) represent a promising technology for simultaneous bioelectricity generation and wastewater treatment. Catalysts account for a significant portion of the cost of microbial fuel cell cathodes. Many materials have been tested as aqueous cathodes, but air cathodes are needed to avoid the energy demands of water aeration. The sluggish oxygen reduction reaction (ORR) rate at the air cathode necessitates an efficient electrocatalyst, such as carbon-supported platinum (Pt/C), which is very costly. Manganese oxide (MnO2) is a representative metal oxide that has been studied as a promising alternative electrocatalyst for the ORR and tested in air-cathode MFCs. However, MnO2 alone has poor electrical conductivity and low stability. In the present work, the MnO2 catalyst was modified by doping with Pt nanoparticles. The goal of the work was to improve the performance of the MFC with minimum Pt loading. MnO2 and Pt nanoparticles were prepared by hydrothermal and sol-gel methods, respectively, and a wet impregnation method was used to synthesize the Pt/MnO2 catalyst. The catalysts were then used as cathode catalysts in air-cathode cubic MFCs, in which anaerobic sludge was inoculated as the biocatalyst and palm oil mill effluent (POME) was used as the substrate in the anode chamber. The as-prepared Pt/MnO2 was characterized comprehensively through field emission scanning electron microscopy (FESEM), X-ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS), and cyclic voltammetry (CV), where its surface morphology, crystallinity, oxidation state, and electrochemical activity were examined, respectively. XPS revealed the Mn(IV) oxidation state and Pt(0) nanoparticle metal, indicating the presence of MnO2 and Pt. The morphology of Pt/MnO2 observed by FESEM shows that Pt doping did not change the needle-like shape of MnO2, which provides a large contacting surface area.
The electrochemically active area of the Pt/MnO2 catalysts increased from 276 to 617 m2/g as the Pt loading increased from 0.2 to 0.8 wt%. The CV results in O2-saturated neutral Na2SO4 solution showed that the MnO2 and Pt/MnO2 catalysts catalyze the ORR with different activities. The MFC with Pt/MnO2 (0.4 wt% Pt) as the air-cathode catalyst generates a maximum power density of 165 mW/m3, higher than that of the MFC with the MnO2 catalyst (95 mW/m3). The open circuit voltage (OCV) of the MFC operated with the MnO2 cathode gradually decreased during 14 days of operation, whereas that of the MFC with the Pt/MnO2 cathode remained almost constant throughout, suggesting the higher stability of the Pt/MnO2 catalyst. Therefore, Pt/MnO2 with 0.4 wt% Pt was demonstrated to be an efficient, low-cost electrocatalyst for the ORR in an air-cathode MFC, with higher electrochemical activity, stability, and hence enhanced performance.
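A maximum power density like the 165 mW/m3 figure above is typically read off a polarization sweep by computing P = V × I at each external load and normalising by reactor volume. The sketch below illustrates only that calculation; the voltage/current pairs and the volume are invented for illustration, not the paper's measurements.

```python
volume_m3 = 1.0e-3  # assumed reactor (anode chamber) volume [m^3]

# (current [mA], cell voltage [V]) pairs from a hypothetical polarization sweep
polarization = [(0.0, 0.65), (1.0, 0.55), (2.0, 0.45), (3.0, 0.35), (4.0, 0.12)]

def max_power_density(points, volume):
    """Maximum of P = V*I (mW) over the sweep, divided by reactor volume (m^3)."""
    return max(v * i for i, v in points) / volume

print(round(max_power_density(polarization, volume_m3), 1))  # -> 1050.0 (mW/m^3)
```

The peak sits at an intermediate load: at open circuit the current is zero, and at high current the voltage collapses, so the V × I product is maximised somewhere between.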

Keywords: microbial fuel cell, oxygen reduction reaction, Pt/MnO2, palm oil mill effluent, polarization curve

Procedia PDF Downloads 557
596 The Structuring of the Economics of Brazilian Innovation and an Institutional Proposal for the Legal Management of Technological Risks in Global Conformity

Authors: Daniela Pellin, Wilson Engelmann

Abstract:

Brazil has sought to accelerate its development through technology and innovation in response to global influences, which it has absorbed into internal management practices. To this end, it enacted the Brazilian Innovation Law 13.243/2016. However, the Law overemphasizes economic aspects, and its application will not consider stakeholders and technological risks, for which there is no legal treatment. Economic exploitation and technological risks must be controlled within the limits of the democratic system in order to achieve better social development and to help economic agents make decisions that conform with global directions. The research understands this as a problem to be faced, given the social particularities of the country, because the North American Triple Helix theory, consolidated in developed countries, has been imported literally, with negative consequences when applied in developing countries. Because of this symptomatic scenario, it is necessary to adjust the management of the Law toward social and democratic interests in order to increase the country's development. To this end, the Government will have to adopt certain conducts, promoting, side by side with universities, civil society, and companies, informational transparency, the formation of partnerships, the creation of a comfort letter document to ensure the operation, the joint elaboration of a manual of good practices, accountability, and data dissemination. The universities must likewise promote informational transparency, draw up partnership contracts, generate revenue, and develop information. In addition, civil society must analyse the proposals received and offer its opinion on them. Finally, companies have to provide public and transparent information about investments and economic benefits, risks, and the innovations manufactured.
The research intends, as a general objective, to demonstrate that the efficient deployment of the triple helix will be possible if the innovative decision-making process passes through an institutional logic. As specific objectives, the American influence must undergo some modifications to better suit the economic-legal incentives and to potentiate the development of the social system. The hypothesis is that an institutional model for application to the legal system can be elaborated based on the emerging characteristics of the country, in such a way that technological risks can be foreseen and global conformity achieved, with attention to the full development of society as proposed by the researchers. The method of approach is systemic-constructivist, with a bibliographical review and data collection and analysis, leading to the construction of an institutional and democratic model for the management of the Law.

Keywords: development, governance of law, institutionalization, triple helix

Procedia PDF Downloads 140
595 Diminishing Constitutional Hyper-Rigidity by Means of Digital Technologies: A Case Study on E-Consultations in Canada

Authors: Amy Buckley

Abstract:

The purpose of this article is to assess the problem of constitutional hyper-rigidity to consider how it and the associated tensions with democratic constitutionalism can be diminished by means of using digital democratic technologies. In other words, this article examines how digital technologies can assist us in ensuring fidelity to the will of the constituent power without paying the price of hyper-rigidity. In doing so, it is impossible to ignore that digital strategies can also harm democracy through, for example, manipulation, hacking, ‘fake news,’ and the like. This article considers the tension between constitutional hyper-rigidity and democratic constitutionalism and the relevant strengths and weaknesses of digital democratic strategies before undertaking a case study on Canadian e-consultations and drawing its conclusions. This article observes democratic constitutionalism through the lens of the theory of deliberative democracy to suggest that the application of digital strategies can, notwithstanding their pitfalls, improve a constituency’s amendment culture and, thus, diminish constitutional hyper-rigidity. Constitutional hyper-rigidity is not a new or underexplored concept. At a high level, a constitution can be said to be ‘hyper-rigid’ when its formal amendment procedure is so difficult to enact that it does not take place or is limited in its application. This article claims that hyper-rigidity is one problem with ordinary constitutionalism that fails to satisfy the principled requirements of democratic constitutionalism. Given the rise and development of technology that has taken place since the Digital Revolution, there has been a significant expansion in the possibility for digital democratic strategies to overcome the democratic constitutionalism failures resulting from constitutional hyper-rigidity. 
Typically, these strategies have included, inter alia, e-consultations, e-voting systems, and online polling forums, all of which significantly improve the ability of politicians and judges to obtain constituents' opinions directly on any number of matters. This article expands on the application of these strategies through its Canadian e-consultation case study and presents them as a solution to poor amendment culture and, consequently, constitutional hyper-rigidity. Hyper-rigidity is a common descriptor of many written and unwritten constitutions, including the United States, Australian, and Canadian constitutions, to name a few. This article undertakes a case study on Canada in particular because it is a jurisdiction less commonly cited in the academic literature on hyper-rigidity and because Canada has, to some extent, championed the use of e-consultations. In Part I of this article, I identify the problem: the consequences of constitutional hyper-rigidity are in tension with the principles of democratic constitutionalism. In Part II, I identify and explore a potential solution, the implementation of digital democratic strategies as a means of reducing constitutional hyper-rigidity. In Part III, I explore Canada's e-consultations as a case study for assessing whether digital democratic strategies do, in fact, improve a constituency's amendment culture, thus reducing constitutional hyper-rigidity and the associated tension with the principles of democratic constitutionalism. The idea is to run a case study and then assess whether its conclusions can be generalised.

Keywords: constitutional hyper-rigidity, digital democracy, deliberative democracy, democratic constitutionalism

Procedia PDF Downloads 76
594 The Effect of Technology on Skin Development and Progress

Authors: Haidy Weliam Megaly Gouda

Abstract:

Dermatology is often a neglected specialty in low-resource settings despite the high morbidity associated with skin disease. This becomes even more significant when associated with HIV infection, as dermatological conditions are more common and more aggressive in HIV-positive patients. African countries have the highest HIV infection rates, and skin conditions are frequently misdiagnosed and mismanaged because of a lack of dermatological training and educational material. The frequent lack of diagnostic tests in the African setting renders basic clinical skills all the more vital. This project aimed to improve the diagnosis and treatment of skin disease in the HIV population in a district hospital in Malawi. A basic dermatological clinical tool was developed and produced in collaboration with local staff, based on the available literature and data collected from clinics, with the aim of improving diagnostic accuracy and providing guidance for the treatment of skin disease in HIV-positive patients. A literature search of Embase, Medline, and Google Scholar was performed and supplemented with data obtained by attending 5 antiretroviral clinics. From the literature, conditions were selected for inclusion in the resource if they were described as specific to, more prevalent in, or more extensive in the HIV population, or as having more adverse outcomes when they develop in HIV patients. Resource-appropriate treatment options were decided using Malawian Ministry of Health guidelines and textbooks specific to African dermatology. After the collection of data and discussion with local clinical and pharmacy staff, a list of 15 skin conditions was included, and a booklet was created using a simple layout of a picture, a diagnostic description of the disease, and treatment options.
Clinical photographs were collected from local clinics (with full consent of the patient) or from the book ‘Common Skin Diseases in Africa’ (permission granted if fully acknowledged and used in a not-for-profit capacity). This tool was evaluated by the local staff alongside an educational teaching session on skin disease. This project aimed to reduce uncertainty in diagnosis and provide guidance for appropriate treatment in HIV patients by gathering information into one practical and manageable resource. To further this project, we hope to review the effectiveness of the tool in practice.

Keywords: prevalence and pattern of skin diseases, impact on quality of life, interventions, clinical signs

Procedia PDF Downloads 62
593 IN-SEAN: The Pace of Economic Cooperation between India and ASEAN

Authors: Eumsin Payan

Abstract:

This article urges the Association of Southeast Asian Nations (ASEAN) to take an interest in, and give policy priority to, India over the other powerful countries of the world, including the powerful countries of Asia, the People's Republic of China (PRC), Russia, and India, countries with the ability to drive the Asian continent and, specifically, the ASEAN Economic Community (AEC). (Japan is incapable of stepping up to lead ASEAN because its military history has left "wounds" with too many countries in Asia, including those of the Greater East Asia War; combined with the economic problems Japan currently faces and its several natural disasters, Japan is not a good option for our era.) China appears to be the option that stands out, as can be seen in countless articles published for the general public. However, this article proposes India as the option for developing and driving the relationship among ASEAN countries in the future development of computer science and technology, allowing India to lead the Asian economy in place of China and the United States. As for Russia, its location is distant from Southeast Asia, and it does not give much importance to ASEAN. In this light, the author observes that India already has a "Look East" policy; it would therefore be simple for ASEAN to look back at India by starting cooperation through policies related to collaboration in computer science. In effect, this will continuously improve the relationship toward cooperation in economics, society, and culture. In view of the above, the author suggests a term for the relationship between India and ASEAN: INSEAN or IN-SEAN.
The author hopes that Thailand, as one of the five founders of ASEAN, can become the leader, or at least the entity, that pushes forward ASEAN policies that increase the importance of looking toward India. India is an emerging giant with the ability to step up in Asia. With its proficient use of English, India is able to pass on knowledge and drive ASEAN's economic relationships better than China or Russia, which face higher language barriers. Moreover, India cultivated a democratic civilization under the colonization of the British Empire, similar to other nations of Southeast Asia that are familiar with the various cultures the British brought to them. The most important aspect, in the author's perspective, is that India is not aggressive and is courteous. By developing its policies toward the East through the "Look East" policy, India has been able to establish a smoother relationship with Asian countries than China has; China's harsh policies in the South China Sea directly affect the ASEAN countries. For these reasons, India is an appropriate option for the establishment of a closer relationship with ASEAN, under the relationship the author has proposed as INSEAN or IN-SEAN.

Keywords: IN-SEAN, INSEAN, look west policy, look east policy, ASEAN, India

Procedia PDF Downloads 646
592 Requirement Engineering for Intrusion Detection Systems in Wireless Sensor Networks

Authors: Afnan Al-Romi, Iman Al-Momani

Abstract:

Applying Software Engineering (SE) processes is of vital importance and a key feature in critical, complex, large-scale systems, for example, safety systems, security service systems, and network systems. Inevitably, such systems carry risks, such as system vulnerabilities and security threats. The probability of those risks increases in unsecured environments, such as wireless networks in general and Wireless Sensor Networks (WSNs) in particular. A WSN is a self-organizing network of sensor nodes connected by wireless links. WSNs consist of hundreds to thousands of low-power, low-cost, multi-function sensor nodes that are small in size and communicate over short ranges. The distribution of sensor nodes in an open, potentially unattended environment, in addition to resource constraints in processing, storage, and power, places such networks under stringent limitations on lifetime (i.e., period of operation) and security. The importance of WSN applications in many military and civilian domains has drawn the attention of many researchers to WSN security. To address this important issue and overcome one of the main challenges of WSNs, researchers have developed security solutions in the form of software-based network Intrusion Detection Systems (IDSs). However, it has been witnessed that those developed IDSs are neither secure enough nor accurate enough to detect all malicious attack behaviours. Thus, the problem is the lack of coverage of all malicious behaviours in the proposed IDSs, leading to unpleasant results such as delays in the detection process, low detection accuracy, or, even worse, detection failure, as illustrated in previous studies. Another problem is the energy the IDS itself consumes in the WSN. In other words, not all requirements are implemented and then traced.
Moreover, not all requirements are identified or satisfied, as some have been compromised. The drawbacks in current IDSs stem from researchers and developers not following structured software development processes when developing them, resulting in inadequate requirement management and inadequate validation and verification of requirements quality. Unfortunately, the WSN and SE research communities have been mostly impermeable to each other. Integrating SE and WSNs is a real subject that will expand as the technology evolves and spreads into industrial applications. Therefore, this paper studies the importance of Requirement Engineering in developing IDSs. It also studies a set of existing IDSs and illustrates the absence of Requirement Engineering and its effect. Conclusions are then drawn regarding applying Requirement Engineering so that systems deliver the required functionalities, with respect to operational constraints, within an acceptable level of performance, accuracy, and reliability.
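The core complaint above is that IDS requirements are neither fully implemented nor traced. A minimal sketch of the traceability check this implies follows; the requirement set and its status fields are invented for illustration, not drawn from any surveyed IDS.

```python
# Hypothetical requirement register: each entry records whether the
# requirement has been implemented and whether it has been verified.
requirements = {
    "R1: detect sinkhole attack":  {"implemented": True,  "verified": True},
    "R2: detection latency < 2 s": {"implemented": True,  "verified": False},
    "R3: energy overhead < 5%":    {"implemented": False, "verified": False},
}

def trace_gaps(reqs):
    """Return IDs of requirements that are not both implemented and verified."""
    return sorted(rid for rid, status in reqs.items()
                  if not (status["implemented"] and status["verified"]))

print(trace_gaps(requirements))
```

Even this toy check surfaces the two failure modes the abstract names: a requirement implemented but never validated (R2) and one never implemented at all (R3).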

Keywords: software engineering, requirement engineering, Intrusion Detection System, IDS, Wireless Sensor Networks, WSN

Procedia PDF Downloads 322
591 Problems and Prospects of Protection of Historical Building as a Corner Stone of Cultural Policy for International Collaboration in New Era: A Study of Fars Province, Iran

Authors: Kiyanoush Ghalavand, Ali Ferydooni

Abstract:

Fars province (Fārs or Pārs) is a vast land located in the southwest of Iran. All over the province, one can see and feel the glory of ancient Iranian culture and civilization. There are many monuments, from the pre-historical to the Islamic era, within this province. The ancient cultural and historical monuments in Fars province, including the historical complex of Persepolis, the tombs of the Persian poets Hafez and Saadi, and dozens of other valuable cultural and historical works, stand as symbols of Iranian national identity and manifestations of its transcendent cultural values. Fars province is quintessentially Persian. Its name is the modern version of ancient Parsa, the homeland, if not the place of origin, of the Persians, one of the great powers of antiquity. From here, the Persian Empire ruled much of Western and Central Asia, receiving ambassadors and messengers at Persepolis. It was here that the Persian kings were buried, both in the mountain behind Persepolis and in the rock face of nearby Naqsh-e Rustam. We face a complex paradox between Persian and Islamic ideology in the age of technology in Iran. The main purpose of the present article is to identify and describe the problems and prospects of the origin and development of the modern approach to the conservation and restoration of ancient monuments and historic buildings, the influence that this development has had on international collaboration in the protection and conservation of cultural heritage, and the present consequences worldwide. The definition of objects and structures of the past as heritage, and the policies related to their protection, restoration, and conservation, have evolved together with modernity and are currently recognized as an essential part of the responsibilities of modern society.
Since the eighteenth century, the goal of this protection has been defined as the cultural heritage of humanity; gradually this has included not only ancient monuments and past works of art but even entire territories for a variety of new values generated in recent decades. In its medium-term program of 1989, UNESCO defined the full scope of such heritage. The cultural heritage may be defined as the entire corpus of material signs either artistic or symbolic handed on by the past to each culture and, therefore, to the whole of humankind. As a constituent part of the affirmation and enrichment of cultural identities, as a legacy belonging to all humankind, the cultural heritage gives each particular place its recognizable features and is the storehouse of human experience. The preservation and the presentation of the cultural heritage are therefore a corner-stone of any cultural policy. The process, from which these concepts and policies have emerged, has been identified as the ‘modern conservation movement’.

Keywords: tradition, modern, heritage, historical building, protection, cultural policy, fars province

Procedia PDF Downloads 163
590 Minding the Gap: Consumer Contracts in the Age of Online Information Flow

Authors: Samuel I. Becher, Tal Z. Zarsky

Abstract:

The digital world has become part of our DNA. The way e-commerce, human behavior, and law interact and affect one another is changing rapidly and significantly. Among other things, the internet equips consumers with a variety of platforms to share information in a volume we could not have imagined before. As part of this development, online information flows allow consumers to learn about businesses and their contracts efficiently and quickly. Consumers can become informed through the impressions that other, experienced consumers share and spread. In other words, consumers may familiarize themselves with the contents of contracts through the experiences other consumers have had. Online and offline, the relationships between consumers and businesses are most frequently governed by consumer standard form contracts. For decades, such contracts have been assumed to be one-sided and biased against consumers. Consumer law seeks to alleviate this bias and empower consumers. Legislatures, consumer organizations, scholars, and judges are constantly looking for clever ways to protect consumers from unscrupulous firms and unfair behaviors. While consumer-business relationships are theoretically administered by standardized contracts, firms do not always follow these contracts in practice. At times, there is a significant disparity between what the written contract stipulates and what consumers experience de facto. That is, there is a crucial gap (“the Gap”) between how firms draft their contracts on the one hand, and how firms actually treat consumers on the other. Interestingly, the Gap is frequently manifested by deviation from the written contract in favor of consumers. In other words, firms often exercise a lenient approach in spite of the stringent written contracts they draft. This essay examines whether, counter-intuitively, policy makers should add firms’ leniency to the growing list of firms’ suspicious behaviors.
At first glance, firms should be allowed, if not encouraged, to exercise leniency. Many legal regimes are looking for ways to cope with unfair contract terms in consumer contracts. Naturally, therefore, consumer law should enable, if not encourage, firms’ lenient practices. Firms’ willingness to deviate from their strict contracts in order to benefit consumers seems like a sensible approach. Apparently, such behavior should not be second-guessed. However, at times online tools, firms’ behaviors, and human psychology result in a toxic mix. Beneficial and helpful online information should be treated with due respect, as it may occasionally have surprising and harmful qualities. In this essay, we illustrate that technological changes turn the Gap into a key component in consumers’ understanding, or misunderstanding, of consumer contracts. In short, a Gap may distort consumers’ perception and undermine rational decision-making. Consequently, this essay explores whether, counter-intuitively, consumer law should sanction firms that create a Gap and use it. It examines when firms’ leniency should be considered manipulative or exercised in bad faith. It then investigates whether firms should be allowed to enforce the written contract even if they deliberately and consistently deviated from it.

Keywords: consumer contracts, consumer protection, information flow, law and economics, law and technology, paper deal v firms' behavior

Procedia PDF Downloads 198
589 An Approach to Determine the in Transit Vibration to Fresh Produce Using Long Range Radio (LORA) Wireless Transducers

Authors: Indika Fernando, Jiangang Fei, Roger Stanely, Hossein Enshaei

Abstract:

The ever-increasing consumer demand for quality fresh produce has placed manifold pressure on post-harvest supply chains in recent years. Mechanical injury to fresh produce is a critical factor in produce wastage, especially with the expansion of supply chains physically extending to thousands of miles. The impact of vibration damage in transit was identified as a specific area of focus, as it results in the wastage of a significant portion of fresh produce, at times ranging from 10% to 40% in some countries. Several studies have concentrated on quantifying the impact of vibration on fresh produce, and it has been a challenge to collect vibration impact data continuously due to limitations in battery life or device memory capacity. Therefore, study samples were limited to a stretch of the transit passage or a limited time of the journey. This may or may not give an accurate understanding of the vibration impacts encountered throughout the transit passage, which limits the accuracy of the results. Consequently, an approach that can extend the capacity and ability to determine vibration signals over the whole transit passage would help to accurately analyze vibration damage along the post-harvest supply chain. A mechanism was developed to address this challenge, capable of measuring in-transit vibration continuously through the transit passage, subject to a minimum acceleration threshold (0.1 g). A system consisting of six tri-axial vibration transducers, installed at different locations inside the cargo (produce) pallets in the truck, transmits vibration signals through LoRa (Long Range Radio) technology to a central device installed inside the container. The central device processes and records the vibration signals transmitted by the portable transducers, along with the GPS location.
This method economizes the power consumption of the portable transducers, maximizing their capability to measure vibration impacts over a transit passage extending to days in the distribution process. Trial tests conducted using the approach reveal that it is a reliable method to measure and quantify in-transit vibrations along the supply chain. The GPS capability enables identification of the locations in the supply chain where significant vibration impacts were encountered. This method contributes to determining the causes, susceptibility, and intensity of vibration impact damage to fresh produce in the post-harvest supply chain. More broadly, the approach could be used to determine vibration impacts not only for fresh produce but for any products in supply chains that may spend from a few hours to several days in transit.
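The threshold-gated recording described above can be sketched in a few lines of Python. Only the 0.1 g cut-off and the tri-axial samples are taken from the abstract; the function name, the sample tuple layout, and the magnitude test are illustrative assumptions about how such a transducer might filter its data:

```python
import math

ACCEL_THRESHOLD_G = 0.1  # minimum acceleration magnitude worth recording (from the study)

def vibration_events(samples, threshold_g=ACCEL_THRESHOLD_G):
    """Filter tri-axial acceleration samples, keeping only those whose
    resultant magnitude meets or exceeds the threshold.

    samples: iterable of (timestamp, ax, ay, az) tuples, accelerations in g.
    Returns a list of (timestamp, magnitude) event records for transmission.
    """
    events = []
    for ts, ax, ay, az in samples:
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        if magnitude >= threshold_g:
            events.append((ts, magnitude))
    return events
```

Gating on the resultant magnitude rather than on a single axis keeps the device orientation-independent, which matters when transducers are placed in arbitrary positions inside pallets.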

Keywords: post-harvest, supply chain, wireless transducers, LORA, fresh produce

Procedia PDF Downloads 265
588 A Standard-Based Competency Evaluation Scale for Preparing Qualified Adapted Physical Education Teachers

Authors: Jiabei Zhang

Abstract:

Although adapted physical education (APE) teacher preparation programs are available in the nation, a consistent standards-based competency evaluation scale for preparing qualified personnel to teach children with disabilities in APE cannot be identified in the literature. The purpose of this study was to develop a standards-based competency evaluation scale for assessing qualifications for teaching children with disabilities in APE. Standards-based competencies were reviewed and identified based on research evidence documented as effective in teaching children with disabilities in APE. A standards-based competency scale was developed for assessing qualifications for teaching children with disabilities in APE. This scale includes 20 standards-based competencies and a 4-point Likert-type scale for each. The first competency is knowledge of the causes of disabilities and their effects. The second is the ability to assess the physical education skills of children with disabilities. The third is the ability to collaborate with other personnel. The fourth is knowledge of measurement and evaluation. The fifth is an understanding of federal and state laws. The sixth is knowledge of the unique characteristics of all learners. The seventh is the ability to write objectives in behavioral terms. The eighth is knowledge of developmental characteristics. The ninth is knowledge of normal and abnormal motor behaviors. The tenth is the ability to analyze and adapt physical education curriculums. The eleventh is an understanding of the history and philosophy of physical education. The twelfth is an understanding of curriculum theory and development. The thirteenth is the ability to utilize instructional designs and plans.
The fourteenth is the ability to create and implement physical activities. The fifteenth is the ability to utilize technology applications. The sixteenth is an understanding of the value of program evaluation. The seventeenth is an understanding of professional standards. The eighteenth is knowledge of focused instruction and individualized interventions. The nineteenth is the ability to complete a research project independently. The twentieth is the ability to teach children with disabilities in APE independently. The 4-point Likert-type scale ranges from 1 (incompetent) to 4 (highly competent). The scale is used to assess whether a candidate who has completed all coursework is eligible to receive an endorsement for teaching children with disabilities in APE, based on the grades earned in the three courses targeted at each standards-based competency. The mean grade received in the three courses primarily addressing a standards-based competency is marked as a competency level on the above scale: level 4 for a mean grade of A over the three courses, level 3 for a mean grade of B, and so on. A candidate should receive a mean score of 3 (competent) or higher (highly competent) across the 19 standards-based competencies after completing all courses specified for receiving an endorsement for teaching children with disabilities in APE. The validity, reliability, and objectivity of this standards-based competency evaluation scale remain to be documented.
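The grade-to-level mapping and the endorsement rule above can be expressed as a short sketch. The A → 4 and B → 3 correspondences and the minimum level of 3 come from the abstract; the point values for C and D, and the function names, are assumed extrapolations for illustration:

```python
# Assumed grade points; the abstract states only A -> level 4, B -> level 3.
GRADE_POINTS = {"A": 4, "B": 3, "C": 2, "D": 1}

def competency_level(course_grades):
    """Map the mean grade over the three courses addressing one
    standards-based competency to a 1-4 Likert level."""
    points = [GRADE_POINTS[g] for g in course_grades]
    return round(sum(points) / len(points))

def eligible_for_endorsement(competency_levels, minimum=3):
    """Eligible when every competency is rated at the competent
    level (3) or higher."""
    return all(level >= minimum for level in competency_levels)
```

A candidate with straight A grades in one competency's three courses maps to level 4; a single level-2 rating anywhere blocks the endorsement under this rule.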

Keywords: evaluation scale, teacher preparation, adapted physical education teachers, children with disabilities

Procedia PDF Downloads 116
587 Biocompatibility Tests for Chronic Application of Sieve-Type Neural Electrodes in Rats

Authors: Jeong-Hyun Hong, Wonsuk Choi, Hyungdal Park, Jinseok Kim, Junesun Kim

Abstract:

Identifying the chronic functions of an implanted neural electrode is an important factor in acquiring neural signals through the electrode or restoring nerve functions after peripheral nerve injury. The purpose of this study was to investigate the biocompatibility of a neural electrode chronically implanted into the sciatic nerve. To do this, a sieve-type neural electrode was implanted between the proximal and distal ends of a transected sciatic nerve in the experimental group (sieve group, n=6), and end-to-end epineural repair was performed on the cut sciatic nerve in the control group (reconstruction group, n=6). All surgeries were performed on the sciatic nerve of the right leg in Sprague Dawley rats. Behavioral tests were performed before surgery, at 1, 4, 7, 10, and 14 days after surgery, and then weekly until 5 months following surgery. Changes in sensory function were assessed by measuring paw withdrawal responses to mechanical and cold stimuli. Motor function was assessed by motion analysis using the Qualisys program, which provided the range of motion (ROM) of the joints. Neurofilament-heavy chain and fibronectin expression were examined 5 months after surgery. In both groups, the paw withdrawal response to mechanical stimuli was slightly decreased from 3 weeks after surgery and then significantly decreased at 6 weeks after surgery. The paw withdrawal response to cold stimuli was increased from 4 days following surgery in both groups and began to decrease from 6 weeks after surgery. The ROM of the ankle joint showed a similar pattern in both groups: it increased significantly from 1 day after surgery and then decreased from 4 days after surgery. Neurofilament-heavy chain expression was observed throughout the entire sciatic nerve tissue in both groups. In particular, the sieve group showed several neurofilaments passing through the channels of the sieve-type neural electrode.
In the reconstruction group, however, a suture line was still visible in neurofilament-heavy chain expression up to 5 months following surgery. In the reconstruction group, fibronectin was detected throughout the sciatic nerve, whereas in the sieve group, fibronectin was observed only in the nervous tissue surrounding the implanted neural electrode. The present results demonstrate that the implanted sieve-type neural electrode induced a focal inflammatory response. However, the chronically implanted sieve-type neural electrodes did not cause any further inflammatory response following peripheral nerve injury, suggesting the possibility of chronic application of sieve-type neural electrodes. This work was supported by the Basic Science Research Program funded by the Ministry of Science (2016R1D1A1B03933986) and by the convergence technology development program for bionic arm (2017M3C1B2085303).

Keywords: biocompatibility, motor functions, neural electrodes, peripheral nerve injury, sensory functions

Procedia PDF Downloads 150
586 Ozonation as an Effective Method to Remove Pharmaceuticals from Biologically Treated Wastewater of Different Origin

Authors: Agne Jucyte Cicine, Vytautas Abromaitis, Zita Rasuole Gasiunaite, I. Vybernaite-Lubiene, D. Overlinge, K. Vilke

Abstract:

Pharmaceutical pollution in aquatic environments has become a growing concern. Various active pharmaceutical ingredient (API) residues, including hormones, antibiotics, and psychiatric drugs, have already been discovered in different environmental compartments. Because existing wastewater treatment technologies remove APIs ineffectively, an underestimated amount can enter the ecosystem via discharged treated wastewater. In particular, psychiatric compounds such as carbamazepine (CBZ) and venlafaxine (VNX) persist in effluent even post-treatment. These pharmaceuticals therefore usually exceed safe environmental levels and pose risks to the aquatic environment, particularly to sensitive ecosystems such as the Baltic Sea. CBZ, known for its chemical stability and long biodegradation time, accumulates in the environment, threatening aquatic life and human health through the food chain. As the use of medication rises, there is an urgent need for advanced wastewater treatment to reduce pharmaceutical contamination and meet future regulatory requirements. In this study, we tested advanced oxidation technology using ozone to remove two commonly used psychiatric drugs (carbamazepine and venlafaxine) from biologically treated wastewater effluent. Additionally, general water quality parameters (suspended matter (SPM), dissolved organic carbon (DOC), chemical oxygen demand (COD), and bacterial presence) were analyzed. Three wastewater treatment plants (WWTPs) representing different anthropogenic pressures were selected: 1) resort; 2) resort and residential; and 3) residential, industrial, and resort. Wastewater samples for the experiment were collected during the summer season after mechanical and biological treatment and ozonated for 5, 10, and 15 minutes. The initial dissolved ozone concentration of 7.3±0.7 mg/L was held constant during all the experiments.
Pharmaceutical levels in this study exceeded the predicted no-effect concentrations (PNECs) of 500 and 90 ng L⁻¹ for CBZ and VNX, respectively, in all WWTPs, except for CBZ in WWTP 1. Initial CBZ contamination was found to be lower in WWTP 1 (427.4 ng L⁻¹) than in WWTP 2 (1266.5 ng L⁻¹) and WWTP 3 (119.2 ng L⁻¹). VNX followed a similar trend, with concentrations of 341.2 ng L⁻¹, 361.4 ng L⁻¹, and 390.0 ng L⁻¹ for WWTPs 1, 2, and 3, respectively. CBZ was no longer detected in the effluent after 5 minutes of ozonation in any of the WWTPs. By contrast, VNX was still detected after 5, 10, and 15 minutes of treatment with ozone, although below the limit of quantification (<5 ng L⁻¹). Additionally, general pollution by SPM, DOC, COD, and bacterial contamination was reduced notably after 5 minutes of treatment with ozone, and no bacterial growth was observed. Although initial pharmaceutical levels exceeded PNECs, indicating ongoing environmental risks, ozonation demonstrated high efficiency in reducing pharmaceutical and general contamination in wastewater with different pollution matrices.
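When a residue falls below the limit of quantification, only a lower bound on the removal efficiency can be stated. A minimal sketch of that calculation, using the study's 5 ng L⁻¹ quantification limit; the function name and the convention of taking the quantification limit as a conservative upper bound on the effluent concentration are illustrative assumptions:

```python
def min_removal_efficiency(initial_ng_l, loq_ng_l=5.0):
    """Lower bound on percent removal when the residue after ozonation
    is below the limit of quantification (< 5 ng/L in this study).

    The quantification limit is taken as a conservative upper bound on
    the effluent concentration, so the true removal is at least this high.
    """
    return 100.0 * (initial_ng_l - loq_ng_l) / initial_ng_l
```

For example, reducing the VNX concentration of 390.0 ng L⁻¹ at WWTP 3 to below 5 ng L⁻¹ corresponds to at least roughly 98.7% removal.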

Keywords: Baltic Sea, ozonation, pharmaceuticals, wastewater treatment plants

Procedia PDF Downloads 19
585 Exploring the Spatial Characteristics of Mortality Map: A Statistical Area Perspective

Authors: Jung-Hong Hong, Jing-Cen Yang, Cai-Yu Ou

Abstract:

The analysis of geographic inequality relies heavily on the use of location-enabled statistical data and quantitative measures to present the spatial patterns of the selected phenomena and analyze their differences. To protect the privacy of individual instances and to link to administrative units, point-based datasets are spatially aggregated into area-based statistical datasets, where only the overall status of the selected levels of spatial units is used for decision making. The partition of the spatial units thus has a dominant influence on the outcomes of the analyzed results, well known as the Modifiable Areal Unit Problem (MAUP). A new spatial reference framework, the Taiwan Geographical Statistical Classification (TGSC), was recently introduced in Taiwan, based on spatial partition principles that aim for homogeneous population and household counts. Compared to the outcomes of traditional township units, TGSC provides additional levels of spatial units with finer granularity for presenting spatial phenomena and enables domain experts to select the appropriate dissemination level for publishing statistical data. This paper compares the results of using TGSC and township units, respectively, on mortality data and examines the spatial characteristics of their outcomes. For the mortality data of Taitung County between January 1st, 2008 and December 31st, 2010, the all-cause age-standardized death rate (ASDR) ranges from 571 to 1,757 per 100,000 persons at the township level, whereas the 2nd dissemination areas (TGSC) show greater variation, ranging from 0 to 2,222 per 100,000. The finer granularity of TGSC spatial units clearly provides better outcomes for identifying and evaluating geographic inequality and can be further analyzed with statistical measures from other perspectives (e.g., population, area, environment).
The management and analysis of the statistical data referenced to the TGSC in this research is strongly supported by the use of Geographic Information System (GIS) technology. An integrated workflow was developed that consists of processing death certificates, geocoding street addresses, quality assurance of the geocoded results, automatic calculation of statistical measures, standardized encoding of measures, and geo-visualization of statistical outcomes. This paper also introduces a set of auxiliary measures from a geographic distribution perspective to further examine the hidden spatial characteristics of mortality data and justify the analyzed results. With a common statistical area framework like TGSC, the preliminary results demonstrate promising potential for developing a web-based statistical service that can effectively access domain statistical data and present the analyzed outcomes in meaningful ways, helping to avoid wrong decision making.
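The ASDR figures compared above come from direct standardization: each age-specific death rate is weighted by the share of a standard population in that age group. A minimal sketch of the calculation follows; the parallel-list layout and the choice of standard population are assumptions for illustration, not details taken from the study:

```python
def age_standardized_death_rate(deaths, population, std_population):
    """Direct standardization of a death rate.

    deaths, population, std_population: parallel lists by age group.
    Each age-specific rate is weighted by the standard population's
    share in that age group; the result is deaths per 100,000 persons.
    """
    total_std = sum(std_population)
    asdr = 0.0
    for d, p, s in zip(deaths, population, std_population):
        age_specific_rate = d / p          # deaths per person in this age group
        asdr += age_specific_rate * (s / total_std)
    return asdr * 100_000
```

Because the weights come from a fixed standard population, ASDRs computed for different spatial units (townships or TGSC dissemination areas) are directly comparable despite differing age structures.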

Keywords: mortality map, spatial patterns, statistical area, variation

Procedia PDF Downloads 258
584 Accuracy of Computed Tomography Dose Monitor Values: A Multicentric Study in India

Authors: Adhimoolam Saravana Kumar, K. N. Govindarajan, B. Devanand, R. Rajakumar

Abstract:

The quality of Computed Tomography (CT) procedures has improved in recent years due to technological developments and the increased diagnostic ability of CT scanners. Because CT doses are the highest among diagnostic radiology practices, it is of great significance to be aware of the patient’s radiation dose whenever a CT examination is performed. The CT radiation dose delivered to patients, in the form of volume CT dose index (CTDIvol) values, is displayed on the scanner monitor at the end of each examination, and it is important to ensure that this information is accurate. The objective of this study was to estimate CTDIvol values for a great number of patients during the most frequent CT examinations, to compare CT dose monitor values with measured ones, and to highlight the fluctuation of CTDIvol values for the same CT examination across different centres and scanner models. The output CT dose index measurements were carried out on single- and multi-slice scanners for the available kV, 5 mm slice thickness, 100 mA, and FOV combinations used. A total of 100 CT scanners were involved in this study. Data on 15,000 examinations of patients who underwent routine head, chest, and abdomen CT were collected using a questionnaire sent to a large number of hospitals: 5,000 head, 5,000 chest, and 5,000 abdominal CT examinations. Comprehensive quality assurance (QA) was performed for all the machines involved in this work. Following QA, CT phantom dose measurements were carried out in South India using the actual scanning parameters used clinically by the hospitals. From this study, the mean divergence between the measured and displayed CTDIvol values was 5.2, 8.4, and -5.7 for the selected head, chest, and abdomen protocols, respectively.
Thus, this investigation revealed an observable change in CT practices, with a much wider range of studies being performed currently in South India. This reflects the improved capacity of CT scanners to scan longer lengths and at finer resolutions, as permitted by helical and multislice technology. Some CT scanners also used a smaller slice thickness for routine CT procedures to achieve better resolution and image quality. This leads to an increase in the patient radiation dose as well as in the measured CTDIvol, so it is suggested that such CT scanners select an appropriate slice thickness and scanning parameters in order to reduce the patient dose. If the routine scan parameters for head, chest, and abdomen procedures are optimized, then the dose indices will be optimal, leading to lower CT doses. In the South Indian region, all CT machines are routinely tested for QA once a year as per AERB requirements.
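The divergence between displayed and phantom-measured CTDIvol can be expressed as a simple relative difference. A sketch of that comparison follows; the sign convention (positive when the console over-reports relative to the measurement) and the function name are assumptions, since the abstract does not state them:

```python
def ctdi_divergence_percent(displayed_mgy, measured_mgy):
    """Percent divergence of the console-displayed CTDIvol from the
    phantom-measured value, relative to the measurement.

    Under the assumed convention, positive values mean the console
    over-reports the dose and negative values mean it under-reports.
    """
    return 100.0 * (displayed_mgy - measured_mgy) / measured_mgy
```

Averaging this quantity over many examinations of one protocol yields a mean divergence figure of the kind reported above for head, chest, and abdomen procedures.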

Keywords: CT dose index, weighted CTDI, volumetric CTDI, radiation dose

Procedia PDF Downloads 257
583 Metalorganic Chemical Vapor Deposition Overgrowth on the Bragg Grating for Gallium Nitride Based Distributed Feedback Laser

Authors: Junze Li, M. Li

Abstract:

Laser diodes fabricated from the III-nitride material system are emerging solutions for next-generation telecommunication systems and optical clocks based on Ca at 397 nm, Rb at 420.2 nm, and Yb at 398.9 nm combined with 556 nm. Most applications, such as communication systems and laser cooling, require single-longitudinal-mode lasers with very narrow linewidth and compact size. In this context, the GaN-based distributed feedback (DFB) laser diode is one of the most effective candidates, since lasers with gratings are known to operate with narrow spectra as well as high power and efficiency. Given the wavelength range, the period of a first-order diffraction grating is under 100 nm, and the realization of such gratings is technically difficult due to the narrow line width and the need for high-quality nitride overgrowth on the Bragg grating. Some groups have reported GaN DFB lasers with high-order distributed feedback surface gratings, which avoid the overgrowth. However, the coupling strength is generally lower than that achieved when the Bragg grating is embedded into the waveguide within the GaN laser structure by two-step epitaxy. Therefore, the technology of overgrowth on the grating needs to be studied and optimized. Here we propose to fabricate the fine step-shaped structure of a first-order grating by nanoimprint lithography combined with inductively coupled plasma (ICP) dry etching, and then to overgrow a high-quality AlGaN film by metalorganic chemical vapor deposition (MOCVD). A series of gratings with different periods, depths, and duty ratios are designed and fabricated to study the influence of the grating structure on the nano-heteroepitaxy. Moreover, we observe the nucleation and growth process through step-by-step growth to study the growth mode of nitride overgrowth on the grating, under the condition that the grating period is larger than the metal migration length on the surface.
The AFM images demonstrate that a smooth AlGaN film surface is achieved, with an average roughness of 0.20 nm over 3 × 3 μm². The full width at half maximum (FWHM) of the (002) reflection in the XRD rocking curve is 278 arcsec for the AlGaN film, and the Al content of the film is 8% according to the XRD mapping measurement, in accordance with the design values. By observing samples with growth times of 200 s, 400 s, and 600 s, the growth mode is summarized in the following steps: initially, nucleation is evenly distributed on the grating structure, as the migration length of Al atoms is low; then, AlGaN grows along the grating top surface; finally, the AlGaN film is formed by lateral growth. This work contributes to the realization of GaN DFB lasers by fabricating gratings and performing overgrowth on nano-grating patterned substrates at wafer scale; moreover, the growth dynamics have been analyzed as well.

Keywords: DFB laser, MOCVD, nanoepitaxy, III-nitride

Procedia PDF Downloads 188
582 Agricultural Education and Research in India: Challenges and Way Forward

Authors: Kiran Kumar Gellaboina, Padmaja Kaja

Abstract:

Agricultural education and research in India need a transformation to serve the needs of farmers and of the nation. The fact that agriculture and allied activities act as the main source of livelihood for more than 70% of rural India's population reinforces their importance in the administrative and policy arena. As per India's 2011 Census, agriculture employs approximately 56.6% of the labour force. India has achieved significant growth in agriculture, milk, fish, oilseeds, and fruits and vegetables owing to the green, white, blue, and yellow revolutions, which have brought prosperity to farmers. Many factors are responsible for these achievements, viz., conducive government policies, the receptivity of farmers, and the establishment of higher agricultural education institutions. The new breed of skilled human resources was instrumental in generating new technologies and in their assessment, refinement, and, finally, dissemination to the farming community through extension methods. In order to sustain, diversify, and realize the potential of the agriculture sector, it is necessary to develop skilled human resources. Agricultural human resource development is a continuous process undertaken by agricultural universities. The Department of Agricultural Research and Education (DARE) coordinates and promotes agricultural research and education in India. Indian agricultural universities were established on the 'land grant' pattern of the USA, which helped incorporate a number of diverse subjects into the courses as well as provide hands-on practical exposure to students. The State Agricultural Universities (SAUs) were established through the legislative acts of the respective states and with major financial support from them, leading to administrative and policy controls. It has been observed that the pace and quality of technology generation and human resource development in many of the SAUs have gone down.
The reasons for this slackening are inadequate state funding, reduced faculty strength, inadequate faculty development programmes, and a lack of modern infrastructure for education and research. The establishment of new state agricultural universities and new faculties/colleges without the necessary financial and faculty support has aggravated the problem. The present work highlights some of the key issues affecting agricultural education and research in India and the impact they have on farm productivity and sustainability. Secondary data pertaining to budgetary spending on agricultural education and research will be analyzed. This paper will study the trends in public spending on agricultural education and research and the per capita income of farmers in India. It suggests that agricultural education and research have a key role in equipping human resources for enhanced agricultural productivity and the sustainable use of natural resources. Further, a total re-orientation of agricultural education, with emphasis on other agriculture-related social sciences, is needed for effective agricultural policy research.

Keywords: agriculture, challenges, education, research

Procedia PDF Downloads 232
581 Solutions to Reduce CO2 Emissions in Autonomous Robotics

Authors: Antoni Grau, Yolanda Bolea, Alberto Sanfeliu

Abstract:

Mobile robots can be used in many different applications, including mapping, search and rescue, reconnaissance, hazard detection, carpet cleaning, and exploration. However, they are limited by their reliance on traditional energy sources such as electricity and oil, which cannot always provide a convenient energy source in all situations. In an ever more eco-conscious world, solar energy offers the most environmentally clean option of all energy sources. Electricity presents threats of pollution resulting from its production process, and oil poses a huge threat to the environment: not only does it cause harm through the toxic emissions (for instance, CO2) of the combustion process needed to produce energy, but there is also the ever-present risk of oil spillages and damage to ecosystems. Solar energy can help to mitigate carbon emissions by replacing more carbon-intensive sources of heat and power. The challenge of this work is to propose the design and implementation of electric battery recharge stations. Those recharge docks are based on the use of renewable energy, namely solar energy (with photovoltaic panels), with the objective of reducing CO2 emissions. In this paper, a comparative study of the CO2 emissions produced in the charging process of the Segway PT batteries from different energy sources (natural gas, gas oil, fuel and solar panels) is carried out. To carry out the study with solar energy, a photovoltaic panel and a Buck-Boost DC/DC block have been used. Specifically, the STP005S-12/Db solar panel has been used in our experiments. This module is a 5 Wp photovoltaic (PV) module, configured with 36 monocrystalline cells serially connected. With those elements, a battery recharge station is built to recharge the robot batteries. For the energy storage DC/DC block, a series of ultracapacitors has been used.
Due to the variation of the PV panel output with temperature and irradiation, the non-integer behavior of the ultracapacitors, and the non-linearities of the whole system, the authors have used a fractional control method to ensure that the solar panels supply the maximum allowed power to recharge the robots in the least time. Greenhouse gas emissions from the production of electricity vary due to regional differences in source fuel. The impact of an energy technology on the climate can be characterised by its carbon emission intensity, a measure of the amount of CO2, or CO2 equivalent, emitted per unit of energy generated. In our work, coal is the most hazardous fossil energy source, producing 53% more gas emissions than natural gas and 30% more than fuel. Moreover, it is remarkable that existing fossil fuel technologies produce high carbon emission intensity through the combustion of carbon-rich fuels, whilst renewable technologies such as solar produce little or no emissions during operation, but may incur emissions during manufacture. Solar energy can thus help to mitigate carbon emissions.
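The comparison above can be sketched numerically. The emission intensity figures below are illustrative assumptions (only the relative coal ratios reported in this abstract are reproduced), and the battery capacity is a hypothetical placeholder, not a figure from the paper:

```python
# Emission intensity in g CO2 per kWh generated. These are assumed
# illustrative values, NOT the paper's data: only the relative ordering
# from the abstract (coal emits 53% more than natural gas and 30% more
# than fuel oil) is reproduced; the solar figure is a typical
# life-cycle (manufacturing) estimate.
EMISSION_INTENSITY = {
    "coal": 1000.0,
    "fuel_oil": 1000.0 / 1.30,     # coal emits 30% more than fuel oil
    "natural_gas": 1000.0 / 1.53,  # coal emits 53% more than natural gas
    "solar_pv": 45.0,              # near-zero in operation; manufacture only
}

def charging_emissions(battery_kwh: float, source: str) -> float:
    """Grams of CO2 attributable to one full battery recharge."""
    return battery_kwh * EMISSION_INTENSITY[source]

# Hypothetical small robot battery pack of 0.4 kWh
for source in EMISSION_INTENSITY:
    print(f"{source}: {charging_emissions(0.4, source):.1f} g CO2")
```

The same per-recharge comparison extends directly to a fleet of robots by multiplying by the number of daily recharge cycles.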

Keywords: autonomous robots, CO2 emissions, DC/DC buck-boost, solar energy

Procedia PDF Downloads 422
580 The Current Application of BIM - An Empirical Study Focusing on the BIM-Maturity Level

Authors: Matthias Stange

Abstract:

Building Information Modelling (BIM) is one of the most promising methods in the building design process and plays an important role in the digitalization of the Architecture, Engineering, and Construction (AEC) industry. The application of BIM is seen as the key enabler for increasing productivity in the construction industry. Model-based collaboration using the BIM method is intended to significantly reduce cost increases, schedule delays, and quality problems in the planning and construction of buildings. Numerous qualitative studies based on expert interviews support this theory and report perceived benefits from the use of BIM in terms of achieving project objectives related to cost, schedule, and quality. However, there is a large research gap in analysing quantitative data collected from real construction projects regarding the actual benefits of applying BIM, based on a representative sample size and covering different application regions as well as different project typologies. In particular, the influence of the project-related BIM maturity level is completely unexplored. This research project examines primary data from 105 construction projects worldwide using quantitative research methods. Projects from the areas of residential, commercial, and industrial construction as well as infrastructure and hydraulic engineering were examined, in the application regions North America, Australia, Europe, Asia, the MENA region, and South America. First, a descriptive data analysis of 6 independent project variables (BIM maturity level, application region, project category, project type, project size, and BIM level) was carried out using statistical methods. With the help of statistical data analyses, the influence of the project-related BIM maturity level on 6 dependent project variables (deviation in planning time, deviation in construction time, number of planning collisions, frequency of rework, number of RFIs, and number of changes) was investigated.
The study revealed that most of the benefits of using BIM perceived in the numerous qualitative studies could not be confirmed. The results of the examined sample show that the application of BIM did not have an improving influence on the dependent project variables, especially regarding the quality of the planning itself and adherence to schedule targets. The quantitative research suggests the conclusion that the BIM planning method in its current application has not (yet) produced a recognizable increase in productivity within the planning and construction process. The empirical findings indicate that this is due to the overall low level of BIM maturity in the projects of the examined sample. As a quintessence, the author suggests that the further implementation of BIM should primarily focus on an application-oriented and consistent development of the project-related BIM maturity level instead of implementing BIM for its own sake. Apparently, there are still significant difficulties in the interweaving of people, processes, and technology.
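The descriptive analysis described above can be sketched as a simple group comparison of a dependent variable across BIM maturity levels. The project records below are invented placeholders, not the study's 105-project data set:

```python
# Sketch of a group comparison: mean deviation in construction time per
# BIM maturity level. The records are illustrative placeholders only.
from statistics import mean
from collections import defaultdict

projects = [
    # (BIM maturity level, deviation in construction time, %)
    (1, 12.0), (1, 9.5), (2, 10.0), (2, 8.0), (3, 7.5), (3, 6.0),
]

def mean_deviation_by_maturity(records):
    """Group records by maturity level and return the mean deviation per level."""
    groups = defaultdict(list)
    for level, deviation in records:
        groups[level].append(deviation)
    return {level: mean(vals) for level, vals in sorted(groups.items())}

print(mean_deviation_by_maturity(projects))
```

In the actual study this kind of comparison would be repeated for each of the 6 dependent variables and supplemented with inferential tests.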

Keywords: AEC-process, building information modeling, BIM maturity level, project results, productivity of the construction industry

Procedia PDF Downloads 73
579 Information Seeking and Evaluation Tasks to Enhance Multiliteracies in Health Education

Authors: Tuula Nygard

Abstract:

This study contributes to the pedagogical discussion on how to promote adolescents’ multiliteracies, with an emphasis on information seeking and evaluation skills in contemporary media environments. The study is conducted in the school environment, applying perspectives from educational sciences and information studies to health communication and teaching. The research focus is on the teacher’s role as a trusted person who guides students to choose and use credible information sources. Evaluating the credibility of information may often be challenging. Specifically, children and adolescents may find it difficult to know what to believe and whom to trust, for instance, in health and well-being communication. Thus, advanced multiliteracy skills are needed. In the school environment, trust is based on the teacher’s subject content knowledge, but also on the teacher’s character and caring. A teacher’s benevolence and approachability generate trustworthiness, which lays the foundation for good interaction with students and, further, for the teacher’s pedagogical authority. The study explores teachers’ perceptions of their pedagogical authority and the role of a trustee. In addition, the study examines what kind of multiliteracy practices teachers utilize in their teaching. The data will be collected by interviewing secondary school health education teachers during spring 2019. The analysis method is nexus analysis, which is an ethnographic research orientation. Classroom interaction, as the interviewed teachers see it, is scrutinized through a nexus analysis lens in order to expound a social action where people, places, discourses, and objects are intertwined. The crucial social actions in this study are information seeking and evaluation situations, where the teacher and the students together assess the credibility of the information sources.
The study is based on the hypothesis that a trustee’s opinions of credible sources and guidance in information seeking and evaluation affect students’, that is, trustors’, choices. In the school context, the teacher’s own experiences and perceptions of health-related issues cannot be brushed aside. Furthermore, adolescents are used to utilizing digital technology for day-to-day information seeking, but the chosen information sources are often not of very high quality. In school, teachers are inclined to recommend familiar sources, such as the health education textbook and the web pages of well-known health authorities. Students, in turn, rely on the teacher’s guidance towards credible information sources without using their own judgment. In terms of students’ multiliteracy competences, information seeking and evaluation tasks in health education are excellent opportunities to practice and enhance these skills. Distinguishing right information from wrong is particularly important in health communication because experts by experience are easy to find and their opinions are convincing. This can be addressed by employing the ideas of multiliteracy in the school subject of health education and in teacher education and training.

Keywords: multiliteracies, nexus analysis, pedagogical authority, trust

Procedia PDF Downloads 107
578 Numerical Simulation of the Heat Transfer Process in a Double Pipe Heat Exchanger

Authors: J. I. Corcoles, J. D. Moya-Rico, A. Molina, J. F. Belmonte, J. A. Almendros-Ibanez

Abstract:

One of the most common heat exchanger technologies in engineering processes, mainly in the food industry, is the double-pipe heat exchanger (DPHx). To improve heat transfer performance, several passive geometrical devices can be used, such as wall corrugation of the tubes, which increases the wetted perimeter while maintaining a constant cross-section area, consequently increasing the convective surface area. This contributes to enhanced heat transfer in forced convection by promoting secondary recirculating flows. One of the most widely used tools to analyse heat exchangers' efficiency is computational fluid dynamics (CFD), a complementary activity to experimental studies as well as a previous step in the design of heat exchangers. In this study, the behaviour of a double pipe heat exchanger with two different inner tubes, a smooth and a spirally corrugated one, has been analysed. Hence, experimental analysis and steady 3-D numerical simulations using the commercial code ANSYS Workbench v. 17.0 are carried out to analyse the influence of geometrical parameters of spirally corrugated tubes at turbulent flow. To validate the numerical results, an experimental setup has been used. To heat up or cool down the cold fluid as it passes through the heat exchanger, the installation includes heating and cooling loops served by an electric boiler with a heating capacity of 72 kW and a chiller with a cooling capacity of 48 kW. Two tests have been carried out for the smooth tube and for the corrugated one. In all the tests, the hot fluid has a constant flowrate of 50 l/min and an inlet temperature of 59.5°C. For the cold fluid, the flowrate is 25 l/min (Test 1) or 30 l/min (Test 2), with an inlet temperature of 22.1°C. The heat exchanger is made of stainless steel, with an external diameter of 35 mm and a wall thickness of 1.5 mm. Both inner tubes have an external diameter of 24 mm and a 1 mm stainless steel wall thickness, with a length of 2.8 m.
The corrugated tube has a corrugation height (H) of 1.1 mm and a helical pitch (P) of 25 mm. It is characterized using three non-dimensional parameters: the ratio of the corrugation height to the diameter (H/D), the dimensionless helical pitch (P/D), and the severity index (SI = H²/(P·D)). The results showed good agreement between the numerical and the experimental results; the smallest differences were found for the fluid temperatures. In all the analysed tests and for both tubes, the temperature obtained numerically was slightly higher than the experimental one, with differences ranging between 0.1% and 0.7%. Regarding the pressure drop, the maximum differences between the numerical and the experimental values were close to 16%. Based on the experimental and the numerical results, it can be highlighted that for the corrugated tube the temperature difference between the inlet and the outlet of the cold fluid is 42% higher than for the smooth tube.
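The three non-dimensional corrugation parameters follow directly from the reported geometry (H = 1.1 mm, P = 25 mm, and the inner tube external diameter D = 24 mm), reading the severity index as SI = H²/(P·D):

```python
# Non-dimensional corrugation parameters from the reported geometry.
H = 1.1   # corrugation height, mm
P = 25.0  # helical pitch, mm
D = 24.0  # inner tube external diameter, mm

height_ratio = H / D             # H/D
pitch_ratio = P / D              # P/D
severity_index = H**2 / (P * D)  # SI = H^2 / (P * D)

print(f"H/D = {height_ratio:.4f}")
print(f"P/D = {pitch_ratio:.4f}")
print(f"SI  = {severity_index:.6f}")
```

Since all three lengths are in millimetres, the ratios are dimensionless, as required.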

Keywords: corrugated tube, heat exchanger, heat transfer, numerical simulation

Procedia PDF Downloads 147
577 An Integrated Approach to Handle Sour Gas Transportation Problems and Pipeline Failures

Authors: Venkata Madhusudana Rao Kapavarapu

Abstract:

The Intermediate Slug Catcher (ISC) facility was built to process nominally 234 MSCFD of export gas from the booster station on a day-to-day basis and to receive liquid slugs of up to 1600 m³ (10,000 BBLS) in volume when the incoming 24” gas pipelines are pigged following upsets or production of non-dew-pointed gas from gathering centers. The maximum slug sizes expected are 812 m³ (5100 BBLS) in winter and 542 m³ (3400 BBLS) in summer after operating for a month or more at 100 MMSCFD of wet gas, being 60 MMSCFD of treated gas from the booster station combined with 40 MMSCFD of untreated gas from the gathering center. The water content is approximately 60% but may be higher if the line is not pigged for an extended period, owing to the relative volatility of the condensate compared to water. In addition to its primary function as a slug catcher, the ISC facility will receive pigged liquids from the upstream and downstream segments of the 14” condensate pipeline, returned liquids from the AGRP, pigged through the 8” pipeline, and blown-down fluids from the 14” condensate pipeline prior to maintenance. These fluids will be received in the condensate flash vessel or the condensate separator, depending on the specific operation, for the separation of water and condensate and the settlement of solids scraped from the pipelines. Condensate meeting the colour and 200 ppm water specifications will be dispatched to the AGRP through the 14” pipeline, while off-spec material will be returned to BS-171 via the existing 10” condensate pipeline. When not in use, the existing 24” export gas pipeline and the 10” condensate pipeline will be maintained under export gas pressure, ready for operation. The gas manifold area contains the interconnecting piping and valves needed to align the slug catcher with either of the 24” export gas pipelines from the booster station and to direct the gas to the downstream segment of either of these pipelines.
The manifold enables the slug catcher to be bypassed if it needs to be maintained or if through-pigging of the gas pipelines is to be performed. All gas, whether bypassing the slug catcher or returning to the gas pipelines from it, passes through black powder filters to reduce the level of particulates in the stream. These items are connected to the closed drain vessel to drain the collected liquid. Condensate from the booster station is transported to the AGRP through the 14” condensate pipeline. The existing 10” condensate pipeline will be used as a standby and for utility functions, such as returning condensate from the AGRP to the ISC or booster station, or transporting off-spec fluids from the ISC back to the booster station. The manifold contains block valves that allow the two condensate export lines to be segmented at the ISC, thus facilitating independent bi-directional flow in the upstream and downstream segments, which ensures complete pipeline and facility integrity. Pipeline failures will be attended to with the latest technologies, such as remote techno-plug techniques, and repair activities will be carried out as needed. Pipeline integrity will be evaluated with ILI (in-line inspection) pigging to assess pipeline condition.

Keywords: integrity, oil & gas, innovation, new technology

Procedia PDF Downloads 72
576 Influence of a Cationic Membrane in a Double Compartment Filter-Press Reactor on the Atenolol Electro-Oxidation

Authors: Alan N. A. Heberle, Salatiel W. Da Silva, Valentin Perez-Herranz, Andrea M. Bernardes

Abstract:

Contaminants of emerging concern are widely used substances, such as pharmaceutical products. These compounds represent a risk for both wildlife and human life since they are not completely removed from wastewater by conventional wastewater treatment plants. In the environment, they can cause harm even at low concentrations (µg or ng/L), causing bacterial resistance, endocrine disruption, and cancer, among other harmful effects. One of the most commonly taken medicines to treat cardiocirculatory diseases is Atenolol (ATL), a β-blocker, which is toxic to aquatic life. It is therefore necessary to implement a methodology capable of promoting the degradation of ATL, to avoid environmental detriment. A very promising technology is advanced electrochemical oxidation (AEO), whose mechanisms are based on the electrogeneration of reactive radicals (mediated oxidation) and/or on the direct discharge of the substance by electron transfer from the contaminant to the electrode surface (direct oxidation). Hydroxyl (HO•) and sulfate (SO₄•⁻) radicals can be generated, depending on the reaction medium. Besides that, under some conditions, the peroxydisulfate (S₂O₈²⁻) ion is also generated from the reaction of SO₄•⁻ radical pairs. Both radicals, the ion, and direct contaminant discharge can break down the molecule, resulting in degradation and/or mineralization. However, the ATL molecule and its byproducts can still remain in the treated solution. In this regard, some efforts can be made to improve the AEO process, one of them being the use of a cationic membrane to separate the cathodic (reduction) from the anodic (oxidation) reactor compartment. The aim of this study is to investigate the influence of implementing a cationic membrane (Nafion®-117) to separate the cathodic and anodic AEO reactor compartments. The studied reactor was a filter-press cell operated in batch recirculation mode, with a flow of 60 L/h.
The anode was Nb/BDD2500 and the cathode stainless steel, both bidimensional, with a geometric surface area of 100 cm². The solution feeding the anodic compartment was prepared with 100 mg/L ATL using 4 g/L Na₂SO₄ as supporting electrolyte. In the cathodic compartment, a solution containing 71 g/L Na₂SO₄ was used. The membrane was placed between both solutions. Applied current densities (iₐₚₚ) of 5, 20 and 40 mA/cm² were studied over a 240-minute treatment time. The ATL decay was analyzed by ultraviolet spectroscopy (UV/Vis), and mineralization was determined by performing total organic carbon (TOC) analysis in a TOC-L CPH Shimadzu. In the cases without the membrane, iₐₚₚ of 5, 20 and 40 mA/cm² resulted in 55, 87 and 98% ATL degradation at the end of the treatment time, respectively. With the membrane, however, the degradation for the same iₐₚₚ was 90, 100 and 100%, taking 240, 120 and 40 min to reach the maximum degradation, respectively. The mineralization without the membrane, for the same iₐₚₚ, was 40, 55 and 72%, respectively, at 240 min; with the membrane, all tested iₐₚₚ reached 80% mineralization, differing only in the time required, 240, 150 and 120 min, respectively. The membrane increased ATL oxidation, probably because it prevents the reduction of oxidant ions (S₂O₈²⁻) on the cathode surface.
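As an illustrative post-processing sketch only: if the ATL decay is assumed to follow pseudo-first-order kinetics (an assumption not stated in the abstract, which reports only percent degradation), an apparent rate constant can be back-calculated from a reported data point, e.g. 55% degradation after 240 min at 5 mA/cm² without the membrane:

```python
# Back-calculating an apparent pseudo-first-order rate constant from a
# reported degradation fraction. The first-order assumption is ours,
# not the paper's.
import math

def apparent_rate_constant(degradation_fraction: float, minutes: float) -> float:
    """k = -ln(C/C0)/t for assumed pseudo-first-order decay, in 1/min."""
    remaining = 1.0 - degradation_fraction
    return -math.log(remaining) / minutes

# 55% degradation in 240 min (5 mA/cm2, no membrane)
k = apparent_rate_constant(0.55, 240.0)
print(f"apparent k = {k:.5f} 1/min")
```

Comparing such apparent constants with and without the membrane would give a compact way to quantify the oxidation enhancement the abstract describes.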

Keywords: contaminants of emerging concern, advanced electrochemical oxidation, atenolol, cationic membrane, double compartment reactor

Procedia PDF Downloads 136
575 Option Pricing Theory Applied to the Service Sector

Authors: Luke Miller

Abstract:

This paper develops an options pricing methodology to value strategic pricing strategies in the services sector. More specifically, this study provides a unifying taxonomy of current service sector pricing practices, frames these pricing decisions as strategic real options, demonstrates accepted option valuation techniques to assess service sector pricing decisions, and suggests future research areas where pricing decisions and real options overlap. Enhancing revenue in the service sector requires proactive decision making in a world of uncertainty. In an effort to strategically price service products, revenue enhancement necessitates a careful study of the service costs, customer base, competition, legalities, and shared economies with the market. Pricing decisions involve the quality of inputs, manpower, and best practices to maintain superior service. These decisions further hinge on identifying relevant pricing strategies and understanding how these strategies impact a firm’s value. A relatively new area of research applies option pricing theory to investments in real assets and is commonly known as real options. The real options approach is based on the premise that many corporate decisions to invest or divest in assets are simply an option wherein the firm has the right to make an investment without any obligation to act. The decision maker, therefore, has more flexibility and the value of this operating flexibility should be taken into consideration. The real options framework has already been applied to numerous areas including manufacturing, inventory, natural resources, research and development, strategic decisions, technology, and stock valuation. Additionally, numerous surveys have identified a growing need for the real options decision framework within all areas of corporate decision-making. Despite the wide applicability of real options, no study has been carried out linking service sector pricing decisions and real options. 
This is surprising given that the service sector comprises 80% of US employment and Gross Domestic Product (GDP). Identifying real options as a practical tool to value different service sector pricing strategies is believed to have a significant impact on firm decisions. This paper identifies and discusses four distinct pricing strategies available to the service sector from an options perspective: (1) Cost-based profit margin, (2) Increased customer base, (3) Platform pricing, and (4) Buffet pricing. Within each strategy lie several pricing tactics available to the service firm. These tactics can be viewed as options the decision maker has to best manage a strategic position in the market. To demonstrate the effectiveness of including flexibility in the pricing decision, a series of pricing strategies was developed and valued using a real options binomial lattice structure. The options pricing approach discussed in this study allows service firms to directly incorporate market-driven perspectives into the decision process and thus synchronize service operations with organizational economic goals.
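The binomial lattice valuation mentioned above can be sketched with a standard Cox-Ross-Rubinstein recombining tree. The figures below (project value, investment cost, volatility) are hypothetical placeholders, not numbers from the paper; the option valued is a generic right to invest without obligation, as the abstract describes:

```python
# Cox-Ross-Rubinstein binomial lattice for the option to invest I in a
# project currently worth V. All parameter values are hypothetical.
import math

def binomial_option_value(V, I, r, sigma, T, steps):
    """Value the right (not obligation) to invest I in a project worth V."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))   # up factor per step
    d = 1.0 / u                           # down factor per step
    disc = math.exp(-r * dt)              # one-step discount factor
    p = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up probability
    # terminal payoffs: exercise only if project value exceeds cost
    values = [max(V * u**j * d**(steps - j) - I, 0.0) for j in range(steps + 1)]
    # roll back through the lattice to the root node
    for _ in range(steps):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]

print(binomial_option_value(V=100.0, I=100.0, r=0.05, sigma=0.3, T=1.0, steps=100))
```

Each pricing tactic in the taxonomy could be mapped to such a lattice by choosing V, I, and sigma to reflect the uncertain revenues of that tactic.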

Keywords: option pricing theory, real options, service sector, valuation

Procedia PDF Downloads 355
574 Application of a Submerged Anaerobic Osmotic Membrane Bioreactor Hybrid System for High-Strength Wastewater Treatment and Phosphorus Recovery

Authors: Ming-Yeh Lu, Shiao-Shing Chen, Saikat Sinha Ray, Hung-Te Hsu

Abstract:

Recently, anaerobic membrane bioreactors (AnMBRs), which combine an anaerobic biological treatment process with membrane filtration, have been widely utilized and present an attractive option for wastewater treatment and water reuse. A conventional AnMBR has several advantages, such as improved effluent quality, compact space usage, lower sludge yield, no aeration requirement, and energy production. However, the removal of nitrogen and phosphorus in the AnMBR permeate is negligible, which is its biggest disadvantage. In recent years, forward osmosis (FO) has emerged as a technology that utilizes osmotic pressure as the driving force to extract clean water without additional external pressure. The small pore size of the FO membrane can effectively improve the removal of nitrogen and phosphorus. An anaerobic bioreactor with an FO membrane (AnOMBR) can retain the concentrated organic matter and nutrients. Phosphorus, moreover, is a non-renewable resource, and due to the high rejection property of the FO membrane, a high amount of phosphorus could be recovered from the combination of AnMBR and FO. In this study, the development of a novel submerged anaerobic osmotic membrane bioreactor integrated with periodic microfiltration (MF) extraction for simultaneous phosphorus and clean water recovery from wastewater was evaluated. A laboratory-scale AnOMBR utilizing cellulose triacetate (CTA) membranes with an effective membrane area of 130 cm² was fully submerged into a 5.5 L bioreactor at 30-35°C. The active-layer-facing-feed-stream orientation was utilized to minimize fouling and scaling. Additionally, a peristaltic pump was used to circulate the draw solution (DS) at a cross-flow velocity of 0.7 cm/s. Magnesium sulphate (MgSO₄) solution was used as the DS. A microfiltration membrane periodically extracted about 1 L of solution whenever the TDS reached 5 g/L, to recover phosphorus and simultaneously control salt accumulation in the bioreactor.
As the experiment progressed, an average water flux of around 1.6 LMH was achieved. The AnOMBR process showed greater than 95% removal of soluble chemical oxygen demand (sCOD) and nearly 100% removal of total phosphorus, but only partial removal of ammonia; finally, an average methane production of 0.22 L/g sCOD was obtained. The AnOMBR system therefore periodically uses MF membrane extraction for phosphorus recovery with simultaneous pH adjustment. The overall performance demonstrates that the novel submerged AnOMBR system has potential for simultaneous wastewater treatment and resource recovery from wastewater, and hence this new concept could replace the conventional AnMBR in the future.
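The reported water flux can be cross-checked against its usual definition, flux (LMH) = permeate volume / (membrane area × time), using the 130 cm² membrane area from the abstract; the permeate volume below is a hypothetical value chosen to reproduce roughly the reported 1.6 LMH:

```python
# Water flux in LMH (litres per square metre per hour) from permeate
# volume, membrane area, and elapsed time. The 130 cm2 area is from the
# abstract; the permeate volume is a hypothetical illustration.
MEMBRANE_AREA_M2 = 130 / 10_000  # 130 cm2 -> 0.013 m2

def water_flux_lmh(volume_litres: float, hours: float,
                   area_m2: float = MEMBRANE_AREA_M2) -> float:
    """Average water flux over the collection period."""
    return volume_litres / (area_m2 * hours)

# e.g. about 0.0208 L collected in one hour gives ~1.6 LMH
print(round(water_flux_lmh(0.0208, 1.0), 2))
```

The low flux typical of CTA FO membranes is consistent with the modest value reported here.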

Keywords: anaerobic treatment, forward osmosis, phosphorus recovery, membrane bioreactor

Procedia PDF Downloads 270
573 The Use of Stroke Journey Map in Improving Patients' Perceived Knowledge in Acute Stroke Unit

Authors: C. S. Chen, F. Y. Hui, B. S. Farhana, J. De Leon

Abstract:

Introduction: Stroke can lead to long-term disability, affecting one’s quality of life. Providing stroke education to patients and family members is essential to optimize stroke recovery and prevent recurrent stroke. Currently, nurses conduct stroke education by handing out pamphlets and explaining their contents to patients. However, this is not always effective, as nurses have varying levels of knowledge, and the depth of content discussed with the patient may not be consistent. With the advancement of information technology, health education is increasingly being disseminated via electronic software, and studies have shown this to benefit patients. Hence, a multi-disciplinary team consisting of doctors, nurses and allied health professionals was formed to create the stroke journey map software to deliver consistent and concise stroke education. Research Objectives: To evaluate the effectiveness of using stroke journey map software in improving patients’ perceived knowledge in the acute stroke unit during hospitalization. Methods: Patients admitted to the acute stroke unit were given the stroke journey map software during patient education. The software consists of 31 interactive, brightly coloured slides and 4 videos, based on input provided by the multi-disciplinary team. Participants were then assessed with pre- and post-survey questionnaires before and after viewing the software. The questionnaire consists of 10 questions with a 5-point Likert scale, which sums to a total score of 50. The inclusion criteria are patients diagnosed with ischemic stroke who are cognitively alert and oriented. This study was conducted between May 2017 and October 2017. Participation was voluntary. Results: A total of 33 participants took part in the study. The results demonstrated that the use of a stroke journey map as a stroke education medium was effective in improving patients’ perceived knowledge.
A comparison of pre- and post-implementation data for the stroke journey map revealed an overall mean increase in patients’ perceived knowledge from 24.06 to 40.06. The data are further broken down to evaluate patients’ perceived knowledge in 3 domains: (1) understanding of the disease process; (2) management and treatment plans; (3) post-discharge care. Each domain saw an increase in mean score, from 10.7 to 16.2, 6.9 to 11.9, and 6.6 to 11.7, respectively. Project Impact: The implementation of the stroke journey map has a positive impact in terms of (1) increasing patients’ perceived knowledge, which could contribute to greater empowerment over their health; (2) reducing the need for stroke education material printouts, making it environmentally friendly; (3) decreasing the time nurses spend on giving education, resulting in more time to attend to patients’ needs. Conclusion: This study has demonstrated the benefit of using the stroke journey map as a platform for stroke education. Overall, it has increased patients’ perceived knowledge of their disease process, management and treatment plans, as well as the discharge process.
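The pre/post means reported above can be restated as a small computation; per-participant raw scores are not available from the abstract, so only mean gains are derived:

```python
# Mean pre/post gains per domain, using the means reported in the
# abstract (overall: 24.06 -> 40.06; three domains as listed).
domain_means = {
    "disease process": (10.7, 16.2),
    "management and treatment": (6.9, 11.9),
    "post-discharge care": (6.6, 11.7),
}

def mean_gain(pre: float, post: float) -> float:
    """Increase in mean score, rounded to 2 decimal places."""
    return round(post - pre, 2)

overall = mean_gain(24.06, 40.06)
gains = {domain: mean_gain(pre, post) for domain, (pre, post) in domain_means.items()}
print(overall, gains)
```

With the raw per-participant scores, a paired significance test would be the natural next step.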

Keywords: acute stroke, education, ischemic stroke, knowledge, stroke

Procedia PDF Downloads 161
572 Revolutionizing Healthcare Communication: The Transformative Role of Natural Language Processing and Artificial Intelligence

Authors: Halimat M. Ajose-Adeogun, Zaynab A. Bello

Abstract:

Artificial Intelligence (AI) and Natural Language Processing (NLP) have transformed computer language comprehension, allowing computers to comprehend spoken and written language with human-like cognition. NLP, a multidisciplinary area that combines rule-based linguistics, machine learning, and deep learning, enables computers to analyze and comprehend human language. NLP applications in medicine range from tackling issues in electronic health records (EHR) and psychiatry to improving diagnostic precision in orthopedic surgery and optimizing clinical procedures with novel technologies like chatbots. The technology shows promise in a variety of medical sectors, including quicker access to medical records, faster decision-making for healthcare personnel, diagnosing dysplasia in Barrett's esophagus, boosting radiology report quality, and so on. However, successful adoption requires training for healthcare workers, fostering a deep understanding of NLP components, and highlighting the significance of validation before actual application. Despite prevailing challenges, continuous multidisciplinary research and collaboration are critical for overcoming restrictions and paving the way for the revolutionary integration of NLP into medical practice. This integration has the potential to improve patient care, research outcomes, and administrative efficiency. The research methodology includes using NLP techniques for Sentiment Analysis and Emotion Recognition, such as evaluating text or audio data to determine the sentiment and emotional nuances communicated by users, which is essential for designing a responsive and sympathetic chatbot. Furthermore, the project includes the adoption of a Personalized Intervention strategy, in which chatbots are designed to personalize responses by merging NLP algorithms with specific user profiles, treatment history, and emotional states. 
The synergy between NLP and personalized medicine principles is critical for tailoring chatbot interactions to each user's demands and conditions, hence increasing the efficacy of mental health care. A detailed survey corroborated this synergy, revealing a remarkable 20% increase in patient satisfaction levels and a 30% reduction in workloads for healthcare practitioners. The poll, which focused on health outcomes and was administered to both patients and healthcare professionals, highlights the improved efficiency and favorable influence on the broader healthcare ecosystem.
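A minimal, lexicon-based sketch of the sentiment-analysis step described in the methodology; a real system would use trained models, and the tiny word lists and response labels here are illustrative placeholders only:

```python
# Toy lexicon-based sentiment scoring feeding a chatbot response choice.
# Word lists and labels are illustrative, not from the study.
POSITIVE = {"calm", "better", "hopeful", "relieved", "good"}
NEGATIVE = {"anxious", "worse", "hopeless", "sad", "pain"}

def sentiment_score(text: str) -> int:
    """Positive result -> positive sentiment; negative -> negative."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def choose_response(text: str) -> str:
    """Pick a response style based on the detected sentiment."""
    if sentiment_score(text) < 0:
        return "empathetic follow-up"  # escalate supportive intervention
    return "standard check-in"

print(choose_response("I feel anxious and my pain is worse"))
```

The personalized-intervention idea in the abstract would extend this by conditioning the response on user profile and treatment history, not sentiment alone.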

Keywords: natural language processing, artificial intelligence, healthcare communication, electronic health records, patient care

Procedia PDF Downloads 76