Search results for: laboratory tools and equipment
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7594


964 Export and Import Indicators of Georgian Agri-food Products during the Pandemic: Challenges and Opportunities

Authors: Eteri Kharaishvili

Abstract:

Introduction. The paper analyzes the main export and import indicators of Georgian agri-food products, identifies positive and negative trends under the pandemic and, based on the revealed problems, substantiates the need for modernization of the agri-food sector. It is argued that low production and productivity rates of food products negatively impact achieving the optimal export-to-import ratio; this increases dependence on other countries and reduces the level of food security. Research objectives. The objective of the research is to identify the key challenges based on the analysis of export-import indicators of Georgian food products during the pandemic period and to develop recommendations on post-pandemic development opportunities. Research methods. Various theoretical and methodological research tools are used in the paper. In particular, desk research is carried out on the research topic; endogenous and exogenous variables affecting export and import are determined through factor analysis; SWOT and PESTEL analyses are used to identify development opportunities; selection and grouping of data and identification of similarities and differences are carried out using analysis, synthesis, sampling, induction and other methods; and a qualitative study based on a survey of agri-food experts and exporters clarifies the factors that impede export-import flows. Contributions. The factors that impede the export of Georgian agri-food products in the short run under the COVID-19 pandemic are identified. These are: reduced income of farmers; delays in the supply of raw materials and supplies to the agri-food sector from neighboring industries, as well as in harvesting, processing, marketing, transportation and other sectors; increased indirect costs, etc. The factors that impede export in the long run are as follows: loss of public confidence in the industry, risk of losing positions in traditional markets, etc.
Conclusions are made on the problems in the field of export and import of Georgian agri-food products in terms of the pandemic; development opportunities are evaluated based on the analysis of the agri-food sector potential. Recommendations on the development opportunities for export and import of Georgian agri-food products in the post-pandemic period are proposed.

Keywords: agri-food products, export and import, pandemic period, hindering factors, development potential

Procedia PDF Downloads 143
963 Victimization in Schizophrenia: A Cross-Sectional Prospective Study

Authors: Mehmet Budak, Mehmet Fatih Ustundag

Abstract:

Objectives: In this research, we studied the extent of exposure to physical violence and of committing violence in patients diagnosed with schizophrenia, in comparison to a control group of patients with psychiatric diseases other than psychotic and mood disorders. Method: Between August 2019 and October 2019, a total of 100 hospitalized patients diagnosed with schizophrenia (clinically in remission, Brief Psychiatric Rating Scale < 30) were sequentially studied while undergoing inpatient treatment at Erenkoy Mental Health Training and Research Hospital. From the outpatient clinic, 50 patients with psychiatric disorders other than psychotic or mood disorders were consecutively included as a control group. All participants were evaluated with a sociodemographic form that also questioned the history of violence, a physical examination, and bilateral comparative hand and forearm anterior-posterior and lateral radiography. Results: While 59% of patients with schizophrenia and 28% of the control group stated that they had been exposed to physical violence at least once in a lifetime (p < 0.001), a defensive wound or fracture was detected in 29% of patients with schizophrenia and 2% of the control group (p < 0.001). On the other hand, 61% of patients diagnosed with schizophrenia and 32% of the control group stated that they had committed physical violence at least once in a lifetime (p = 0.001). A self-destructive wound or fracture was detected in 53% of the patients with schizophrenia and 24% of the control group (p = 0.001). In the schizophrenia group, the rate of committing physical violence was higher in those with substance use than in those without (p = 0.049). Wounds and bone fractures (boxer's fracture) resulting from self-injury were also more common in schizophrenia patients with substance use (p = 0.002).
In the schizophrenia group, defensive wounds and parry fractures (located in the hand, forearm and arm, usually resulting from an attempt to shield the face against an aggressive attack, and known to be indicators of interpersonal violence) were more frequent in those with substance use than in those without (p = 0.007). Conclusion: This study shows that both exposure to physical violence and the rate of committing violence are higher in patients with schizophrenia than in the control group. It is observed that schizophrenia patients, stigmatized as aggressive, are in fact more exposed to violence. Substance use in schizophrenia patients increases both exposure to physical violence and the use of physical violence. Physical examination and an anamnesis that questions violence are important tools to reveal exposure to violence in patients. Furthermore, some specific bone fractures and wounds can be used to detect victimization even long after the event.

Keywords: fracture, physical violence, schizophrenia, substance use

Procedia PDF Downloads 169
962 Creative Mathematically Modelling Videos Developed by Engineering Students

Authors: Esther Cabezas-Rivas

Abstract:

Ordinary differential equations (ODEs) are a fundamental part of the curriculum for most engineering degrees, and students typically struggle with the subsequent abstract mathematical calculations. To enhance their motivation and take advantage of the fact that they are digital natives, we propose a teamwork project that includes the creation of a video. The video should explain how to model a real-world problem mathematically, transforming it into an ODE, which should then be solved using the tools learned in the lectures. This idea was implemented with first-year students of a BSc in Engineering and Management during the period of online learning caused by the outbreak of COVID-19 in Spain. Each group of 4 students was assigned a different topic: model a hot water heater, search for the shortest path, design the quickest delivery route, cool a computer chip, the shape of the hanging cables of the Golden Gate, detecting land mines, rocket trajectories, etc. These topics were worked out through two complementary channels: a written report describing the problem and a 10-15 min video on the subject. The report includes the following items: description of the problem to be modeled, detailed derivation of the ODE that models the problem, its complete solution, and interpretation in the context of the original problem. We report the outcomes of this teaching-in-context and active-learning experience, including the feedback received from the students. They highlighted the encouragement of creativity and originality, skills that they do not typically associate with mathematics. Additionally, the video format (unlike a live presentation) has the advantage of allowing them to critically review and self-assess the recording, repeating parts until the result is satisfactory. As a side effect, they felt more confident about their oral abilities. In short, students agreed that they had fun preparing the video. They recognized that it was tricky to combine deep mathematical content with entertainment since, without the latter, it is impossible to engage viewers until the end of the video. Despite this difficulty, after the activity they claimed to understand the material better, and they enjoyed showing the videos to family and friends during and after the project.
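Most of the assigned topics reduce to a first-order ODE. As a hedged illustration of the model-then-solve workflow the abstract describes, the sketch below treats the hot-water-heater topic with Newton's law of cooling, dT/dt = -k(T - T_env); all parameter values are invented for the example and are not taken from the coursework.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Newton's law of cooling: dT/dt = -k (T - T_env).
# Parameters below are illustrative, not from the actual student projects.
k = 0.1          # cooling constant (1/min), assumed
T_env = 20.0     # ambient temperature (deg C), assumed
T0 = 90.0        # initial water temperature (deg C), assumed

def cooling(t, T):
    return -k * (T - T_env)

sol = solve_ivp(cooling, t_span=(0, 60), y0=[T0], t_eval=np.linspace(0, 60, 61))

# Closed-form solution for comparison: T(t) = T_env + (T0 - T_env) * exp(-k t)
analytic = T_env + (T0 - T_env) * np.exp(-k * sol.t)
print(float(sol.y[0, -1]))  # temperature after 60 minutes
```

Comparing the numerical curve against the closed-form solution is exactly the kind of sanity check the written reports could include alongside the video.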

Keywords: active learning, contextual teaching, models in differential equations, student-produced videos

Procedia PDF Downloads 146
961 The Role of Online Platforms in Economic Growth and the Introduction of Local Culture in Tourist Areas

Authors: Maryam Nzari

Abstract:

Today, with the advancement of Internet technology, online platforms have become tools that allow people to do what they need easily. In their different forms and through the different services they provide, online platforms make it possible for users to communicate with each other and with the platforms themselves. Audience communication with mass media is no longer what it was in the past: with online platforms providing the latest news minute by minute, the audience has access to all content and can choose more quickly and easily. According to Galloway, Apple, Amazon, Facebook and Google create a wide range of products and services connected with the daily life of billions of people all over the planet. Over time, platforms gain high economic value and thereby a power that influences the social, cultural, economic and political aspects of people's lives. As a result of the effects of platformization on all areas of individual and collective life, we now live in a platform society, closely connected to 'platform politics'. Nowadays, through social media platforms, users can interact with many people and share their data on various topics with others. This research investigates the role of these online platforms in economic growth and in introducing local culture in tourist areas. Tourism in a region is linked with various factors; one of the important factors that attract tourists to a region is its culture, and this culture can in turn affect economic growth. Without a proper understanding of the culture of these tourist areas, it is not possible to plan properly for the growth of the tourism industry and the subsequent increase in economic growth.
The interaction of local people and tourists has social and cultural effects on both and gives them the opportunity to get to know each other. Therefore, the purpose of this research is to examine the role that online platforms play in cultural interaction in tourist areas, and to determine whether online platforms merely seek to show the good aspects of a region in order to generate extra income, or whether they can play a role beyond what we imagine and introduce the culture of a region properly, so that disagreements do not arise in the tourism planning of that region. The article attempts to answer these questions using library and field methods.

Keywords: online platforms, economic growth, indigenous culture, tourism

Procedia PDF Downloads 58
960 Artificial Neural Network Approach for GIS-Based Soil Macro-Nutrients Mapping

Authors: Shahrzad Zolfagharnassab, Abdul Rashid Mohamed Shariff, Siti Khairunniza Bejo

Abstract:

Conventional methods for soil nutrient mapping are based on laboratory tests of samples obtained from surveys. The time and cost involved in gathering and analyzing soil samples are the reasons researchers use Predictive Soil Mapping (PSM). PSM can be defined as the development of a numerical or statistical model of the relationship among environmental variables and soil properties, which is then applied to a geographic database to create a predictive map. Kriging is a group of geostatistical techniques for spatially interpolating values at unobserved locations from observations at nearby locations. The main problem with using kriging as an interpolator is that it is excessively data-dependent and requires a large number of closely spaced data points. Hence, there is a need to minimize the number of data points without sacrificing the accuracy of the results. In this paper, an Artificial Neural Network (ANN) scheme was used to predict macronutrient values at un-sampled points. ANN has become a popular prediction tool as it handles certain difficulties in soil property prediction, such as non-linear relationships and non-normality. Back-propagation multilayer feed-forward network structures were used to predict nitrogen, phosphorus and potassium values in the soil of the study area. A limited number of samples was used in the training, validation and testing phases of the ANN (pattern recognition structures) to classify soil properties, and the trained network was used for prediction. The soil analysis results of samples collected from the soil survey of block C of Sawah Sempadan, Tanjung Karang rice irrigation project in Selangor, Malaysia, were used. Soil maps were produced by the kriging method using 236 values that were a combination of actual values (obtained from real samples) and virtual values (neural-network-predicted values).
For each macronutrient element, three types of maps were generated: with 118 actual and 118 virtual values, 59 actual and 177 virtual values, and 30 actual and 206 virtual values, respectively. To evaluate the performance of the proposed method, for each macronutrient element a base map using all 236 actual samples and test maps using 118, 59 and 30 actual samples, respectively, were produced by the kriging method. A set of parameters was defined to measure the similarity of the maps generated with the proposed method, termed the sample reduction method. The results show that the maps generated through the sample reduction method were more accurate than the corresponding test maps produced from the same smaller numbers of real samples. For example, nitrogen maps produced from 118, 59 and 30 real samples have 78%, 62% and 41% similarity, respectively, with the base map (236 samples), and the sample reduction method increased the similarity to 87%, 77% and 71%, respectively. Hence, this method can reduce the number of real samples, substituting ANN-predicted samples, while achieving the specified level of accuracy.
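A minimal sketch of the virtual-sample idea, using scikit-learn's MLPRegressor in place of the paper's back-propagation feed-forward network; the coordinates, spatial trends and noise below are synthetic stand-ins for the Sawah Sempadan survey data, not the real measurements.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for the soil survey: coordinates -> N, P, K values.
X = rng.uniform(0, 1000, size=(118, 2))                   # easting, northing (m)
y = np.column_stack([                                     # smooth trends + noise
    0.05 * X[:, 0] + rng.normal(0, 5, 118),               # nitrogen proxy
    0.03 * X[:, 1] + rng.normal(0, 5, 118),               # phosphorus proxy
    0.02 * (X[:, 0] + X[:, 1]) + rng.normal(0, 5, 118),   # potassium proxy
])

scaler = StandardScaler()
net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=3000, random_state=0)
net.fit(scaler.fit_transform(X), y)

# "Virtual" values: predictions at un-sampled grid points, which the
# paper then combines with the actual samples as input to kriging.
grid = np.array([[250.0, 250.0], [750.0, 750.0]])
virtual = net.predict(scaler.transform(grid))
print(virtual.shape)  # one (N, P, K) triple per un-sampled point
```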

Keywords: artificial neural network, kriging, macronutrient, pattern recognition, precision farming, soil mapping

Procedia PDF Downloads 70
959 Deficient Multisensory Integration with Concomitant Resting-State Connectivity in Adult Attention Deficit/Hyperactivity Disorder (ADHD)

Authors: Marcel Schulze, Behrem Aslan, Silke Lux, Alexandra Philipsen

Abstract:

Objective: Patients with Attention Deficit/Hyperactivity Disorder (ADHD) often report being flooded by sensory impressions. Studies investigating sensory processing show hypersensitivity to sensory inputs across the senses in children and adults with ADHD. The auditory modality in particular is affected by deficient acoustic inhibition and modulation of signals. While studying unimodal signal processing is relevant and well suited to a controlled laboratory environment, everyday situations are multimodal: a complex interplay of the senses is necessary to form a unified percept. To achieve this, the unimodal sensory modalities are bound together in a process called multisensory integration (MI). In the current study, we investigate MI in an adult ADHD sample using the McGurk effect, a well-known illusion in which incongruent speech-like phonemes lead, in case of successful integration, to a newly perceived phoneme via late top-down attentional allocation. In ADHD, neuronal dysregulation at rest, e.g., aberrant within- or between-network functional connectivity, may also account for difficulties in integrating across the senses. Therefore, the current study includes resting-state functional connectivity to investigate a possible relation between deficient network connectivity and the ability to integrate stimuli. Method: Twenty-five ADHD patients (6 females; age: 30.08 (SD: 9.3) years) and twenty-four healthy controls (9 females; age: 26.88 (SD: 6.3) years) were recruited. MI was examined using the McGurk effect. The Mann-Whitney U test was applied to assess statistical differences between groups. Echo-planar resting-state functional MRI was acquired on a 3.0 Tesla Siemens Magnetom MR scanner. A seed-to-voxel analysis was carried out using the CONN toolbox.
Results: Susceptibility to the McGurk effect was significantly lower in ADHD patients (ADHD Mdn: 5.83%, controls Mdn: 44.2%; U = 160.5, p = 0.022, r = -0.34). When ADHD patients did integrate phonemes, reaction times were significantly longer (ADHD Mdn: 1260 ms, controls Mdn: 582 ms; U = 41.0, p < 0.001, r = -0.56). In functional connectivity, the middle temporal gyrus (seed) was negatively associated with the primary auditory cortex, inferior frontal gyrus, precentral gyrus, and fusiform gyrus. Conclusion: MI seems to be deficient in ADHD patients for stimuli that need top-down attentional allocation. This finding is supported by stronger functional connectivity from unimodal sensory areas to polymodal MI convergence zones for complex stimuli in ADHD patients.
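The group comparison reported above (U statistic, p-value, and effect size r) can be sketched with scipy's Mann-Whitney U implementation; the per-subject susceptibility scores below are simulated placeholders, not the study's data, and the effect-size formula uses the standard normal approximation.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)

# Illustrative per-subject McGurk susceptibility scores (%); the real
# data are the 25 ADHD patients and 24 controls from the study.
adhd = rng.normal(10, 8, 25).clip(0, 100)
controls = rng.normal(45, 15, 24).clip(0, 100)

u_stat, p_value = mannwhitneyu(adhd, controls, alternative="two-sided")

# Effect size r = Z / sqrt(N), with Z recovered from U via the normal
# approximation (comparable to the r = -0.34 reported in the abstract).
n1, n2 = len(adhd), len(controls)
mu_u = n1 * n2 / 2
sigma_u = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
z = (u_stat - mu_u) / sigma_u
r = z / np.sqrt(n1 + n2)
print(u_stat, p_value, r)
```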

Keywords: attention-deficit hyperactivity disorder, audiovisual integration, McGurk-effect, resting-state functional connectivity

Procedia PDF Downloads 127
958 D-Lysine Assisted 1-Ethyl-3-(3-Dimethylaminopropyl)Carbodiimide / N-Hydroxy Succinimide Initiated Crosslinked Collagen Scaffold with Controlled Structural and Surface Properties

Authors: G. Krishnamoorthy, S. Anandhakumar

Abstract:

The effect of D-lysine (D-Lys) on collagen cross-linked with 1-ethyl-3-(3-dimethylaminopropyl)carbodiimide (EDC)/N-hydroxysuccinimide (NHS) is evaluated using experimental and modelling tools. The results for the Coll-D-Lys-EDC/NHS scaffold indicate an increase in tensile strength (TS), percentage elongation (%E) and denaturation temperature (Td), and a decrease in the decomposition rate, compared to the L-Lys-EDC/NHS scaffold. Scanning electron microscopy (SEM) and atomic force microscopy (AFM) analyses revealed a well-ordered scaffold with a properly oriented, well-aligned structure. D-Lys stabilizes the scaffold against degradation by collagenase better than L-Lys. The cell assay showed more than 98% fibroblast (NIH3T3) viability and improved cell adhesion and protein adsorption after 72 h of culture compared with the native scaffold. Cell attachment after 74 h was robust; cytoskeletal analysis showed that the attached cells were aligned along the fibers with a spindle-shaped appearance. Gene expression analyses revealed no apparent alterations in mRNA levels, and cell proliferation was not adversely affected. D-Lys plays a pivotal role in the self-assembly and conformation of collagen fibrils. D-Lys-assisted EDC/NHS-initiated cross-linking induces the formation of a carboxamide through activation of the side-chain -COOH group, followed by aminolysis of the O-isoacylurea intermediates, whereby the -NH2 groups are directly joined via an isopeptide bond. This leads to the formation of intra- and inter-helical cross-links. Modeling studies indicated that D-Lys binds the collagen-like peptide (CLP) through multiple H-bonds and hydrophobic interactions. Orientational changes of collagenase on CLP-D-Lys are observed, which may decrease its accessibility for degradation and stabilize CLP against the enzyme's action.
D-Lys has the lowest binding energy and improves fibrillar assembly and staggered alignment without undesired structural stiffness or aggregation. The proteolytic machinery is less well equipped to deal with the Coll-D-Lys scaffold than with Coll-L-Lys. The information derived from the present study could help in designing collagenolytically stable heterochiral collagen-based scaffolds for biomedical applications.

Keywords: collagen, collagenase, collagen like peptide, D-lysine, heterochiral collagen scaffold

Procedia PDF Downloads 392
957 A Greener Approach towards the Synthesis of an Antimalarial Drug Lumefantrine

Authors: Luphumlo Ncanywa, Paul Watts

Abstract:

Malaria is a disease that kills approximately one million people annually, with children and pregnant women in sub-Saharan Africa bearing the heaviest burden. Malaria continues to be one of the major causes of death, especially in poor countries in Africa, so decreasing its burden and saving lives is essential. There is major concern about malaria parasites developing resistance towards antimalarial drugs, and people are still dying because medicines are unaffordable in less well-off countries. If the cost of drugs were reduced so that more people could receive treatment, the number of deaths in Africa could be massively reduced. Many countries in Africa lack pharmaceutical manufacturing capability, so manufacturing of drugs, active pharmaceutical ingredients or medicines developed within local research programs would likely be outsourced overseas, increasing the cost of production and potentially limiting the full benefit of the original research. As a result, the last few years have seen major interest in developing more effective and cheaper technology for manufacturing generic pharmaceutical products. Micro-reactor technology (MRT) is an emerging technique that enables those working in research and development to rapidly screen reactions under continuous flow, leading to the identification of reaction conditions suitable for use at production scale. It is this system flexibility that has the potential to reduce both the time taken and the risk associated with transferring reaction methodology from research to production.
Using an approach referred to as scale-out or numbering-up, a reaction is first optimized in the laboratory using a single micro-reactor; to increase production volume, the number of reactors employed is simply increased. The overall aim of this research project is to develop and optimize the synthesis of antimalarial drugs in continuous processing. This will provide a step change in pharmaceutical manufacturing technology that will increase the availability and affordability of antimalarial drugs on a worldwide scale, with a particular emphasis on Africa in the first instance. The research will determine the best chemistry and technology to define the lowest-cost manufacturing route to pharmaceutical products. We are currently developing a method to synthesize lumefantrine in continuous flow, using the batch process as a benchmark. Lumefantrine is a dichlorobenzylidine derivative effective for the treatment of various types of malaria; it is used with artemether for the treatment of uncomplicated malaria. The results obtained when synthesizing lumefantrine in a batch process are transferred into a continuous flow process in order to develop a better and more reproducible process. Development of an appropriate synthetic route for lumefantrine is therefore significant for the pharmaceutical industry, and if better and cheaper manufacturing routes to antimalarial drugs can be developed and implemented where needed, antimalarial drugs are far more likely to reach those in need.

Keywords: antimalarial, flow, lumefantrine, synthesis

Procedia PDF Downloads 203
956 Geographic Information System-Based Map for Best Suitable Place for Cultivating Permanent Trees in South-Lebanon

Authors: Allaw Kamel, Al-Chami Leila

Abstract:

It is important to reduce the human influence on natural resources by identifying appropriate land use, and it is essential to carry out scientific land evaluation. Such analysis identifies the main factors of agricultural production and enables decision makers to develop crop management that increases land capability. The key is to match the type and intensity of land use with the land's natural capability; to benefit from these areas and obtain good agricultural production, they must be fully organized and managed. Lebanon suffers from unorganized agricultural land use. We take South Lebanon as the study area, as it has the most fertile ground and a variety of crops. The study aims to identify and locate the most suitable areas to cultivate thirteen types of permanent trees: apples, avocados, stone fruits in coastal regions, stone fruits in mountain regions, bananas, citrus, loquats, figs, pistachios, mangoes, olives, pomegranates, and grapes. Several geographical factors are taken as criteria for selecting the best location to cultivate: soil, rainfall, pH, temperature, and elevation are the main inputs used to create the final map. Input data for each factor are managed, visualized and analyzed using a Geographic Information System (GIS). GIS management tools are implemented to produce input maps identifying suitable areas for each index. The combination of the different index maps generates the final output map of the most suitable places for the best permanent-tree productivity. The output map is reclassified into three suitability classes: low, moderate, and high. Results show different locations suitable for different kinds of trees. They also reflect the importance of GIS in helping decision makers find the most suitable location for every tree to obtain more productivity and a greater variety of crops.
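The combine-then-reclassify step described above can be sketched with plain numpy arrays standing in for the GIS rasters; the toy 5x5 grids, equal weights, and class breaks below are assumptions for illustration, not the paper's actual layers or crop-specific weighting.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 5x5 criterion rasters (normalized 0-1); in the paper these come
# from soil, rainfall, pH, temperature and elevation layers in GIS.
criteria = {name: rng.random((5, 5)) for name in
            ["soil", "rainfall", "ph", "temperature", "elevation"]}

# Equal weights here for simplicity; per-crop weights are a modelling choice.
weights = {name: 1 / len(criteria) for name in criteria}

suitability = sum(w * criteria[name] for name, w in weights.items())

# Reclassify into the three suitability classes used in the output map.
classes = np.digitize(suitability, bins=[1 / 3, 2 / 3])  # 0=low, 1=moderate, 2=high
print(np.unique(classes))
```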

Keywords: agricultural production, crop management, geographical factors, Geographic Information System, GIS, land capability, permanent trees, suitable location

Procedia PDF Downloads 141
955 Delineation of Green Infrastructure Buffer Areas with a Simulated Annealing: Consideration of Ecosystem Services Trade-Offs in the Objective Function

Authors: Andres Manuel Garcia Lamparte, Rocio Losada Iglesias, Marcos Boullón Magan, David Miranda Barros

Abstract:

The European Union's biodiversity strategy for 2030 mentions climate change as one of the key factors in biodiversity loss and considers green infrastructure one of the solutions to this problem. In this line, the European Commission has developed a green infrastructure strategy that commits member states to consider green infrastructure in their territorial planning. This green infrastructure is aimed at granting the provision of a wide range of ecosystem services to support biodiversity and human well-being by countering the effects of climate change. Yet few tools are available to delimit green infrastructure. The available ones consider the potential of the territory to provide ecosystem services, but they usually aggregate several maps of ecosystem-service potential without considering possible trade-offs. This can lead to excluding areas with a high potential for providing some ecosystem services simply because those services have many trade-offs with others. To tackle this problem, a methodology is proposed to consider ecosystem-service trade-offs in the objective function of a simulated annealing algorithm aimed at delimiting multifunctional green infrastructure buffer areas. To this end, the provision-potential maps of the regulating ecosystem services considered are clustered into groups such that ecosystem services that create trade-offs are not placed in the same group. The normalized provision-potential maps of the ecosystem services in each group are added to obtain a potential map per group, which is normalized again. The group potential maps are then combined into a raster map that holds the highest provision-potential value in each cell, and this combined map is used in the objective function of the simulated annealing algorithm. The algorithm is run both using the proposed methodology and considering the ecosystem services individually.
The results are analyzed with spatial statistics and landscape metrics to check the number of ecosystem services that the delimited areas provide, as well as their regularity and compactness. The proposed methodology increases the number of ecosystem services provided by the delimited areas, improving their multifunctionality and their effectiveness in countering climate change impacts.
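The selection loop behind such a simulated annealing delimitation can be sketched as below; a 1-D potential array stands in for the combined raster, the selection budget K and the crude adjacency bonus (a stand-in for compactness) are invented for the example, and the real method works on 2-D raster cells with a richer objective.

```python
import math
import random

random.seed(0)

# Toy "landscape": combined ecosystem-service potential per cell,
# standing in for the per-group normalized raster maximum.
potential = [0.2, 0.9, 0.8, 0.1, 0.7, 0.95, 0.3, 0.6]
K = 3  # number of cells to select for the buffer area (assumed budget)

def objective(selection):
    # Reward total potential plus a crude contiguity/compactness term.
    score = sum(potential[i] for i in selection)
    score += 0.1 * sum(1 for i in selection for j in selection if abs(i - j) == 1)
    return score

current = random.sample(range(len(potential)), K)
best, best_val = list(current), objective(current)
temp = 1.0
for step in range(500):
    # Neighbour move: swap one selected cell for an unselected one.
    cand = list(current)
    out_idx = random.randrange(K)
    cand[out_idx] = random.choice(
        [i for i in range(len(potential)) if i not in current])
    delta = objective(cand) - objective(current)
    # Accept improvements always; accept worsenings with Boltzmann probability.
    if delta > 0 or random.random() < math.exp(delta / temp):
        current = cand
    if objective(current) > best_val:
        best, best_val = list(current), objective(current)
    temp *= 0.99  # geometric cooling schedule

print(sorted(best), round(best_val, 2))
```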

Keywords: ecosystem services trade-offs, green infrastructure delineation, multifunctional buffer areas, climate change

Procedia PDF Downloads 174
954 Deep Learning for Renewable Power Forecasting: An Approach Using LSTM Neural Networks

Authors: Fazıl Gökgöz, Fahrettin Filiz

Abstract:

Load forecasting has become crucial in recent years, and many different power-forecasting models have been tried for this purpose. Electricity load forecasting is necessary for energy policies and for healthy, reliable grid systems. Effective forecasting of renewable energy load allows decision makers to minimize the costs of electric utilities and power plants, and forecasting tools are required to predict how much renewable energy can be utilized. The purpose of this study is to explore the effectiveness of LSTM-based neural networks for estimating renewable energy loads. We present models for predicting renewable energy loads based on deep neural networks, especially Long Short-Term Memory (LSTM) algorithms. Deep learning allows multiple layers of a model to learn representations of data, and LSTM units are able to store information over long periods of time. Deep learning models have recently been used to forecast renewable energy sources such as wind and solar power. Historical load and weather information are the most important input variables in power-forecasting models. The dataset contains power consumption measurements gathered between January 2016 and December 2017 at one-hour resolution, drawn from publicly available data of the Turkish Renewable Energy Resources Support Mechanism. Forecasting studies were carried out on these data via a deep neural network approach including the LSTM technique for the Turkish electricity market. 432 different models were created by varying the layer cell count and dropout. The adaptive moment estimation (ADAM) algorithm was used for training as a gradient-based optimizer instead of stochastic gradient descent (SGD); ADAM performed better than SGD in terms of faster convergence and lower error rates. Model performance is compared according to MAE (mean absolute error) and MSE (mean squared error).
The best MAE results among the 432 tested models are 0.66, 0.74, 0.85 and 1.09. The forecasting performance of the proposed LSTM models compares favorably with results reported in the literature.
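The gating mechanism that lets an LSTM "store information for long periods" can be written out explicitly. The numpy sketch below computes one LSTM cell step from the standard equations; the tiny dimensions (2 input features, hidden size 4) and random weights are illustrative only and do not correspond to any of the paper's 432 configurations.

```python
import numpy as np

rng = np.random.default_rng(3)

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step: input, forget and output gates control what
    enters, persists in, and leaves the cell state c."""
    z = W @ x + U @ h + b                  # all four gate pre-activations, stacked
    n = h.shape[0]
    i = 1 / (1 + np.exp(-z[:n]))           # input gate
    f = 1 / (1 + np.exp(-z[n:2 * n]))      # forget gate
    o = 1 / (1 + np.exp(-z[2 * n:3 * n]))  # output gate
    g = np.tanh(z[3 * n:])                 # candidate cell update
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Illustrative sizes: 2 inputs (e.g. lagged load plus a weather variable).
n_in, n_hid = 2, 4
W = rng.normal(0, 0.1, (4 * n_hid, n_in))
U = rng.normal(0, 0.1, (4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)

h, c = np.zeros(n_hid), np.zeros(n_hid)
for x in rng.normal(size=(24, n_in)):      # one day of hourly inputs
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)
```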

Keywords: deep learning, long short term memory, energy, renewable energy load forecasting

Procedia PDF Downloads 266
953 Application of DSSAT-CSM Model for Estimating Rain-Water Productivity of Maize (Zea Mays L.) Under Changing Climate of Central Rift Valley, Ethiopia

Authors: Fitih Ademe, Kibebew Kibret, Sheleme Beyene, Mezgebu Getnet, Gashaw Meteke

Abstract:

Pressing demands for agricultural products and its associated pressure on water availability in the semi-arid areas demanded information for strategic decision-making in the changing climate conditions of Ethiopia. Availing such information through traditional agronomic research methods is not sufficient unless supported through the application of decision-support tools. The CERES (Crop Environmental Resource Synthesis) model in DSSAT-CSM was evaluated for estimating yield and water productivity of maize under two soil types (Andosol and Luvisol) of the Central Rift Valley of Ethiopia. A six-year data (2010 – 2017) obtained from national fertilizer determination experiments were used for model evaluation. Pertinent statistical indices were employed to evaluate model performance. Following model evaluation, yield and rain-water productivity of maize was assessed for the baseline (1981-2010) and future climate (2050’s and 2080’s) scenario. The model performed well in predicting phenology, growth, and yield of maize for the different seasons and phosphorous rates. A good agreement between simulated and observed grain yield was indicated by low values of the RMSE (0.15 - 0.37 Mg/ha) and other indices for the two soil types. The evaluated model predicted a decline in the potential (23.8 to 26.7% at Melkassa and from 21.7 to 26.1% at Ziway under RCP4.5 and RCP8.5 climate change scenarios, respectively) and water-limited yield (15 to 18.3% at Melkassa and by 6.5 to 10.5% at Ziway) in the mid-century due to climate change. Consequently, a decline in water productivity was projected in the future periods that necessitate availing options to improve water productivity in the region. 
In conclusion, the DSSAT-CERES-Maize model can be used to simulate maize (Melkassa-2) phenology, growth, and grain yield, as well as water productivity under different management scenarios, which can help identify options to improve water productivity in the changing climate of the semi-arid Central Rift Valley of Ethiopia.
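Model fit of this kind is commonly summarized with the root mean square error (RMSE) between observed and simulated yields. A minimal sketch; the yield values below are hypothetical, chosen only for illustration, not the study's data:

```python
import math

def rmse(observed, simulated):
    """Root mean square error between observed and model-simulated values."""
    n = len(observed)
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(observed, simulated)) / n)

# Hypothetical observed vs. simulated maize grain yields (Mg/ha), illustrative only
obs = [4.2, 5.1, 3.8, 4.9, 5.6, 4.4]
sim = [4.0, 5.3, 4.0, 4.7, 5.4, 4.6]
print(round(rmse(obs, sim), 3))  # → 0.2, within the reported 0.15 - 0.37 Mg/ha range
```

Lower RMSE relative to the observed mean indicates a closer match between simulation and measurement.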

Keywords: andosol, CERES-maize, luvisol, model evaluation, water productivity

Procedia PDF Downloads 75
952 A Study of Smartphone Engagement Patterns of Millennial in India

Authors: Divyani Redhu, Manisha Rathaur

Abstract:

India has emerged as a very lucrative market for smartphones in a short span of time. The number of smartphone users here is growing massively with each passing day, and the expansion of internet services to the far corners of the nation has given a further push to the smartphone revolution in India. Millennials, also known as Generation Y or the Net Generation, are the generation born between the early 1980s and mid-1990s (some definitions extend further to the early 2000s). Spanning roughly 15 years, different social classes, cultures, and continents, millennials can hardly be imagined to have a unified identity. Still, it cannot be denied that the growing millennial population is not only young but also highly tech-savvy. It is not just the appearance of the device that makes us call it 'smart' today; rather, it is the numerous tasks and functions it can perform that have led its name to evolve into that of a 'smartphone'. Beyond the usual tasks earlier performed by a simple mobile phone, such as making calls, sending messages, clicking photographs, and recording videos, the time has come when most of our day-to-day tasks are taken care of by our all-time companion, the smartphone. From being our alarm clock to our note-maker, from our watch to our radio, from our book-reader to our reminder, smartphones are present everywhere. The smartphone has now become an essential device, particularly for millennials, to communicate not only with their friends but also with their family, colleagues, and teachers. The study is quantitative in nature: a survey will be conducted in the capital of India, i.e., Delhi, and the National Capital Region (NCR), the metropolitan area covering the entire National Capital Territory of Delhi and urban areas of the states of Haryana, Uttarakhand, Uttar Pradesh, and Rajasthan. 
The survey instrument will be a questionnaire administered to 200 respondents. The results derived from the study will primarily focus on the increasing reach of smartphones in India, smartphones as technological innovations and convergent tools, the smartphone usage patterns of millennials in India, the applications millennials use most, the average time they spend on them, the impact of smartphones on the personal interactions of millennials, etc. Thus, speaking of smartphone technology and millennials in India, it would not be wrong to say that both the growth and the potential of smartphones in India are still immense. Few technologies have made it possible to give users such global exposure, and the smartphone, if not the only one, is certainly an immensely effective one.

Keywords: Delhi – NCR, India, millennials, smartphone

Procedia PDF Downloads 140
951 Smart Books as a Supporting Tool for Developing Skills of Designing and Employing Webquest 2.0

Authors: Huda Alyami

Abstract:

The present study aims to measure the effectiveness of an interactive eBook in developing the skills of designing and employing webquests among female intern teachers. The study uses descriptive analytical methodology as well as quasi-experimental methodology. The sample of the study consists of (30) female intern teachers from the Department of Special Education (in the tracks of Gifted Education and Learning Difficulties), during the first semester of the academic year 2015, at King Abdul-Aziz University in Jeddah city. The sample is divided into (15) female intern teachers for the experimental group and (15) for the control group. A set of qualitative and quantitative tools has been prepared and verified for the study, comprising: a list of webquest design skills, a list of webquest employment skills, a webquest knowledge achievement test, a product rating card, an observation card, and an interactive eBook. The study reached the following results: 1. After controlling for pre-test scores, there are statistically significant differences, at the significance level of (α ≤ 0.05), between the mean scores of the experimental and control groups in the post measurement of the webquest knowledge achievement test, in favor of the experimental group. 2. There are statistically significant differences, at the significance level of (α ≤ 0.05), between the mean scores of the experimental and control groups in the post measurement of the product rating card, in favor of the experimental group. 3. There are statistically significant differences, at the significance level of (α ≤ 0.05), between the mean scores of the experimental and control groups in the post measurement of the observation card, in favor of the experimental group. 
In light of the previous findings, the study recommends the following: taking advantage of interactive eBooks when teaching educational courses across disciplines at the university level, and creating participatory educational platforms to share interactive eBooks for various disciplines at the local and regional levels. The study suggests conducting further qualitative studies on the effectiveness of interactive eBooks, in addition to studies on the use of Web 2.0 in webquests.
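Group comparisons at α ≤ 0.05 such as these are commonly made with an independent-samples t test. A minimal sketch using Welch's t statistic; the post-test scores below are entirely hypothetical stand-ins for two groups of 15, not the study's data:

```python
import math

def welch_t(a, b):
    """Welch's t statistic and approximate degrees of freedom for two independent samples."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se2 = va / na + vb / nb
    t = (ma - mb) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Entirely hypothetical post-test scores for an experimental and a control group (n = 15 each)
exp = [78, 82, 85, 88, 90, 84, 86, 89, 83, 87, 91, 80, 85, 88, 86]
ctrl = [70, 74, 68, 72, 75, 71, 69, 73, 76, 70, 72, 74, 68, 71, 73]
t, df = welch_t(exp, ctrl)
print(round(t, 2), round(df, 1))  # compare |t| with the critical value at alpha = 0.05
```

If |t| exceeds the critical value for the computed degrees of freedom, the difference between group means is declared significant at the chosen α.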

Keywords: interactive eBook, webquest, design, employment, skill development

Procedia PDF Downloads 184
950 Process of the Emergence and Evolution of Socio-Cultural Ideas about the "Asian States" In the Context of the Development of US Cinema in 1941-1945

Authors: Selifontova Darya Yurievna

Abstract:

The study of the process of the emergence and evolution of socio-cultural ideas about the 'Asian states' in the context of the development of US cinema in 1941-1945 will both serve to test a new approach to a classical subject and allow the methodological tools of history, political science, philology, and sociology to be used for understanding modern military-political, historical, ideological, and socio-cultural processes through a concrete example. This is especially important for understanding the process of constructing the image of the Japanese Empire in the USA. Assessments and images of China and Japan in World War II, created in American cinema, had an immediate impact on the media, public sentiment, and opinions. During the war, US cinema created new myths and actively exploited old ones, combining them with traditional Hollywood clichés; all this served as a basis for creating the on-screen images of China and the Japanese Empire, which were needed to solve many foreign-policy and domestic-policy tasks related to the construction of two completely different, yet at the same time similar, images of Asia (China and the Japanese Empire). In modern studies devoted to the history of wars, the study of the specifics of the information confrontation of the parties is in demand. A special role in this confrontation is played by propaganda through cinema, which uses images, historical symbols, and stable metaphors, the appeal to which can produce a particular public reaction. Soviet documentaries of the war years are proof of this. The relevance of the topic is due to the fact that cinema as a means of propaganda was very popular and in demand during the Second World War. This period saw the creation of real masterpieces in the field of propaganda film; in the documentary cinema of 1941 – 1945, the traditions of depicting the Second World War were laid down. 
The study of the peculiarities of visualization and mythologization of the Second World War in Soviet cinema is an important stage in studying the development of propaganda methods, since the methods and techniques of depicting the war that formed in 1941-1945 remain significant at the present stage of the study of society.

Keywords: Asian countries, politics, sociology, domestic politics, USA, cinema

Procedia PDF Downloads 127
949 Macroscopic Support Structure Design for the Tool-Free Support Removal of Laser Powder Bed Fusion-Manufactured Parts Made of AlSi10Mg

Authors: Tobias Schmithuesen, Johannes Henrich Schleifenbaum

Abstract:

The additive manufacturing process laser powder bed fusion (LPBF) offers many advantages over conventional manufacturing processes. For example, almost any complex part can be produced, such as topologically optimized lightweight parts, which would be inconceivable with conventional manufacturing processes. A major challenge posed by the LPBF process, however, is, in most cases, the need to use and remove support structures on critically inclined part surfaces (α < 45° relative to the substrate plate). These are mainly used for dimensionally accurate mapping of part contours and to reduce distortion by absorbing process-related internal stresses. Furthermore, they serve to transfer the process heat to the substrate plate and are, therefore, indispensable for the LPBF process. A major obstacle to the economical use of the LPBF process in industrial process chains is currently still the high manual effort involved in removing support structures. According to the state of the art (SoA), the parts are usually treated with simple hand tools (e.g., pliers, chisels) or by machining (e.g., milling, turning). New automatable approaches are the removal of support structures by means of wet chemical ablation and thermal deburring. According to the SoA, support structures are essentially adapted to the LPBF process and not to potential post-processing steps. The aim of this study is the determination of support structure designs that are adapted to the mentioned post-processing approaches. In the first step, the essential boundary conditions for complete removal by means of the respective approaches are identified. Afterward, a representative demonstrator part with various macroscopic support structure designs will be LPBF-manufactured and tested with regard to complete powder and support removability. Finally, based on the results, potentially suitable support structure designs for the respective approaches will be derived. 
The investigations are carried out on the example of the aluminum alloy AlSi10Mg.

Keywords: additive manufacturing, laser powder bed fusion, laser beam melting, selective laser melting, post processing, tool-free, wet chemical ablation, thermal deburring, aluminum alloy, AlSi10Mg

Procedia PDF Downloads 91
948 Learners' Perception of Digitalization of Medical Education in a Low Middle-Income Country – A Case Study of the Lecturio Platform

Authors: Naomi Nathan

Abstract:

Introduction: Digitalization of medical education can revolutionize how medical students learn and interact with the medical curriculum across contexts. With the increasing availability of internet and mobile connectivity in LMICs, online medical education platforms and digital learning tools are becoming more widely available, providing new opportunities for learners to access high-quality medical education and training. However, the adoption and integration of digital technologies in medical education in LMICs is a complex process influenced by various factors, including learners' perceptions of and attitudes toward digital learning. In Ethiopia, the adoption of digital platforms for medical education has been slow, with traditional face-to-face teaching methods still the norm. However, as access to technology improves and more universities adopt digital platforms, it is crucial to understand how medical students perceive this shift. Methodology: This study investigated medical students' perception of the digitalization of medical education in relation to their access to the Lecturio Digital Medical Education Platform through a capacity-building project. 740 medical students from over 20 medical universities participated in the study. The students were surveyed using a questionnaire covering their attitudes toward the digitalization of medical education, their frequency of use of the digital platform, and its perceived benefits and challenges. Results: The study results showed that most medical students had a positive attitude toward the digitalization of medical education. The most commonly cited benefit was the convenience and flexibility of accessing course material and the curriculum online. Many students also reported that they found the platform more interactive and engaging, leading to a more meaningful learning experience. The study also identified several challenges medical students faced when using the platform. 
The most commonly reported challenge was unreliable internet access, which made it difficult for students to access content consistently. Overall, the results of this study suggest that medical students in Ethiopia have a positive perception of the digitalization of medical education; over 97% of students expressed a continued need for access to the Lecturio platform throughout their studies. Conclusion: Significant challenges still need to be addressed to fully realize the Lecturio digital platform's benefits. Universities, relevant ministries, and various stakeholders must work together to address these challenges so that medical students can fully participate in and benefit from digitalized medical education, sustainably and effectively.

Keywords: digital medical education, EdTech, LMICs, e-learning

Procedia PDF Downloads 92
947 Of Digital Games and Dignity: Rationalizing E-Sports Amidst Stereotypes Associated with Gamers

Authors: Sarthak Mohapatra, Ajith Babu, Shyam Prasad Ghosh

Abstract:

The community of gamers has been at the crux of stigmatization and marginalization by the larger society, resulting in dignity erosion. India presents a unique context where e-sports have recently seen large-scale investments, a massive user base, and appreciable demand for gaming as a career option. Yet apprehension toward gaming remains salient among parents and non-gamers, who engage in the de-dignification of gamers by advocating the discourse that video games promote violence. Even the government has been relentless in banning games over data privacy issues. The current study therefore explores the experiences of gamers and how they navigate these de-dignifying circumstances. The study follows an exploratory qualitative approach in which in-depth interviews, guided by a semi-structured questionnaire, serve as the data collection tool. A total of 25 individuals were interviewed, comprising casual gamers, professional gamers, and individuals indirectly impacted by gaming, including parents, relatives, and friends of gamers. Thematic analysis via three-level coding is used to arrive at broad themes (categories) and their sub-themes. The results indicate that the de-dignification of gamers results from attaching stereotypes of introversion, aggression, low intelligence, and low aspirations to them. It is interesting to note that the intensity of de-dignification varies and is more salient for violent shooting games, which are perceived to require few cognitive resources to master. The moral disengagement of gamers while playing violent video games becomes the basis for de-dignification. Findings reveal that circumventing de-dignification requires gamers to engage in several tactics, including playing behind closed doors, consciously hiding the gamer identity, rationalizing behavior by idolizing professionals, bragging about achievements within the game, and so on. 
Theoretically, the study contributes to the dignity and social identity literature by focusing on stereotyping and stigmatization. From a policy perspective, improving the legitimacy of gaming is expected to improve the social standing of gamers and professionals. For practitioners, it is important that proper channels of promotion and communication are used to educate non-gamers so that the stereotypes fade away.

Keywords: dignity, social identity, stereotyping, video games

Procedia PDF Downloads 100
946 Emerging Technologies for Learning: In Need of a Pro-Active Educational Strategy

Authors: Pieter De Vries, Renate Klaassen, Maria Ioannides

Abstract:

This paper reports exploratory research into the use of emerging technologies for teaching and learning in higher engineering education. The assumption is that these technologies and applications, which are not yet widely adopted, will help to improve education and thereby address the skills mismatch troubling industry. Technologies such as 3D printing, the Internet of Things, Virtual Reality, and others are in a dynamic state of development, which makes it difficult to grasp their value for education. Also, the instruments of current educational research seem inappropriate for assessing the value of such technologies. This exploratory research aims to foster an approach that better deals with this new complexity. The need is urgent, because these technologies will soon be dominantly present in all aspects of life, including education. The methodology used in this research comprised an inventory of emerging technologies and tools that potentially give way to innovation and are used, or about to be used, in technical universities. The inventory was based on a literature review, a review of reports and web resources such as blogs, and a series of interviews with stakeholders in engineering education and at representative industries. In addition, a number of small experiments were executed with the aim of analyzing the requirements for using, in this case, Virtual Reality and the Internet of Things, to better understand the opportunities and limitations in the day-to-day learning environment. The major findings indicate that it is rather difficult to decide on the value of these technologies for education due to their dynamic state of change, and therefore unpredictability, and the lack of a coherent policy at the institutions. 
Most decisions are made by individual teachers, who in their micro-environment are not equipped to select, test, and ultimately decide on the use of these technologies. Most experience is being gained in industry, where the skills to handle these technologies are in high demand. Industry, though, is worried about the inclination and the capability of education to help bridge the skills gap related to the emergence of new technologies. Due to the complexity, the diversity, and the speed of development and decay, education is challenged to develop an approach that can make these technologies work in an integrated fashion. For education to fully profit from the opportunities these technologies offer, it is imperative to develop a pro-active strategy and a sustainable approach to frame the development of emerging technologies.

Keywords: emerging technologies, internet of things, pro-active strategy, virtual reality

Procedia PDF Downloads 191
945 Nanoparticle Exposure Levels in Indoor and Outdoor Demolition Sites

Authors: Aniruddha Mitra, Abbas Rashidi, Shane Lewis, Jefferson Doehling, Alexis Pawlak, Jacob Schwartz, Imaobong Ekpo, Atin Adhikari

Abstract:

Working or living close to demolition sites can increase risks of dust-related health problems. Demolition of concrete buildings may produce crystalline silica dust, which can be associated with a broad range of respiratory diseases, including silicosis and lung cancers. Previous studies demonstrated significant associations between demolition dust exposure and an increase in the incidence of mesothelioma or asbestos cancer. Dust is a generic term used for minute solid particles, typically <500 µm in diameter. Dust particles in demolition sites vary across a wide range of sizes. Larger particles tend to settle out of the air, while smaller and lighter solid particles remain dispersed in the air for a long period and pose sustained exposure risks. Submicron ultrafine particles and nanoparticles are respirable deeper into the alveoli, beyond the body's natural respiratory cleaning mechanisms such as cilia and mucous membranes, and are likely to be retained in the lower airways. To our knowledge, how various demolition tasks release nanoparticles is largely unknown, and previous studies mostly focused on coarse dust, PM2.5, and PM10. The general belief is that the dust generated during demolition tasks consists mostly of large particles formed through crushing, grinding, or sawing of various concrete and wooden structures. Therefore, little consideration has been given to the generated submicron ultrafine and nanoparticles and their exposure levels. These data are, however, critically important because recent laboratory studies have demonstrated the cytotoxicity of nanoparticles on lung epithelial cells. The above-described knowledge gaps were addressed in this study using a newly developed nanoparticle monitor at two adjacent indoor and outdoor building demolition sites in southern Georgia. 
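Size-resolved number concentrations of the kind reported below are often aggregated to estimate the total particle load and the ultrafine (<100 nm) share. A minimal sketch; the bin concentrations are hypothetical values for illustration, not the measured ranges:

```python
# Hypothetical size-binned particle number concentrations (particles/cm^3),
# loosely inspired by NanoScan SMPS size channels; values are illustrative only
bins = {11.5: 560, 15.4: 930, 20.5: 525, 27.4: 2000, 36.5: 9500,
        48.7: 21000, 64.9: 30000, 86.6: 33000, 115.5: 25000, 154.0: 12000}

total = sum(bins.values())                                    # total number concentration
ultrafine = sum(c for size, c in bins.items() if size < 100)  # ultrafine (< 100 nm) portion
print(total, round(ultrafine / total, 3))
```

A high ultrafine fraction would flag the deeper-penetrating particle sizes discussed above as the dominant exposure.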
Nanoparticle levels were measured (n = 10) by a TSI NanoScan SMPS Model 3910 at four different distances (5, 10, 15, and 30 m) from the work location as well as in control sites. Temperature and relative humidity levels were recorded. Indoor demolition works included acetylene torch use, masonry drilling, ceiling panel removal, and other miscellaneous tasks, whereas outdoor demolition works included acetylene torch use and a skid-steer loader to remove an HVAC system. Concentration ranges of nanoparticles of 13 particle sizes at the indoor demolition site were: 11.5 nm: 63 – 1054/cm³; 15.4 nm: 170 – 1690/cm³; 20.5 nm: 321 – 730/cm³; 27.4 nm: 740 – 3255/cm³; 36.5 nm: 1,220 – 17,828/cm³; 48.7 nm: 1,993 – 40,465/cm³; 64.9 nm: 2,848 – 58,910/cm³; 86.6 nm: 3,722 – 62,040/cm³; 115.5 nm: 3,732 – 46,786/cm³; 154 nm: 3,022 – 21,506/cm³; 205.4 nm: 12 – 15,482/cm³; 273.8 nm:

Keywords: demolition dust, industrial hygiene, aerosol, occupational exposure

Procedia PDF Downloads 423
944 Application of GIS Techniques for Analysing Urban Built-Up Growth of Class-I Indian Cities: A Case Study of Surat

Authors: Purba Biswas, Priyanka Dey

Abstract:

Worldwide, rapid urbanisation has accelerated city expansion in both developed and developing nations. This unprecedented urbanisation trend, driven by increasing population and economic growth, has created challenges for decision-makers in city planning and urban management. Metropolitan cities, class-I towns, and major urban centres undergo a continuous process of evolution due to interaction between socio-cultural and economic attributes. This constant evolution leads to urban expansion in all directions. Understanding the patterns and dynamics of urban built-up growth is crucial for policymakers, urban planners, and researchers, as it aids in resource management, decision-making, and the development of sustainable strategies to address the complexities associated with rapid urbanisation. Identifying spatio-temporal patterns of urban growth has emerged as a crucial challenge in monitoring and assessing present and future trends in urban development. Analysing urban growth patterns and tracking changes in land use is an important aspect of urban studies. This study analyses spatio-temporal urban transformations and land-use and land-cover changes using remote sensing and GIS techniques. Built-up growth analysis has been done for the city of Surat as a case example, using the Normalized Difference Built-up Index (NDBI) together with GIS models of the Built-up Urban Density Index and Shannon's Entropy Index to identify trends and the geographical direction of transformation from 2005 to 2020. Surat is one of the fastest-growing urban centres in both the state and the nation, ranking as the 4th fastest-growing city globally. This study analyses the dynamics of urban built-up area transformations both zone-wise and by geographical direction, for which trend, rate, and magnitude were calculated over the 15-year period. 
This study also highlights the need for analysing and monitoring the urban growth pattern of class-I cities in India using spatio-temporal and quantitative techniques like GIS for improved urban management.
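As a rough illustration of the two indices named in this abstract, the sketch below computes a per-pixel NDBI from SWIR and NIR reflectances and a relative Shannon entropy over zone-wise built-up shares; all reflectance and area values are hypothetical:

```python
import math

def ndbi(swir, nir):
    """Normalized Difference Built-up Index for a single pixel's reflectances."""
    return (swir - nir) / (swir + nir)

def shannon_entropy(builtup_by_zone):
    """Relative Shannon entropy of built-up shares across n zones (0 = compact, 1 = dispersed)."""
    total = sum(builtup_by_zone)
    shares = [x / total for x in builtup_by_zone if x > 0]
    return -sum(p * math.log(p) for p in shares) / math.log(len(builtup_by_zone))

# Hypothetical SWIR/NIR reflectances and zone-wise built-up areas (km^2), illustration only
print(round(ndbi(0.30, 0.20), 2))                      # positive NDBI suggests built-up cover
print(round(shannon_entropy([12, 8, 30, 10, 40]), 3))  # closer to 1 means more dispersed growth
```

In sprawl studies, entropy values approaching 1 across observation years are typically read as increasingly dispersed (sprawling) built-up growth.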

Keywords: urban expansion, built-up, geographic information system, remote sensing, Shannon’s entropy

Procedia PDF Downloads 72
943 A Simulated Evaluation of Model Predictive Control

Authors: Ahmed AlNouss, Salim Ahmed

Abstract:

Process control refers to the techniques used to control the variables in a process in order to maintain them at their desired values. Advanced process control (APC) is a broad term within the domain of control referring to different kinds of process control and control-related tools, for example, model predictive control (MPC), statistical process control (SPC), fault detection and classification (FDC), and performance assessment. APC is often used for solving multivariable control problems, and model predictive control (MPC) is one of only a few advanced control methods used successfully in industrial control applications. Advanced control is expected to bring many benefits to plant operation; however, the extent of the benefits is plant-specific, and the application requires a large investment. This calls for an analysis of the expected benefits before the implementation of the control. In a real plant, simulation studies are carried out along with some experimentation to determine the improvement in the performance of the plant due to advanced control. In this research, such an exercise is undertaken to assess the needs of APC application. The main objectives of the paper are as follows: (1) to apply MPC to a number of simulation setups in order to demonstrate the need for MPC by comparing its performance with that of proportional-integral-derivative (PID) controllers; (2) to study the effect of controller parameters on control performance; (3) to develop appropriate performance indices (PI) to compare the performance of different controllers, and to develop a novel way of presenting a controller tuning map. These objectives were achieved by applying a PID controller and a special type of MPC, namely dynamic matrix control (DMC), to the multi-tank process simulated in Loop-Pro. The controller performance was then evaluated by changing the controller parameters. 
This performance evaluation was based on indices related to the difference between the set point and the process variable, in order to compare the two controllers. The same principle was applied to the continuous stirred tank heater (CSTH) and continuous stirred tank reactor (CSTR) processes simulated in Matlab, for which custom programs were written to evaluate the performance of the PID and MPC controllers. Finally, these performance indices, along with their controller parameters, were plotted using SigmaPlot. As a result, the improvement in the performance of the control loops was quantified using relevant indices to justify the need for and importance of advanced process control. It was also shown that, by using appropriate indices, a predictive controller can improve the performance of the control loop significantly.
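Error-based indices such as the integral of absolute error (IAE) and the integral of squared error (ISE) are standard ways to score a controller against a set point. A minimal sketch, simulating a hypothetical first-order process under a discrete PID loop; the process model and tunings are illustrative, not the paper's multi-tank or CSTH/CSTR models:

```python
def simulate_pid(kp, ki, kd, setpoint=1.0, dt=0.1, steps=300):
    """Discrete PID loop on a first-order process tau*dy/dt = -y + u; returns (IAE, ISE)."""
    tau = 2.0                      # hypothetical process time constant
    y, integral = 0.0, 0.0
    prev_err = setpoint - y        # so the first derivative term is zero
    iae = ise = 0.0
    for _ in range(steps):
        err = setpoint - y
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv   # PID control law
        prev_err = err
        y += dt * (-y + u) / tau                    # Euler step of the process
        iae += abs(err) * dt                        # integral of absolute error
        ise += err ** 2 * dt                        # integral of squared error
    return iae, ise

good = simulate_pid(2.0, 0.5, 0.1)   # tighter (hypothetical) tuning
poor = simulate_pid(0.2, 0.1, 0.0)   # sluggish tuning
print(good, poor)                    # lower indices indicate better set-point tracking
```

Sweeping kp and ki over a grid and plotting the resulting index surface is one simple way to build the kind of controller tuning map the paper describes.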

Keywords: advanced process control (APC), control loop, model predictive control (MPC), proportional integral derivatives (PID), performance indices (PI)

Procedia PDF Downloads 407
942 Biogas Production Using Water Hyacinth as a Means of Waste Management Control at Hartbeespoort Dam, South Africa

Authors: Trevor Malambo Simbayi, Diane Hildebrandt, Tonderayi Matambo

Abstract:

The rapid growth of population in recent decades has resulted in an increased need for energy to meet human activities. As energy demands increase, so does the need for sources of energy other than fossil fuels. Furthermore, environmental concerns, such as global warming due to the use of fossil fuels, depleting fossil fuel reserves, and the rising cost of oil, have contributed to an increased interest in renewable sources of energy. Biogas is a renewable source of energy produced through the process of anaerobic digestion (AD), and it offers a two-fold solution: it provides an environmentally friendly source of energy, and its production helps to reduce the amount of organic waste taken to landfills. This research seeks to address the waste management problem caused by an aquatic weed called water hyacinth (Eichhornia crassipes) at the Hartbeespoort (Harties) Dam in the North West Province of South Africa, through biogas production from the weed. Water hyacinth is a category 1 invasive species and is deemed to be the most problematic aquatic weed. This weed is said to double its size in the space of five days. Eutrophication in the Hartbeespoort Dam has manifested itself through excessive algal bloom and water hyacinth infestation. A large amount of biomass from water hyacinth and algae is generated per annum from the two-hundred-hectare surface area of the dam exposed to the sun. This biomass creates a waste management problem. Water hyacinth, when in full bloom, can cover nearly half of the surface of Hartbeespoort Dam. The presence of water hyacinth in the dam has caused economic and environmental problems. Economic activities such as fishing, boating, and recreation are hampered by the water hyacinth's prolific growth. 
This research proposes the use of water hyacinth as a feedstock or substrate for biogas production in order to provide an economical and environmentally friendly means of waste management for the communities living around the Hartbeespoort Dam. To achieve this objective, water hyacinth will be collected from the dam and mechanically pretreated before anaerobic digestion. Pretreatment is required for lignocellulosic materials like water hyacinth because such materials are recalcitrant solids. Cow manure will be employed as the source of the microorganisms needed for biogas production. Once the water hyacinth and the cow dung are mixed, they will be placed in laboratory anaerobic reactors. Biogas production will be monitored daily through the downward displacement of water. The substrates (cow manure and water hyacinth) will be characterized to determine their nitrogen, sulfur, carbon, and hydrogen content, as well as total solids (TS) and volatile solids (VS). Liquid samples from the anaerobic digesters will be collected and analyzed for volatile fatty acid (VFA) composition by means of gas chromatography.
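TS and VS characterization of this kind typically follows gravimetric drying and ignition steps (drying at 105 °C, ignition at around 550 °C). A minimal sketch of the arithmetic, with entirely hypothetical sample weights:

```python
def total_solids(wet_g, dry_105c_g):
    """Total solids (TS) fraction: residue after drying at 105 degrees C over wet weight."""
    return dry_105c_g / wet_g

def volatile_solids(dry_105c_g, ash_550c_g):
    """Volatile solids (VS) fraction of TS: mass lost on ignition at 550 degrees C."""
    return (dry_105c_g - ash_550c_g) / dry_105c_g

# Hypothetical water hyacinth sample: 100 g wet, 8 g after drying, 1.6 g of ash
print(total_solids(100, 8), volatile_solids(8, 1.6))  # TS = 0.08, VS = 0.8 of TS
```

The VS fraction is the usual proxy for the biodegradable organic matter available to the anaerobic digester.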

Keywords: anaerobic digestion, biogas, waste management, water hyacinth

Procedia PDF Downloads 196
941 Critical Evaluation of the Transformative Potential of Artificial Intelligence in Law: A Focus on the Judicial System

Authors: Abisha Isaac Mohanlal

Abstract:

Amidst all the suspicion and cynicism raised by the legal fraternity, Artificial Intelligence has found its way into the legal system and has revolutionized conventional forms of legal services delivery. Be it legal argumentation and research or the resolution of complex legal disputes, artificial intelligence has crept into all legs of modern-day legal services. Its impact has been largely felt by way of big data, legal expert systems, prediction tools, e-lawyering, automated mediation, etc., and lawyers around the world are forced to upgrade themselves and their firms to stay in line with the growth of technology in law. Researchers predict that the future of legal services will belong to artificial intelligence and that the age of human lawyers will soon fade. But as far as the Judiciary is concerned, even in developed countries, the system has not fully drifted away from the orthodoxy of preferring Natural Intelligence over Artificial Intelligence. Since judicial decision-making involves many unstructured and rather unprecedented situations that have no single correct answer, and looming questions of legal interpretation arise in most cases, discretion and Emotional Intelligence play an unavoidable role. Added to that, there are several ethical, moral, and policy issues to be confronted before permitting the intrusion of Artificial Intelligence into the judicial system. As of today, the human judge is the unrivalled master of most of the judicial systems around the globe. Yet, scientists of Artificial Intelligence claim that robot judges can replace human judges irrespective of how daunting the complexity of the issues is and how sophisticated the required cognitive competence is. 
They go on to contend that even if the system is too rigid to allow robot judges to substitute human judges in the near future, Artificial Intelligence may still aid in other judicial tasks such as drafting judicial documents, intelligent document assembly, case retrieval, etc., and also promote overall flexibility, efficiency, and accuracy in the disposal of cases. By deconstructing the major challenges that Artificial Intelligence has to overcome in order to successfully invade the human-dominated judicial sphere, and critically evaluating the potential differences it would make in the system of justice delivery, the author argues that the penetration of Artificial Intelligence into the Judiciary could surely be enhancive and reparative, if not fully transformative.

Keywords: artificial intelligence, judicial decision making, judicial systems, legal services delivery

Procedia PDF Downloads 224
940 The Spatial Classification of China near Sea for Marine Biodiversity Conservation Based on Bio-Geographical Factors

Authors: Huang Hao, Li Weiwen

Abstract:

Global biodiversity continues to decline as a result of global climate change and various human activities, such as habitat destruction, pollution, the introduction of alien species, and overfishing. Although marine organisms worldwide are connected to a greater or lesser degree, clear geographical boundaries facilitate the assessment and management of different biogeographical zones. Area-based management tools (ABMTs) are therefore considered the most effective means for the conservation and sustainable use of marine biodiversity. On a large scale, geographical gaps (or barriers) are the main factors influencing the connectivity, diffusion, and ecological and evolutionary processes of marine organisms, which results in different distribution patterns. On a small scale, these factors include geographical location, geology, geomorphology, water depth, currents, temperature, salinity, etc. Therefore, the analysis of geographic and environmental factors is of great significance in the study of biodiversity characteristics. This paper summarizes the marine spatial classifications and ABMTs used in coastal areas, the open ocean, and the deep sea. It analyzes the principles and methods of marine spatial classification based on biogeography-related factors, takes the China Near Sea (CNS) area as a case study, selects key biogeography-related factors, and carries out marine spatial classification at the biological region, ecological region, and biogeographical scales. The research shows that the CNS is divided into 5 biological regions by climatic and geographical differences: the Yellow Sea, the Bohai Sea, the East China Sea, the Taiwan Straits, and the South China Sea.
The bioregions are then divided into 12 ecological regions according to typical ecological and administrative factors, and finally the eco-regions are divided into 98 biogeographical units according to benthic substrate types, depth, coastal types, water temperature, and salinity; to preserve the integrity of biological and ecological processes, the area of each biogeographical unit is not less than 1,000 km². This research is of great use for coastal management and biodiversity conservation by local and central government, and provides important scientific support for the future spatial planning and management of coastal waters and the sustainable use of marine biodiversity.

Keywords: spatial classification, marine biodiversity, bio-geographical, conservation

Procedia PDF Downloads 152
939 Combined Pneumomediastinum and Pneumothorax Due to Hyperemesis Gravidarum

Authors: Fayez Hanna, Viet Tran

Abstract:

A 20-year-old primigravida, 6 weeks pregnant with an unremarkable past history, presented to the emergency department at the Royal Hobart Hospital, Tasmania, Australia, with hyperemesis gravidarum associated with dehydration and complicated by hematemesis and persistent chest pain. Laboratory investigations revealed FBC: WBC 23.9; unremarkable U&E, LFT, and lipase; and a VBG showing pH 7.4, pCO2 36.7, cK+ 3.2, cNa+ 142. After explaining the risks and benefits of performing radiographic investigations during pregnancy, and considering the patient's plan for termination of the pregnancy as she was not ready for motherhood, a shared decision was made, with consent, to perform a chest X-ray (CXR) to look for pneumoperitoneum suggesting a perforated viscus that might have caused the hematemesis. However, the CXR showed pneumomediastinum but no evidence of pneumoperitoneum or pneumothorax. Consequently, a decision was made to proceed with CT oesophagography, with imaging before and after oral contrast administration, to identify a potential oesophageal tear, since one could not be excluded on the plain CXR. The CT oesophagography found no leak of the administered oral contrast; thus, no oesophageal tear could be confirmed, although a Mallory-Weiss tear (lower oesophageal tear) could not be excluded. Further, the CT oesophagography showed an extensive pneumomediastinum that could not be confirmed to be pulmonary in origin, noting the presence of bilateral pulmonary interstitial emphysema and a small pneumothorax at the apex of the right lung. The patient was admitted to the Emergency Department Inpatient Unit for monitoring, supportive therapy, and symptomatic management. Her hyperemesis was well controlled with ondansetron 8mg IV, metoclopramide 10mg IV, doxylamine 25mg PO, pyridoxine 25mg PO, and esomeprazole 40mg IV; oxycodone 5mg PO was given for pain control, along with 2 litres of IV fluid.
The patient was stabilized after 24 hours and discharged home on ondansetron 8mg every 8 hours, with a plan for medical termination of pregnancy. Three weeks later, the patient re-presented with nausea and vomiting complicated by frank hematemesis. Her observation chart showed HR 117; other vital signs were normal. Pathology showed WBC 14.3 with normal U&E and Hb. The patient was managed in the Emergency Department with the same regimen as before and was discharged home on it. Five days later, she presented again with nausea, vomiting, and hematemesis and was admitted under obstetrics and gynaecology for stabilization, then discharged home with a plan for surgical termination of pregnancy after 3 days, rather than the previously planned medical termination, to avoid extension of a potential oesophageal tear. The surgical termination and follow-up period were uneventful. The case is considered rare, as pneumomediastinum is a very rare complication of hyperemesis gravidarum in which vomiting-induced barotrauma leads to a ruptured oesophagus and an air leak into the mediastinum; however, there was no oesophageal rupture in our case. The combination of pneumothorax and pneumomediastinum without an oesophageal tear has been reported only 8 times in the literature, and none of those cases was due to hyperemesis gravidarum.

Keywords: pneumothorax, pneumomediastinum, hyperemesis gravidarum, pneumopericardium

Procedia PDF Downloads 102
938 Update on Epithelial Ovarian Cancer (EOC), Types, Origin, Molecular Pathogenesis, and Biomarkers

Authors: Salina Yahya Saddick

Abstract:

Ovarian cancer remains the most lethal gynecological malignancy due to the lack of highly sensitive and specific screening tools for the detection of early-stage disease. The ovarian surface epithelium (OSE) provides the progenitor cells for 90% of human ovarian cancers. Recent morphologic, immunohistochemical, and molecular genetic studies have led to the development of a new paradigm for the pathogenesis and origin of epithelial ovarian cancer (EOC) based on a dualistic model of carcinogenesis that divides EOC into two broad categories, designated Types I and II, which are characterized by specific mutations, including KRAS, BRAF, ERBB2, CTNNB1, PTEN, PIK3CA, ARID1A, and PPP2R1A, which target specific cell signaling pathways. Type I tumors are relatively genetically stable and typically display a variety of somatic sequence mutations that include KRAS, BRAF, PTEN, PIK3CA, CTNNB1 (the gene encoding beta-catenin), ARID1A, and PPP2R1A, but very rarely TP53. The cancer stem cell (CSC) hypothesis postulates that the tumorigenic potential of CSCs is confined to a very small subset of tumor cells and is defined by their ability to self-renew and differentiate, leading to the formation of a tumor mass. Potential protein biomarkers and miRNAs are promising, as miRNAs are remarkably stable, allowing isolation and analysis from tissues and from blood, in which they can be found as free circulating nucleic acids and in mononuclear cells. Recently, genomic analyses have identified biomarkers and potential therapeutic targets for ovarian cancer, namely FGF18, which plays an active role in controlling the migration, invasion, and tumorigenicity of ovarian cancer cells through NF-κB activation, which increases the production of oncogenic cytokines and chemokines. This review summarizes updated information on epithelial ovarian cancers and points to the most recent ongoing research.

Keywords: epithelial ovarian cancers, somatic sequence mutations, cancer stem cell (CSC), potential protein, biomarker, genomic analysis, FGF18 biomarker

Procedia PDF Downloads 380
937 Aromatic Medicinal Plant Classification Using Deep Learning

Authors: Tsega Asresa Mengistu, Getahun Tigistu

Abstract:

Computer vision is a subfield of artificial intelligence that allows computers and systems to retrieve meaning from digital images. It is applied in various fields such as self-driving cars, video surveillance, agriculture, quality control, health care, construction, the military, and everyday life. Aromatic and medicinal plants are botanical raw materials used in cosmetics, medicines, health foods, and other natural health products for therapeutic, aromatic, and culinary purposes. Herbal industries depend on these special plants. These plants and their products not only serve as a valuable source of income for farmers and entrepreneurs but also earn valuable foreign exchange as exported industrial raw materials. There is a lack of technologies for the classification and identification of aromatic and medicinal plants in Ethiopia. Manual identification of plants is a tedious, time-consuming, labor-intensive, and lengthy process. For farmers, industry personnel, academics, and pharmacists, it is still difficult to identify the parts and usage of plants before ingredient extraction. To solve this problem, the researcher uses a deep learning approach for the efficient identification of aromatic and medicinal plants by means of a convolutional neural network. The objective of the proposed study is to identify aromatic and medicinal plant parts and usages using computer vision technology. Therefore, this research initiated a model for the automatic classification of aromatic and medicinal plants by exploring computer vision technology. Morphological characteristics are still the most important tools for the identification of plants. Leaves are the most widely used parts of plants, besides the root, flower, fruit, latex, and bark. The study was conducted on aromatic and medicinal plants available at the Ethiopian Institute of Agricultural Research center. An experimental research design is proposed for this study.
The experiments are conducted using convolutional neural networks and transfer learning. The researcher employs sigmoid activation in the last layer and rectified linear units (ReLU) in the hidden layers. Finally, the researcher obtained a classification accuracy of 66.4% with a plain convolutional neural network, 67.3% with MobileNet, and 64% with the Visual Geometry Group (VGG) network.
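As a rough illustration of the activation choices described above (ReLU in the hidden layers feeding a sigmoid output layer), the forward pass can be sketched in plain Python. The layer sizes, random weights, and the three plant classes below are placeholder assumptions for illustration, not details taken from the study:

```python
import math
import random

def relu(v):
    # rectified linear units, as used in the hidden layers
    return [max(0.0, x) for x in v]

def sigmoid(v):
    # sigmoid activation, as used in the last (output) layer
    return [1.0 / (1.0 + math.exp(-x)) for x in v]

def dense(v, weights, bias):
    # fully connected layer: weights has shape out_dim x in_dim
    return [sum(w * x for w, x in zip(row, v)) + b
            for row, b in zip(weights, bias)]

random.seed(0)
in_dim, hidden, classes = 8, 16, 3   # 3 hypothetical plant classes
w1 = [[random.gauss(0, 1) for _ in range(in_dim)] for _ in range(hidden)]
b1 = [0.0] * hidden
w2 = [[random.gauss(0, 1) for _ in range(hidden)] for _ in range(classes)]
b2 = [0.0] * classes

# a flattened leaf-image feature vector (random stand-in)
features = [random.gauss(0, 1) for _ in range(in_dim)]
scores = sigmoid(dense(relu(dense(features, w1, b1)), w2, b2))
print(len(scores), all(0.0 < s < 1.0 for s in scores))  # prints: 3 True
```

The sigmoid output keeps every class score strictly between 0 and 1, which is the property the study relies on for per-class classification scores; a real implementation would of course learn the weights from the leaf images rather than draw them at random.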

Keywords: aromatic and medicinal plants, computer vision, deep convolutional neural network

Procedia PDF Downloads 438
936 Courtesy to Things and Sense of Unity with the Things: Psychological Evaluation Based on the Teaching of Buddha

Authors: H. Kamide, T. Arai

Abstract:

This study aims to clarify the factors of courtesy to things and the effect of courtesy on a sense of unity with things, based on the teaching of Buddha. The teaching of Buddha explains that when one deals with things carefully and in a courteous manner, the border between the self and the external world disappears and the two are united. This is a Buddhist way of explaining the connections among all existences, and in the modern world, it is also a lesson that humans should not let things go to waste and should treat them politely. In order to reveal concrete ways to practice courtesy to things, we clarify the factors of courtesy (Study 1) and examine the effect of courtesy on the sense of unity with things (Study 2). In Study 1, 100 Japanese participants (mean age=54.39, SD=15.04, 50% female) described freely what courtesy means toward the things they use daily. These descriptions were classified, and 25 items were created asking about the degree of courtesy to things. A different sample of 678 Japanese participants (mean age=44.72, SD=13.14, 50% female) then answered the 25 items on a 7-point scale about tools they use daily. An exploratory factor analysis revealed two factors. The first factor (α=.97) includes 'I deal with the thing carefully' and 'I clean up the thing after use'. This factor reflects how gently people care about things. The second factor (α=.96) includes 'A sense of self-control has come to me through using the thing' and 'I have got inner strength by taking care of the thing'. The second factor reflects how people learn by dealing with things carefully. In Study 2, 200 Japanese participants (mean age=49.39, SD=11.07, 50% female) answered questions about courtesy to the things they use daily and rated their sense of unity with those things using the Inclusion of Other in the Self scale, replacing 'Other' with 'Your thing'. An ANOVA was conducted to examine the effect of courtesy (high/low levels of the two factors) on the sense-of-unity score. The results showed a main effect of care level.
People with a high level of care have a stronger sense of unity with things. A tendency toward an interaction effect was also found: the condition with a high level of care and a high level of learning enhanced the sense of unity more than the condition with a low level of care and a high level of learning. Study 1 found that courtesy is composed of care and learning. That is, courtesy is not only active care for things but also learning the meaning of things and growing personally with them. Study 2 revealed that people with a high level of care feel a stronger sense of unity, as do people with both a high level of care and a high level of learning. The findings support the idea of the teaching of Buddha. In the future, it is necessary to examine the combined effect of care and learning.
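The reliability coefficients reported for the two factors (α=.97, α=.96) are Cronbach's alpha, which can be computed directly from an item-by-respondent score matrix: α = k/(k−1) · (1 − Σ item variances / variance of total scores). A minimal stdlib-only sketch, using toy 7-point responses as purely hypothetical data rather than the study's actual responses:

```python
def variance(xs):
    # sample variance (n - 1 denominator)
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    # items: one list of respondent scores per questionnaire item
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    item_var = sum(variance(item) for item in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# toy data: 4 respondents answering 3 items on a 7-point scale (hypothetical)
items = [[5, 6, 4, 7], [5, 7, 4, 6], [6, 6, 3, 7]]
alpha = cronbach_alpha(items)
print(round(alpha, 3))  # prints: 0.918
```

Values near the study's .96 and .97 indicate that the 25 items within each factor vary together almost perfectly across respondents, which is what justifies averaging them into the care and learning scores used in Study 2.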

Keywords: courtesy, things, sense of unity, the teaching of Buddha

Procedia PDF Downloads 150
935 Accountability of Artificial Intelligence: An Analysis Using Edgar Morin’s Complex Thought

Authors: Sylvie Michel, Sylvie Gerbaix, Marc Bidan

Abstract:

Artificial intelligence (AI) can be held accountable for its detrimental impacts. This question gains heightened relevance given AI's pervasive reach across various domains, magnifying its power and potential. The expanding influence of AI raises fundamental ethical inquiries, primarily centering on biases, responsibility, and transparency. These encompass discriminatory biases arising from algorithmic criteria or data, accidents attributed to autonomous vehicles or other systems, and the imperative of transparent decision-making. This article aims to stimulate reflection on AI accountability, denoting the necessity to elucidate the effects AI generates. Accountability comprises two integral aspects: adherence to legal and ethical standards, and the imperative to elucidate the underlying operational rationale. The objective is to initiate a reflection on the obstacles to this "accountability", facing the challenges posed by the complexity of artificial intelligence systems and their effects. The article then proposes to mobilize Edgar Morin's complex thought to encompass and face the challenges of this complexity. The first contribution is to point out the challenges posed by the complexity of AI, with accountability fractured among a myriad of human and non-human actors, such as software and equipment, which ultimately contribute to the decisions taken and are multiplied in the case of AI. Accountability faces three challenges resulting from the complexity of the ethical issues combined with the complexity of AI. The challenge of the non-neutrality of algorithmic systems, as actors that are not ethically neutral, is put forward by a revealing-ethics approach that calls for assigning responsibilities to these systems. The challenge of the dilution of responsibility is induced by the multiplicity of, and distance between, the actors.
Thus, a dilution of responsibility is induced by a split in decision-making between developers, who feel they fulfill their duty by strictly respecting the requests they receive, and management, which does not consider itself responsible for technology-related flaws. Accountability is also confronted with the challenge of the transparency of complex and scalable algorithmic systems: non-human actors self-learning via big data. A second contribution involves leveraging E. Morin's principles, providing a framework for grasping the multifaceted ethical dilemmas and subsequently paving the way for establishing accountability in AI. When addressing the ethical challenge of biases, the "hologrammatic" principle underscores the imperative of acknowledging the ethical non-neutrality of algorithmic systems, inherently imbued with the values and biases of their creators and of society. The "dialogic" principle advocates for the responsible consideration of ethical dilemmas, encouraging the integration of complementary and contradictory elements in solutions from the very inception of the design phase. The principle of organizational recursiveness, akin to the "transparency" of the system, promotes a systemic analysis to account for induced effects and guides the incorporation of modifications into the system to rectify its drifts. In conclusion, this contribution serves as an inception for contemplating the accountability of "artificial intelligence" systems despite the evident ethical implications and potential deviations. Edgar Morin's principles, providing a lens through which to contemplate this complexity, offer valuable perspectives for addressing these challenges concerning accountability.

Keywords: accountability, artificial intelligence, complexity, ethics, explainability, transparency, Edgar Morin

Procedia PDF Downloads 63