Search results for: Face Detection
Needs of Omani Children in First Grade during Their Transition from Kindergarten to Primary School: An Ethnographic Study
Authors: Zainab Algharibi, Julie McAdam, Catherine Fagan
Abstract:
The purpose of this paper is to shed light on how Omani children in the first grade experience their needs during their transition to primary school. Theoretically, the paper builds on two perspectives: Dewey's concept of continuity of experience, and the notion of boundary objects within cultural-historical activity theory (CHAT), rooted in Vygotsky's work. The methodology rests on the crucial role of children's agency, an important educational tool for enhancing children's participation in the learning process and developing their ability to face the various issues in their lives. Data were obtained from 45 first-grade children in four different primary schools using drawing and visual narrative activities, supplemented by researcher observations during the first weeks of the academic year. As the study dealt with children, all necessary ethical guidelines were followed. The paper is original in that it addresses children's transition from kindergarten to primary school not only in Oman but in the Arab region more broadly; it is therefore expected to fill an important gap in this field and to open a door for later researchers. The drawings and visual narratives were analysed using a social semiotics approach in two phases. The first reads the surface message, the "denotation"; the second goes in depth, via the symbolism obtained from the children as they talked and drew letters and signs, known as the "signified". A video was recorded of each child talking about their drawing and expressing themselves. The data were then organised and classified in a cross-data network. The researcher observations were analysed according to grounded theory.
This analysis compares the data collected from observations with the data previously coded from the drawing and visual narrative activities, in order to identify similarities and differences, to clarify the meaning of the resulting categories, and to identify sub-categories and the possible links between them; this constitutes a form of triangulation in data collection. The study produced a set of findings, the most salient being that children's greatest interest lies in their social and psychological needs, such as friends, their teacher, and play. Their biggest fears are a new place, a new teacher, and not having friends, while they showed less concern for educational knowledge and skills.
Keywords: Children’s academic needs, children’s social needs, children’s transition, primary school.
The Difficulties Witnessed by People with Intellectual Disability in Transition to Work in Saudi Arabia
Authors: Adel S. Alanazi
Abstract:
The transition of a student with a disability from school to work is the most crucial phase in moving from adolescence into early adulthood. In this process, young individuals face various difficulties and challenges in accomplishing the next venture of life successfully. This paper examines the challenges encountered by individuals with intellectual disabilities in the transition to work in Saudi Arabia. The study follows a qualitative methodology, with an interpretivist philosophy, an inductive approach and an exploratory research design. Data were gathered through semi-structured interviews, whose findings were analysed thematically. Interviews were conducted with parents of persons with intellectual disabilities; officials, supervisors and specialists of two vocational rehabilitation centres providing training to students with intellectual disabilities; and directors of companies and websites involved in hiring such individuals. The total number of respondents was 15, selected through purposive sampling, a non-probability method that draws respondents from a known population and allows flexibility and suitability in selecting participants for the study. The findings revealed that a lack of awareness among parents regarding the rights of their children with intellectual disabilities, a lack of adequate communication and coordination between the various entities, and concerns regarding training and subsequent employment are the key difficulties experienced by individuals with intellectual disabilities.
Bookbinding, carpentry, computing, agriculture, electricity and telephone exchange operation were identified as the key training programmes. The findings also revealed that information technology and the media play a significant role in smoothing the transition to employment for individuals with intellectual disabilities. Furthermore, religious and cultural attitudes were found to restrict people with such disabilities from taking advantage of job opportunities. These findings should prove highly beneficial to Saudi Arabian schools and rehabilitation centres for individuals with intellectual disability in helping them overcome the problems encountered during the transition to work.
Keywords: Intellectual disability, transition services, rehabilitation centre.
Image-Based UAV Vertical Distance and Velocity Estimation Algorithm during the Vertical Landing Phase Using Low-Resolution Images
Authors: Seyed-Yaser Nabavi-Chashmi, Davood Asadi, Karim Ahmadi, Eren Demir
Abstract:
The landing phase of a UAV is critical, as its many uncertainties can easily lead to a hard landing or even a crash. This paper studies the estimation of relative distance and velocity to the ground, one of the most important processes during the landing phase. Accurate measurement sensors are an alternative, but they can be very expensive (e.g., LIDAR) or have a limited operational range (e.g., ultrasonic sensors). Additionally, absolute positioning systems such as GPS or an IMU cannot provide the distance to the ground independently. The focus of this paper is to determine whether the relative distance and velocity between the UAV and the ground can be measured during the landing phase using only low-resolution images taken by a monocular camera. The Lucas-Kanade feature detection technique is employed to extract the most suitable features from a series of images taken during the UAV landing. Two different approaches based on the Extended Kalman Filter (EKF) are proposed, and their performance in estimating the relative distance and velocity is compared. The first approach uses the kinematics of the UAV as the process model and the calculated optical flow as the measurement. The second approach uses the feature's projection on the camera plane (its pixel position) as the measurement, while employing both the kinematics of the UAV and the dynamics of the projected point's variation as the process model, to estimate both relative distance and relative velocity. To verify the results, a sequence of low-quality images taken by a camera moving on a specifically developed testbed was used to compare the performance of the proposed algorithms. The case studies show that the image quality introduces considerable noise, which reduces the performance of the first approach.
Using the projected feature position, on the other hand, is much less sensitive to this noise and estimates the distance and velocity with relatively high accuracy. This approach can also be used to predict the future projected feature position, which can drastically decrease the computational workload, an important criterion for real-time applications.
Keywords: Automatic landing, multirotor, nonlinear control, parameters estimation, optical flow.
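The second EKF approach above can be illustrated with a toy one-dimensional filter: the state is the height above ground and the vertical velocity, and the measurement is the pixel position of a tracked ground feature under a pinhole model. This is a hedged reconstruction for illustration only, not the authors' implementation; the focal length, feature offset, initial guess, and noise levels are all invented.

```python
import numpy as np

# Toy 1D EKF: state x = [d, v] (height above ground, vertical velocity).
# A ground feature at known lateral offset X_off projects to pixel
# u = f_px * X_off / d under a pinhole model, so the measurement is
# nonlinear in d.

f_px = 800.0     # assumed focal length (pixels)
X_off = 1.0      # assumed lateral offset of the tracked feature (m)
dt = 0.05        # frame interval (s)

def ekf_step(x, P, u_meas, Q, R):
    # Predict with constant-velocity kinematics.
    F = np.array([[1.0, dt], [0.0, 1.0]])
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the projected pixel position (nonlinear measurement).
    h = f_px * X_off / x[0]
    H = np.array([[-f_px * X_off / x[0] ** 2, 0.0]])
    S = (H @ P @ H.T + R).item()
    K = (P @ H.T) / S                       # 2x1 Kalman gain
    x = x + (K * (u_meas - h)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Simulate a steady descent from 10 m at -0.5 m/s with 2 px pixel noise.
rng = np.random.default_rng(0)
d_true, v_true = 10.0, -0.5
x = np.array([8.0, 0.0])                    # deliberately poor initial guess
P = np.diag([4.0, 1.0])
Q = np.diag([1e-4, 1e-3])
R = 4.0                                     # pixel-noise variance

for _ in range(200):
    d_true += v_true * dt
    u = f_px * X_off / d_true + rng.normal(0.0, 2.0)
    x, P = ekf_step(x, P, u, Q, R)

print(x)   # estimate approaches [d_true, v_true]
```

Note how the measurement Jacobian H grows as the UAV descends (u = f·X/d steepens at small d), which is one reason the pixel-position formulation stays informative right down to touchdown.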
Comparison of Methods for the Detection of Biofilm Formation in Yeast and Lactic Acid Bacteria Species Isolated from Dairy Products
Authors: Goksen Arik, Mihriban Korukluoglu
Abstract:
Lactic acid bacteria (LAB) and some yeast species are common microorganisms found in dairy products, and most of them are responsible for the fermentation of foods. Such cultures are isolated and used as starter cultures in the food industry because they provide standardisation of the final product during food processing. The choice of starter culture is the most important step in the production of fermented food. Isolated LAB and yeast cultures with the ability to create a biofilm layer can be preferred as starters, since biofilm formation can extend the period over which a microorganism can be used as a starter. On the other hand, biofilm formation is an undesirable property in pathogens, since the biofilm structure allows a microorganism to become more resistant to stress conditions such as the presence of antibiotics. This resistance mechanism could be turned into an advantage for the effective microorganisms used in the food industry as starter cultures, which also have the potential to stimulate the gastrointestinal system. Development of a biofilm layer is observed in some LAB and yeast strains; the resistance it confers could make these strains dominant in the human gastrointestinal microflora, so that competition against pathogenic microorganisms is achieved more easily. On this basis, 10 LAB and 10 yeast strains were isolated from various dairy products, such as cheese, yoghurt, kefir, and cream, obtained from farmer markets and bazaars in Bursa, Turkey. All isolated strains were identified, and their biofilm-forming ability was detected with two different methods and compared. The first goal of this research was to determine whether the isolates have the potential for biofilm production, and the second was to compare the validity of the two methods, known as the "tube method" and the "96-well plate-based method".
This study may offer insight into biofilm formation and its beneficial properties in LAB and yeast cultures used as starters in the food industry.
Keywords: Biofilm, dairy products, lactic acid bacteria, yeast.
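For the 96-well plate-based method mentioned above, quantitative optical-density readings are commonly classified against a cut-off derived from negative-control wells (ODc = mean + 3 SD of the controls), as in the widely used Stepanovic scheme. The abstract does not state which classification criteria were applied, so this is a generic sketch with invented OD values:

```python
import statistics

# Classify a 96-well (crystal violet) biofilm assay reading against the
# negative-control cut-off ODc = mean(controls) + 3 * SD(controls).
# Thresholds at ODc, 2*ODc and 4*ODc follow the common Stepanovic scheme.

def classify_biofilm(od_isolate, od_controls):
    odc = statistics.mean(od_controls) + 3 * statistics.stdev(od_controls)
    od = statistics.mean(od_isolate)
    if od <= odc:
        return "non-producer"
    elif od <= 2 * odc:
        return "weak"
    elif od <= 4 * odc:
        return "moderate"
    return "strong"

blanks = [0.08, 0.09, 0.10, 0.09]                     # negative-control wells
print(classify_biofilm([0.55, 0.60, 0.58], blanks))   # prints "strong"
print(classify_biofilm([0.09, 0.10, 0.11], blanks))   # prints "non-producer"
```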
Strengthening Adult Literacy Programs in Order to End Female Genital Mutilation to Achieve Sustainable Development Goals
Authors: Odenigbo Veronica Ngozi, Lorreta Chika Ukwuaba
Abstract:
This study focuses on how strengthening adult literacy programs can help accelerate transformative strategies to end Female Genital Mutilation (FGM) in Nigeria, specifically in the Nsukka Local Government Area of Enugu State. The research covers the definition of FGM, adult literacy programs, and how ending FGM can contribute to attaining the Sustainable Development Goals (SDGs) by 2030; it further discusses the practice of FGM in Nigeria and states the problem. The main purpose of the study was to investigate how strengthening adult literacy programs can help accelerate transformative strategies to end FGM in Nigeria and achieve the SDGs by 2030. A survey research design was used to conduct the study in Nsukka L.G.A. The population comprised 26 facilitators and adult learners in five adult learning centres in the area, and the entire population was used as the sample. Structured questionnaires were employed to elicit information from the respondents. The questionnaire items were face-validated by three experts, while the reliability of the instrument was verified using the Cronbach alpha technique. The research questions were analysed using means and standard deviations, while the hypothesis was tested at the 0.05 level of significance using t-test statistics. The findings show that the practice of FGM can be ended by strengthening adult literacy programs: such programs are a good channel for ending FGM through the knowledge and skills acquired at the learning centres. The theoretical importance of the study lies in highlighting the role of adult literacy programs in accelerating transformative strategies against harmful cultural practices such as FGM, and in supporting the importance of education and knowledge in achieving the SDGs by 2030.
The study addressed the question of how strengthening adult literacy programs can help accelerate transformative strategies to end FGM in Nigeria and achieve the SDGs by 2030. In conclusion, the study revealed that adult literacy is a good tool for ending FGM in Nigeria. The recommendation was that non-governmental organizations (NGOs), community-based organizations (CBOs), and individuals should support the funding and establishment of adult literacy centres in communities, so as to reach all illiterate parents and individuals and equip them with the knowledge and skills needed to understand the negative effect of FGM on the life of a girl child.
Keywords: Adult literacy, female genital mutilation, learning centres, Sustainable Development Goals.
Synthesis and Fluorescence Spectroscopy of Sulphonic Acid-Doped Polyaniline When Exposed to Oxygen Gas
Authors: S.F.S. Draman, R. Daik, A. Musa
Abstract:
Three sulphonic acid-doped polyanilines were synthesized through chemical oxidation at low temperature (0-5 °C), and the potential of these polymers as sensing agents for O2 gas detection, in terms of fluorescence quenching, was studied. Sulphuric acid, dodecylbenzene sulphonic acid (DBSA) and camphor sulphonic acid (CSA) were used as doping agents. All polymers obtained were dark green powders. They were characterized by Fourier transform infrared spectroscopy, ultraviolet-visible absorption spectroscopy, thermogravimetric analysis, elemental analysis, differential scanning calorimetry and gel permeation chromatography. The characterizations showed that the polymers were successfully synthesized, with mass recoveries for sulphuric acid-doped polyaniline (SPAN), DBSA-doped polyaniline (DBSA-doped PANI) and CSA-doped polyaniline (CSA-doped PANI) of 71.40%, 75.00% and 39.96%, respectively. The doping levels of SPAN, DBSA-doped PANI and CSA-doped PANI were 32.86%, 33.13% and 53.96%, respectively, as determined by elemental analysis. Sensing tests were carried out on polymer samples in both solution and film form using a fluorescence spectrophotometer. Both polymer solutions and polymer films showed a positive response towards O2 exposure, and all were fully regenerated using N2 gas within a 1-hour period. A photostability study showed that all samples of polymer solutions and films were stable towards light when continuously exposed to a xenon lamp for 9 hours. The relative standard deviation (RSD) values for repeatability for the SPAN, DBSA-doped PANI and CSA-doped PANI solutions were 0.23%, 0.64% and 0.76%, respectively, while the RSD values for reproducibility were 2.36%, 6.98% and 1.27%. The SPAN, DBSA-doped PANI and CSA-doped PANI films showed the same pattern, with repeatability RSD values of 0.52%, 4.05% and 0.90%, respectively.
The corresponding RSD values for reproducibility were 2.91%, 10.05% and 7.42%, respectively. The effect of the flow rate on response time was studied at three different rates: 0.25 mL/s, 1.00 mL/s and 2.00 mL/s. The results show that the higher the flow rate, the shorter the response time.
Keywords: Conjugated polymer, doping, fluorescence quenching, oxygen gas.
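The repeatability and reproducibility figures quoted above are relative standard deviations (RSD = 100·s/mean of replicate measurements). A minimal sketch of the calculation, with invented fluorescence-intensity readings:

```python
import statistics

# Relative standard deviation of replicate readings, as a percentage.
def rsd(values):
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical replicate fluorescence intensities (same sample, same day).
repeat_runs = [812.0, 810.5, 813.2, 811.8, 812.4]
print(f"repeatability RSD = {rsd(repeat_runs):.2f}%")
```

Repeatability uses replicates under identical conditions (same analyst, same day), while reproducibility pools replicates across changed conditions; both reduce to the same RSD formula applied to the relevant set of readings.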
Ligandless Extraction and Determination of Trace Amounts of Lead in Pomegranate, Zucchini and Lettuce Samples after Dispersive Liquid-Liquid Microextraction with Ultrasonic Bath and Optimization of Extraction Condition with RSM Design
Authors: Fariba Tadayon, Elmira Hassanlou, Hasan Bagheri, Mostafa Jafarian
Abstract:
Heavy metals are released into water, plants, soil, and food by natural and human activities. Lead plays a toxic role in the human body and may cause serious problems even at low concentrations; therefore, the determination of lead in different samples is an important procedure in studies of environmental pollution. In this work, an ultrasound-assisted, ionic liquid-based dispersive liquid-liquid microextraction (UA-IL-DLLME) procedure for the determination of lead in zucchini, pomegranate, and lettuce was established and developed using a flame atomic absorption spectrometer (FAAS). For the UA-IL-DLLME procedure, 10 mL of sample solution containing Pb2+ was adjusted to pH 5 in a glass test tube with a conical bottom; then, 120 μL of 1-hexyl-3-methylimidazolium hexafluorophosphate (CMIM)(PF6) was rapidly injected into the sample solution with a microsyringe. The resulting cloudy mixture was treated ultrasonically for 5 min, the two phases were separated by centrifugation for 5 min at 3000 rpm, the IL phase was diluted with 1 mL of ethanol, and the analytes were determined by FAAS. The effects of different experimental parameters in the extraction step, including ionic liquid volume, sonication time and pH, were studied and optimized simultaneously using Response Surface Methodology (RSM) with a central composite design (CCD). The optimal conditions were determined to be an ionic liquid volume of 120 μL, a sonication time of 5 min, and pH 5. The calibration curve for the determination of lead by FAAS was linear over 0.1-4 ppm with R2 = 0.992. Under optimized conditions, the limit of detection (LOD) for lead was 0.062 μg/mL, the enrichment factor (EF) was 93, and the relative standard deviation (RSD) was 2.29%. The lead levels for pomegranate, zucchini, and lettuce were 2.88 μg/g, 1.54 μg/g and 2.18 μg/g, respectively.
This method was therefore successfully applied to the analysis of lead content in different food samples by FAAS.
Keywords: Dispersive liquid-liquid microextraction, central composite design, food samples, flame atomic absorption spectrometry.
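The calibration figures above (linear range, R2, and LOD) follow from a straight-line fit of signal versus concentration, with the LOD commonly taken as 3σ of the blank divided by the slope. A sketch with invented absorbance readings, not the paper's data:

```python
import numpy as np

# Linear calibration of FAAS absorbance vs. lead concentration (ppm),
# with R^2 and a 3*sigma/slope limit of detection. All readings invented.
conc = np.array([0.1, 0.5, 1.0, 2.0, 3.0, 4.0])            # ppm
absb = np.array([0.012, 0.048, 0.101, 0.198, 0.305, 0.398])

slope, intercept = np.polyfit(conc, absb, 1)

# Coefficient of determination of the fit.
pred = slope * conc + intercept
ss_res = np.sum((absb - pred) ** 2)
ss_tot = np.sum((absb - absb.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

sigma_blank = 0.002            # assumed SD of blank measurements
lod = 3.0 * sigma_blank / slope
print(f"R^2 = {r2:.3f}, LOD = {lod:.3f} ppm")
```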
Developing Manufacturing Process for the Graphene Sensors
Authors: Abdullah Faqihi, John Hedley
Abstract:
Biosensors play a significant role in the healthcare sector and in scientific and technological progress. Developing electrodes that are easy to manufacture and deliver better electrochemical performance is advantageous for diagnostics and biosensing. Such electrodes can be applied extensively in various analytical tasks such as drug discovery, food safety, medical diagnostics, process control, security and defence, as well as environmental monitoring. A biosensor is a device that inspects the biological and chemical reactions generated by a biological sample: it carries out biological detection via a linked transducer and transmits the biological response as an electrical signal. Stability, selectivity, and sensitivity are the dynamic and static characteristics that dictate the quality and performance of a biosensor. This research presents an experimental study of the laser scribing technique for processing graphene oxide (GO) inside a vacuum chamber. The effect of laser scribing on the reduction of GO was investigated under two conditions: atmosphere and vacuum. A GO solution was coated onto a LightScribe DVD, and the laser scribing technique was applied to reduce the GO layers and generate reduced graphene oxide (rGO). The micro-scale morphological structures of rGO and GO were examined using scanning electron microscopy (SEM) and Raman spectroscopy. The first electrode was a traditional graphene-based electrode made under normal atmospheric conditions, whereas the second was a graphene electrode fabricated under vacuum in a vacuum chamber, the purpose being to control conditions such as air pressure and temperature during fabrication.
The parameters to be assessed include the layer thickness and the processing environment. The results presented show high accuracy and repeatability, achieved at low production cost.
Keywords: Laser scribing, LightScribe DVD, graphene oxide, scanning electron microscopy.
A Commercial Building Plug Load Management System That Uses Internet of Things Technology to Automatically Identify Plugged-In Devices and Their Locations
Authors: Amy LeBar, Kim L. Trenbath, Bennett Doherty, William Livingood
Abstract:
Plug and process loads (PPLs) account for a large portion of U.S. commercial building energy use. There is significant potential to reduce whole-building consumption by targeting PPLs for energy savings measures or by implementing some form of plug load management (PLM). Despite this potential, no commercial PLM technology has yet been widely adopted. This paper describes the Automatic Type and Location Identification System (ATLIS), a PLM system framework with automatic and dynamic load detection (ADLD). ADLD gives PLM systems the ability to automatically identify devices as they are plugged into the outlets of a building. The ATLIS framework takes advantage of smart, connected devices to identify device locations in a building, meter and control their power, and communicate this information to a central database. ATLIS includes five primary capabilities: location identification, communication, control, energy metering, and data storage. A laboratory proof of concept (PoC) demonstrated all but the energy metering capability, and these capabilities were validated through a series of system tests. The PoC was able to identify when a device was plugged into an outlet and determine the device's location in the building; when a device was moved, the PoC's dashboard and database were automatically updated with the new location. The PoC also controlled devices from the system dashboard, so that devices maintained correct schedules regardless of where they were plugged in within the building. ATLIS's primary application is improved PLM, but other applications include asset management, energy audits, and interoperability for grid-interactive efficient buildings. An ATLIS-based system could also be used to direct power to critical devices, such as ventilators, during a brownout or blackout. Such a framework is an opportunity to make PLM more widespread and to reduce the amount of energy consumed by PPLs in current and future commercial buildings.
Keywords: commercial buildings, grid-interactive efficient buildings, miscellaneous electric loads, plug loads, plug load management
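The ADLD bookkeeping described above (a device identifier reported by a smart outlet is recorded against its location, and simply overwritten when the device moves) can be sketched as follows; all class and field names here are hypothetical illustrations, not drawn from the ATLIS codebase:

```python
from dataclasses import dataclass

# Minimal sketch of plug-in event handling for a central device registry.
# A smart outlet reports (device_id, device_type, outlet_id); the registry
# keeps only the last-known outlet, so moving a device updates its location.

@dataclass
class PlugEvent:
    device_id: str
    device_type: str
    outlet_id: str      # encodes the building location, e.g. "flr2-rm210-o3"

class Registry:
    def __init__(self):
        self.location = {}          # device_id -> last-known outlet_id

    def on_plug_in(self, ev: PlugEvent):
        self.location[ev.device_id] = ev.outlet_id

reg = Registry()
reg.on_plug_in(PlugEvent("mon-17", "monitor", "flr2-rm210-o3"))
reg.on_plug_in(PlugEvent("mon-17", "monitor", "flr3-rm301-o1"))  # device moved
print(reg.location["mon-17"])       # prints flr3-rm301-o1
```

Keying on the device identifier rather than the outlet is what lets schedules and controls follow a device around the building, as in the PoC described above.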
Qualitative Profiling in Practice: The Italian Public Employment Services Experience
Authors: L. Agneni, F. Carta, C. Micheletta, V. Tersigni
Abstract:
A qualitative method for profiling jobseekers is needed to improve the quality of the Public Employment Services (PES) in Italy. This is why the National Agency for Active Labour Market Policies (ANPAL) decided to introduce a qualitative profiling service into the activities carried out by local employment office operators. The service gathers information and data on each jobseeker's personal transition status through a semi-structured questionnaire administered to PES clients during the guidance interview. The questionnaire responses allow PES staff to identify, for each client, the appropriate activities and policy measures to support their reintegration into the labour market. The data and information gathered by the qualitative profiling tool cover: the frequency, modalities and motivations of clients' applications to local employment offices; clients' expectations and skills; difficulties faced during previous working experiences; and strategies, actions undertaken and channels activated for the job search. These data are used to assess jobseekers' personal and career characteristics and to measure their employability level (a qualitative profiling index), in order to develop and deliver tailor-made action programmes for each client. This paper illustrates the use of this qualitative profiling service across the national territory and provides an overview of the survey's main findings concerning the difficulties that unemployed people face in finding a job and their perception of different aspects of the labour market transition. The survey involved over 10,000 jobseekers registered with the PES, most of them beneficiaries of the "citizens' income", a specific active labour policy and social inclusion measure.
Furthermore, the data analysis allows jobseekers to be classified into groups of clients with similar features and behaviours, on the basis of socio-demographic variables, customers' expectations, needs and the skills required for the profession in which they seek employment. Finally, the survey collects PES staff opinions and comments concerning clients' difficulties in finding a new job, as well as their strengths. This is a starting point for PES operators to define adequate strategies to facilitate jobseekers' access or reintegration into the labour market.
Keywords: Labour market transition, Public Employment Services, qualitative profiling, vocational guidance.
A Risk Assessment Tool for the Contamination of Aflatoxins on Dried Figs based on Machine Learning Algorithms
Authors: Kottaridi Klimentia, Demopoulos Vasilis, Sidiropoulos Anastasios, Ihara Diego, Nikolaidis Vasileios, Antonopoulos Dimitrios
Abstract:
Aflatoxins are highly poisonous and carcinogenic compounds produced by species of the genus Aspergillus that can infect a variety of agricultural foods, including dried figs. Biological and environmental factors, such as the population, pathogenicity and aflatoxigenic capacity of the strains, and the topography, soil and climate parameters of the fig orchards, are believed to have a strong effect on aflatoxin levels. Existing methods for aflatoxin detection and measurement, such as high-performance liquid chromatography (HPLC) and enzyme-linked immunosorbent assay (ELISA), can provide accurate results, but the procedures are usually time-consuming, sample-destructive and expensive. Predicting aflatoxin levels prior to crop harvest is useful for minimizing the health and financial impact of a contaminated crop. Consequently, there is interest in developing a tool that predicts aflatoxin levels from topography and soil analysis data of fig orchards. This paper describes the development of a risk assessment tool for aflatoxin contamination of dried figs based on the location and altitude of the fig orchards, the population of the fungus Aspergillus spp. in the soil, and soil parameters such as pH, saturation percentage (SP), electrical conductivity (EC), organic matter, particle size analysis (sand, silt, clay), concentration of the exchangeable cations (Ca, Mg, K, Na), extractable P and trace elements (B, Fe, Mn, Zn and Cu), employing machine learning methods. In particular, the proposed method integrates three machine learning techniques, namely dimensionality reduction on the original dataset (Principal Component Analysis), metric learning (Mahalanobis Metric for Clustering) and the k-nearest neighbours learning algorithm (KNN), into an enhanced model whose mean performance equals 85% in terms of the Pearson Correlation Coefficient (PCC) between observed and predicted values.
Keywords: aflatoxins, Aspergillus spp., dried figs, k-nearest neighbors, machine learning, prediction
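The pipeline above (dimensionality reduction, then nearest-neighbour prediction scored by the PCC) can be sketched on synthetic data. For brevity, this sketch substitutes plain Euclidean distance in the PCA space for the learned Mahalanobis metric, and the "soil" features are randomly generated stand-ins, so it illustrates the structure of the method rather than reproducing the paper's model:

```python
import numpy as np

# Synthetic stand-in data: 10 correlated "soil" features driven by two
# latent orchard factors, and an outcome that depends on those factors.
rng = np.random.default_rng(1)
n = 150
latent = rng.normal(size=(n, 2))
W = rng.normal(size=(2, 10))
X = latent @ W + rng.normal(0.0, 0.2, size=(n, 10))
y = 2.0 * latent[:, 0] - latent[:, 1] + rng.normal(0.0, 0.3, size=n)

# PCA via SVD on centred data, keeping 3 components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:3].T

train, test = np.arange(110), np.arange(110, n)

def knn_predict(z, k=5):
    # Euclidean k-NN regression in the reduced space (the paper instead
    # learns a Mahalanobis metric before the neighbour search).
    d = np.linalg.norm(Z[train] - z, axis=1)
    return y[train][np.argsort(d)[:k]].mean()

pred = np.array([knn_predict(Z[i]) for i in test])
pcc = np.corrcoef(pred, y[test])[0, 1]
print(f"PCC = {pcc:.2f}")
```

Replacing the Euclidean distance with a learned Mahalanobis metric amounts to transforming Z with the learned matrix before the neighbour search; the rest of the pipeline is unchanged.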
Position of the Constitutional Court of the Russian Federation on the Matter of Restricting Constitutional Rights of Citizens Concerning Banking Secrecy
Authors: A. V. Shashkova
Abstract:
The aim of the present article is to analyze the position of the Constitutional Court of the Russian Federation on the matter of restricting the constitutional rights of citizens to inviolability of professional and banking secrecy when controlling activities are carried out. The methodological basis of the article is the dialectic scientific method applied to socio-political, legal and organizational processes, with the principles of development, integrity and consistency. The consistency analysis method is used in researching the object of the analysis, and public and private research methods, such as the formal-logical method and the comparative legal method, are used to compare understandings of the concept of "secrecy". The article seeks the root of the problem and gives recommendations for its solution. The research concludes that political will is necessary to improve Russian legislation so that it complies with the provisions of the Constitution, and that a clear balance must be established between the constitutional rights of the individual and the permissible limits on those rights when public authorities carry out various control activities. Attempts by banks to "overdo" anti-money laundering law under threat of severe sanctions by the regulators have actually led to failures in normal economic activity; individuals consequently face serious problems with clearing-based payments, as well as with cash withdrawals. The Bank of Russia sets excessively high requirements for banks in executing Federal Law No. 115-FZ, and this is where political will is needed. Recent changes in Russian legislation, e.g. allowing banks to refuse to open accounts unilaterally, have also simplified banking activities in the country. The article focuses on different theoretical approaches to the concept of "secrecy".
The author gives an overview of the practices of Spain, Switzerland and the United States of America on the matter of restricting the constitutional rights of citizens to inviolability of professional and banking secrecy when controlling activities are carried out. The Constitutional Court of the Russian Federation, basing its position on the Constitution of the Russian Federation, has its own understanding of the issue, which should be supported by further legislative development in the Russian Federation.
Keywords: Bank secrecy, banking information, constitutional court, control measures, financial control, money laundering, restriction of constitutional rights.
35 Interpretation of Two Indices for the Prediction of Cardiovascular Risk in Pediatric Obesity
Authors: Mustafa M. Donma, Orkide Donma
Abstract:
Obesity and weight gain are associated with increased risk of developing cardiovascular diseases and the progression of liver fibrosis. Aspartate transaminase–to-platelet count ratio index (APRI) and fibrosis-4 (FIB-4) were primarily considered as the formulas capable of differentiating hepatitis from cirrhosis. However, to the best of our knowledge, their status in children is not clear. The aim of this study is to determine APRI and FIB-4 status in obese (OB) children and compare them with values found in children with normal body mass index (N-BMI). A total of 68 children examined in the outpatient clinics of the Pediatrics Department in Tekirdag Namik Kemal University Medical Faculty were included in the study. Two groups were constituted. In the first group, 35 children with N-BMI, whose age- and sex-dependent BMI indices vary between 15 and 85 percentiles, were evaluated. The second group comprised 33 OB children whose BMI percentile values were between 95 and 99. Anthropometric measurements and routine biochemical tests were performed. Using these parameters, values for the related indices, BMI, APRI, and FIB-4, were calculated. Appropriate statistical tests were used for the evaluation of the study data. The statistical significance degree was accepted as p < 0.05. In the OB group, values found for APRI and FIB-4 were higher than those calculated for the N-BMI group. However, there was no statistically significant difference between the N-BMI and OB groups in terms of APRI and FIB-4. A similar pattern was detected for triglyceride (TRG) values. The correlation coefficient and degree of significance between APRI and FIB-4 were r = 0.336 and p = 0.065 in the N-BMI group. On the other hand, they were r = 0.707 and p = 0.001 in the OB group. Associations of these two indices with TRG have shown that this parameter was strongly correlated (p < 0.001) both with APRI and FIB-4 in the OB group, whereas no correlation was calculated in children with N-BMI. 
TRG are associated with an increased risk of fatty liver, which can progress to severe clinical problems such as steatohepatitis and, in turn, liver fibrosis. TRG are also an independent risk factor for cardiovascular disease. In conclusion, the lack of correlation between TRG and APRI as well as FIB-4 in children with N-BMI, along with the strong correlations of TRG with these indices in OB children, indicates a possible early tendency towards the development of fatty liver in OB children. This finding also points to a potential risk of cardiovascular pathologies in OB children. The contrast between the APRI-FIB-4 correlations in the N-BMI and OB groups (no correlation vs. strong correlation) may indicate the importance of including age and alanine transaminase (ALT), in addition to aspartate transaminase (AST) and platelet count (PLT), in the formula designed for FIB-4.
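For context, both indices compared above are simple functions of routine laboratory values. A minimal sketch using the standard published formulas (not given in the abstract; units assumed: AST and ALT in IU/L, AST upper limit of normal in IU/L, platelets in 10^9/L):

```python
import math

def apri(ast, ast_uln, platelets):
    """AST-to-platelet ratio index: (AST / upper limit of normal) * 100 / platelets."""
    return (ast / ast_uln) * 100 / platelets

def fib4(age, ast, alt, platelets):
    """Fibrosis-4 index: (age * AST) / (platelets * sqrt(ALT))."""
    return (age * ast) / (platelets * math.sqrt(alt))
```

Note that FIB-4 folds in age and ALT, which APRI does not; this is the point made above about the two indices diverging in children.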
Keywords: APRI, FIB-4, obesity, triglycerides.
34 An Effort at Improving Reliability of Laboratory Data in Titrimetric Analysis for Zinc Sulphate Tablets Using Validated Spreadsheet Calculators
Authors: M. A. Okezue, K. L. Clase, S. R. Byrn
Abstract:
The requirement for maintaining data integrity in laboratory operations is critical for regulatory compliance. Automation of procedures reduces the incidence of human error. Quality control laboratories located in low-income economies may face barriers in attempts to automate their processes. Since data from quality control tests on pharmaceutical products are used in making regulatory decisions, it is important that laboratory reports are accurate and reliable. Zinc Sulphate (ZnSO4) tablets are used in the treatment of diarrhea in the pediatric population, and as an adjunct therapy in COVID-19 regimens. Unfortunately, the zinc content in these formulations is determined titrimetrically, a manual analytical procedure. The assay for ZnSO4 tablets involves time-consuming steps that contain mathematical formulae prone to calculation errors. To achieve consistency, save costs, and improve data integrity, validated spreadsheets were developed to simplify the two critical steps in the analysis of ZnSO4 tablets: standardization of 0.1 M sodium edetate (EDTA) solution, and the complexometric titration assay procedure. The assay method in the United States Pharmacopoeia was used to create a process flow for ZnSO4 tablets. For each step in the process, different formulae were input into two spreadsheets to automate calculations. Further checks were created within the automated system to ensure the validity of replicate analyses in titrimetric procedures. Validations were conducted using five data sets of manually computed assay results. The acceptance criteria set for the protocol were met. Significant p-values (p < 0.05, α = 0.05, at 95% confidence interval) were obtained from Student's t-test evaluation of the mean values for manually calculated and spreadsheet results at all levels of the analysis flow. Right-first-time analysis and the principles of data integrity were enhanced by the use of the validated spreadsheet calculators in titrimetric evaluations of ZnSO4 tablets. Human errors were minimized in calculations when procedures were automated in quality control laboratories. The assay procedure for the formulation was completed in a time-efficient manner with a greater level of accuracy. This project is expected to promote cost savings for laboratory business models.
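The two automated steps can be illustrated with a short sketch. The function names and the 1:1 Zn-EDTA stoichiometry reflect general complexometric practice; this is not the exact USP formula set or the authors' spreadsheet layout:

```python
def edta_molarity(zinc_mass_g, titre_ml, zn_molar_mass=65.38):
    """Standardization step: EDTA chelates zinc 1:1, so moles of EDTA
    consumed at the endpoint equal moles of zinc reference weighed in."""
    moles_zn = zinc_mass_g / zn_molar_mass
    return moles_zn / (titre_ml / 1000.0)

def zinc_per_tablet_mg(titre_ml, molarity, tablets_in_sample, zn_molar_mass=65.38):
    """Assay step: mg of elemental zinc per tablet from the titre (1:1 complex)."""
    moles_zn = molarity * titre_ml / 1000.0
    return moles_zn * zn_molar_mass * 1000.0 / tablets_in_sample
```

Encoding each formula once, in a validated cell or function, is precisely what removes the per-analyst arithmetic the abstract identifies as the error source.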
Keywords: Data integrity, spreadsheets, titrimetry, validation, zinc sulphate tablets.
33 Analysis on the Feasibility of Landsat 8 Imagery for Water Quality Parameters Assessment in an Oligotrophic Mediterranean Lake
Authors: V. Markogianni, D. Kalivas, G. Petropoulos, E. Dimitriou
Abstract:
Lake water quality monitoring in combination with the use of earth observation products constitutes a major component in many water quality monitoring programs. Landsat 8 images of Trichonis Lake (Greece), acquired on 30/10/2013 and 30/08/2014, were used in order to explore the ability of Landsat 8 to estimate water quality parameters, particularly CDOM absorption at specific wavelengths, chlorophyll-a and nutrient concentrations, in this oligotrophic freshwater body, which is characterized by negligible quantitative, temporal and spatial variability. Water samples were collected at 22 different stations in late August 2014, and the satellite image of the same date was used to statistically correlate the in-situ measurements with various combinations of Landsat 8 bands, in order to develop algorithms that best describe those relationships and accurately calculate the aforementioned water quality components. The optimal models were applied to the image of late October 2013, and the results were validated through comparison with the respective available in-situ data of 2013. Initial results indicated the limited ability of the Landsat 8 sensor to accurately estimate water quality components in an oligotrophic waterbody. As the validation process showed, ammonium concentration proved to be the most accurately estimated component (R = 0.7), followed by chl-a concentration (R = 0.5) and CDOM absorption at 420 nm (R = 0.3). In-situ nitrate, nitrite, phosphate and total nitrogen concentrations of 2014 were below the detection limit of the instrument used, hence no statistical elaboration was conducted. On the other hand, multiple linear regression between reflectance measures and total phosphorus concentrations resulted in low and statistically insignificant correlations.
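The statistical core of this workflow, fitting in-situ values against a band combination and validating with a correlation coefficient, can be sketched as follows (a plain least-squares fit on a single band ratio; the study itself tested many band combinations):

```python
def fit_linear(x, y):
    """Ordinary least squares for y = a*x + b, e.g. chl-a vs. a band ratio."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    return a, my - a * mx

def pearson_r(x, y):
    """Correlation coefficient used to validate predictions against in-situ data."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sx = sum((xi - mx) ** 2 for xi in x) ** 0.5
    sy = sum((yi - my) ** 2 for yi in y) ** 0.5
    return cov / (sx * sy)
```

The model is calibrated on the 2014 image/sample pairs and then `pearson_r` is computed between the model's 2013 predictions and the 2013 in-situ data, which is where the R values of 0.7, 0.5 and 0.3 above come from.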
Our results were concurrent with other studies in the international literature, indicating that estimations for eutrophic and mesotrophic lakes are more accurate than for oligotrophic ones, owing to the lack of suspended particles that are detectable by satellite sensors. Nevertheless, although the predictive models developed and applied to the oligotrophic Trichonis Lake are less accurate, they may still be useful indicators of its water quality deterioration.
Keywords: Landsat 8, oligotrophic lake, remote sensing, water quality.
32 The Estimation Method of Stress Distribution for Beam Structures Using the Terrestrial Laser Scanning
Authors: Sang Wook Park, Jun Su Park, Byung Kwan Oh, Yousok Kim, Hyo Seon Park
Abstract:
This study suggests a method of estimating the stress distribution in beam structures based on TLS (Terrestrial Laser Scanning). The main components of the method are the creation of lattices from the raw TLS data so that they satisfy a suitable condition, and the application of CSSI (Cubic Smoothing Spline Interpolation) for estimating the stress distribution. Estimating the stress distribution of a structural member or of the whole structure is an important factor in the safety evaluation of a structure. Existing sensors, including the ESG (electric strain gauge) and LVDT (linear variable differential transformer), are contact-type sensors that must be installed on the structural members, and they have various limitations, such as the need for separate space where network cables are installed and the difficulty of access for sensor installation in real buildings. To overcome these problems inherent in contact-type sensors, the TLS form of LiDAR (light detection and ranging), which can measure the displacement of a target at long range without the influence of the surrounding environment and can also capture the whole shape of the structure, has been applied to the field of structural health monitoring. An important characteristic of TLS measurement is that it forms point clouds containing many points, each with local coordinates. Point clouds are not linearly distributed but dispersed, so interpolation is essential for their analysis. Through the formation of averaged lattices and the application of CSSI to the raw data, a method that can estimate the displacement of a simple beam was developed. The developed method can be extended to calculate strain and, finally, applied to estimate the stress distribution of a structural member. To verify the validity of the method, a loading test on a simple beam was conducted and measured by TLS.
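As an illustration of the final step: once CSSI yields a smooth displacement profile w(x), the bending stress at the extreme fiber follows from Euler-Bernoulli beam theory, sigma = E * c * |w''(x)|, where c is the distance from the neutral axis. The sketch below stands in for the spline derivative with a central finite difference on the lattice-averaged displacements; variable names and units are illustrative:

```python
def bending_stress(w_mm, dx_mm, e_mpa, c_mm):
    """Stress at interior lattice points of a beam from its displacement
    profile: curvature w'' by central differences, sigma = E * c * |w''|.
    w_mm: lattice-averaged displacements at uniform spacing dx_mm."""
    stresses = []
    for i in range(1, len(w_mm) - 1):
        curvature = (w_mm[i - 1] - 2 * w_mm[i] + w_mm[i + 1]) / dx_mm ** 2
        stresses.append(e_mpa * c_mm * abs(curvature))
    return stresses
```

The study's use of a smoothing spline rather than raw differences matters because second derivatives amplify the scatter inherent in TLS point clouds.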
Through a comparison of the estimated stress and the reference stress, the validity of the method is confirmed.
Keywords: Structural health monitoring, terrestrial laser scanning, estimation of stress distribution, coordinate transformation, cubic smoothing spline interpolation.
31 Nonlinear Transformation of Laser Generated Ultrasonic Pulses in Geomaterials
Authors: Elena B. Cherepetskaya, Alexander A. Karabutov, Natalia B. Podymova, Ivan Sas
Abstract:
Nonlinear evolution of broadband ultrasonic pulses passed through rock specimens is studied using the apparatus "GEOSCAN-02M". Ultrasonic pulses are excited by the pulses of a Q-switched Nd:YAG laser with a duration of 10 ns and an energy of 260 mJ; this energy can be reduced to 20 mJ using light filters. The laser beam radius does not exceed 5 mm. As a result of the absorption of the laser pulse in a special material, the optoacoustic generator, pulses of longitudinal ultrasonic waves are excited with a duration of 100 ns and a maximum pressure amplitude of 10 MPa. The immersion technique is used to measure the parameters of these ultrasonic pulses passed through a specimen; the immersion liquid is distilled water. The reference pulse passed through the cell with water has a compression phase and a rarefaction phase, the amplitude of the rarefaction phase being five times lower than that of the compression phase. The spectral range of the reference pulse reaches 10 MHz. Cubic specimens of Karelian gabbro with an edge length of 3 cm are studied. The ultimate strength of the specimens under uniaxial compression is (300±10) MPa. As the reference pulse passes through an area of the specimen without cracks, the compression phase decreases and the rarefaction phase increases due to diffraction and scattering of ultrasound, so the ratio of these phases becomes 2.3:1. After preloading, some horizontal cracks appear in the specimens. Their location is found by one-sided scanning of the specimen using backward-mode detection of the ultrasonic pulses reflected from the structure defects. Computer processing of these signals yields images of the cross-sections of the specimens with cracks.
As the reference pulse amplitude is increased from 0.1 MPa to 5 MPa, the nonlinear transformation of the ultrasonic pulse passed through the specimen with horizontal cracks results in a 2.5-fold decrease in the amplitude of the rarefaction phase and a 2.1-fold increase in its duration. As the amplitude is increased further, from 5 MPa to 10 MPa, a time splitting of the phases is observed for the bipolar pulse passed through the specimen: the compression and rarefaction phases propagate with different velocities. These features of powerful broadband ultrasonic pulses passed through rock specimens can be described by the Preisach-Mayergoyz hysteresis model and can be used for the location of cracks in optically opaque materials.
Keywords: Cracks, geological materials, nonlinear evolution of ultrasonic pulses, rock.
30 Climate Related Financial Risk for Automobile Industry and Impact to Financial Institutions
Authors: S. Mahalakshmi, B. Senthil Arasu
Abstract:
As per recent changes in global policies, climate-related changes and the impacts they cause across every sector are viewed as green swan events: in essence, climate-related changes can happen often and lead to risk and much uncertainty, but they need to be mitigated rather than treated as black swan events. This raises the question of how this risk can be computed, so that financial institutions can plan to mitigate it. Climate-related changes impact all risk types: credit risk, market risk, operational risk, liquidity risk, reputational risk and others. The models required to compute this have to consider the different industrial needs of the counterparty, as well as the contributing factors, be it in the form of different risk drivers, different transmission channels, different approaches, or the granularity of available data. This suggests that climate-related change, though it affects Pillar I risks, should be treated as a Pillar II risk. It has to be modeled specifically on the basis of the financial institution's actual exposure to different industries, instead of generalizing the risk charge, and it will have to be considered as additional capital to be held by the financial institution on top of its Pillar I risks and its existing Pillar II risks. In this paper, we present a risk assessment framework to model and assess climate change risks, for both credit and market risk. This framework helps in assessing different scenarios, and how the different transition risks affect the risk associated with the different parties. The paper delves into the topic of the increasing concentration of greenhouse gases, which in turn causes global warming.
It then considers various scenarios in which different risk drivers impact the credit and market risk of an institution, by understanding the transmission channels and also considering the transition risk. The paper then focuses on an industry that is fast seeing disruption: the automobile industry. It uses the framework to show how climate changes and changes to the relevant policies have impacted an entire financial institution. Appropriate statistical models for forecasting, anomaly detection and scenario modeling are built to demonstrate how the framework can be used by the relevant agencies to understand their financial risks. The paper also covers the climate risk calculation for the Pillar II capital requirement, and why it makes sense for a bank to maintain this in addition to its regular Pillar I and Pillar II capital.
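A toy illustration of the scenario-based Pillar II idea described above; the structure and numbers are hypothetical, not the authors' model. Expected loss is computed per climate scenario from exposure-level PD/LGD estimates, and the probability-weighted excess over the baseline becomes the capital add-on:

```python
def scenario_expected_loss(exposures, pds, lgds):
    """Expected credit loss under one climate scenario: sum of EAD * PD * LGD,
    with PDs already adjusted for that scenario's transition-risk drivers."""
    return sum(e * p * l for e, p, l in zip(exposures, pds, lgds))

def pillar2_climate_addon(baseline_el, scenarios):
    """Illustrative Pillar II add-on: probability-weighted expected loss
    across climate scenarios, less the baseline expected loss.
    scenarios: list of (probability, expected_loss) pairs."""
    weighted_el = sum(p * el for p, el in scenarios)
    return max(0.0, weighted_el - baseline_el)
```

Because the add-on is built from the institution's actual exposures per industry (here, the automobile book), it avoids the generalized risk charge the paper argues against.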
Keywords: Capital calculation, climate risk, credit risk, pillar II risk, scenario modeling.
29 Affective Robots: Evaluation of Automatic Emotion Recognition Approaches on a Humanoid Robot towards Emotionally Intelligent Machines
Authors: Silvia Santano Guillén, Luigi Lo Iacono, Christian Meder
Abstract:
One of the main aims of current social robotics research is to improve robots' abilities to interact with humans. In order to achieve an interaction similar to that among humans, robots should be able to communicate in an intuitive and natural way and appropriately interpret human affects during social interactions. Similarly to how humans are able to recognize emotions in other humans, machines are capable of extracting information from the various ways humans convey emotions, including facial expression, speech, gesture or text, and using this information for improved human-computer interaction. This can be described as Affective Computing, an interdisciplinary field that extends into otherwise unrelated fields like psychology and cognitive science and involves the research and development of systems that can recognize and interpret human affects. Leveraging these emotional capabilities by embedding them in humanoid robots is the foundation of the concept of Affective Robots, whose objective is to make robots capable of sensing the user's current mood and personality traits and of adapting their behavior in the most appropriate manner based on that. In this paper, the emotion recognition capabilities of the humanoid robot Pepper are experimentally explored, based on the facial expressions for the so-called basic emotions, along with how it performs in contrast to other state-of-the-art approaches, with both expression databases compiled in academic environments and real subjects showing posed expressions as well as spontaneous emotional reactions. The experiments' results show that the detection accuracy amongst the evaluated approaches differs substantially. The introduced experiments offer a general structure and approach for conducting such experimental evaluations.
The paper further suggests that the most meaningful results are obtained by conducting experiments with real subjects expressing the emotions as spontaneous reactions.
Keywords: Affective computing, emotion recognition, humanoid robot, Human-Robot Interaction (HRI), social robots.
28 An Intelligent Text Independent Speaker Identification Using VQ-GMM Model Based Multiple Classifier System
Authors: Cheima Ben Soltane, Ittansa Yonas Kelbesa
Abstract:
Speaker Identification (SI) is the task of establishing the identity of an individual based on his or her voice characteristics. The SI task is typically achieved by two-stage signal processing: training and testing. The training process calculates speaker-specific feature parameters from the speech and generates speaker models accordingly. In the testing phase, speech samples from unknown speakers are compared with the models and classified. Even though the performance of speaker identification systems has improved due to recent advances in speech processing techniques, there is still room for improvement. In this paper, a Closed-Set Text-Independent Speaker Identification System (CISI) based on a Multiple Classifier System (MCS) is proposed, using the Mel Frequency Cepstrum Coefficient (MFCC) for feature extraction and a suitable combination of Vector Quantization (VQ) and a Gaussian Mixture Model (GMM), together with the Expectation Maximization (EM) algorithm, for speaker modeling. The use of a Voice Activity Detector (VAD) with a hybrid approach based on Short Time Energy (STE) and statistical modeling of background noise in the pre-processing step of the feature extraction yields a better and more robust automatic speaker identification system. The investigation of the Linde-Buzo-Gray (LBG) clustering algorithm for initialization of the GMM, for estimating the underlying parameters in the EM step, also improved the convergence rate and the system's performance. The system additionally uses a relative index as a confidence measure in cases where the GMM and VQ identifications contradict each other.
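The LBG initialization mentioned above grows a codebook by repeatedly splitting each codeword and refining with k-means; a minimal sketch, operating on plain Python lists rather than MFCC frames for clarity:

```python
def _centroid(vecs):
    dim = len(vecs[0])
    return [sum(v[d] for v in vecs) / len(vecs) for d in range(dim)]

def _dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def lbg_codebook(vecs, size, eps=0.01, iters=20):
    """Linde-Buzo-Gray: start from the global centroid, double the
    codebook by perturbed splitting (1 +/- eps, the classic scheme),
    and refine each stage with k-means. `size` should be a power of two."""
    book = [_centroid(vecs)]
    while len(book) < size:
        book = [[x * (1 + s) for x in c] for c in book for s in (eps, -eps)]
        for _ in range(iters):
            cells = [[] for _ in book]
            for v in vecs:
                i = min(range(len(book)), key=lambda j: _dist2(v, book[j]))
                cells[i].append(v)
            # keep the old codeword if its cell went empty
            book = [_centroid(c) if c else b for c, b in zip(cells, book)]
    return book
```

The resulting codebook serves double duty here: it is the VQ classifier itself, and its codewords seed the GMM means before EM refinement, which is what improves the convergence rate noted above.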
Simulation results, carried out on the voxforge.org speech database using MATLAB, highlight the efficacy of the proposed method compared to earlier work.
Keywords: Feature extraction, speaker modeling, feature matching, Mel Frequency Cepstrum Coefficient (MFCC), Gaussian Mixture Model (GMM), Vector Quantization (VQ), Linde-Buzo-Gray (LBG), Expectation Maximization (EM), pre-processing, Voice Activity Detection (VAD), Short Time Energy (STE), background noise statistical modeling, Closed-Set Text-Independent Speaker Identification System (CISI).
27 A POX Controller Module to Collect Web Traffic Statistics in SDN Environment
Authors: Wisam H. Muragaa, Kamaruzzaman Seman, Mohd Fadzli Marhusin
Abstract:
Software Defined Networking (SDN) is a new norm of networks. It is designed to facilitate the way of managing, measuring, debugging and controlling the network dynamically, and to make it suitable for modern applications. Generally, measurement methods can be divided into two categories: active and passive. An active measurement method injects test packets into the network in order to monitor their behaviour (the ping tool, for example), while a passive measurement method monitors the traffic for the purpose of deriving measurement values. Both methods are useful for the collection of traffic statistics and the monitoring of network traffic. Although there has been work focusing on measuring traffic statistics in the SDN environment, it was only meant for measuring packet and byte rates for non-web traffic. In this study, a feasible method is designed to measure the number of packets and bytes in a certain time, and to facilitate obtaining statistics for both web traffic and non-web traffic. Web traffic refers to HTTP requests at the application layer, while non-web traffic refers to ICMP and TCP requests. Thus, this work is more comprehensive than previous works. With a module developed on the POX OpenFlow controller, information is collected from each active flow in the OpenFlow switch and presented on the Command Line Interface (CLI) and the Wireshark interface. Statistics displayed on the CLI and in Wireshark include, among others, the type of protocol, the number of bytes and the number of packets. Besides, the module shows, in the same statistics list, the number of flows added to the switch whenever traffic is generated from and to hosts.
To carry out this work effectively, our Python module sends a statistics request message to the switch every five seconds, requesting its current port and flow statistics; the switch replies with the required information in a statistics reply message. Thus, the POX controller is notified of, and updated with, any changes that occur in the entire network within a very short time. The aim of this study is therefore to prepare a list of the important statistics elements collected from the whole network, to be used in further research; particularly research dealing with the detection of network attacks that cause a sudden rise in the number of packets and bytes, such as Distributed Denial of Service (DDoS).
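The web/non-web split described above can be sketched as a small aggregation over the flow entries returned in the statistics reply. The dictionary fields here are illustrative placeholders, not the actual POX flow-stats structure:

```python
def classify_flows(flows):
    """Aggregate per-flow counters into web (HTTP, i.e. TCP destination
    port 80) and non-web (ICMP and other TCP) totals.
    Each flow is a dict with 'proto', optional 'tp_dst', 'packets', 'bytes'."""
    totals = {"web": {"packets": 0, "bytes": 0},
              "non_web": {"packets": 0, "bytes": 0}}
    for f in flows:
        is_web = f["proto"] == "tcp" and f.get("tp_dst") == 80
        bucket = totals["web" if is_web else "non_web"]
        bucket["packets"] += f["packets"]
        bucket["bytes"] += f["bytes"]
    return totals
```

In the real module this aggregation would run inside the handler for the flow-stats reply event, every five-second polling cycle.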
Keywords: Mininet, OpenFlow, POX controller, SDN.
26 Methodology for the Multi-Objective Analysis of Data Sets in Freight Delivery
Authors: Dale Dzemydiene, Aurelija Burinskiene, Arunas Miliauskas, Kristina Ciziuniene
Abstract:
Data flows and the purposes for which data are reported differ and depend on business needs. Different parameters are reported and transferred regularly during freight delivery. These business practices form the data set constructed for each time point, containing all the information required for freight-moving decisions. As a significant amount of these data is used for various purposes, an integrated methodological approach must be developed to respond to the indicated problem. The proposed methodology contains several steps: (1) collecting context data sets and validating the data; (2) multi-objective analysis for optimizing freight transfer services. For data validation, the study involves Grubbs outlier analysis, particularly for data cleaning and the identification of the statistical significance of data-reporting event cases. The Grubbs test is often used, as it examines one extreme value at a time that exceeds the boundaries of the standard normal distribution. In the study area, the test has not been widely applied by authors, except where the Grubbs test for outlier detection was used to identify outliers in fuel consumption data. In this study, the authors applied the method with a confidence level of 99%. For the multi-objective analysis, the authors select those forms of construction of genetic algorithms that have more potential to extract the best solution. For freight delivery management, schemas of genetic algorithm structure are used as a more effective technique. Accordingly, an adaptable genetic algorithm is applied to describe the process of choosing an effective transportation corridor. In this study, multi-objective genetic algorithm methods are used to optimize the data evaluation and select the appropriate transport corridor.
The authors suggest a methodology for the multi-objective analysis, which evaluates collected context data sets and uses this evaluation to determine a delivery corridor for freight transfer service in the multi-modal transportation network. In the multi-objective analysis, authors include safety components, the number of accidents a year, and freight delivery time in the multi-modal transportation network. The proposed methodology has practical value in the management of multi-modal transportation processes.
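The Grubbs validation step above can be sketched as follows. The critical value depends on the sample size and confidence level (99% in the study) and is supplied by the caller from a standard table:

```python
import math

def grubbs_statistic(data):
    """G = max|x_i - mean| / s, together with the most extreme value."""
    n = len(data)
    mean = sum(data) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    extreme = max(data, key=lambda x: abs(x - mean))
    return abs(extreme - mean) / s, extreme

def grubbs_outlier(data, g_crit):
    """Return the extreme value when G exceeds the tabulated critical
    value for the chosen n and confidence level, else None."""
    g, extreme = grubbs_statistic(data)
    return extreme if g > g_crit else None
```

As the abstract notes, the test targets one external value at a time; in practice it is applied iteratively, removing a confirmed outlier and re-testing the remaining sample.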
Keywords: Multi-objective decision support, analysis, data validation, freight delivery, multi-modal transportation, genetic programming methods.
25 The Effect of Information vs. Reasoning Gap Tasks on the Frequency of Conversational Strategies and Accuracy in Speaking among Iranian Intermediate EFL Learners
Authors: Hooriya Sadr Dadras, Shiva Seyed Erfani
Abstract:
Speaking skills merit meticulous attention from both learners and teachers. In particular, accuracy is a critical component in guaranteeing that messages are conveyed through conversation, because a wrongful change may adversely alter the content and purpose of the talk. Different types of tasks have served teachers in meeting numerous educational objectives. Besides, negotiation of meaning and the use of different strategies have been areas of concern in socio-cultural theories of SLA. Negotiation of meaning is among the conversational processes that have a crucial role in facilitating the understanding and expression of meaning in a given second language. Conversational strategies are used during interaction when there is a breakdown in communication, leading the interlocutor to attempt to remedy the gap through talk. Therefore, this study was an attempt to investigate whether there was any significant difference between the effect of reasoning gap tasks and information gap tasks on, on one hand, the frequency of conversational strategies used in negotiation of meaning in classrooms and, on the other, the speaking accuracy of Iranian intermediate EFL learners. After a pilot study to check the practicality of the treatments, at the outset of the main study the Preliminary English Test was administered to ensure the homogeneity of 87 out of 107 participants, who attended the intact classes of a 15-session term in one control and two experimental groups. Also, the speaking sections of the PET were used as pretest and posttest to examine their speaking accuracy. The tests were recorded and transcribed, and speaking accuracy was measured as the percentage of clauses with no grammatical errors among the total clauses produced. In all groups, the grammatical points of accuracy were instructed and the use of conversational strategies was practiced.
Then, different kinds of reasoning gap tasks (matchmaking, deciding on a course of action, and working out a timetable) and information gap tasks (restoring an incomplete chart, spotting the differences, arranging sentences into stories, and a guessing game) were employed in the experimental groups during the treatment sessions, and the students were required to practice conversational strategies when doing the speaking tasks. The conversations throughout the term were recorded and transcribed to count the frequency of the conversational strategies used in all groups. The results of the statistical analysis demonstrated that applying both the reasoning gap tasks and the information gap tasks significantly affected the frequency of conversational strategies used in negotiation. Of the two, the reasoning gap tasks had a more significant impact on encouraging the negotiation of meaning and increased the number of conversational strategies used every session. The findings also indicated that both task types could help learners significantly improve their speaking accuracy; here, the reasoning gap tasks were more effective than the information gap tasks in improving the learners' speaking accuracy.
Keywords: Accuracy in speaking, conversational strategies, information gap tasks, reasoning gap tasks.
24 Detecting Tomato Flowers in Greenhouses Using Computer Vision
Authors: Dor Oppenheim, Yael Edan, Guy Shani
Abstract:
This paper presents an image analysis algorithm to detect and count yellow tomato flowers in a greenhouse with uneven illumination, complex growth conditions and different flower sizes. The algorithm is designed to be employed on a drone that flies in greenhouses to accomplish several tasks, such as pollination and yield estimation. Detecting the flowers can provide useful information for the farmer, such as the number of flowers in a row and the number of flowers that were pollinated since the last visit to the row. The developed algorithm is designed to handle the real-world difficulties in a greenhouse, which include varying lighting conditions, shadowing, and occlusion, while considering the computational limitations of the simple processor on the drone. The algorithm identifies flowers using an adaptive global threshold, segmentation over the HSV color space, and morphological cues. The adaptive threshold divides the images into darker and lighter images; segmentation on hue, saturation and value is then performed accordingly, and classification is done according to the size and location of the flowers. 1069 images of greenhouse tomato flowers were acquired in a commercial greenhouse in Israel, using two different RGB cameras: an LG G4 smartphone and a Canon PowerShot A590. The images were acquired from multiple angles and distances and were sampled manually at various times of day to obtain varying lighting conditions. Ground truth was created by manually tagging approximately 25,000 individual flowers in the images. Sensitivity analyses on the acquisition angle, the time of day, the camera and the thresholding type were performed. Precision, recall and the derived F1 score were calculated. Results indicate better performance for the view angle facing the flowers than for any other angle, and acquiring images in the afternoon gave the best precision and recall.
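The color-segmentation core reduces to an HSV window test per pixel; a sketch using the hue window the study found best-performing (0.12-0.18), with illustrative saturation and value floors that are assumptions, not values from the paper:

```python
import colorsys

def flower_mask(rgb_pixels, hue_lo=0.12, hue_hi=0.18, min_s=0.3, min_v=0.3):
    """Mark pixels whose hue falls in the yellow-flower window.
    rgb_pixels: iterable of (r, g, b) tuples with 8-bit channels."""
    mask = []
    for r, g, b in rgb_pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        mask.append(hue_lo <= h <= hue_hi and s >= min_s and v >= min_v)
    return mask
```

In the full pipeline this per-pixel test runs after the adaptive global threshold has sorted images into darker and lighter groups, and the resulting mask is then filtered by the morphological size/location cues.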
Applying a global adaptive threshold improved the median F1 score by 3%. Results showed no difference between the two cameras used. Using hue values of 0.12-0.18 in the segmentation process provided the best results in precision and recall, and the best F1 score. The precision and recall averages for all the images when using these values were 74% and 75% respectively, with an F1 score of 0.73. Further analysis showed a 5% increase in precision and recall when analyzing images acquired in the afternoon and from the front viewpoint.
Keywords: Agricultural engineering, computer vision, image processing, flower detection.
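The two-stage segmentation described above (an adaptive brightness split, then an HSV hue band of 0.12-0.18) can be sketched minimally as follows; the 0.5 brightness split point and the per-pixel interface are illustrative assumptions, not values from the paper:

```python
import colorsys

# Hue band reported in the abstract as giving the best precision/recall.
YELLOW_HUE_RANGE = (0.12, 0.18)

def classify_image(pixels):
    """Sketch of the two-stage flower segmentation.

    pixels: list of (r, g, b) tuples with channels in [0, 1].
    Returns the image's brightness group and a per-pixel flower mask.
    """
    # Stage 1: adaptive global threshold splits images into darker/lighter
    # groups (the 0.5 split point here is an assumed, illustrative value).
    mean_v = sum(max(p) for p in pixels) / len(pixels)
    group = "darker" if mean_v < 0.5 else "lighter"

    # Stage 2: HSV segmentation - keep pixels whose hue falls in the
    # yellow band; size/location classification would follow in the paper.
    mask = []
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        mask.append(YELLOW_HUE_RANGE[0] <= h <= YELLOW_HUE_RANGE[1])
    return group, mask
```

Pure yellow (hue ≈ 0.167) falls inside the reported band, while red (hue 0) does not, so the mask separates flower-colored pixels from background.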
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2367
23 Enhancing Agricultural Sustainability and Food Security in Somalia: Addressing Climate Change Challenges
Authors: Ahmed A. Hassan
Abstract:
The agriculture industry in Somalia employs a large portion of the country's workforce. For many years, Somalia was known for its notable agricultural production, the key sector that fuels the country's economy. Due to decades of civil conflict, poor administration, neglect, and a string of natural calamities, the Somali agricultural industry has suffered significant damage. The irrigation systems along the Juba and Shabelle, the country's two major rivers, have failed and deteriorated. Crop output has decreased because of ongoing drought, poor agricultural techniques, desertification, and the exodus of rural people to neighboring nations. With crisis levels of hunger and malnutrition brought on by climate change, Somalia has become one of the world's most food-insecure countries. Additionally, there is strong evidence that climate change, particularly in Somalia and other East African nations, has exacerbated civil wars across Africa. The El Niño/Southern Oscillation, which results in drier and warmer weather in tropical regions, may have contributed to numerous civil wars. An increase in temperature is also believed to raise the risk of internal armed conflict in sub-Saharan African nations. This paper examines Somalia's present extension programs, lists the challenges the nation's agricultural industry faces, and discusses the effects of climate change. Improvement measures are advised based on the analysis presented in the paper. This article's major goals are to highlight the serious challenges that Somali farmers face and to offer potential solutions for achieving sustainable agriculture and food security through the worst of climate change. Farmers, legislators, decision-makers, and academics may find the material in this article useful in developing credible plans and policies and in establishing research and extension programs.
With improved extension systems, better management, encouraging public investments, and an enabling climate, Somalia's agricultural industry can increase its resilience, the quality of life of its population, and the safety and added value of its goods. Offshore and coastal fisheries can contribute more to sector growth and can return to, and surpass, their impressive pre-war output and export levels.
Keywords: Sustainable agriculture, innovation, land use, climate change, farm management, drought management, resilience, agri-business, agri-extension, farmer field schools, agricultural development.
22 Emergence of Fluoroquinolone Resistance in Pigs, Nigeria
Authors: Igbakura I. Luga, Alex A. Adikwu
Abstract:
A comparison of resistance to quinolones was carried out on isolates of Shiga toxin-producing Escherichia coli O157:H7 from cattle and mecA- and nuc-gene-harbouring Staphylococcus aureus from pigs. The isolates were separately tested in the first and current decades of the 21st century. The objective was to demonstrate the dissemination of resistance to this frontline class of antibiotics by bacteria from food animals and to bring to the limelight the spread of antibiotic resistance in Nigeria. A total of 10 isolates of E. coli O157:H7 and 9 of mecA- and nuc-gene-harbouring S. aureus were obtained following isolation, biochemical testing, serological identification using the Remel Wellcolex E. coli O157:H7 test, Shiga toxin-production screening of the E. coli O157:H7 using the verotoxin E. coli reverse passive latex agglutination (VTEC-RPLA) test, and molecular identification of the mecA and nuc genes in S. aureus. Detection of the mecA and nuc genes was carried out using the protocol of the Danish Technical University (DTU) with the following primers: mecA-1: 5'-GGGATCATAGCGTCATTATTC-3', mecA-2: 5'-AACGATTGTGACACGATAGCC-3', nuc-1: 5'-TCAGCAAATGCATCACAAACAG-3', nuc-2: 5'-CGTAAATGCACTTGCTTCAGG-3' for the mecA and nuc genes, respectively. The nuc gene confirms the identity of the S. aureus isolates, while the mecA gene marks them as methicillin-resistant and therefore pathogenic to man. The fluoroquinolones used in the antibiotic resistance testing were norfloxacin (10 µg) and ciprofloxacin (5 µg) for the E. coli O157:H7 isolates and ciprofloxacin (5 µg) for the S. aureus isolates. Susceptibility was tested using the disk diffusion method on Mueller-Hinton agar. Fluoroquinolone resistance was not detected in the isolates of E. coli O157:H7 from cattle. However, 44% (4/9) of the S. aureus isolates were resistant to ciprofloxacin. Resistance of up to 44% in isolates of mecA- and nuc-gene-harbouring S. aureus is compelling evidence of the rapid spread of antibiotic resistance from bacteria in food animals in Nigeria. Ciprofloxacin is a drug of choice for the treatment of typhoid fever; widespread resistance to it in pathogenic bacteria is therefore of great public health significance. The study concludes that antibiotic resistance in bacteria from food animals is on the increase in Nigeria. The National Agency for Food and Drug Administration and Control (NAFDAC) in Nigeria should implement the World Health Organization (WHO) global action plan on antimicrobial resistance. A good starting point would be coordinating the WHO, World Organisation for Animal Health (OIE), and Food and Agriculture Organization (FAO) tripartite draft antimicrobial resistance monitoring and evaluation (M&E) framework in Nigeria.
Keywords: Fluoroquinolone, Nigeria, resistance, Staphylococcus aureus.
21 An Induction Motor Drive System with Intelligent Supervisory Control for Water Networks Including Storage Tank
Authors: O. S. Ebrahim, K. O. Shawky, M. A. Badr, P. K. Jain
Abstract:
This paper describes an efficient, low-cost, high-availability induction motor (IM) drive system with intelligent supervisory control for water distribution networks that include a storage tank. To increase operational efficiency and reduce cost, the IM drive system includes a main pumping unit and an auxiliary voltage source inverter (VSI) fed unit. The main unit comprises a smart star/delta starter, a regenerative fluid clutch, a switched VAR compensator, and a hysteresis liquid-level controller. A three-state energy saving mode (ESM) is defined at no-load, and a logic algorithm is developed for best energetic cost reduction. To reduce voltage sag, the supervisory controller operates the switched VAR compensator upon motor starting. To provide a smart star/delta starter at low cost, a method based on current sensing is developed for interlocking, malfunction detection, and life-cycle counting, and is used to synthesize an improved fuzzy logic (FL) based availability assessment scheme. Furthermore, a recurrent neural network (RNN) full-state estimator is proposed to provide a sensor fault-tolerant algorithm for the feedback control. The auxiliary unit operates at low flow rates and improves the system's efficiency and flexibility for distributed generation during islanding mode. Compared with a doubly-fed IM, the proposed system ensures 30% working throughput under main motor/pump fault conditions, higher efficiency, and a marginal cost difference. This is critically important in the case of water networks. Theoretical analysis, computer simulations, a cost study, and an efficiency evaluation using timely cascaded energy-conservative systems are performed on an IM experimental setup to demonstrate the validity and effectiveness of the proposed drive and control.
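As an illustration of the hysteresis liquid-level controller mentioned above, a minimal sketch follows; the threshold values and the normalized-level interface are assumptions, since the abstract does not state the actual setpoints:

```python
# Hedged sketch of a hysteresis (bang-bang) liquid-level controller for the
# storage tank. Thresholds are illustrative, not from the paper.

class HysteresisLevelController:
    def __init__(self, low=0.3, high=0.9):
        self.low, self.high = low, high  # assumed normalized tank levels
        self.pump_on = False

    def update(self, level):
        # Start the pump when the tank drains below `low`; stop it once the
        # level rises above `high`. The dead band between the two thresholds
        # prevents rapid on/off cycling of the motor.
        if level <= self.low:
            self.pump_on = True
        elif level >= self.high:
            self.pump_on = False
        return self.pump_on
```

Inside the dead band the controller holds its previous state, which is what keeps motor switching (and hence wear and inrush events) to a minimum.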
Keywords: Artificial Neural Network, ANN, Availability Assessment, Cloud Computing, Energy Saving, Induction Machine, IM, Supervisory Control, Fuzzy Logic, FL, Pumped Storage.
20 Collaborative Environmental Management: A Case Study Research of Stakeholders’ Collaboration in the Nigerian Oil-producing Region
Authors: Favour Makuochukwu Orji, Yingkui Zhao
Abstract:
A myriad of environmental issues face the Nigerian industrial region, resulting from oil and gas production, mining, manufacturing, and domestic wastes. Amidst these, much effort has been directed at the Nigerian oil-producing regions because of their impact on the wider Nigerian economy. Although collaborative environmental management has been noted as an effective approach to managing environmental issues, little attention has been given to the roles and practices of stakeholders in effecting a collaborative environmental management framework for the Nigerian oil-producing region. This paper produces a framework to expand and deepen knowledge of stakeholders' collaborative roles in managing environmental issues in the Nigerian oil-producing region. The knowledge is derived from an analysis of stakeholders' practices, studied through multiple case studies using document analysis. Selected documents of key stakeholders, namely Nigerian government agencies, multinational oil companies, and host communities, were analyzed. Open and selective coding was employed manually during document analysis of data collected from the offices and websites of the stakeholders. The findings showed that the stakeholders have a range of roles, practices, interests, drivers, and barriers regarding their collaborative roles in managing environmental issues. While they have interests in efficient resource use, compliance with standards, sharing of responsibilities, generation of new solutions, and shared objectives, there is evidence of major barriers, including resource allocation, disjointed policy, ineffective monitoring, diverse socio-economic interests, lack of stakeholder commitment, and limited knowledge sharing. However, host communities hold deep concerns over the collaborative roles of stakeholders pursuing economic interests, particularly where government agencies and multinational oil companies are involved.
With these barriers and concerns, genuine stakeholder collaboration is found to be limited, and as a result, optimal environmental management practices and policies have not been successfully implemented in the Nigerian oil-producing region. A framework is produced that describes how practices characterizing collaborative environmental management might be employed to satisfy the stakeholders' interests. The framework recommends critical factors, based on the findings, which may guide collaborative environmental management in the oil-producing regions. The recommendations are designed to re-define the practices of stakeholders in managing environmental issues in the oil-producing regions, not as something wholly new, but as an approach essential for implementing a sustainable environmental policy. This research outcome may clarify areas for future research as well as contribute to industry guidance in the area of collaborative environmental management.
Keywords: Collaborative environmental management framework, document analysis, case studies, multinational oil companies, Nigerian oil-producing region, stakeholder analysis.
19 Microalbuminuria in Human Immunodeficiency Virus Infection and Acquired Immunodeficiency Syndrome
Authors: Sharan Badiger, Prema T. Akkasaligar, Patil LS, Manish Patel, Biradar MS
Abstract:
Human immunodeficiency virus infection and acquired immunodeficiency syndrome is a global pandemic, with cases reported from virtually every country, and continues to be a common infection in developing countries like India. Microalbuminuria is a manifestation of human immunodeficiency virus associated nephropathy. Therefore, microalbuminuria may be an early marker of human immunodeficiency virus associated nephropathy, and screening for its presence may be beneficial. A strikingly high prevalence of microalbuminuria among human immunodeficiency virus infected patients has been described in various studies. Risk factors for clinically significant proteinuria include African-American race, higher human immunodeficiency virus ribonucleic acid level, and lower CD4 lymphocyte count. The cardiovascular risk factors of increased systolic blood pressure and increased fasting blood sugar level are strongly associated with microalbuminuria in human immunodeficiency virus patients. These results suggest that microalbuminuria may be a sign of current endothelial dysfunction and micro-vascular disease, carrying a substantial risk of future cardiovascular disease events. Positive contributing factors include early kidney disease such as human immunodeficiency virus associated nephropathy, a marker of end-organ damage related to comorbidities of diabetes or hypertension, or more diffuse endothelial cell dysfunction. Nevertheless, after adjustment for non human immunodeficiency virus factors, human immunodeficiency virus infection itself remains a major risk factor: its presence is an independent risk factor for developing microalbuminuria.
Cardiovascular risk factors appeared to be stronger predictors of microalbuminuria than markers of human immunodeficiency virus severity. Persons with human immunodeficiency virus infection and microalbuminuria therefore appear to bear the burden of two separate types of damage: end-organ damage related to known vascular risk factors, and human immunodeficiency virus specific processes such as direct viral infection of kidney cells. The higher prevalence of microalbuminuria among the human immunodeficiency virus infected could be a harbinger of future increased risks of both kidney and cardiovascular disease. Further study defining the prognostic significance of microalbuminuria among human immunodeficiency virus infected persons will be essential. Microalbuminuria seems to be a predictor of cardiovascular disease in diabetic and non-diabetic subjects; hence, it can also be used for early detection of micro-vascular disease in human immunodeficiency virus positive patients and can help to diagnose the disease at the earliest stage.
Keywords: Acquired immunodeficiency syndrome, Human immunodeficiency virus, Microalbuminuria.
18 Digital Transformation of Lean Production: Systematic Approach for the Determination of Digitally Pervasive Value Chains
Authors: Peter Burggräf, Matthias Dannapfel, Hanno Voet, Patrick-Benjamin Bök, Jérôme Uelpenich, Julian Hoppe
Abstract:
The increasing digitalization of value chains can help companies to handle rising complexity in their processes and thereby reduce the steadily increasing planning and control effort in order to raise performance limits. Due to technological advances, companies face the challenge of smart value chains for the purpose of improvements in productivity, handling the increasing time and cost pressure, and the need for individualized production. Therefore, companies need to ensure quick and flexible decisions to create self-optimizing processes and, consequently, to make their production more efficient. Lean production, as the most commonly used paradigm for complexity reduction, reaches its limits when it comes to variant-flexible production and constantly changing market and environmental conditions. To lift the performance limits inbuilt in current value chains, new methods and tools must be applied. Digitalization provides the potential to derive these new methods and tools. However, companies lack the experience to harmonize different digital technologies. There is no practicable framework that instructs the transformation of current value chains into digitally pervasive value chains. Current research shows that a connection between lean production and digitalization exists. This link is based on factors such as people, technology, and organization. In this paper, the introduced method for the determination of digitally pervasive value chains takes the factors people, technology, and organization into account and extends existing approaches by a new dimension. It is the first systematic approach for the digital transformation of lean production and consists of four steps: The first step, 'target definition', describes the target situation and defines the depth of the analysis with regard to the inspection area and the level of detail.
The second step, 'analysis of the value chain', verifies the lean-ability of processes and places a special focus on the integration capacity of digital technologies in order to raise the limits of lean production. The third step, the 'digital evaluation process', ensures the usefulness of digital adaptions regarding their practicability and their integrability into the existing production system. Finally, the method defines actions to be performed based on the evaluation process and in accordance with the target situation. As a result, the validation and optimization of the proposed method in a German company from the electronics industry shows that the digital transformation of current value chains based on lean production raises their inbuilt performance limits.
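The four steps above can be sketched as a simple pipeline; all function names, dictionary keys, and the data passed between steps are illustrative assumptions, not details from the paper:

```python
# Hedged sketch of the four-step method for determining digitally pervasive
# value chains. Step names follow the abstract; everything else is assumed.

def target_definition(inspection_area, level_of_detail):
    # Step 1: describe the target situation and fix the analysis depth.
    return {"area": inspection_area, "detail": level_of_detail}

def analyze_value_chain(processes):
    # Step 2: verify the lean-ability of processes and their capacity to
    # integrate digital technologies (modeled here as a simple flag).
    return [p for p in processes if p.get("lean_able")]

def digital_evaluation(candidates):
    # Step 3: keep only digital adaptions that are practicable and
    # integrable into the existing production system.
    return [c for c in candidates if c.get("practicable") and c.get("integrable")]

def derive_actions(evaluated, target):
    # Step 4: define actions in accordance with the target situation.
    return [f"digitalize {c['name']} in {target['area']}" for c in evaluated]
```

A minimal run chains the steps in order, which mirrors how the method narrows a set of candidate processes down to concrete transformation actions.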
Keywords: Digitalization, digital transformation, lean production, Industrie 4.0, value chain.