Search results for: classification of matter
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3877


3187 Contemporary Matter on Communication and Information Education: Technological Lack

Authors: Sedat Cereci

Abstract:

This study investigates the character of communication, evaluates people's need for communication and information, examines the relation between communication and contemporary technology, and emphasizes the technological lack in communication education in many societies. Obtaining information and communicating are among people's main needs, and in the past people developed different instruments and techniques to learn and to communicate. Because of this social need, communication became a social matter, and governments contributed communication facilities and set up communication places for people to meet and communicate. The Industrial Revolution and subsequent technological developments also advanced communication techniques and provided numerous technological facilities for communication. Education worldwide uses developed technology in every department, and communication education in particular requires advanced technological facilities in schools. Many high schools and universities have communication departments, and most of them use contemporary technological facilities, but these are not sufficient. Communication departments in educational organizations in Turkey have computer classrooms, monitors, cameras, microphones, telephones, various software packages, and other equipment. Nevertheless, technological facilities and teaching methods remain insufficient in the face of contemporary developments: technology develops rapidly in response to people's hopes, technological facilities in education cannot keep pace, and people always hope for more.

Keywords: information, communication education, technology, technological lack, contemporary conditions, techniques

Procedia PDF Downloads 316
3186 Speaker Identification by Atomic Decomposition of Learned Features Using Computational Auditory Scene Analysis Principles in Noisy Environments

Authors: Thomas Bryan, Veton Kepuska, Ivica Kostanic

Abstract:

Speaker recognition is performed in high Additive White Gaussian Noise (AWGN) environments using principles of Computational Auditory Scene Analysis (CASA). CASA methods often classify sounds from images in the time-frequency (T-F) plane, using spectrograms or cochleagrams as the image. In this paper, atomic decomposition implemented by matching pursuit performs a transform from time-series speech signals to the T-F plane. The atomic decomposition creates a sparsely populated T-F vector in "weight space", where each populated T-F position contains an amplitude weight. The weight-space vector, along with the atomic dictionary, represents a denoised, compressed version of the original signal. The arrangement of the atomic indices in the T-F vector is used for classification. Unsupervised feature learning implemented by a sparse autoencoder learns a single dictionary of basis features from a collection of envelope samples from all speakers. The approach is demonstrated using pairs of speakers from the TIMIT data set. Pairs of speakers are selected randomly from a single district. Each speaker has 10 sentences: two are used for training and eight for testing. Atomic index probabilities are created for each training sentence and for each test sentence. Classification is performed by finding the lowest Euclidean distance between the probabilities from the training sentences and the test sentences. Training is done at a 30 dB Signal-to-Noise Ratio (SNR). Testing is performed at SNRs of 0 dB, 5 dB, 10 dB and 30 dB. The algorithm has a baseline classification accuracy of ~93% averaged over 10 pairs of speakers from the TIMIT data set. The baseline accuracy is attributable to the short sequences of training and test data as well as the overall simplicity of the classification algorithm. The accuracy is not affected by AWGN, and the method produces ~93% accuracy at 0 dB SNR.
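
As a rough illustration of the decomposition step, the sketch below runs greedy matching pursuit over a random unit-norm dictionary. This is only a minimal stand-in: the actual work uses learned Gabor-like atoms from a sparse autoencoder, so the dictionary, sizes, and atom count here are illustrative assumptions.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=10):
    """Greedy matching pursuit: decompose `signal` over unit-norm `dictionary`
    (atoms as columns), returning a sparse weight vector."""
    residual = signal.astype(float).copy()
    weights = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        correlations = dictionary.T @ residual   # inner product with every atom
        best = np.argmax(np.abs(correlations))   # best-matching atom index
        weights[best] += correlations[best]
        residual -= correlations[best] * dictionary[:, best]
    return weights

# Toy demo: a dictionary of random unit-norm atoms
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)
x = 3.0 * D[:, 5] + 0.5 * D[:, 42]               # signal built from two atoms
w = matching_pursuit(x, D, n_atoms=8)
```

The nonzero positions of `w` play the role of the populated T-F indices whose arrangement is then used for classification.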

Keywords: time-frequency plane, atomic decomposition, envelope sampling, Gabor atoms, matching pursuit, sparse dictionary learning, sparse autoencoder

Procedia PDF Downloads 288
3185 The International Classification of Functioning, Disability and Health (ICF) as a Problem-Solving Tool in Disability Rehabilitation and Education Alliance in Metabolic Disorders (DREAM) at Sultan Bin Abdul Aziz Humanitarian City: A Prototype for Reh

Authors: Hamzeh Awad

Abstract:

Disability is a complex worldwide phenomenon that is rising at a phenomenal rate and is caused by many different factors. Chronic diseases such as cardiovascular disease and diabetes can lead to mobility disability in particular and disability in general. The ICF is an integrative bio-psycho-social model of functioning and disability, considered by the World Health Organization (WHO) to be a reference for disability classification; its categories and core sets are used to classify a disorder's functional limitations. Specialist programs at Sultan Bin Abdul Aziz Humanitarian City (SBAHC), which provide both inpatient and outpatient services, have started to implement the ICF and use it as a problem-solving tool in rehabilitation. Diabetes is a leading contributing factor to disability and is considered epidemic in several Gulf countries, including the Kingdom of Saudi Arabia (KSA), where its prevalence continues to increase dramatically. Metabolic disorders, mainly diabetes, are not well covered in the rehabilitation field. The purpose of this study is to present to the research and clinical rehabilitation field both DREAM and the ICF as a framework in clinical and research settings in rehabilitation services, and to shed light on the use of the ICF as a problem-solving tool at SBAHC. There are synergies between the causes of disability and wider public health priorities in relation to both chronic disease and disability prevention. Therefore, there is a need for strong advocacy and understanding of the role of the ICF as a reference in rehabilitation settings in the Middle East if we wish to seize the opportunity to reverse current trends of acquired disability in the region.

Keywords: international classification of functioning, disability and health (ICF), prototype, rehabilitation and diabetes

Procedia PDF Downloads 349
3184 Aromatic Medicinal Plant Classification Using Deep Learning

Authors: Tsega Asresa Mengistu, Getahun Tigistu

Abstract:

Computer vision is an artificial intelligence subfield that allows computers and systems to retrieve meaning from digital images. It is applied in various fields, including self-driving cars, video surveillance, agriculture, quality control, health care, construction, the military, and everyday life. Aromatic and medicinal plants are botanical raw materials used in cosmetics, medicines, health foods, and other natural health products for therapeutic, aromatic, and culinary purposes. Herbal industries depend on these special plants, which, along with their products, not only serve as a valuable source of income for farmers and entrepreneurs but also provide industrial raw materials for export and thus valuable foreign exchange. There is a lack of technologies for the classification and identification of aromatic and medicinal plants in Ethiopia. Manual identification of plants is a tedious, time-consuming, labor-intensive, and lengthy process. For farmers, industry personnel, academics, and pharmacists, it is still difficult to identify the parts and usage of plants before ingredient extraction. To solve this problem, the researchers use a deep learning approach, a convolutional neural network, for the efficient identification of aromatic and medicinal plants. The objective of the proposed study is to identify aromatic and medicinal plant parts and usages using computer vision technology; the research therefore develops a model for the automatic classification of aromatic and medicinal plants. Morphological characteristics are still the most important tools for the identification of plants: leaves are the most widely used parts, besides the root, flower, fruit, latex, and bark. The study was conducted on aromatic and medicinal plants available at the Ethiopian Institute of Agricultural Research center, using an experimental research design.
The experiments use convolutional neural networks and transfer learning, with sigmoid activation in the last layer and rectified linear units (ReLU) in the hidden layers. The researchers obtained classification accuracies of 66.4% with a plain convolutional neural network, 67.3% with MobileNet, and 64% with the Visual Geometry Group (VGG) network.
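
A minimal forward-pass sketch of the layer choices described above (ReLU in the hidden layer, sigmoid at the output). The kernel, image size, and single-channel, single-filter setup are illustrative assumptions, not the study's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv2d(image, kernel):
    """Valid 2-D convolution (single channel, no padding)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# Forward pass: conv -> ReLU hidden activation -> dense sigmoid output
image = rng.standard_normal((32, 32))            # stand-in for a leaf image
kernel = rng.standard_normal((3, 3))
hidden = relu(conv2d(image, kernel))             # ReLU in the hidden layer
w_out = rng.standard_normal(hidden.size) * 0.01
score = sigmoid(hidden.ravel() @ w_out)          # sigmoid at the last layer
```

In practice the study's networks are trained end to end (including the transfer-learning backbones); this sketch only shows the activation placement.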

Keywords: aromatic and medicinal plants, computer vision, deep convolutional neural network

Procedia PDF Downloads 438
3183 Information and Communication Technology (ICT) Education Improvement for Enhancing Learning Performance and Social Equality

Authors: Heichia Wang, Yalan Chao

Abstract:

Social inequality is a persistent problem, and education is one way to address it. At present, vulnerable groups often have less geographical access to educational resources; communication equipment, however, is easier for them to obtain. Now that information and communication technology (ICT) has entered the field of education, we can take advantage of the convenience it provides, and the mobility it brings makes learning independent of time and place. With mobile learning, teachers and students can start discussions in an online chat room without the limitations of time or place. However, precisely because mobile learning is so convenient, people tend to address problems in short online texts that lack detailed information, in an environment that is not well suited to expressing ideas fully. The ICT education environment may therefore cause misunderstanding between teachers and students. In order for teachers and students to better understand each other's views, this study aims to analyze students' short texts and classify the students into several types of learning problems. In addition, this study attempts to extend short texts, compensating for possible omissions, by using external resources prior to classification. In short, by applying short text classification, this study can point out each student's learning problems and inform the instructor where the main focus of future courses should be, thus improving the ICT education environment. To achieve these goals, this research uses a convolutional neural network (CNN) to analyze short discussion content between teachers and students in an ICT education environment and divides students into several main types of learning-problem groups to facilitate answering student problems.
This study will further cluster sub-categories of each major learning type to indicate specific problems for each student. Unlike most neural network approaches, this study attempts to extend short texts with external resources before classifying them, to improve classification performance. In the empirical process, the chat records between teachers and students and the course materials are pre-processed, and a system is set up to compare the most similar parts of the teaching material with each student's chat history to improve classification performance. The short text classification function then uses a CNN to classify the enriched chat records into several major learning problems based on theory-driven titles. By applying these modules, this research hopes to clarify the main learning problems of students and show teachers what they should focus on in future teaching.
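
The retrieval step that matches a student's short chat text against course material can be sketched with a plain bag-of-words cosine similarity. The study does not specify its expansion mechanism at this level of detail, so the texts, tokenization, and similarity measure below are assumptions for illustration.

```python
import numpy as np
from collections import Counter

def bow_vector(text, vocab):
    """Bag-of-words count vector over a fixed vocabulary."""
    counts = Counter(text.lower().split())
    return np.array([counts[w] for w in vocab], dtype=float)

def most_similar(query, passages):
    """Return the course passage with highest cosine similarity to `query`."""
    vocab = sorted({w for t in [query] + passages for w in t.lower().split()})
    q = bow_vector(query, vocab)
    best, best_sim = None, -1.0
    for p in passages:
        v = bow_vector(p, vocab)
        sim = q @ v / (np.linalg.norm(q) * np.linalg.norm(v) + 1e-12)
        if sim > best_sim:
            best, best_sim = p, sim
    return best

material = [
    "recursion calls a function from within itself",
    "a for loop repeats a block a fixed number of times",
]
chat = "my recursion function never stops"
expansion = most_similar(chat, material)   # material used to enrich the short text
```

The retrieved passage would then be appended to the chat record before it is fed to the CNN classifier.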

Keywords: ICT education improvement, social equality, short text analysis, convolutional neural network

Procedia PDF Downloads 127
3182 Yield and Sward Composition Responses of Natural Grasslands to Treatments Meeting Sustainability

Authors: D. Díaz Fernández, I. Csízi, K. Pető, G. Nagy

Abstract:

An outstanding share of animal products is based on grasslands, as grassland ecosystems can be found all over the globe. In places where economical and successful crop production cannot be managed, grassland-based animal husbandry can be an efficient way of food production. In addition, these ecosystems play an important role in carbon sequestration and, with their rich flora and the fauna connected to it, in the conservation of biodiversity. The protection of nature and sustainable agriculture are receiving more and more attention in the European Union, but, looking at consumers' needs, the production of healthy food cannot be neglected either. For these reasons, the effects on grass yields and sward composition of two specific composts, officially authorized in organic farming, in Agri-environment Schemes, and in Natura 2000 programs, were investigated in a field trial. The investigation took place in Hungary, on a natural grassland on solonetz soil. Three rates of compost (10 t/ha, 20 t/ha, 30 t/ha) were tested on 3 m × 10 m experimental plots. Every treatment had four replications, and each type of compost also had four control plots; in this way, 32 experimental plots were included in the investigation. The pasture was harvested twice (in May and in September), and before the plots were cut, measurements of botanical composition were made. Samples for laboratory analysis were also taken. The dry matter yield of the pasture responded positively to the rates of compost. The increase in dry matter yield was partly due to positive changes in sward composition: the proportion of grass species with higher yield potential increased in the ground cover of the sward without crowding out valuable native species of diverse natural grasslands. The research results indicate that the use of organic compost can be an efficient way to increase grass yields sustainably.

Keywords: compost application, dry matter yield, native grassland, sward composition

Procedia PDF Downloads 248
3181 Probabilistic Crash Prediction and Prevention of Vehicle Crash

Authors: Lavanya Annadi, Fahimeh Jafari

Abstract:

Transportation brings immense benefits to society, but it also has its costs. These include the cost of infrastructure, personnel, and equipment, but also the loss of life and property in traffic accidents, delays due to traffic congestion, and various indirect costs such as air pollution. Much research has been done to identify the various factors that affect road accidents, such as road infrastructure, traffic, sociodemographic characteristics, land use, and the environment. The aim of this research is to predict the crash probability of vehicles in the United States using machine learning, focusing on natural and structural causes and excluding behavioural causes such as speeding. The factors considered range from weather conditions, precipitation, visibility, wind speed, wind direction, temperature, pressure, and humidity, to man-made road-structure features such as bumps, roundabouts, no-exit roads, turning loops, and give-way signs. Probabilities are divided into ten classes, and all predictions are based on supervised multiclass classification techniques. This study considers all crashes in all states as collected by the US government. To calculate the probability, the multinomial expected value was used and assigned as the classification label for the crash probability. We applied three different classification models: multiclass logistic regression, random forest, and XGBoost. The numerical results show that XGBoost achieved a 75.2% accuracy rate, indicating the part played by natural and structural causes in crashes. The paper also provides in-depth insights through exploratory data analysis.
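
The labeling step, taking a multinomial expected value over class labels and assigning it as the crash-probability class, can be sketched roughly as follows. The ten-class decile binning and the example probabilities are illustrative assumptions, not the study's actual construction.

```python
import numpy as np

def probability_class(p, n_classes=10):
    """Bin a crash probability in [0, 1] into one of `n_classes` labels (0-9)."""
    return min(int(p * n_classes), n_classes - 1)

def expected_class(class_probs):
    """Multinomial expected value over class labels, rounded to a label."""
    labels = np.arange(len(class_probs))
    return int(round(float(labels @ class_probs)))

# Toy distribution over the ten probability classes
probs = np.array([0.05, 0.05, 0.1, 0.3, 0.3, 0.1, 0.05, 0.02, 0.02, 0.01])
label = expected_class(probs)
```

A multiclass model (logistic regression, random forest, or XGBoost) would then be trained to predict these labels from the weather and road-structure features.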

Keywords: road safety, crash prediction, exploratory analysis, machine learning

Procedia PDF Downloads 109
3180 Automatic Classification of the Stand-to-Sit Phase in the TUG Test Using Machine Learning

Authors: Yasmine Abu Adla, Racha Soubra, Milana Kasab, Mohamad O. Diab, Aly Chkeir

Abstract:

Over the past several years, researchers have shown great interest in assessing the mobility of elderly people as a measure of their functional status. Usually, such an assessment is done by conducting tests that require the subject to walk a certain distance, turn around, and finally sit back down. This study aims to provide an at-home monitoring system that assesses the patient's status continuously; thus, we propose a technique to automatically detect when a subject sits down while walking at home. We utilized a Doppler radar system to capture the motion of the subjects. More than 20 features were extracted from the radar signals, out of which 11 were chosen based on their intraclass correlation coefficient (ICC > 0.75). A sequential floating forward selection wrapper was then applied to further narrow down the final feature vector. Finally, 5 features were introduced to a linear discriminant analysis classifier, achieving an accuracy of 93.75%, with a precision and recall of 95% and 90%, respectively.
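
For reference, the reported accuracy, precision, and recall follow directly from the detector's confusion counts; a quick sketch with toy labels (not the study's data):

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, precision, and recall for a binary stand-to-sit detector."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))   # true positives
    fp = np.sum((y_true == 0) & (y_pred == 1))   # false positives
    fn = np.sum((y_true == 1) & (y_pred == 0))   # false negatives
    accuracy = np.mean(y_true == y_pred)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return accuracy, precision, recall

# Toy labels: 1 = stand-to-sit phase detected
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]
acc, prec, rec = binary_metrics(y_true, y_pred)
```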

Keywords: Doppler radar system, stand-to-sit phase, TUG test, machine learning, classification

Procedia PDF Downloads 160
3179 Automatic Post Stroke Detection from Computed Tomography Images

Authors: C. Gopi Jinimole, A. Harsha

Abstract:

For detecting strokes, a Computed Tomography (CT) scan is the preferred modality for imaging abnormalities or infarction in the brain. Because of problems with the window settings used to evaluate brain CT images, such images perform very poorly for early-stage infarction detection. This paper presents an automatic estimation method for the window settings of CT images to give proper contrast to the hyper-infarction present in the brain. In the proposed work, the window width is estimated automatically for each slice, and the window centre is changed to a new value of 31 HU, which is the average of the HU values of the grey matter and white matter in the brain. The automatic window-width estimation is based on the average of the median of statistical central moments. With the newly suggested window centre and the estimated window width, hyper-infarction or post-stroke regions in CT brain images are properly detected. The proposed approach assists radiologists in CT evaluation of early quantitative signs of delayed stroke, so that the severe hemorrhage it can lead to may be prevented by providing timely medication to patients.
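
The windowing operation itself, with the proposed centre of 31 HU, amounts to a linear mapping of HU values followed by clipping. The width below is an assumed placeholder, since the method estimates the window width per slice:

```python
import numpy as np

def apply_window(hu, centre=31.0, width=80.0):
    """Map HU values into the [0, 1] display range using window centre/width.
    Values below centre - width/2 clip to 0; above centre + width/2 clip to 1."""
    lo, hi = centre - width / 2.0, centre + width / 2.0
    return np.clip((hu - lo) / (hi - lo), 0.0, 1.0)

# Toy CT slice values in HU; centre fixed at 31 HU as in the proposed method
slice_hu = np.array([-50.0, 0.0, 31.0, 60.0, 200.0])
display = apply_window(slice_hu)
```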

Keywords: computed tomography (CT), hyper infarction or post stroke region, Hounsfield unit (HU), window centre (WC), window width (WW)

Procedia PDF Downloads 202
3178 Using Machine Learning to Predict Answers to Big-Five Personality Questions

Authors: Aadityaa Singla

Abstract:

The big five personality traits are openness, conscientiousness, extraversion, agreeableness, and neuroticism. To gain insight into their personality, many people turn to assessments based on these categories, each of which has a different meaning and set of characteristics. This information is important not only to individuals but also to career professionals and psychologists, who can use it for candidate assessment or job recruitment. The links between AI and psychology have been well studied in cognitive science, but this remains a rather novel development. Various AI classification models can accurately predict the answer to a personality question from ten input questions, in contrast with the roughly one hundred questions a person normally has to answer to gain a complete picture of their five personality traits. To approach this problem, various AI classification models were used on a dataset to predict what a user would answer, and each model's prediction was compared to the actual response. There are five answer choices (a 20% chance of a correct guess), and the models exceed that value to different degrees, demonstrating their significance. An MLP classifier, a decision tree, a linear model, and k-nearest neighbors obtained test accuracies of 86.6%, 54.6%, 47.9%, and 52.1%, respectively. These approaches show that there is potential for more nuanced predictions to be made regarding personality in the future.
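
One of the models used, k-nearest neighbors, can be sketched in a few lines. The synthetic answers and the way the held-out label is constructed below are assumptions for illustration only, not the study's dataset:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Predict one respondent's answer choice via k-nearest neighbours
    over their other question responses (Euclidean distance, majority vote)."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dists)[:k]
    return Counter(y_train[nearest]).most_common(1)[0][0]

# Toy data: rows = respondents' answers (1-5) to ten questions;
# the label is their answer to an eleventh, held-out question
rng = np.random.default_rng(2)
X = rng.integers(1, 6, size=(30, 10))
y = X[:, 0]                      # toy label, correlated with the first answer
pred = knn_predict(X, y, X[7], k=3)
```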

Keywords: machine learning, personality, big five personality traits, cognitive science

Procedia PDF Downloads 144
3177 Quantum Decision Making with Small Sample for Network Monitoring and Control

Authors: Tatsuya Otoshi, Masayuki Murata

Abstract:

With the development and diversification of applications on the Internet, applications that require high responsiveness, such as video streaming, are becoming mainstream. Application responsiveness is not only a matter of communication delay but also of the time required to grasp changes in network conditions. The tradeoff between accuracy and measurement time is a challenge in network control. People make countless decisions all the time, and those decisions seem to resolve tradeoffs between time and accuracy: when making decisions, people are known to make appropriate choices based on relatively small samples. Although there have been various studies on models of human decision-making, a model that integrates various cognitive biases, called "quantum decision-making", has recently attracted much attention. However, the modeling of small samples has not been examined much so far. In this paper, we extend the model of quantum decision-making to model decision-making with a small sample. In the proposed model, the state is updated by value-based probability amplitude amplification. By analytically obtaining a lower bound on the number of samples required for decision-making, we show that decision-making with a small number of samples is feasible.
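
Standard amplitude amplification, the building block the proposed value-based update extends, can be simulated classically. The 16-option search and 3 iterations below are illustrative choices, not the paper's model:

```python
import numpy as np

def grover_step(amps, target):
    """One Grover iteration: phase-flip the target amplitude,
    then invert all amplitudes about their mean (diffusion)."""
    amps = amps.copy()
    amps[target] = -amps[target]     # oracle: mark the target option
    mean = amps.mean()
    return 2 * mean - amps           # diffusion: reflect about the mean

n = 16
amps = np.full(n, 1 / np.sqrt(n))    # uniform superposition over 16 options
target = 3
for _ in range(3):                   # ~ (pi/4) * sqrt(16) iterations
    amps = grover_step(amps, target)
p_target = amps[target] ** 2         # probability of choosing the target
```

After three iterations the target's probability rises from 1/16 to above 0.95, which is the sense in which few "samples" (iterations) suffice.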

Keywords: quantum decision making, small sample, MPEG-DASH, Grover's algorithm

Procedia PDF Downloads 78
3176 The Increasing of Unconfined Compression Strength of Clay Soils Stabilized with Cement

Authors: Ali̇ Si̇nan Soğanci

Abstract:

Cement stabilization is a ground improvement method applied worldwide to increase the strength of clayey soils. Using cement has many advantages compared to other stabilization methods: cement stabilization can be done quickly, the cost is low, and it creates a more durable structure with the soil. Cement can be used in the treatment of a wide variety of soils; the best results of cement stabilization have been seen on silts as well as coarse-grained soils. In this study, blocks of clay were taken from the route of the 125 km Apa-Hotamış conveyance channel to be built in Konya, which will carry water at 70 m³/s from the Mavi tunnel to the Hotamış storage. First, the index properties of the clay samples were determined according to the Unified Soil Classification System. The experimental program was carried out on compacted soil specimens with 0%, 7%, 15% and 30% cement additives, and the results of the unconfined compression tests were discussed. These results indicated an increase in strength with increasing cement content.

Keywords: cement stabilization, unconfined compression test, clayey soils, unified soil classification system

Procedia PDF Downloads 420
3175 Classification of Health Information Needs of Hypertensive Patients in the Online Health Community Based on Content Analysis

Authors: Aijing Luo, Zirui Xin, Yifeng Yuan

Abstract:

Background: With the rapid development of the online health community, more and more patients and families are seeking health information on the Internet. Objective: This study aimed to discuss how to fully reveal the health information needs expressed by hypertensive patients in their questions in the online environment. Methods: This study randomly selected 1,000 text records from the question data of hypertensive patients from 2008 to 2018 collected from the website www.haodf.com and constructed a classification system through literature research and content analysis. The study identified the background characteristics and questioning intention of each hypertensive patient based on the patient's question and used co-occurrence network analysis to explore the features of the health information needs of hypertensive patients. Results: The classification system for the health information needs of patients with hypertension is composed of 9 parts: 355 kinds of drugs, 395 kinds of symptoms and signs, 545 kinds of tests and examinations, 526 kinds of demographic data, 80 kinds of diseases, 37 kinds of risk factors, 43 kinds of emotions, 6 kinds of lifestyles, and 49 kinds of questions. The characteristics of the explored online health information needs of hypertensive patients include: i) more than 49% of patients describe features such as drugs, symptoms and signs, tests and examinations, demographic data, and diseases; ii) these groups are most concerned about treatment (77.8%), followed by diagnosis (32.3%); iii) 65.8% of hypertensive patients ask doctors several questions at the same time. In particular, 28.3% of the patients are very concerned about how to adjust their medication and ask other treatment-related questions at the same time, including drug side effects, whether to take drugs, and how to treat a disease; furthermore, 17.6% of the patients consult doctors online about the causes of their clinical findings, including the relationship between the clinical findings and a disease, the treatment of a disease, medication, and examinations. Conclusion: In the online environment, the health information needs expressed by Chinese hypertensive patients to doctors are personalized; that is, patients with different background features express different questioning intentions to doctors. The classification system constructed in this study can guide health information service providers in the construction of online health resources and help solve the problem of information asymmetry in communication between doctors and patients.
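
The pair counting underlying a co-occurrence network analysis can be sketched as follows. The toy categories and annotated questions are assumptions for illustration, not the study's data:

```python
from collections import Counter
from itertools import combinations

# Toy annotated questions: each is the set of need categories identified in it
questions = [
    {"drugs", "symptoms", "treatment"},
    {"drugs", "treatment"},
    {"tests", "diagnosis"},
    {"drugs", "symptoms", "diagnosis"},
]

cooccur = Counter()
for cats in questions:
    for a, b in combinations(sorted(cats), 2):   # every unordered category pair
        cooccur[(a, b)] += 1

top_pair, top_count = cooccur.most_common(1)[0]
```

Each pair count becomes an edge weight in the co-occurrence network, and heavily weighted edges reveal which need categories patients express together.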

Keywords: online health community, health information needs, hypertensive patients, doctor-patient communication

Procedia PDF Downloads 118
3174 Effect of Physicochemical Treatments on the Characteristics of Activated Sludge

Authors: Hammadi Larbi

Abstract:

The treatment of wastewater in sewage plants usually results in the formation of a large amount of sludge, which appears at the outlet of the treatment plant as a viscous fluid loaded with a high concentration of dry matter. This sludge production presents environmental, ecological, and economic risks, so it is necessary to find solutions for minimizing them. In the present article, the effects of hydrogen peroxide, thermal treatment, and quicklime on the characteristics of the activated sludge produced in an urban wastewater plant were evaluated in order to avoid such risks. The study shows that increasing the dose of H2O2 from 0 to 0.4 g causes an increase in the solubilization rate of COD from 12% to 45% and a reduction in the organic matter content of the sludge (VM/SM) from 74% to 36%. The results also show that the efficiency of the heat treatment, for a treatment time of 40 min, is 47% at 80 °C, 51.82% at 100 °C, 76.30% at 120 °C, and 79.38% at 140 °C. Treatment of the sludge with quicklime gives an optimum efficiency of 70.62%. Increasing the temperature from 80 °C to 140 °C raised the pH of the sludge from 7.12 to 9.59, and increasing the dose of quicklime from 0 g/l to 1 g/l raised the pH from 7.12 to 12.06 while increasing the solubilization of COD from 0% to 70.62%.

Keywords: activated sludge, hydrogen peroxide, thermal treatment, quicklime

Procedia PDF Downloads 102
3173 Early-Warning Lights Classification Management System for Industrial Parks in Taiwan

Authors: Yu-Min Chang, Kuo-Sheng Tsai, Hung-Te Tsai, Chia-Hsin Li

Abstract:

This paper presents the early-warning lights classification management system for industrial parks promoted by the Taiwan Environmental Protection Administration (EPA) since 2011, including the definition of each early-warning light, objectives, the action program, and accomplishments. All 151 industrial parks in Taiwan were classified into four early-warning lights (red, orange, yellow, and green) for carrying out the respective pollution management according to the monitoring data on soil and groundwater quality, regulatory compliance, and regulatory listing as a control site or remediation site. The Taiwan EPA set up a priority list of industrial parks with high pollution potential and investigated their soil and groundwater quality based on the results of the light classification and a pollution potential assessment. In 2011-2013, 44 industrial parks were selected for different investigations, such as the establishment of early-warning groundwater well networks and pollution investigation/verification for the red- and orange-light industrial parks, and environmental background surveys for the yellow-light industrial parks. In 22 of them, pollutant concentrations were newly or repeatedly confirmed to exceed the soil or groundwater pollution control standards. Thus, further investigation, groundwater use restrictions, listing as a pollution control site or remediation site, and pollutant isolation measures were implemented by the local environmental protection and industry competent authorities, and the early-warning lights of those industrial parks were proposed to be adjusted up to orange or red.
Up to the present, a preliminary positive effect of the soil and groundwater quality management system for industrial parks has been noticed in several aspects, such as environmental background information collection, early warning of pollution risk, pollution investigation and control, information integration and application, and inter-agency collaboration. Finally, the work and goal of self-initiated quality management of industrial parks will be pursued on the basis of inter-agency collaboration through the classified early-warning and management lights system, as well as regular announcements of the status of each industrial park.

Keywords: industrial park, soil and groundwater quality management, early-warning lights classification, SOP for reporting and treatment of monitored abnormal events

Procedia PDF Downloads 325
3172 Urea Treatment of Low Dry Matter Oat Silage

Authors: Noor-ul-Ain, Muhammad Tahir Khan, Kashif Khan, Adeela Ajmal, Hamid Mustafa

Abstract:

The objective of this study was to evaluate the preservative and upgrading potential of urea (70 g/kg DM) added to high-moisture oat silage in a laboratory-scale trial; 95% of the urea was hydrolysed. Microbial activity, measured by pH, volatile fatty acid (VFA) and lactate production, was reduced (P<0.001) by the urea addition. The pH of the untreated oat silage was 5.7 and increased to 8.0 on average after treatment, while the VFA concentration decreased. The relative proportions of the fermentation acids changed after urea addition, with the acetate and butyrate proportions increasing and the propionate and lactate proportions decreasing. The addition of urea to oat silage increased (P<0.001) the water-soluble and ammonium nitrogen of the forage; these nitrogen fractions represented more than 40% of total nitrogen. After urea addition, the total nitrogen content of the oat silage increased from 21.0 g/kg DM to 28 g/kg DM. Application of urea at a rate of 70 g/kg DM also significantly increased (P<0.001) the in situ degradation of neutral-detergent fibre after 48 h of rumen incubation (NDF-situ): NDF-situ was 200 g/kg NDF higher in oat forage ensiled with urea than in oat forage ensiled without urea. Oat silage can thus be effectively preserved and upgraded by ensiling with 70 g urea/kg dry matter. Further studies are required to evaluate the voluntary intake of this forage.

Keywords: oat, silage, urea, pH, forage

Procedia PDF Downloads 467
3171 Preparation on Sentimental Analysis on Social Media Comments with Bidirectional Long Short-Term Memory Gated Recurrent Unit and Model Glove in Portuguese

Authors: Leonardo Alfredo Mendoza, Cristian Munoz, Marco Aurelio Pacheco, Manoela Kohler, Evelyn Batista, Rodrigo Moura

Abstract:

Natural Language Processing (NLP) techniques are increasingly powerful at interpreting a person's feelings and reactions toward a product or service. Sentiment analysis has become a fundamental tool for this interpretation but has few applications in languages other than English. This paper presents a sentiment classification for Portuguese based on a corpus of social-network comments in Portuguese. A word-embedding representation was used with a 50-dimension GloVe model pre-trained on a corpus entirely in Portuguese. For the classification itself, bidirectional Long Short-Term Memory (BI-LSTM) and bidirectional Gated Recurrent Unit (GRU) models were used, reaching accuracies of up to 99.1%.
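The gating mechanism at the heart of the GRU model used here can be sketched in a few lines of NumPy. This is an illustrative toy (random weights and a random 8-token "comment" standing in for real GloVe vectors; all dimensions are arbitrary choices), not the authors' trained network:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step: x is the current token vector (e.g. a 50-d GloVe
    embedding), h_prev is the previous hidden state."""
    z = sigmoid(Wz @ x + Uz @ h_prev)               # update gate
    r = sigmoid(Wr @ x + Ur @ h_prev)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev))   # candidate state
    return (1 - z) * h_prev + z * h_tilde           # interpolated new state

rng = np.random.default_rng(0)
d_in, d_h = 50, 16                                  # 50-d embeddings, 16-d state
params = [rng.normal(scale=0.1, size=s)
          for s in [(d_h, d_in), (d_h, d_h)] * 3]   # Wz,Uz, Wr,Ur, Wh,Uh

h = np.zeros(d_h)
for token_vec in rng.normal(size=(8, d_in)):        # a toy 8-token "comment"
    h = gru_step(token_vec, h, *params)
print(h.shape)  # (16,)
```

A bidirectional GRU, as in the paper, would run a second pass over the reversed token sequence and concatenate the two final states before the classification layer.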

Keywords: natural processing language, sentiment analysis, bidirectional long short-term memory, BI-LSTM, gated recurrent unit, GRU

Procedia PDF Downloads 158
3170 Classification of Digital Chest Radiographs Using Image Processing Techniques to Aid in Diagnosis of Pulmonary Tuberculosis

Authors: A. J. S. P. Nileema, S. Kulatunga, S. H. Palihawadana

Abstract:

A computer-aided detection (CAD) system was developed for the diagnosis of pulmonary tuberculosis (PTB) from digital chest X-rays, using MATLAB image processing techniques and a statistical approach. The study comprised 200 digital chest radiographs collected from the National Hospital for Respiratory Diseases - Welisara, Sri Lanka. Pre-processing was done to remove identification details. Lung fields were segmented and then divided into four quadrants (right upper, left upper, right lower, and left lower) using image processing techniques in MATLAB. Contrast, correlation, homogeneity, energy, entropy, and maximum probability texture features were extracted using the gray-level co-occurrence matrix method. Descriptive statistics and normal distribution analysis were performed using SPSS. Based on the radiologists’ interpretation, chest radiographs were classified manually into PTB-positive (PTBP) and PTB-negative (PTBN) classes. Features with a standard normal distribution were analyzed using an independent-sample t-test between PTBP and PTBN chest radiographs. Among the six features tested, contrast, correlation, energy, entropy, and maximum probability showed a statistically significant difference between the two classes at the 95% confidence level and could therefore be used to classify chest radiographs for PTB diagnosis.
Using the resulting value ranges of the five normally distributed texture features, a classification algorithm was then defined to classify each quadrant image. If the texture feature values of the quadrant under test fall within the defined region, the quadrant is identified as PTBP (abnormal), labeled ‘Abnormal’ in red, and its border is highlighted in red; if they fall outside the defined value range, it is identified as PTBN (normal) and labeled ‘Normal’ in blue, with no change to the image outline. The developed classification algorithm showed a high sensitivity of 92%, which makes it an efficient CAD system, with a modest specificity of 70%.
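The gray-level co-occurrence matrix features named above can be computed directly in NumPy. The sketch below builds a horizontal, distance-1 GLCM for a random stand-in "quadrant" and derives the six features; the offset, the 8 gray levels, and the image itself are illustrative assumptions, not the study's actual settings:

```python
import numpy as np

def glcm(img, levels=8):
    """Gray-level co-occurrence matrix for a horizontal, distance-1 offset."""
    m = np.zeros((levels, levels))
    for i, j in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        m[i, j] += 1
    return m / m.sum()                              # joint probabilities

def texture_features(p):
    i, j = np.indices(p.shape)
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * p).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * p).sum())
    nz = p[p > 0]                                   # avoid log(0) in entropy
    return {
        "contrast": ((i - j) ** 2 * p).sum(),
        "homogeneity": (p / (1 + (i - j) ** 2)).sum(),
        "energy": (p ** 2).sum(),
        "entropy": -(nz * np.log2(nz)).sum(),
        "max_probability": p.max(),
        "correlation": ((i - mu_i) * (j - mu_j) * p).sum() / (sd_i * sd_j),
    }

rng = np.random.default_rng(1)
quadrant = rng.integers(0, 8, size=(64, 64))        # stand-in for a lung quadrant
feats = texture_features(glcm(quadrant))
print(sorted(feats))
```

In the study's scheme, each quadrant's feature vector would then be compared against the value ranges established for the PTBP class.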

Keywords: chest radiographs, computer aided detection, image processing, pulmonary tuberculosis

Procedia PDF Downloads 125
3169 Special Plea That The Prosecutor Does Not Have Title To Prosecute

Authors: Wium de Villiers

Abstract:

Section 106(1)(h) of the South African Criminal Procedure Act 51 of 1977 provides that an accused may enter a special plea that the prosecutor does not have title to prosecute. In a seminal matter regarding section 106(1)(h) (S v Mousa 2021 2 SACR 378 (GJ)), certain interesting legal aspects emerged. The first concerned the meaning of the term “prosecutor”; more specifically, whether the term included a prosecutor who was previously involved with the matter, as well as the relevant Deputy Director of Public Prosecutions (DDPP) who instituted and oversaw the prosecution on behalf of the state. The meaning of the term “title” and, with regard to the conduct of the “prosecutor”, the term “abuse of process” were also raised and decided. In the paper, the facts, the arguments and the decisions of the court are discussed critically. I argue that the objection in section 106(1)(h) is not intended to cure abuse inflicted by a previous prosecutor or by the DDPP. I point out that the term “title” includes a lack of authority, non-compliance with jurisdictional requirements or an absence of locus standi, and that an abuse of process takes place when the process is used for an improper, ulterior or collateral purpose. Finally, I argue that the accused should, instead of relying on section 106(1)(h), have relied on the prior agreement and applied for a permanent stay of prosecution.

Keywords: special plea, prosecutor, title, abuse of process

Procedia PDF Downloads 57
3168 The Standard of Best Interest of the Child in Custody Adjudication under the Malaysian Laws

Authors: Roslina Che Soh

Abstract:

The best interest of the child has been the prevailing principle of the custody legislation of most nations. The tremendous shift from parental rights to parental responsibilities over the centuries has made the best interests of the child the utmost matter which parents must uphold in child upbringing. Although commitment to this principle is enshrined in the United Nations Convention on the Rights of the Child, the content and application of the principle differ across borders. Differences persist even though many countries have, over the last several decades, experienced a substantial shift in the types of custodial arrangements thought to best serve children’s interests. The laws in Malaysia similarly uphold this principle but provide no further deliberation on the principle itself; it has been developed entirely by the courts through decided cases. This paper therefore discusses the extent of the application of the best-interest principle in custody disputes. In doing so, it provides an overview of the current laws and of the approach of the Civil and Shariah courts in Malaysia in applying the principle to custody disputes. For comparison, it briefly examines the legislation and court practice of Australia and England on this matter, with the purpose of determining the best standard to be adopted by Malaysia and proposing improvements to the laws where appropriate.

Keywords: child custody, best interest, Malaysian law

Procedia PDF Downloads 273
3167 Graph Neural Network-Based Classification for Disease Prediction in Health Care Heterogeneous Data Structures of Electronic Health Record

Authors: Raghavi C. Janaswamy

Abstract:

In the healthcare sector, heterogeneous data elements such as patients, diagnoses, symptoms, conditions, observation text from physician notes, and prescriptions form the essentials of the Electronic Health Record (EHR). In most systems, these data, in the form of clear text and images, are stored or processed in a relational format. However, the intrinsic structural restrictions and complex joins of relational databases limit their widespread utility. In this regard, designing and developing realistic mappings and deep connections as real-time objects offers unparalleled advantages. Herein, a graph neural network-based classification of EHR data has been developed. Patient conditions are predicted as a node classification task on graph-based open-source EHR data, the Synthea database, stored in TigerGraph. The Synthea dataset is leveraged because it is voluminous and closely representative of real-world data. The graph model is built from the heterogeneous EHR data using Python modules: pyTigerGraph to retrieve nodes and edges from the TigerGraph database, PyTorch to tensorize the nodes and edges, and PyTorch Geometric (PyG) to train the Graph Neural Network (GNN), adopting self-supervised learning techniques with autoencoders to generate node embeddings and eventually perform node classification using those embeddings. The model predicts patient conditions ranging from common to rare. The outcome is expected to open up opportunities for data querying toward better predictions and accuracy.
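As a rough illustration of graph-based node classification (independent of the PyG and TigerGraph stack the study actually uses), a two-layer graph convolution can be sketched in NumPy. The 5-node "patient" graph, feature sizes, and random weights below are purely illustrative assumptions:

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetrically normalized adjacency with self-loops: D^-1/2 (A+I) D^-1/2."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_forward(A, X, W1, W2):
    """Two-layer graph convolution: each node mixes features with its neighbors,
    then emits per-node class probabilities."""
    A_norm = normalized_adjacency(A)
    H = np.maximum(A_norm @ X @ W1, 0)              # layer 1 + ReLU
    logits = A_norm @ H @ W2                        # layer 2 -> class scores
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)         # row-wise softmax

# Toy graph: 5 "patient" nodes; an edge stands for a shared diagnosis/symptom
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(2)
X = rng.normal(size=(5, 8))                         # 8 node features
probs = gcn_forward(A, X, rng.normal(size=(8, 16)), rng.normal(size=(16, 3)))
print(probs.shape)  # (5, 3): probability over 3 "condition" classes per node
```

In the paper's pipeline, the node features would instead be autoencoder-derived embeddings, and the weights would be trained with PyG rather than drawn at random.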

Keywords: electronic health record, graph neural network, heterogeneous data, prediction

Procedia PDF Downloads 85
3166 Towards Real-Time Classification of Finger Movement Direction Using Encephalography Independent Components

Authors: Mohamed Mounir Tellache, Hiroyuki Kambara, Yasuharu Koike, Makoto Miyakoshi, Natsue Yoshimura

Abstract:

This study explores the practicality of using electroencephalographic (EEG) independent components to predict eight-direction finger movements in pseudo-real-time. Six healthy participants with individual head MRI images performed finger movements in eight directions with two different arm configurations. The analysis was performed in two stages. The first stage used independent component analysis (ICA) to separate the signals representing brain activity from non-brain signals and to obtain the unmixing matrix. The resulting independent components (ICs) were inspected, and those reflecting brain activity were selected. Finally, the time series of the selected ICs were used to predict the eight finger-movement directions using Sparse Logistic Regression (SLR). The second stage used the previously obtained unmixing matrix, the selected ICs, and the SLR model to classify a different EEG dataset. This method was applied in two settings: the single-participant level and the group level. At the single-participant level, the EEG datasets used in the first and second stages originated from the same participant. At the group level, the EEG datasets used in the first stage were constructed by temporally concatenating each combination, without repetition, of the EEG datasets of five of the six participants, whereas the EEG dataset used in the second stage originated from the remaining participant. The average test classification accuracy across datasets (mean ± S.D.) was 38.62 ± 8.36% at the single-participant level, significantly higher than the chance level (12.50 ± 0.01%), and 27.26 ± 4.39% at the group level, also significantly higher than the chance level (12.49 ± 0.01%).
The classification accuracy within [−45°, 45°] of the true direction was 70.03 ± 8.14% at the single-participant level and 62.63 ± 6.07% at the group level, which may be promising for some real-life applications. Clustering and contribution analyses further revealed the brain regions involved in finger movement and the temporal aspect of their contribution to the classification. These results show the possibility of using the ICA-based method in combination with other methods to build a real-time system to control prostheses.
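The two-stage pipeline (learn an unmixing matrix and a sparse classifier, then reuse both on a new dataset) can be sketched with scikit-learn. Note that this substitutes scikit-learn's FastICA and L1-penalized logistic regression for the specific ICA and SLR implementations used in the study, and runs on random stand-in "EEG" data:

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
# Stage 1: toy "EEG" -- 300 trials x 32 channels, 8 movement directions
X_train = rng.normal(size=(300, 32))
y_train = rng.integers(0, 8, size=300)

ica = FastICA(n_components=10, random_state=0, max_iter=1000)
S_train = ica.fit_transform(X_train)            # independent components
keep = [0, 2, 3, 5]                             # stand-in for "brain-activity" ICs
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
clf.fit(S_train[:, keep], y_train)              # sparse logistic regression

# Stage 2: reuse the SAME unmixing + model on a different dataset
X_new = rng.normal(size=(50, 32))
S_new = ica.transform(X_new)                    # fixed unmixing matrix
pred = clf.predict(S_new[:, keep])
print(pred.shape)  # (50,)
```

On real data, the `keep` indices would come from inspecting IC topographies against the individual MRI, not from a hard-coded list.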

Keywords: brain-computer interface, electroencephalography, finger motion decoding, independent component analysis, pseudo real-time motion decoding

Procedia PDF Downloads 137
3165 An Application to Predict the Best Study Path for Information Technology Students in Learning Institutes

Authors: L. S. Chathurika

Abstract:

Early prediction of student performance is an important factor in achieving academic excellence. Whatever their study stream in secondary education, students in Sri Lanka lay the foundation for higher studies during the first year of their degree or diploma program. The information technology (IT) field builds on this by offering specialization areas through which students can demonstrate their talents and skills, such as software engineering, network administration, database administration, and multimedia design. After completing the first year, students attempt to select the best path by considering numerous factors. The purpose of this experiment is to predict the best study path using machine learning algorithms. Five classification algorithms were selected and tested: decision tree, support vector machine, artificial neural network, Naïve Bayes, and logistic regression. The support vector machine obtained the highest accuracy, 82.4%. The most influential features were then identified for selecting the best study path.
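A minimal sketch of this kind of model comparison, using scikit-learn implementations of the five algorithms on a synthetic stand-in for the students' first-year records (the dataset, feature count, and hyperparameters below are illustrative assumptions, not the study's):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

# Toy stand-in for first-year records: features -> one of 4 specializations
X, y = make_classification(n_samples=400, n_features=10, n_informative=6,
                           n_classes=4, random_state=0)
models = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "support vector machine": SVC(),
    "artificial neural network": MLPClassifier(max_iter=2000, random_state=0),
    "naive Bayes": GaussianNB(),
    "logistic regression": LogisticRegression(max_iter=1000),
}
# 5-fold cross-validated accuracy for each algorithm
scores = {name: cross_val_score(m, X, y, cv=5).mean() for name, m in models.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```

With real student data, feature importance for the winning model (e.g. via permutation importance) would then identify the influential features the abstract mentions.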

Keywords: algorithm, classification, evaluation, features, testing, training

Procedia PDF Downloads 118
3164 Analysis, Evaluation and Optimization of Food Management: Minimization of Food Losses and Food Wastage along the Food Value Chain

Authors: G. Hafner

Abstract:

A method developed at the University of Stuttgart will be presented: ‘Analysis, Evaluation and Optimization of Food Management’. A major focus is the quantification of food losses and food waste, as well as their classification and evaluation with a view to system optimization through waste prevention. Quantification and accounting of food, food losses and food waste along the food chain require a clear definition of core terms at the outset, including their methodological classification and demarcation within the sectors of the food value chain. The food chain is divided into agriculture, industry and crafts, trade, and consumption (at home and out of home). To align these core terms, the authors have cooperated with relevant stakeholders in Germany toward holistic, agreed definitions for the whole food chain. This includes modeling of subsystems within the food value chain, definition of terms, differentiation between food losses and food wastage, and methodological approaches. ‘Food losses’ and ‘food wastes’ are assigned to the individual sectors of the food chain, together with a description of the respective methods. The method for analyzing, evaluating and optimizing food management systems consists of the following parts. Part I: Terms and Definitions. Part II: System Modeling. Part III: Procedure for Data Collection and Accounting. Part IV: Methodological Approaches for Classification and Evaluation of Results. Part V: Evaluation Parameters and Benchmarks. Part VI: Measures for Optimization. Part VII: Monitoring of Success. The method will be demonstrated using the example of an investigation of food losses and food wastage in the Federal State of Bavaria, including an extrapolation of the results to quantify food wastage in Germany.

Keywords: food losses, food waste, resource management, waste management, system analysis, waste minimization, resource efficiency

Procedia PDF Downloads 403
3163 Issues in Translating Hadith Terminologies into English: A Critical Approach

Authors: Mohammed Riyas Pp

Abstract:

This study investigated major issues in translating Arabic Hadith terminologies into English, focusing on choosing the most appropriate translation for each and reviewing major Hadith works in English. The study is confined to twenty terminologies concerning the classification of Hadith based on authority, strength, number of transmitters, and connections in the Isnad. Almost all available translations were collected and analyzed to find the most appropriate translation on linguistic and translational grounds. In the researcher’s view, many translations lack a precise understanding of either the Hadith terminologies or the English language, and the variety of methodologies has produced a variety of translations. This study provides a classification of translational and conceptual issues: translational issues relate to the translatability of these terminologies and their equivalence, while conceptual issues comprise a list of misunderstandings caused by wrong translations of the terminologies. The study ends with a suggestion for unification in translating the terminologies, based on a convention of Muslim scholars with a good understanding of both Hadith terminologies and the English language.

Keywords: english language, hadith terminologies, equivalence in translation, problems in translation

Procedia PDF Downloads 186
3162 Pollution Associated with Combustion in Stove to Firewood (Eucalyptus) and Pellet (Radiate Pine): Effect of UVA Irradiation

Authors: Y. Vásquez, F. Reyes, P. Oyola, M. Rubio, J. Muñoz, E. Lissi

Abstract:

Several cities in Chile suffer significant urban pollution, particularly Santiago and the cities of the south, where biomass is used as fuel for heating and cooking in a large proportion of homes. This has generated interest in knowing which factors can be modulated to control the level of pollution. In this project, a 14 m³ photochemical chamber was conditioned and set up, equipped with gas monitors (CO, NOx, O3 and others) and PM monitors (DustTrak, DMPS, Harvard impactors, etc.). The chamber could be exposed to UVA lamps producing a spectrum similar to that of the sun. In this chamber, PM and gas emissions associated with biomass burning were studied in the presence and absence of radiation. From the comparative analysis of a wood stove (Eucalyptus globulus) and a pellet stove (radiata pine), it can be concluded, to a first approximation, that 9-nitroanthracene, 4-nitropyrene, levoglucosan, water-soluble potassium and CO exhibit characteristics of tracers. However, some of them show properties that interfere with this role: levoglucosan is decomposed by radiation; 9-nitroanthracene and 4-nitropyrene are both emitted and formed under radiation; and 9-nitroanthracene has a vapor pressure that leads to partitioning between the gas phase and particulate matter. From this analysis, it can be concluded that K+ is the compound that best meets the known properties of a tracer. The PM2.5 emission measured from the automatic pellet stove used in this thesis project was two orders of magnitude smaller than that registered by the manual wood stove. This has encouraged the use of pellet stoves for indoor heating, particularly in south-central Chile. However, the use of pellets is not without problems, because pellet stoves generate high concentrations of nitro-PAHs (secondary organic contaminants).
In particular, 4-nitropyrene is a compound of high toxicity. Moreover, the primary and secondary particulate matter associated with pellet burning shows a decrease in the PM size distribution, which leads to deeper penetration of the particles and their toxic components into the respiratory system.

Keywords: biomass burning, photochemical chamber, particulate matter, tracers

Procedia PDF Downloads 192
3161 Diversity in Finance Literature Revealed through the Lens of Machine Learning: A Topic Modeling Approach on Academic Papers

Authors: Oumaima Lahmar

Abstract:

This paper aims to define a structured topography for finance researchers seeking to navigate the body of knowledge when investigating finance phenomena. To make sense of this body of knowledge, a probabilistic topic modeling approach is applied to 6,000 abstracts of academic articles published in three top finance journals between 1976 and 2020. This approach combines machine learning techniques and natural language processing to statistically identify the connections between research articles and their shared topics, each described by relevant keywords. The topic modeling analysis reveals 35 coherent topics that depict the finance literature well and provide a comprehensive structure for ongoing research themes. Comparing the extracted topics to the Journal of Economic Literature (JEL) classification system highlighted a significant similarity between the characterizing keywords. On the other hand, we identify other topics that do not match the JEL classification despite being relevant to the finance literature.

Keywords: finance literature, textual analysis, topic modeling, perplexity

Procedia PDF Downloads 169
3160 The Impact of Garlic and Citrus Extracts on Energy Retention and Methane Production in Ruminants in vitro

Authors: Michael Graz, Natasha Hurril, Andrew Shearer

Abstract:

Research on feed supplementation with natural compounds is currently being intensively pursued with a view to improving energy utilisation in ruminants and mitigating the production of methane by these animals. Towards this end, a novel combination of extracts from garlic and bitter orange was selected for trials on the basis of their previously published in vitro anti-methanogenic potential. Three separate in vitro experiments were conducted to determine energy utilisation and greenhouse gas production: the use of rumen fluid from fistulated cows and sheep in batch culture, the Hohenheim gas test, and the Rusitec technique. Experimental and control arms were utilised, with the in vitro systems supplemented, or not, with 5 g of extracts per kilogram of total dietary dry matter (0.05 g/kg active compounds). Respiratory measurements were conducted on experimental day 1 for the batch culture and Hohenheim gas test, and on days 14-21 for the Rusitec technique (in a 21-day trial). Measurements included methane (CH4) production, total volatile fatty acid (VFA) concentration, molar proportions of acetate, propionate and butyrate, and degradation of organic matter (Rusitec). CH4 production was reduced by 82% (±16%), 68% (±11%) and 37% (±4%) in the batch culture, Hohenheim gas test and Rusitec, respectively. Total VFA production was reduced by 13% (±2%) and 2% (±0.1%) in the batch culture and Hohenheim gas test, whilst it was increased by 8% (±2%) in the Rusitec. Total VFA production was reduced in all tests between 2 and 10%, whilst acetate production was reduced between 10% and 29%. Propionate production, an indicator of weight gain, was increased in all cases between 16% and 30%. Butyrate production, considered an indicator of potential milk yield, was increased by between 6 and 11%. Degradation of organic matter in the Rusitec experiments was improved by 10% (±0.1%).
In conclusion, the study demonstrated the potential of the combination of garlic and citrus extracts to improve digestion, enhance body energy retention and limit CH4 formation relative to feed intake.

Keywords: citrus, garlic, methane, ruminants

Procedia PDF Downloads 328
3159 A Framework for Auditing Multilevel Models Using Explainability Methods

Authors: Debarati Bhaumik, Diptish Dey

Abstract:

Multilevel models, increasingly deployed in industries such as insurance, food production, and entertainment, within functions such as marketing and supply chain management, need to be transparent and ethical. Applications usually involve binary classification within groups or hierarchies based on a set of input features. Using open-source datasets, we demonstrate that popular explainability methods, such as SHAP and LIME, consistently underperform in accuracy when interpreting these models: they fail to predict the order of feature importance, the magnitudes, and occasionally even the sign of a feature's contribution (negative versus positive contribution to the outcome). Besides accuracy, the computational intractability of SHAP for binomial classification is a cause for concern. For transparent and ethical applications of these hierarchical statistical models, sound audit frameworks need to be developed. In this paper, we propose an audit framework for the technical assessment of multilevel regression models focusing on three aspects: (i) model assumptions and statistical properties, (ii) model transparency using different explainability methods, and (iii) discrimination assessment. To this end, we take a quantitative approach and compare intrinsic model methods with SHAP and LIME. The framework comprises a shortlist of KPIs for each of these three aspects, such as PoCE (Percentage of Correct Explanations) and MDG (Mean Discriminatory Gap) per feature, coupled with a traffic-light risk assessment method. The audit framework will assist regulatory bodies in performing conformity assessments of AI systems that use multilevel binomial classification models at businesses. It will also help businesses deploying multilevel models to be future-proof and aligned with the European Commission’s proposed Regulation on Artificial Intelligence.
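One way to audit an explainer, in the spirit of a KPI such as PoCE, is to compare its output against attributions that are known exactly. For a linear model with independent features, SHAP values have the closed form φᵢ = wᵢ(xᵢ − E[xᵢ]), which can serve as ground truth; the sketch below verifies the local-accuracy property on a toy model (an illustration of the idea, not the paper's framework):

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(1000, 4))                  # background dataset
w, b = np.array([2.0, -1.0, 0.5, 0.0]), 0.3
# a fully transparent linear "model": f(x) = w.x + b

def linear_shap(x, w, X_background):
    """Exact SHAP values for a linear model with independent features:
    phi_i = w_i * (x_i - E[x_i])."""
    return w * (x - X_background.mean(axis=0))

x = X[0]
phi = linear_shap(x, w, X)
# Local accuracy: attributions + base value must reconstruct the prediction
base = (X @ w + b).mean()
print(np.isclose(phi.sum() + base, x @ w + b))  # True
```

Any candidate explainer run on the same model can then be scored on whether it recovers the sign, magnitude, and importance ranking of these ground-truth φᵢ, which is exactly the kind of check the PoCE-style KPIs formalize.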

Keywords: audit, multilevel model, model transparency, model explainability, discrimination, ethics

Procedia PDF Downloads 90
3158 Large Neural Networks Learning From Scratch With Very Few Data and Without Explicit Regularization

Authors: Christoph Linse, Thomas Martinetz

Abstract:

Recent findings have shown that neural networks generalize even in over-parameterized regimes with zero training error. This is surprising, since it runs completely against traditional machine learning wisdom. In our empirical study, we corroborate these findings in the domain of fine-grained image classification. We show that very large convolutional neural networks with millions of weights do learn from only a handful of training samples, without image augmentation, explicit regularization or pretraining. We train the architectures ResNet18, ResNet101 and VGG19 on subsets of the challenging benchmark datasets Caltech101, CUB_200_2011, FGVCAircraft, Flowers102 and StanfordCars, with 100 classes and more, perform a comprehensive comparative study, and draw implications for the practical application of CNNs. Finally, we show that VGG19, with 140 million weights, learns to distinguish airplanes from motorbikes with up to 95% accuracy using only 20 training samples per class.
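The small-data training subsets described above (e.g. 20 samples per class) amount to stratified subsampling of a dataset's labels, which can be sketched as follows; the toy label list stands in for a real dataset's class labels:

```python
import random
from collections import defaultdict

def few_shot_subset(labels, n_per_class, seed=0):
    """Pick n_per_class sample indices per class, as when building
    small-data training subsets (e.g. 20 images per class)."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    subset = []
    for y, idxs in sorted(by_class.items()):
        subset.extend(rng.sample(idxs, n_per_class))   # sample without replacement
    return subset

labels = [i % 5 for i in range(500)]       # toy dataset: 5 classes, 100 each
train_idx = few_shot_subset(labels, n_per_class=20)
print(len(train_idx))  # 100 = 5 classes x 20 samples
```

With a real image dataset, `train_idx` would feed a sampler or `Subset` wrapper so that training sees exactly the fixed per-class budget.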

Keywords: convolutional neural networks, fine-grained image classification, generalization, image recognition, over-parameterized, small data sets

Procedia PDF Downloads 87