Search results for: Analytic Hierarchy Processing (AHP)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4329

2859 Processing Big Data: An Approach Using Feature Selection

Authors: Nikat Parveen, M. Ananthi

Abstract:

Big data is one of the emerging technologies; it collects data from various sensors for use in many fields. Data retrieval is a major issue, since the exact data must be extracted as needed. In this paper, a large data set is processed using feature selection, which helps choose only the data actually needed to process and execute the task. The key value points to the exact data available in the storage space. Here, the available data is streamed, and R-Center is proposed to achieve this task.
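The abstract does not detail the R-Center method, but the general feature selection step it relies on can be sketched with invented data: score each feature column by its absolute Pearson correlation with the target and keep the top-k.

```python
# Illustrative feature selection sketch (hypothetical data; this is a generic
# correlation-based filter, not the paper's R-Center method).
from math import sqrt

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0

def select_features(columns, target, k):
    """columns: list of feature columns; return indices of the k most correlated."""
    scores = [(abs(pearson(col, target)), i) for i, col in enumerate(columns)]
    scores.sort(reverse=True)
    return sorted(i for _, i in scores[:k])

# Toy example: features 0 and 2 track the target, feature 1 is noise-like.
features = [[1, 2, 3, 4, 5], [2, 1, 2, 1, 2], [10, 20, 30, 40, 50]]
target = [1.1, 2.0, 2.9, 4.2, 5.0]
print(select_features(features, target, 2))  # → [0, 2]
```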

Keywords: big data, key value, feature selection, retrieval, performance

Procedia PDF Downloads 319
2858 Verbal Working Memory in Sequential and Simultaneous Bilinguals: An Exploratory Study

Authors: Archana Rao R., Deepak P., Chayashree P. D., Darshan H. S.

Abstract:

Cognitive abilities in bilinguals have been widely studied over the last few decades. Bilingualism has been found to extensively facilitate the ability to store and manipulate information in Working Memory (WM). The mechanism of WM includes primary memory, attentional control, and secondary memory, each of which contributes to WM. Much research has attempted to measure WM capabilities through both verbal (phonological) and nonverbal (visuospatial) tasks. Since there is much speculation regarding the relationship between WM and bilingualism, further investigation is required to understand the nature of WM in bilinguals, i.e., in sequential versus simultaneous bilinguals. Hence, the present study aimed to highlight the verbal working memory abilities of sequential and simultaneous bilinguals with respect to the processing and recall of nouns and verbs. Two groups of bilinguals aged between 18 and 30 years were considered for the study. Group 1 consisted of 20 (10 males and 10 females) sequential bilinguals who had acquired L1 (Kannada) before the age of 3 and had been exposed to L2 (English) for a period of 8-10 years. Group 2 consisted of 20 (10 males and 10 females) simultaneous bilinguals who had acquired both L1 and L2 before the age of 3. Working memory abilities were assessed using two tasks and a set of stimuli presented in gradation of complexity, inclusive of frequent and infrequent nouns and verbs. Participants judged the correctness of each sentence while simultaneously remembering its last word, and were instructed to recall the words at the end of each set. The results indicated no significant difference between sequential and simultaneous bilinguals in processing nouns and verbs, which could be attributed to the participants' proficiency level in L1 and the similar cognitive abilities between the groups. Recall of nouns was better than that of verbs, possibly because of the more complex argument structure of verbs. The frequency of occurrence of nouns and verbs also had an effect on WM abilities. Differences were also found across gradation, due to the load imposed on the central executive function and the phonological loop.

Keywords: bilinguals, nouns, verbs, working memory

Procedia PDF Downloads 110
2857 Comparison of Tribological and Mechanical Properties of White Metal Produced by Laser Cladding and Conventional Methods

Authors: Jae-Il Jeong, Hoon-Jae Park, Jung-Woo Cho, Yang-Gon Kim, Jin-Young Park, Joo-Young Oh, Si-Geun Choi, Seock-Sam Kim, Young Tae Cho, Chan Gyu Kim, Jong-Hyoung Kim

Abstract:

Bearing components are strongly required to show low vibration and wear in order to achieve high durability and long lifetime. In industry, bearing durability is improved by surface treatment of the bearing surface via centrifugal casting or gravity casting. However, these manufacturing methods suffer from long processing times, high defect rates, and harmful health effects. Laser cladding deposition addresses these problems by providing fast processing and good adhesion. Therefore, the optimum conditions of white metal laser deposition should be studied to minimize bearing contact axis wear using laser cladding techniques. In this study, we deposit a soft white metal layer on SCM440, which is mainly used for shafts and bolts. In the laser deposition process, the laser power, powder feed rate, and laser head speed are controlled to find the optimal conditions. We also measure hardness using a micro Vickers tester and perform FE-SEM (Field Emission Scanning Electron Microscopy) and EDS (Energy Dispersive Spectroscopy) analyses to study the mechanical properties and surface characteristics as these parameters change. Furthermore, this paper suggests optimum laser cladding deposition conditions for application in industrial fields. This work was supported by the Industrial Innovation Project of the Korea Evaluation Institute of Industrial Technology (KEIT), granted financial resources from the Ministry of Trade, Industry & Energy, Republic of Korea (Research no. 10051653).

Keywords: laser deposition, bearing, white metal, mechanical properties

Procedia PDF Downloads 246
2856 Influence of Organizational Culture on Frequency of Disputes in Commercial Projects in Egypt: A Contractor’s Perspective

Authors: Omneya N. Mekhaimer, Elkhayam M. Dorra, A. Samer Ezeldin

Abstract:

Over recent decades, studies on organizational culture have gained global attention in the business management literature, where it has been established that the cultural factors embedded in an organization have an implicit yet significant influence on the organization's success. Unlike other industries, the construction industry is widely known to operate in a dynamic and adversarial environment; given the unique characteristics it denotes, the level of disputes in the construction industry has grown throughout the years. In the late 1990s, the International Council for Research and Innovation in Building and Construction (CIB) created a Task Group (TG-23), which later evolved in 2006 into Working Commission W112, with a strategic objective to promote research investigating the role and impact of culture in the construction industry worldwide. To that end, this paper aims to study the influence of organizational culture in the contractor's organization on the frequency of disputes between the owner and the contractor in commercial projects based in Egypt. This objective is achieved by using a quantitative approach through a survey questionnaire to explore the dominant cultural attributes that exist in the contractor's organization based on the Competing Values Framework (CVF) theory, which classifies organizational culture into four main cultural types: (1) clan, (2) adhocracy, (3) market, and (4) hierarchy. Accordingly, the collected data are statistically analyzed using Statistical Package for Social Sciences (SPSS 28) software, whereby a correlation analysis using Pearson Correlation is carried out to assess the relationship between these variables and their statistical significance using the p-value. The results show an influence of organizational culture attributes on the frequency of disputes: market culture is identified as the most dominant organizational culture currently practiced in contractors' organizations, which consequently contributes to increasing the frequency of disputes in commercial projects. These findings suggest that alternative management practices should be adopted in place of existing ones with an aim to minimize dispute occurrence.
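The Pearson correlation step described above can be sketched with invented survey scores: compute r between a culture score and dispute frequency, then form the t statistic used to judge significance (the study itself used SPSS 28).

```python
# Minimal sketch of a Pearson correlation with a t statistic for significance.
# The eight paired scores below are invented for illustration only.
from math import sqrt

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

market_culture_score = [3.8, 4.1, 3.5, 4.4, 4.0, 3.2, 4.6, 3.9]
dispute_frequency = [2.9, 3.6, 2.7, 4.1, 3.3, 2.5, 4.3, 3.1]

r = pearson_r(market_culture_score, dispute_frequency)
n = len(market_culture_score)
t = r * sqrt((n - 2) / (1 - r * r))  # compare to the t critical value, df = n - 2
print(round(r, 2), round(t, 2))
```

A large |t| relative to the critical value for df = n - 2 corresponds to a small p-value, i.e., a statistically significant correlation.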

Keywords: construction projects, correlation analysis, disputes, Egypt, organizational culture

Procedia PDF Downloads 83
2855 Establishment of Precision System for Underground Facilities Based on 3D Absolute Positioning Technology

Authors: Yonggu Jang, Jisong Ryu, Woosik Lee

Abstract:

The study aims to address the limitations of existing underground facility exploration equipment in terms of exploration depth range, relative depth measurement, data processing time, and human-centered ground-penetrating radar (GPR) image interpretation. It proposes 3D absolute positioning technology as the basis of a precision underground facility exploration system that can accurately survey to a depth of 5 m and measure the 3D absolute location of underground facilities. Both software and hardware technologies were developed to build the system. The software technologies include absolute positioning, ground-surface location synchronization for the GPR exploration equipment, AI interpretation of GPR exploration images, and composite data processing based on integrated underground space maps. The hardware comprises a vehicle-type exploration system and a cart-type exploration system. Data were collected using the developed system, the GPR exploration images were analyzed using AI, and the three-dimensional location information of the explored underground facilities was compared with the integrated underground space map. 
In detail, the software builds a 3D precise DEM, synchronizes the GPR sensor's ground-surface 3D location coordinates, automatically analyzes and detects underground facility information in GPR exploration images, and improves accuracy through comparative analysis of the three-dimensional location information. These findings and technological advancements are essential for underground safety management in Korea: the proposed precision exploration system establishes precise location information for underground facilities, which is crucial for underground safety management, and improves the accuracy and efficiency of exploration.

Keywords: 3D absolute positioning, AI interpretation of GPR exploration images, complex data processing, integrated underground space maps, precision exploration system for underground facilities

Procedia PDF Downloads 46
2854 A Generalized Framework for Adaptive Machine Learning Deployments in Algorithmic Trading

Authors: Robert Caulk

Abstract:

A generalized framework for adaptive machine learning deployments in algorithmic trading is introduced, tested, and released as open-source code. The presented software aims to test the hypothesis that recent data contain enough information to form a probabilistically favorable short-term price prediction. Further, the framework contains various adaptive machine learning techniques that are geared toward generating profit during strong trends and minimizing losses during trend changes. Results demonstrate that this adaptive machine learning approach is capable of capturing trends and generating profit. The presentation also discusses the importance of defining the parameter space associated with the dynamic training dataset and using that parameter space to identify and remove outliers from prediction data points. Meanwhile, the generalized architecture enables common users to exploit the powerful machinery while focusing on high-level feature engineering and model testing. The presentation also highlights common strengths and weaknesses associated with the presented technique and presents a broad range of well-tested starting points for feature set construction, target setting, and statistical methods for enforcing risk management and maintaining probabilistically favorable entry and exit points. It also describes the end-to-end data processing tools associated with FreqAI, including automatic data fetching, data aggregation, feature engineering, safe and robust data pre-processing, outlier detection, custom machine learning and statistical tools, data post-processing, adaptive-training backtest emulation, and deployment of adaptive training in live environments. Finally, the generalized user interface is also discussed. Feature engineering is simplified so that users can seed their feature sets with common indicator libraries (e.g., TA-Lib, pandas-ta).
The user also feeds data expansion parameters to fill out a large feature set for the model, which can contain as many as 10,000+ features. The presentation describes the various object-oriented programming techniques employed to make FreqAI agnostic to third-party libraries and external data sources. In other words, the back-end is constructed in such a way that users can leverage a broad range of common regression libraries (CatBoost, LightGBM, scikit-learn, etc.) as well as common neural network libraries (TensorFlow, PyTorch) without worrying about the logistical complexities associated with data handling and API interactions. The presentation finishes by drawing conclusions about the most important parameters associated with a live deployment of the adaptive learning framework and provides the road map for future development in FreqAI.
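The library-agnostic back-end described above is essentially an adapter pattern: user strategies see one fit/predict interface, and a concrete adapter wraps whichever regression library is installed. A minimal sketch (the class names here are illustrative, not FreqAI's actual API):

```python
# Adapter-pattern sketch of a back-end-agnostic training loop.
class RegressorAdapter:
    """The only interface the framework trains and queries."""
    def fit(self, X, y):
        raise NotImplementedError
    def predict(self, X):
        raise NotImplementedError

class MeanBaseline(RegressorAdapter):
    """Stand-in back-end that predicts the training mean; a real adapter
    would wrap CatBoost, LightGBM, scikit-learn, etc. behind the same
    two methods."""
    def fit(self, X, y):
        self.mean_ = sum(y) / len(y)
        return self
    def predict(self, X):
        return [self.mean_ for _ in X]

def run_backtest(model, X_train, y_train, X_live):
    # The framework only ever touches the adapter interface.
    return model.fit(X_train, y_train).predict(X_live)

preds = run_backtest(MeanBaseline(), [[0], [1], [2]], [10.0, 20.0, 30.0], [[3], [4]])
print(preds)  # → [20.0, 20.0]
```

Swapping in another library then only requires a new subclass; the backtest and deployment code stay untouched.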

Keywords: machine learning, market trend detection, open-source, adaptive learning, parameter space exploration

Procedia PDF Downloads 72
2853 A Sustainable Supplier Selection and Order Allocation Based on Manufacturing Processes and Product Tolerances: A Multi-Criteria Decision Making and Multi-Objective Optimization Approach

Authors: Ravi Patel, Krishna K. Krishnan

Abstract:

In global supply chains, appropriate and sustainable suppliers play a vital role in supply chain development and feasibility. In a large organization with a huge number of suppliers, it is necessary to classify suppliers based on their past history of quality and delivery for each product category. Since the performance of any organization depends heavily on its suppliers, well-evaluated selection criteria and decision-making models lead to improved supplier assessment and development. In this paper, the SCOR® performance evaluation approach and ISO standards are used to determine selection criteria for better supplier assessment, using a hybrid model of the Analytic Hierarchy Process (AHP) and the Fuzzy Technique for Order Preference by Similarity to Ideal Solution (FTOPSIS). AHP is used to determine the global weights of the criteria, which TOPSIS then uses to compute supplier scores via triangular fuzzy set theory. Both qualitative and quantitative criteria are taken into consideration in the proposed model. In addition, a multi-product and multi-time-period model is selected for order allocation. The optimization model integrates multi-objective integer linear programming (MOILP) for order allocation with the hybrid approach for supplier selection. The proposed MOILP model optimizes order allocation based on manufacturing processes and product tolerances as per the manufacturer's requirements for quality products. The integrated model and solution approach are tested to find optimized solutions for different scenarios. The detailed analysis shows the superiority of the proposed model over solutions that considered individual decision-making models.
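The AHP weighting step mentioned above can be illustrated with a small sketch: derive global criteria weights from a pairwise comparison matrix using the common geometric-mean (row product) approximation. The 3x3 matrix below is invented, not from the paper.

```python
# AHP criteria weighting via the geometric-mean method (illustrative matrix).
from math import prod

def ahp_weights(M):
    n = len(M)
    gm = [prod(row) ** (1.0 / n) for row in M]  # geometric mean of each row
    s = sum(gm)
    return [g / s for g in gm]

# Reciprocal comparison matrix: criterion A is judged 3x as important as B
# and 5x as important as C; B is 2x as important as C.
M = [[1.0,   3.0, 5.0],
     [1 / 3., 1.0, 2.0],
     [1 / 5., 1 / 2., 1.0]]
w = ahp_weights(M)
print([round(x, 3) for x in w])  # weights sum to 1, A dominates
```

In the full hybrid model, these global weights would then feed the fuzzy TOPSIS ranking of suppliers.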

Keywords: AHP, fuzzy set theory, multi-criteria decision making, multi-objective integer linear programming, TOPSIS

Procedia PDF Downloads 153
2852 Arabic Light Word Analyser: Roles with Deep Learning Approach

Authors: Mohammed Abu Shquier

Abstract:

This paper introduces a word segmentation method using the novel BP-LSTM-CRF architecture for processing semantic output training. The objective of the morphological analysis tool is to link a formal morpho-syntactic description to a lemma, along with morpho-syntactic information, a vocalized form, a vocalized analysis with morpho-syntactic information, and a list of paradigms. A key objective is to continuously enhance the proposed system through an inductive learning approach that considers semantic influences. The system is currently under construction and development based on data-driven learning. To evaluate the tool, an experiment on homograph analysis was conducted. The tool also encompasses the assumption of deep binary segmentation hypotheses, the arbitrary choice of trigram or n-gram continuation probabilities, language limitations, and morphology for both Modern Standard Arabic (MSA) and Dialectal Arabic (DA), which together justify updating this system. Most Arabic word analysis systems are based on the phonotactic morpho-syntactic analysis of a word via lexical rules, as commonly used in MENA language technology tools, without taking contextual or semantic morphological implications into account. Therefore, an automatic analysis tool is needed that takes the word sense into account, not only the morpho-syntactic category. Moreover, existing systems are also based on statistical/stochastic models. Such stochastic models, e.g., HMMs, have shown their effectiveness in different NLP applications: part-of-speech tagging, machine translation, speech recognition, etc. 
As an extension, we focus on language modeling using Recurrent Neural Networks (RNNs). Given that morphological analysis coverage has been very low for Dialectal Arabic, it is important to investigate in depth how dialect data influence the accuracy of these approaches, by developing dialectal morphological processing tools and showing that handling dialectal variability can help improve analysis.
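In a BiLSTM-CRF stack like the one above, the CRF layer decodes the best tag sequence with the Viterbi algorithm. A self-contained sketch with made-up emission and transition scores over two segmentation tags (B = begin segment, I = inside segment); the scores would normally come from the trained network:

```python
# Viterbi decoding over per-token emission scores plus tag-transition scores.
def viterbi(emissions, transitions, tags):
    # emissions: list of {tag: score} per token; transitions: {(prev, cur): score}
    best = {t: emissions[0][t] for t in tags}
    back = []
    for em in emissions[1:]:
        ptr, nxt = {}, {}
        for cur in tags:
            prev = max(tags, key=lambda p: best[p] + transitions[(p, cur)])
            ptr[cur] = prev
            nxt[cur] = best[prev] + transitions[(prev, cur)] + em[cur]
        back.append(ptr)
        best = nxt
    last = max(tags, key=best.get)
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

tags = ("B", "I")
emissions = [{"B": 2.0, "I": 0.1}, {"B": 0.2, "I": 1.5}, {"B": 1.8, "I": 0.3}]
transitions = {("B", "B"): -0.5, ("B", "I"): 0.4, ("I", "B"): 0.2, ("I", "I"): 0.1}
print(viterbi(emissions, transitions, tags))  # → ['B', 'I', 'B']
```

The transition scores are what let the CRF forbid or penalize implausible tag sequences that a per-token classifier would happily emit.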

Keywords: NLP, DL, ML, analyser, MSA, RNN, CNN

Procedia PDF Downloads 22
2851 Intelligent Process and Model Applied for E-Learning Systems

Authors: Mafawez Alharbi, Mahdi Jemmali

Abstract:

E-learning is a developing area, especially in education, and can provide several benefits to learners. An intelligent system that collects all the components satisfying user preferences is therefore important. This research presents an approach capable of personalizing e-information and giving users what they need according to their preferences. The proposal builds up knowledge from successive evaluations made by the user and can also learn from the user's habits. Finally, we present a walk-through to demonstrate how the intelligent process works.
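The abstract does not specify the mechanism, but one plausible minimal core of such a system is a scorer that ranks learning items by overlap with stored user preferences and reinforces those preferences from viewing habits. An entirely illustrative sketch:

```python
# Hypothetical preference-based ranking with habit learning (not the paper's model).
def rank_items(items, prefs):
    # items: {name: set of tags}; prefs: {tag: weight}
    score = lambda tags: sum(prefs.get(t, 0.0) for t in tags)
    return sorted(items, key=lambda name: score(items[name]), reverse=True)

def record_view(prefs, tags, boost=1.0):
    for t in tags:  # habit learning: tags of viewed items gain weight
        prefs[t] = prefs.get(t, 0.0) + boost
    return prefs

items = {"video_sql": {"databases", "video"},
         "quiz_python": {"python", "quiz"},
         "notes_python": {"python", "text"}}
prefs = {"python": 2.0, "video": 0.5}
print(rank_items(items, prefs))          # python items rank first
record_view(prefs, items["video_sql"])   # the user watches the SQL video
print(rank_items(items, prefs))          # its tags now outweigh the rest
```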

Keywords: artificial intelligence, architecture, e-learning, software engineering, processing

Procedia PDF Downloads 173
2850 Integrating Natural Language Processing (NLP) and Machine Learning in Lung Cancer Diagnosis

Authors: Mehrnaz Mostafavi

Abstract:

The assessment and categorization of incidental lung nodules present a considerable challenge in healthcare, often necessitating resource-intensive multiple computed tomography (CT) scans for growth confirmation. This research addresses this issue by introducing a distinct computational approach leveraging radiomics and deep-learning methods. However, understanding local services is essential before implementing these advancements. With diverse tracking methods in place, there is a need for efficient and accurate identification approaches, especially in the context of managing lung nodules alongside pre-existing cancer scenarios. This study explores the integration of text-based algorithms in medical data curation, indicating their efficacy in conjunction with machine learning and deep-learning models for identifying lung nodules. Combining medical images with text data has demonstrated superior data retrieval compared to using each modality independently. While deep learning and text analysis show potential in detecting previously missed nodules, challenges persist, such as increased false positives. The presented research introduces a Structured-Query-Language (SQL) algorithm designed for identifying pulmonary nodules in a tertiary cancer center, externally validated at another hospital. Leveraging natural language processing (NLP) and machine learning, the algorithm categorizes lung nodule reports based on sentence features, aiming to facilitate research and assess clinical pathways. The hypothesis posits that the algorithm can accurately identify lung nodule CT scans and predict concerning nodule features using machine-learning classifiers. Through a retrospective observational study spanning a decade, CT scan reports were collected, and an algorithm was developed to extract and classify data. Results underscore the complexity of lung nodule cohorts in cancer centers, emphasizing the importance of careful evaluation before assuming a metastatic origin. 
The SQL and NLP algorithms demonstrated high accuracy in identifying lung nodule sentences, indicating potential for local service evaluation and research dataset creation. Machine-learning models exhibited strong accuracy in predicting concerning changes in lung nodule scan reports. While limitations include variability in disease group attribution, the potential for correlation rather than causality in clinical findings, and the need for further external validation, the algorithm's accuracy and potential to support clinical decision-making and healthcare automation represent a significant stride in lung nodule management and research.
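The sentence-level categorization step can be pictured with a toy stand-in: detect nodule sentences, then flag concerning-change vocabulary. The keyword lists here are invented, and the real study trained machine-learning classifiers on top of an SQL extraction step rather than using fixed rules.

```python
# Toy rule-based stand-in for the report-sentence classifier (illustrative only).
import re

CONCERNING = {"enlarging", "increased", "spiculated", "growth"}
NODULE = re.compile(r"\bnodules?\b", re.IGNORECASE)

def classify_sentence(sentence):
    if not NODULE.search(sentence):
        return "no-nodule"
    words = set(re.findall(r"[a-z]+", sentence.lower()))
    return "concerning" if words & CONCERNING else "stable"

reports = [
    "A 4 mm nodule is unchanged from prior CT.",
    "The right upper lobe nodule has increased in size.",
    "No focal consolidation.",
]
print([classify_sentence(s) for s in reports])  # → ['stable', 'concerning', 'no-nodule']
```

A learned classifier replaces the fixed keyword sets with features weighted from labelled report sentences, which is what gives the reported accuracy gains.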

Keywords: lung cancer diagnosis, structured-query-language (SQL), natural language processing (NLP), machine learning, CT scans

Procedia PDF Downloads 64
2849 Good Practices for Model Structure Development and Managing Structural Uncertainty in Decision Making

Authors: Hossein Afzali

Abstract:

Increasingly, decision analytic models are used to inform decisions about whether or not to publicly fund new health technologies. It is well noted that the accuracy of model predictions is strongly influenced by the appropriateness of model structuring. However, there is relatively little methodological guidance on this issue in guidelines developed by national funding bodies such as the Australian Pharmaceutical Benefits Advisory Committee (PBAC) and the National Institute for Health and Care Excellence (NICE) in the UK. This presentation aims to discuss issues around model structuring within decision making, with a focus on (1) the need for a transparent and evidence-based model structuring process to inform the most appropriate set of structural aspects as the base case analysis, and (2) the need to characterise structural uncertainty: if alternative plausible structural assumptions (or judgements) exist, the related structural uncertainty needs to be appropriately characterised. The presentation will provide an opportunity to share ideas and experiences on how the guidelines developed by national funding bodies address these issues, and to identify areas for further improvement. First, a review and analysis of the literature and of the guidelines developed by PBAC and NICE will be provided. Then, it will be discussed how the issues around model structuring (including structural uncertainty) are not handled and justified in a systematic way within the decision-making process, their potential impact on the quality of public funding decisions, and how they should be presented in submissions to national funding bodies. This presentation represents a contribution to good modelling practice within the decision-making process.
Although the presentation focuses on the PBAC and NICE guidelines, the discussion can be applied more widely to many other national funding bodies that use economic evaluation to inform funding decisions but do not transparently address model structuring issues e.g. the Medical Services Advisory Committee (MSAC) in Australia or the Canadian Agency for Drugs and Technologies in Health.
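One common way to characterise structural uncertainty in practice is to rerun the decision model under each plausible structural assumption and compare the cost-effectiveness results. A sketch with invented costs and QALYs, where the two scenarios stand for alternative model structures (e.g., with and without an extra post-progression health state):

```python
# Structural scenario analysis: ICER of a new technology vs comparator
# under alternative model structures (all numbers invented).
def icer(costs, qalys):
    (c_new, c_old), (q_new, q_old) = costs, qalys
    return (c_new - c_old) / (q_new - q_old)

scenarios = {
    "base structure":        ((52000.0, 40000.0), (6.2, 5.4)),
    "alternative structure": ((52000.0, 40000.0), (6.2, 5.8)),
}
for name, (costs, qalys) in scenarios.items():
    print(name, round(icer(costs, qalys)))  # cost per QALY gained
```

If the ICER crosses the funding threshold only under one structure, the structural choice itself is decision-relevant and should be reported transparently.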

Keywords: decision-making process, economic evaluation, good modelling practice, structural uncertainty

Procedia PDF Downloads 168
2848 Multiscale Process Modeling Analysis for the Prediction of Composite Strength Allowables

Authors: Marianna Maiaru, Gregory M. Odegard

Abstract:

During the processing of high-performance thermoset polymer matrix composites, chemical reactions occur during elevated pressure and temperature cycles, causing the constituent monomers to crosslink and form a molecular network that can gradually sustain stress. As the crosslinking process progresses, the material naturally experiences a gradual shrinkage due to the increase in covalent bonds in the network. Once the cured composite completes the cure cycle and is brought to room temperature, the thermal expansion mismatch between the fibers and matrix causes additional residual stresses to form. These compounded residual stresses can compromise the reliability of the composite material and affect the composite strength. Composite process modeling is greatly complicated by the multiscale nature of the composite architecture. At the molecular level, the degree of cure controls the local shrinkage and thermal-mechanical properties of the thermoset. At the microscopic level, the local fiber architecture and packing affect the magnitudes and locations of residual stress concentrations. At the macroscopic level, the layup sequence controls the nature of crack initiation and propagation due to residual stresses. The goal of this research is to use molecular dynamics (MD) and finite element analysis (FEA) to predict the residual stresses in composite laminates and the corresponding effect on composite failure. MD is used to predict the polymer shrinkage and thermomechanical properties as a function of the degree of cure. This information is used as input into FEA to predict the residual stresses on the microscopic level resulting from the complete cure process. Virtual testing is subsequently conducted to predict strength allowables. Experimental characterization is used to validate the modeling.
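At the molecular scale, the degree of cure that drives shrinkage is often described by an nth-order cure-kinetics rate law. A minimal sketch integrated with forward Euler; the rate constant, exponent, and shrinkage coefficient are illustrative placeholders, not fitted MD values.

```python
# nth-order cure kinetics: d(alpha)/dt = k * (1 - alpha)**n, alpha = degree of cure.
def cure_profile(k=0.02, n=1.5, dt=1.0, steps=300):
    alpha, history = 0.0, [0.0]
    for _ in range(steps):
        alpha += dt * k * (1.0 - alpha) ** n  # forward Euler step
        history.append(alpha)
    return history

hist = cure_profile()
shrinkage = 0.03 * hist[-1]  # e.g., 3% volumetric shrinkage at full cure (assumed)
print(round(hist[-1], 3), round(shrinkage, 4))
```

In a process model, the cure-dependent shrinkage and moduli from curves like this are what get passed to the FEA step as material inputs.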

Keywords: molecular dynamics, finite element analysis, processing modeling, multiscale modeling

Procedia PDF Downloads 77
2847 The Difference in Serum TNF-α Levels between Male Schizophrenic Patients Who Smoke and Healthy Controls

Authors: Rona Hanani Simamora, Bahagia Loebis, M. Surya Husada

Abstract:

Background: The exact cause of schizophrenia is not known, although several etiological theories have been proposed for the disease, including immune dysfunction and autoimmune mechanisms. Cytokines, including TNF-α, play an important role in the pathophysiology of schizophrenia and in the effects of pharmacological treatment with antipsychotics. Nicotine has widespread effects on the brain, the immune system, and cytokine levels, so smoking among schizophrenic patients could play a role in their altered cytokine profiles, including TNF-α. Aims: To determine the difference in serum TNF-α levels between male schizophrenic patients who smoke and healthy controls. Methods: This comparative analytic study comprised two groups: 1) male schizophrenic patients who smoke (n1=30), with inclusion criteria of a diagnosis of schizophrenia based on PPDGJ-III, age 20-60 years, male, smoking, chronic schizophrenia in the stable phase, and willingness to participate in this study; exclusion criteria were other mental disorders and comorbidity with other medical illnesses; 2) healthy controls (n2=30), with inclusion criteria of age 20-60 years, male, smoking, and willingness to participate in this study; exclusion criteria were any mental disorder, a family history of psychiatric disorders, other medical illnesses, and a history of alcohol or other substance abuse (except caffeine and nicotine). Serum TNF-α was analyzed using the Quantikine HS Human TNF-α Immunoassay. Results: Serum TNF-α levels were measured in male schizophrenic patients who smoke and compared with healthy control subjects. TNF-α levels were significantly higher in the patients (25.79±27.96) than in the controls (2.74±2.19); the Mann-Whitney U test showed a statistically significant difference in serum TNF-α level (p < 0.001). 
Conclusions: Schizophrenia is a highly heterogeneous disorder, and this study shows an increase in TNF-α as a pro-inflammatory cytokine in schizophrenics. These results suggest that immune abnormalities may be involved in the etiology and pathophysiology of schizophrenia.
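The Mann-Whitney U statistic used above counts, across all between-group pairs, how often one group's value exceeds the other's. A small stdlib sketch with invented values (the study itself compared 30 vs 30 subjects):

```python
# Mann-Whitney U statistic for sample a vs sample b (ties count as 0.5).
def mann_whitney_u(a, b):
    return sum((x > y) + 0.5 * (x == y) for x in a for y in b)

patients = [25.7, 30.2, 12.4, 41.0, 18.9]  # e.g., serum TNF-α, pg/mL (invented)
controls = [2.1, 3.4, 1.8, 2.9, 4.6]
u = mann_whitney_u(patients, controls)
print(u)  # → 25.0; the maximum possible is len(patients) * len(controls)
```

A U at or near its maximum, as here, indicates near-complete separation of the two groups; the p-value then follows from the U distribution (or its normal approximation for larger samples).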

Keywords: male, schizophrenic, smoking, Tnf Alpha

Procedia PDF Downloads 232
2846 Bactericidal Efficacy of Quaternary Ammonium Compound on Carriers with Food Additive Grade Calcium Hydroxide against Salmonella Infantis and Escherichia coli

Authors: M. Shahin Alam, Satoru Takahashi, Mariko Itoh, Miyuki Komura, Mayuko Suzuki, Natthanan Sangsriratanakul, Kazuaki Takehara

Abstract:

Cleaning and disinfection are key components of routine biosecurity in livestock farming and food processing industry. The usage of suitable disinfectants and their proper concentration are important factors for a successful biosecurity program. Disinfectants have optimum bactericidal and virucidal efficacies at temperatures above 20°C, but very few studies on application and effectiveness of disinfectants at low temperatures have been done. In the present study, the bactericidal efficacies of food additive grade calcium hydroxide (FdCa(OH)), quaternary ammonium compound (QAC) and their mixture, were investigated under different conditions, including time, organic materials (fetal bovine serum: FBS) and temperature, either in suspension or in carrier test. Salmonella Infantis and Escherichia coli, which are the most prevalent gram negative bacteria in commercial poultry housing and food processing industry, were used in this study. Initially, we evaluated these disinfectants at two different temperatures (4°C and room temperature (RT) (25°C ± 2°C)) and 7 contact times (0, 5 and 30 sec, 1, 3, 20 and 30 min), with suspension tests either in the presence or absence of 5% FBS. Secondly, we investigated the bactericidal efficacies of these disinfectants by carrier tests (rubber, stainless steel and plastic) at same temperatures and 4 contact times (30 sec, 1, 3, and 5 min). Then, we compared the bactericidal efficacies of each disinfectant within their mixtures, as follows. When QAC was diluted with redistilled water (dW2) at 1: 500 (QACx500) to obtain the final concentration of didecyl-dimethylammonium chloride (DDAC) of 200 ppm, it could inactivate Salmonella Infantis within 5 sec at RT either with or without 5% FBS in suspension test; however, at 4°C it required 30 min in presence of 5% FBS. FdCa(OH)2 solution alone could inactivate bacteria within 1 min both at RT and 4°C even with 5% FBS. 
When FdCa(OH)₂ powder was added at a final concentration of 0.2% to QACx500 (Mix500), the mixture could inactivate bacteria within 30 sec and 5 sec, respectively, with or without 5% FBS at 4°C. The findings from the suspension test indicated that low temperature inhibited the bactericidal efficacy of QAC, whereas Mix500 was effective regardless of short contact time and low temperature, even with 5% FBS. In the carrier test, each single disinfectant required slightly more time to inactivate bacteria on rubber and plastic surfaces than on stainless steel. However, Mix500 could inactivate S. Infantis on rubber, stainless steel and plastic surfaces within 30 sec and 1 min, respectively, at RT and 4°C; for E. coli, it required only 30 sec at both temperatures. Thus, synergistic effects were observed on different carriers at both temperatures. For a successful enhancement of biosecurity during winter, disinfectants should be selected that achieve optimum efficacy against the target pathogen within short contact times. The present findings can help farmers devise proper strategies for the application of disinfectants in livestock farming and the food processing industry.

Keywords: carrier, food additive grade calcium hydroxide (FdCa(OH)₂), quaternary ammonium compound, synergistic effects

Procedia PDF Downloads 281
2845 Building Atmospheric Moisture Diagnostics: Environmental Monitoring and Data Collection

Authors: Paula Lopez-Arce, Hector Altamirano, Dimitrios Rovas, James Berry, Bryan Hindle, Steven Hodgson

Abstract:

Efficient mould remediation and accurate diagnosis of the moisture conditions that lead to condensation and mould growth in dwellings remain largely untapped. A number of factors contribute to the rising trend of excessive moisture in homes, mainly linked with modern living, increased levels of occupation and rising fuel costs, as well as with making homes more energy efficient. Environmental monitoring, by means of data collection through logger sensors and survey forms, has been performed in a range of buildings from different UK regions. Air and surface temperature and relative humidity values of residential areas affected by condensation and/or mould issues were recorded. Additional measurements were taken through different trials, changing the type, location, and position of the loggers. In some instances, IR thermal images and ventilation rates were also acquired. Results have been interpreted together with key environmental parameters by processing and connecting data from loggers and survey questionnaires, both in buildings with and without moisture issues. Monitoring exercises carried out during winter and spring show the importance of developing and following accurate protocols for guidance to obtain consistent, repeatable and comparable results and to improve the performance of environmental monitoring. A model and a protocol are being developed to build a diagnostic tool with the goal of performing a simple but precise residential atmospheric moisture diagnosis to distinguish the cause of condensation and mould generation, i.e., a ventilation, insulation or heating system issue. This research shows the relevance of monitoring and processing environmental data to assign moisture risk levels and determine the origin of condensation or mould when dealing with excess atmospheric moisture in a building.
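One common way to turn the logger readings described above (air temperature, surface temperature, relative humidity) into a condensation-risk flag is to compare the surface temperature against the dew point of the adjacent air. A minimal sketch, assuming the Magnus approximation for dew point; the coefficients and the flagging rule are illustrative, not the study's protocol:

```python
import math

def dew_point(temp_c, rh_percent):
    """Magnus approximation of the dew point (deg C) from air
    temperature and relative humidity."""
    a, b = 17.62, 243.12  # Magnus coefficients for water
    gamma = math.log(rh_percent / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

def condensation_risk(surface_temp_c, air_temp_c, rh_percent):
    """Flag a logger reading when the surface sits at or below dew point."""
    return surface_temp_c <= dew_point(air_temp_c, rh_percent)

# Hypothetical logger reading: 20 C air, 85% RH, 16 C wall surface
print(round(dew_point(20.0, 85.0), 1))       # about 17.4 C
print(condensation_risk(16.0, 20.0, 85.0))   # True -> condensation risk
```

A reading flagged this way would point toward the "condensation" branch of the diagnostic model, as opposed to an insulation or heating issue.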

Keywords: environmental monitoring, atmospheric moisture, protocols, mould

Procedia PDF Downloads 125
2844 AI-Based Techniques for Online Social Media Network Sentiment Analysis: A Methodical Review

Authors: A. M. John-Otumu, M. M. Rahman, O. C. Nwokonkwo, M. C. Onuoha

Abstract:

Online social media networks have long served as a primary arena for group conversations, gossip, and text-based information sharing and distribution. The use of natural language processing techniques for text classification and unbiased decision-making is well established, yet proper classification of this textual information in a given context remains very difficult. As a result, we conducted a systematic review of previous literature on sentiment classification and the AI-based techniques that have been used, in order to gain a better understanding of how to design and develop a robust and more accurate sentiment classifier that can correctly classify social media text in a given context as hate speech or inverted compliments with a high level of accuracy. We evaluated over 250 articles from digital sources like ScienceDirect, ACM, Google Scholar, and IEEE Xplore and whittled the number down to 31 studies. Findings revealed that deep learning approaches such as CNN, RNN, BERT, and LSTM outperformed various machine learning techniques in terms of accuracy. A large dataset is also necessary for developing a robust sentiment classifier and can be obtained from sources like Twitter, movie reviews, Kaggle, SST, and SemEval Task 4. Hybrid deep learning techniques like CNN+LSTM, CNN+GRU, and CNN+BERT outperformed both single deep learning techniques and machine learning techniques. The Python programming language outperformed Java for sentiment analyzer development due to its simplicity and AI-based library functionality. Based on some of the important findings from this study, we make recommendations for future research.
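For context on the machine learning baselines that the reviewed deep models are compared against, a bag-of-words text classifier can be sketched in a few lines. A toy multinomial Naïve Bayes example with hypothetical data (illustrative only; the reviewed studies use far larger datasets and stronger models):

```python
import math
from collections import Counter, defaultdict

def train_nb(samples):
    """Train a multinomial Naive Bayes text classifier.
    samples: list of (text, label) pairs."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    vocab = set()
    for text, label in samples:
        tokens = text.lower().split()
        word_counts[label].update(tokens)
        label_counts[label] += 1
        vocab.update(tokens)
    return word_counts, label_counts, vocab

def predict_nb(model, text):
    word_counts, label_counts, vocab = model
    total = sum(label_counts.values())
    best, best_lp = None, float("-inf")
    for label in label_counts:
        lp = math.log(label_counts[label] / total)  # class prior
        n = sum(word_counts[label].values())
        for tok in text.lower().split():
            # Laplace smoothing over the shared vocabulary
            lp += math.log((word_counts[label][tok] + 1) / (n + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Hypothetical toy data for illustration only
data = [("you are awful", "hate"), ("i hate you", "hate"),
        ("great kind person", "ok"), ("lovely helpful post", "ok")]
model = train_nb(data)
print(predict_nb(model, "awful hate"))  # hate
```

In practice these baselines are what CNN, LSTM, and BERT-based classifiers are benchmarked against in the reviewed studies.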

Keywords: artificial intelligence, natural language processing, sentiment analysis, social network, text

Procedia PDF Downloads 101
2843 Evaluation of Modern Natural Language Processing Techniques via Measuring a Company's Public Perception

Authors: Burak Oksuzoglu, Savas Yildirim, Ferhat Kutlu

Abstract:

Opinion mining (OM) is one of the natural language processing (NLP) problems: determining the polarity of opinions, mostly represented on a positive-neutral-negative axis. The data for OM are usually collected from various social media platforms. In an era where social media has considerable control over companies' futures, it is worth understanding social media and taking action accordingly. OM comes to the fore here as the scale of the discussion about companies increases and it becomes unfeasible to gauge opinion at the individual level. Thus, companies opt to automate this process by applying machine learning (ML) approaches to their data. For the last two decades, OM, or sentiment analysis (SA), has mainly been performed by applying ML classification algorithms such as support vector machines (SVM) and Naïve Bayes to bag-of-n-gram representations of textual data. With the advent of deep learning and its apparent success in NLP, traditional methods have become obsolete. The transfer learning paradigm, commonly used in computer vision (CV) problems, has lately started to shape NLP approaches and language models (LM). This gave a sudden rise to the use of pretrained language models (PTM), which contain language representations obtained by training on large datasets with self-supervised learning objectives. PTMs are further fine-tuned on a specialized downstream task dataset to produce efficient models for various NLP tasks such as OM, named-entity recognition (NER), question answering (QA), and so forth. In this study, traditional and modern NLP approaches have been evaluated for OM using a sizable corpus belonging to a large private company, containing about 76,000 comments in Turkish: SVM with a bag of n-grams, and two chosen pre-trained models, the multilingual universal sentence encoder (MUSE) and bidirectional encoder representations from transformers (BERT).
The MUSE model is a multilingual model that supports 16 languages, including Turkish, and is based on convolutional neural networks. BERT, a monolingual model in our case, is based on transformer neural networks; it uses masked language modeling and next-sentence prediction tasks that allow bidirectional training of the transformers. During the training phase, pre-processing operations such as morphological parsing, stemming, and spelling correction were not used, since experiments showed that their contribution to model performance was insignificant, even though Turkish is a highly agglutinative and inflective language. The results show that deep learning methods with pre-trained models and fine-tuning achieve about an 11% improvement over SVM for OM. The BERT model achieved around 94% prediction accuracy, while the MUSE model achieved around 88% and the SVM around 83%. The MUSE multilingual model shows better results than the SVM, but it still performs worse than the monolingual BERT model.
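The bag-of-n-grams representation behind the SVM baseline mentioned above can be sketched as follows. This is a generic illustration of the representation, not the authors' exact preprocessing (which, as noted, omits morphological parsing and stemming):

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous word n-grams of a token list."""
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bag_of_ngrams(text, n_values=(1, 2)):
    """Sparse bag-of-n-grams vector (n-gram -> count), the classical
    representation fed to an SVM baseline."""
    tokens = text.lower().split()
    bag = Counter()
    for n in n_values:
        bag.update(ngrams(tokens, n))
    return bag

bag = bag_of_ngrams("the service was not good")
print(bag["not good"])  # 1 -> the bigram captures negation that unigrams miss
```

Each comment becomes such a sparse count vector over the corpus vocabulary; the SVM then learns a separating hyperplane over those vectors.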

Keywords: BERT, MUSE, opinion mining, pretrained language model, SVM, Turkish

Procedia PDF Downloads 123
2842 Risk Assessment in Construction Management with Fuzzy Logic

Authors: Mehrdad Abkenari, Orod Zarrinkafsh, Mohsen Ramezan Shirazi

Abstract:

Construction projects are initiated in complicated, dynamic environments and, due to the close relationships between project parameters and the unknown outer environment, they face several uncertainties and risks. Success in time, cost and quality in large-scale construction projects is uncertain as a consequence of technological constraints, the large number of stakeholders, long durations, great capital requirements and poor definition of the extent and scope of the project. Projects faced with such environments and uncertainties can be well managed through the use of risk management across the project's life cycle. Although the concept of risk depends on the opinions and ideas of management, it also covers the risks of not achieving the project objectives. Furthermore, a project's risk analysis addresses the risk of developing inappropriate reactions. Since the evaluation and prioritization of construction projects is a difficult task, a network structure is considered an appropriate approach to analyze complex systems; therefore, we have used this structure for analyzing and modeling the issue. On the other hand, we face inadequate data in deterministic circumstances, and additionally, experts' opinions are usually mathematically vague and are introduced in the form of linguistic variables instead of numerical expressions. Since fuzzy logic is used for expressing vagueness and uncertainty, formulating experts' opinions in the form of fuzzy numbers is an appropriate approach. In other words, the evaluation and prioritization of construction projects on the basis of risk factors in the real world is a complicated issue with many ambiguous qualitative characteristics.
In this study, the risk parameters and factors in construction management are evaluated and prioritized with a fuzzy logic method combining three techniques: DEMATEL (Decision-Making Trial and Evaluation Laboratory), ANP (Analytic Network Process) and TOPSIS (Technique for Order Preference by Similarity to Ideal Solution).
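The TOPSIS stage of the combined method can be sketched as below. The matrix, weights, and criteria are hypothetical placeholders; in the study, the weights would come from the fuzzy DEMATEL/ANP stages rather than being fixed by hand:

```python
import math

def topsis(matrix, weights, benefit):
    """TOPSIS ranking: rows = alternatives (e.g. risk factors),
    columns = criteria. benefit[j] is True when larger is better."""
    m, n = len(matrix), len(matrix[0])
    # Vector-normalize each column, then apply the criterion weights
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)]
         for i in range(m)]
    # Positive and negative ideal solutions per criterion
    pis = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    nis = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((x - p) ** 2 for x, p in zip(row, pis)))
        d_neg = math.sqrt(sum((x - q) ** 2 for x, q in zip(row, nis)))
        scores.append(d_neg / (d_pos + d_neg))  # closeness coefficient
    return scores

# Hypothetical scores for three risks on two criteria
# (probability and impact; higher = more severe, so both "benefit" here)
scores = topsis([[0.8, 0.7], [0.4, 0.9], [0.6, 0.5]],
                weights=[0.6, 0.4], benefit=[True, True])
ranking = sorted(range(3), key=lambda i: scores[i], reverse=True)
print(ranking)  # [0, 2, 1] -> risk 0 has the highest priority
```

A fuzzy variant would carry triangular fuzzy numbers through the same steps before defuzzifying the closeness coefficients.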

Keywords: fuzzy logic, risk, prioritization, assessment

Procedia PDF Downloads 573
2841 Validation of Escherichia coli O157:H7 Inactivation on Apple-Carrot Juice Treated with Manothermosonication by Kinetic Models

Authors: Ozan Kahraman, Hao Feng

Abstract:

Several models, such as the Weibull, modified Gompertz, biphasic linear, and log-logistic models, have been proposed to describe non-linear inactivation kinetics and have been used to fit non-linear inactivation data of several microorganisms treated by heat, high pressure processing or pulsed electric fields. In contrast, most ultrasonic inactivation studies have employed first-order kinetic parameters (D-values and z-values) to describe the reduction in microbial survival counts. This study was conducted to analyze E. coli O157:H7 inactivation data using five microbial survival models (first-order, Weibull, modified Gompertz, biphasic linear and log-logistic), which were fitted to the inactivation curves. The residual sum of squares and the total sum of squares were the criteria used to evaluate the models. The statistical indices of the kinetic models were used to fit the inactivation data for E. coli O157:H7 treated by MTS at three temperatures (40, 50, and 60 °C) and three pressures (100, 200, and 300 kPa). Based on the statistical indices and visual observations, the Weibull and biphasic models fitted the MTS data best, as shown by high R² values. The non-linear kinetic models, including the modified Gompertz, first-order, and log-logistic models, did not provide a better fit to the MTS data than the Weibull and biphasic models. The data found in this study did not follow first-order kinetics, possibly because cells sensitive to the ultrasound treatment were inactivated first, resulting in a fast initial inactivation period, while those resistant to ultrasound were killed more slowly.
The Weibull and biphasic models were thus found to be more flexible for describing the survival curves of E. coli O157:H7 treated by MTS in apple-carrot juice.
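The Weibull model referred to above has the closed form log10(N/N0) = -(t/δ)^p, where δ is the time for the first decimal reduction and p is the shape parameter (p < 1 gives the tailing behavior consistent with resistant subpopulations). A minimal sketch with hypothetical parameter values (not the study's fitted ones), including the residual-sum-of-squares criterion used to compare models:

```python
def weibull_log_survival(t, delta, p):
    """Weibull survival model: log10(N/N0) = -(t/delta)**p."""
    return -((t / delta) ** p)

def rss(observed, predicted):
    """Residual sum of squares, the fit criterion used to compare models."""
    return sum((o - q) ** 2 for o, q in zip(observed, predicted))

# Hypothetical parameters: delta = time of first log reduction, p < 1 = tailing
delta, p = 2.0, 0.6
curve = [weibull_log_survival(t, delta, p) for t in (2, 4, 8)]
print([round(v, 2) for v in curve])  # doubling time does not double reductions
```

With p < 1, quadrupling the treatment time past δ yields well under four additional log reductions, which is the tailing the first-order model cannot capture.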

Keywords: Weibull, biphasic, MTS, kinetic models, E. coli O157:H7

Procedia PDF Downloads 348
2840 Digi-Buddy: A Smart Cane with Artificial Intelligence and Real-Time Assistance

Authors: Amaladhithyan Krishnamoorthy, Ruvaitha Banu

Abstract:

Vision is considered the most important sense in humans, without which leading a normal life can often be difficult. There are many existing smart canes for the visually impaired with obstacle detection using ultrasonic transducers to help them navigate. Though the basic smart cane increases the safety of its users, it does not help fill the void of visual loss. This paper introduces the concept of Digi-Buddy, an evolved smart cane for the visually impaired. The cane consists of several modules: apart from the basic obstacle detection features, Digi-Buddy assists the user by capturing video/images with a wide-angled camera and streaming them to a server, which then detects objects using a deep convolutional neural network. In addition to determining what a particular image/object is, the distance to the object is assessed by the ultrasonic transducer. A sound generation application, modelled with the help of natural language processing, is used to convert the processed images/objects into audio. The detected object is signified by its name, which is transmitted to the user through Bluetooth earphones. Object detection is extended to facial recognition, which matches the faces of the people the user meets against a database of face images and alerts the user about the person. Another crucial function is an automatic-intimation alarm, which is triggered when the user is in an emergency. If the user recovers within a set time, a button provisioned in the cane stops the alarm; otherwise, an automatic intimation about the user's whereabouts is sent to friends and family using GPS. In addition to the safety and security offered by existing smart canes, the proposed concept, to be implemented as a prototype, helps the visually impaired visualize their surroundings through audio in a more amicable way.
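The automatic-intimation-alarm flow described above can be sketched as a simple timed loop. The callbacks, grace period, and coordinates below are hypothetical stand-ins for the cane's hardware:

```python
import time

def emergency_cycle(button_pressed, send_intimation, read_gps, window_s=30):
    """Sound the alarm, poll the cancel button for window_s seconds,
    and send a GPS intimation to contacts if the user does not recover.
    All three callbacks are hypothetical stand-ins for cane hardware."""
    deadline = time.monotonic() + window_s
    while time.monotonic() < deadline:
        if button_pressed():
            return "cancelled"          # user recovered within the set time
        time.sleep(0.05)
    send_intimation(read_gps())          # automatic intimation with location
    return "intimated"

# Simulated run with a short window; the button is never pressed
sent = []
state = emergency_cycle(lambda: False, sent.append,
                        lambda: (12.97, 77.59), window_s=0.2)
print(state, sent)  # intimated [(12.97, 77.59)]
```

On the device, `button_pressed` would read a GPIO pin and `send_intimation` would dispatch an SMS or network message with the GPS fix.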

Keywords: artificial intelligence, facial recognition, natural language processing, internet of things

Procedia PDF Downloads 331
2839 Audio-Visual Co-Data Processing Pipeline

Authors: Rita Chattopadhyay, Vivek Anand Thoutam

Abstract:

Speech is the most acceptable means of communication, one in which we can quickly exchange our feelings and thoughts. Quite often, people can communicate orally but cannot interact or work with computers or devices. It is easier and quicker to give speech commands than to type commands to computers, and likewise easier to listen to audio played from a device than to read output from it. Especially with robotics being an emerging market, with applications in warehouses, the hospitality industry, consumer electronics, assistive technology, etc., speech-based human-machine interaction is emerging as a lucrative feature for robot manufacturers. Considering this, the objective of this paper is to design an "Audio-Visual Co-Data Processing Pipeline." This pipeline is an integrated version of automatic speech recognition, a natural language model for text understanding, object detection, and text-to-speech modules. There are many deep learning models for each of the modules mentioned above, but OpenVINO Model Zoo models are used because the OpenVINO toolkit covers both computer vision and non-computer-vision workloads across Intel hardware, maximizes performance, and accelerates application development. A speech command is given as input containing the target objects to be detected and the start and end times of the interval to extract from the video. Speech is converted to text using the QuartzNet automatic speech recognition model. A summary is extracted from the text using the Generative Pre-Trained Transformer-3 (GPT-3) natural language model. Based on the summary, the relevant frames are extracted from the video, and the You Only Look Once (YOLO) model detects objects in these extracted frames. The numbers of the frames that contain target objects (the objects specified in the speech command) are saved as text.
Finally, this text (the frame numbers) is converted to speech using a text-to-speech model and played from the device. This project was developed for the 80 YOLO labels, and the user can extract frames based on one or two target labels. The pipeline can easily be extended to more than two target labels by making appropriate changes in the object detection module. The project supports four different speech command formats by including sample examples in the prompt used by the GPT-3 model. Based on user preference, one can introduce a new speech command format by including some examples of the respective format in the GPT-3 prompt. This pipeline can be used in many projects, such as human-machine interfaces, human-robot interaction, and surveillance through speech commands. All object detection projects can be upgraded using this pipeline so that one can give speech commands and have the output played from the device.
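The wiring of the pipeline stages can be sketched with stub functions standing in for the QuartzNet, GPT-3, YOLO, and text-to-speech models. Everything here is a toy placeholder for illustration; the real stages run OpenVINO-optimized models:

```python
def run_pipeline(speech_cmd, asr, summarize, extract_frames, detect, tts):
    """Sketch of the co-data pipeline wiring; each stage is a stub
    standing in for the ASR / summarizer / detector / TTS models."""
    text = asr(speech_cmd)                     # speech -> text
    spec = summarize(text)                     # target object + time interval
    frames = extract_frames(spec["start"], spec["end"])
    hits = [i for i, frame in frames if spec["target"] in detect(frame)]
    return tts("frames " + ", ".join(map(str, hits)))

# Toy stubs for illustration only
result = run_pipeline(
    "audio-bytes",
    asr=lambda a: "find the dog between second 0 and 2",
    summarize=lambda t: {"target": "dog", "start": 0, "end": 2},
    extract_frames=lambda s, e: [(0, "cat"), (1, "dog"), (2, "dog")],
    detect=lambda frame: [frame],
    tts=lambda msg: msg,
)
print(result)  # frames 1, 2
```

Keeping the stages behind plain function interfaces like this is what lets the object detection module be swapped or extended to more target labels without touching the rest of the pipeline.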

Keywords: OpenVINO, automatic speech recognition, natural language processing, object detection, text to speech

Procedia PDF Downloads 63
2838 Cost-Effectiveness Analysis of the Use of COBLATION™ Knee Chondroplasty versus Mechanical Debridement in German Patients

Authors: Ayoade Adeyemi, Leo Nherera, Paul Trueman, Antje Emmermann

Abstract:

Background and objectives: Radiofrequency (RF) generated plasma chondroplasty is considered a promising treatment alternative to mechanical debridement (MD) with a shaver. The aim of the study was to perform a cost-effectiveness analysis comparing the costs and outcomes of COBLATION chondroplasty versus mechanical debridement in patients with knee pain associated with a medial meniscus tear and an idiopathic ICRS grade III focal lesion of the medial femoral condyle, from a payer perspective. Methods: A decision-analytic model was developed comparing economic and clinical outcomes between the two treatment options in German patients following knee chondroplasty. Revision rates based on the frequency of repeat arthroscopy, osteotomy and conversion to total knee replacement, reimbursement costs and outcomes data over a 4-year time horizon were extracted from published literature. One-way sensitivity analyses were conducted to assess uncertainties around model parameters. A threshold analysis determined the revision rate at which the model results change. All costs were reported in 2016 euros; future costs were discounted at a 3% annual rate. Results: Over a 4-year period, COBLATION chondroplasty resulted in an overall net cost saving of €461 due to a lower revision rate of 14%, compared to 48% with MD. The threshold analysis showed that both options were associated with comparable costs if the COBLATION revision rate was assumed to increase to 23%. The initial procedure cost for COBLATION was higher compared to MD, and outcome scores were significantly improved at 1 and 4 years post-operation versus MD. Conclusion: The analysis shows that COBLATION chondroplasty is a cost-effective option compared to mechanical debridement in the treatment of patients with a medial meniscus tear and an idiopathic ICRS grade III defect of the medial femoral condyle.
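The core of such a decision-analytic comparison can be sketched as an expected-cost calculation. The unit costs below are hypothetical, since the abstract reports only the resulting €461 saving and the 23% threshold:

```python
def expected_cost(initial, revision_rate, revision_cost):
    """Expected per-patient cost over the horizon: index procedure plus
    the probability-weighted cost of one revision."""
    return initial + revision_rate * revision_cost

def threshold_rate(initial_a, initial_b, rate_b, revision_cost):
    """Revision rate for option A at which its expected cost equals B's."""
    return (initial_b - initial_a + rate_b * revision_cost) / revision_cost

# Hypothetical unit costs (EUR) for illustration only
coblation = expected_cost(2000.0, 0.14, 5000.0)
mech_deb = expected_cost(1500.0, 0.48, 5000.0)
print(mech_deb - coblation > 0)  # True: extra revisions outweigh the price gap
print(round(threshold_rate(2000.0, 1500.0, 0.48, 5000.0), 2))  # 0.38
```

The published model additionally discounts future costs at 3% per year and varies each parameter in one-way sensitivity analyses, which this sketch omits.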

Keywords: COBLATION, cost-effectiveness, knee chondroplasty, mechanical debridement

Procedia PDF Downloads 373
2837 Friction Stir Processing of the AA7075T7352 Aluminum Alloy: Microstructures, Mechanical Properties and Texture Characteristics

Authors: Roopchand Tandon, Zaheer Khan Yusufzai, R. Manna, R. K. Mandal

Abstract:

The present work describes the microstructures, mechanical properties, and texture characteristics of friction stir processed AA7075T7352 aluminum alloy. Phases were analyzed with the help of X-ray diffraction (XRD) and transmission electron microscopy (TEM), along with differential scanning calorimetry (DSC). Depth-wise microstructures and dislocation characteristics in the nugget zone of the friction stir processed specimens were studied using bright-field (BF) and weak-beam dark-field (WBDF) TEM micrographs; variations in the microstructures as well as in the dislocation characteristics were the noteworthy features found. XRD analysis displays changes in the chemistry as well as the size of the phases in the nugget and heat-affected zones (nugget and HAZ), whereas the base metal (BM) microstructures remain unaffected. High-density dislocations were noticed in the nugget region of the processed specimen, along with the formation of dislocation contours and tangles. The η′ and η phases, along with the GP zones, were completely dissolved and trapped by the dislocations. These observations are corroborated by the improved mechanical as well as stress corrosion cracking (SCC) performance. Bulk texture and residual stress measurements were done with a Panalytical Empyrean MRD system using Co-Kα radiation. The nugget zone (NZ) displays compressive residual stress compared to the thermo-mechanically affected and heat-affected zones (TMAZ and HAZ). Typical f.c.c. deformation texture components (e.g., copper, brass, and Goss) were seen. These phenomena are attributed to the enhanced hardening as well as the other mechanical performance of the alloy. Mechanical characterization was done using tensile tests and an Anton Paar instrumented microhardness tester. An enhancement in yield strength from 89 MPa to 170 MPa is reported; the highest hardness value was recorded in the nugget zone of the processed specimens.

Keywords: aluminum alloy, mechanical characterization, texture characteristics, friction stir processing

Procedia PDF Downloads 80
2836 Identification of Suitable Rainwater Harvesting Sites Using Geospatial Techniques with AHP in Chacha Watershed, Jemma Sub-Basin Upper Blue Nile, Ethiopia

Authors: Abrha Ybeyn Gebremedhn, Yitea Seneshaw Getahun, Alebachew Shumye Moges, Fikrey Tesfay

Abstract:

Rainfed agriculture in Ethiopia has failed to produce enough food to meet the increasing demand. Pinpointing appropriate sites for rainwater harvesting (RWH) can contribute substantially to increasing the available water and enhancing agricultural productivity. The current study, on the identification of potential RWH sites, was conducted in the Chacha watershed in the central highlands of Ethiopia, which is endowed with rugged topography. A Geographic Information System with the Analytical Hierarchy Process was used to generate the maps for identifying appropriate RWH sites. In this study, 11 factors that determine RWH locations were considered, including slope, soil texture, runoff depth, land cover type, annual average rainfall, drainage density, lineament intensity, hydrologic soil group, antecedent moisture content, and distance to roads. The overall analysis shows that 10.50%, 71.10%, 17.90%, and 0.50% of the area was found to be highly, moderately, and marginally suitable, and unsuitable for RWH, respectively. RWH site selection was found to be highly dependent on slope, soil texture, and runoff depth; moderately dependent on drainage density, annual average rainfall, and land use/land cover; and less dependent on the other factors. The most suitable areas for rainwater harvesting expansion are lands with flat topography and a soil textural class of high water-holding capacity that can produce a high runoff depth. This study could serve as a baseline for planners and decision-makers and support the adoption of any strategy for appropriate RWH site selection.
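The Analytical Hierarchy Process step referred to above derives factor weights from a pairwise-comparison matrix of expert judgments. A minimal sketch using the column-normalization approximation of the principal eigenvector and Saaty's consistency ratio; the matrix values are hypothetical, not the study's judgments:

```python
def ahp_weights(pairwise):
    """Criterion weights from an AHP pairwise-comparison matrix via
    column normalization (approximate eigenvector), plus the
    consistency ratio CR = CI / RI."""
    n = len(pairwise)
    col_sums = [sum(row[j] for row in pairwise) for j in range(n)]
    weights = [sum(pairwise[i][j] / col_sums[j] for j in range(n)) / n
               for i in range(n)]
    # lambda_max estimated from (A w) / w averaged over rows
    aw = [sum(pairwise[i][j] * weights[j] for j in range(n)) for i in range(n)]
    lam = sum(aw[i] / weights[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random index (n = 3..5)
    return weights, ci / ri

# Hypothetical 3-factor example (slope vs soil texture vs runoff depth)
m = [[1, 3, 5],
     [1 / 3, 1, 3],
     [1 / 5, 1 / 3, 1]]
w, cr = ahp_weights(m)
print([round(x, 2) for x in w], round(cr, 3))  # CR < 0.1 = acceptable
```

In the study's workflow, such weights would then drive the weighted overlay of the 11 factor maps in the GIS.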

Keywords: runoff depth, antecedent moisture condition, AHP, weighted overlay, water resource

Procedia PDF Downloads 33
2835 Detecting Hate Speech and Cyberbullying Using Natural Language Processing

Authors: Nádia Pereira, Paula Ferreira, Sofia Francisco, Sofia Oliveira, Sidclay Souza, Paula Paulino, Ana Margarida Veiga Simão

Abstract:

Social media has progressed into a platform for hate speech among its users, and thus, there is an increasing need to develop automatic classifiers that detect offense and conflicts to help decrease the prevalence of such incidents. Online communication can be used to intentionally harm someone, which is why such classifiers could be essential in social networks. A possible application of these classifiers is the automatic detection of cyberbullying. Even though identifying the aggressive language used in online interactions could be important to build cyberbullying datasets, there are other criteria that must be considered. Being able to capture language that is indicative of the intent to harm others in a specific context of online interaction is fundamental. Offense and hate speech may be the foundation of online conflicts, which have become common in social media and are an emergent research focus in machine learning and natural language processing. This study presents two Portuguese-language offense-related datasets, which serve as examples for future research and extend the study of the topic. The first is similar to other offense detection datasets and is entitled the Aggressiveness dataset. The second is a novelty because of its use of the history of the interaction between users and is entitled the Conflicts/Attacks dataset. Both datasets were developed in different phases. Firstly, we performed a content analysis of verbal aggression witnessed by adolescents in situations of cyberbullying. Secondly, we computed frequency analyses from the previous phase to gather the lexical and linguistic cues used to identify potentially aggressive conflicts and attacks posted on Twitter. Thirdly, thorough annotation of real tweets was performed by independent postgraduate educational psychologists with experience in cyberbullying research. Lastly, we benchmarked these datasets with machine learning classifiers.

Keywords: aggression, classifiers, cyberbullying, datasets, hate speech, machine learning

Procedia PDF Downloads 207
2834 An Approximation of Daily Rainfall by Using a Pixel Value Data Approach

Authors: Sarisa Pinkham, Kanyarat Bussaban

Abstract:

The research aims to approximate the amount of daily rainfall by using a pixel-value data approach. Daily rainfall maps from the Thailand Meteorological Department for the period January to December 2013 were the data used in this study. The results showed that this approach can approximate the amount of daily rainfall with an RMSE of 3.343.
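The reported error metric is a root-mean-square error over paired observed and pixel-derived rainfall values, which can be sketched in one function; the sample values below are hypothetical:

```python
import math

def rmse(observed, predicted):
    """Root-mean-square error between gauge rainfall and the amounts
    approximated from rainfall-map pixel values."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted))
                     / len(observed))

# Hypothetical daily values (mm) for illustration
print(round(rmse([10.0, 0.0, 25.0], [12.0, 1.0, 21.0]), 3))  # 2.646
```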

Keywords: daily rainfall, image processing, approximation, pixel value data

Procedia PDF Downloads 372
2833 Safe and Scalable Framework for Participation of Nodes in Smart Grid Networks in a P2P Exchange of Short-Term Products

Authors: Maciej Jedrzejczyk, Karolina Marzantowicz

Abstract:

The traditional utility value chain has been transformed over the last few years into unbundled markets. Increased distributed generation of energy is one of the considerable challenges faced by smart grid networks: new sources of energy introduce a volatile demand response, which has a considerable impact on traditional middlemen in the E&U market. The purpose of this research is to search for ways to allow near-real-time electricity markets to transact surplus energy based on accurate, time-synchronous measurements. The proposed framework evaluates the use of secure peer-to-peer (P2P) communication and distributed transaction ledgers to provide a flat hierarchy and allow real-time insights into present and forecasted grid operations, as well as the state and health of the network. The objective is to achieve dynamic grid operations with more efficient resource usage, higher security of supply and a longer grid infrastructure life cycle. The methods used for this study are based on a comparative analysis of different distributed ledger technologies in terms of scalability, transaction performance, pluggability with external data sources, data transparency, privacy, end-to-end security and adaptability to various market topologies. The intended output of this research is the design of a framework for a safer, more efficient and more scalable smart grid network that bridges the gap between traditional components of the energy network and individual energy producers. The results of this study are ready for detailed measurement testing, a likely follow-up in separate studies. New smart grid platforms achieving measurable efficiencies will allow for the development of new types of grid KPIs, multi-smart-grid branches, markets, and businesses.

Keywords: autonomous agents, distributed computing, distributed ledger technologies, large scale systems, micro grids, peer-to-peer networks, self-organization, self-stabilization, smart grids

Procedia PDF Downloads 280
2832 Community Development and Empowerment

Authors: Shahin Marjan Nanaje

Abstract:

The present century is a time when social workers face complicated issues in their area of work. Focusing all attention on changing the lives of those who live on the margins or in poverty has led us to forget to look at ourselves and begin changing the way we address issues. It seems there is a new area of needs that social workers should respond to. To address the issues and needs of a community, both individually and as a group, we need dialogue and collaboration, and new methods of dialogue as tools to reach that collaboration. Social workers, as the link between community, organizations and government, play multiple roles. They need new communication abilities to convey the community's narratives to those organizations and the government, and vice versa; this concerns not only language but changing the dialogue itself. Migration for survival by job seekers to the big cities has created its own issues and difficulties and has therefore created new needs. Collaboration is required not only between the government sector and non-government sectors but could also take a new form between government, non-government organizations and communities. To reach this collaboration, we need healthy, productive and meaningful dialogue, and in this new collaboration there would be no hierarchy between members. The methodology selected by the researcher focused on observation in the first place and used a questionnaire in the second place. The duration of the research was three months and included home visits, group discussions and the use of communal narrations, which helped bring enough evidence to understand the real needs of the community. The randomly selected sample included 70 immigrant families who work as sweepers in a slum community in Bangalore, Karnataka.
The results reveal that there is a gap between what a community is and what organizations, government and members of society apart from this community think about it. Consequently, it is learnt that to supply any service or bring any change to a slum community, we need to apply new skills of dialogue and understand each other before providing any services. Also, to bring change to the lives of marginal groups at large, we need collaboration, as their challenges are collective and need to be addressed by different groups. The outcome of the research helped the researcher see the need for a new method of dialogue and collaboration, as well as a framework for collaboration and dialogue, which were the main focus of the paper. The researcher used observation experience from ten NGOs and their activities to create the framework for dialogue and collaboration.

Keywords: collaboration, dialogue, community development, empowerment

Procedia PDF Downloads 568
2831 The Impact of Legislation on Waste and Losses in the Food Processing Sector in the UK/EU

Authors: David Lloyd, David Owen, Martin Jardine

Abstract:

Introduction: European weight regulations for food products require a full understanding of regulatory guidelines to assure compliance. It is suggested that the complexity of the regulations leads to practices that result in the overfilling of food packages by food processors. Purpose: To establish current practices among food processors and the financial, environmental, and societal impacts of ineffective food production practices on the food supply chain. Methods: An analysis of food packing controls with 10 companies across varying food categories, together with quantitative research involving a further 15 food processors on their confidence in the weight control analysis of finished food packs within their organisations. Results: A process-floor analysis of manufacturing operations, focusing on 10 products, found package overfill ranging from 4.8% to 20.2%. Standard deviation figures for all products showed the potential to reduce the average pack weight while still retaining the legal status of the product. In 20% of cases an automatic weight analysis machine was in situ, yet packs were still significantly overweight. Collateral impacts noted included the effect of overfill on raw material purchasing and added food miles, often on a global basis: one raw material alone generated 10,000 extra food miles owing to poor weight control at the processing unit. Case studies of a meat product and a bakery product will be discussed, illustrating the impact of poor controls resulting from complex legislation, including extra energy costs in production and the effect of the extra weight on fuel usage. A risk assessment model used primarily for food safety, adapted to identify waste and sustainability risks, will also be discussed within the presentation.

Keywords: legislation, overfill, profile, waste

Procedia PDF Downloads 386
2830 Managing Risks of Civil War: Accounting Practices in Egyptian Households

Authors: Sumohon Matilal, Neveen Abdelrehim

Abstract:

The purpose of this study is to examine how households manage the risks of civil war, using the calculative practices of accounting as a lens. As with other social phenomena, accounting serves as a conduit for attributing values and rationales to crisis and, in the process, renders it visible and calculable. Our focus, in particular, is on the dialogue facilitated by the numerical logic of accounting between the householder and a crisis scenario such as civil war. In other words, we seek to study how the risk of war is rationalized through household budgets, income and expenditure statements, etc., and how such accounting constructs in turn shape attitudes toward earning and spending in a wartime economy. The existing literature on war and accounting demonstrates how an accounting logic can have potentially destabilising consequences and how it is used to legitimise war. However, very few scholars have looked at how accounting constructs are used to internalise the effects of war in an average household and at the behavioural consequences that arise from such accounting. Relatedly, scholars studying household accounting have mostly focused on the links between gender and hierarchy in the management of financial affairs; few have examined the role of household accounts in a crisis scenario. This study intends to fill that gap. We draw upon Egypt, a country in the midst of civil war since 2011, for our purpose. We intend to carry out 15-20 semi-structured interviews with middle-income households in Cairo that maintain some form of accounts, to study the following issues: 1. How do people internalise the risks of civil war? What kinds of accounting constructs do they use (simple budgets, periodic income-expenditure notes or statements, spreadsheets, etc.)? 2. How has civil war affected household expenditure? Are people spending more or less than before? 3. How has civil war affected household income?
Are people finding it difficult or easy to survive on their pre-war income? 4. How is such accounting affecting household behaviour toward earnings and expenditure? Are families prioritising expenditure on necessities alone? Are they refraining from indulging in luxuries? Are family members taking two or three jobs to cope with difficult times? Are families increasingly turning to borrowing? Is credit available, and from whom?

Keywords: risk, accounting, war, crisis

Procedia PDF Downloads 188