Search results for: data utilization
26081 A Machine Learning Decision Support Framework for Industrial Engineering Purposes
Authors: Anli Du Preez, James Bekker
Abstract:
Data is currently one of the most critical and influential emerging technologies. However, the true potential of data is yet to be exploited since, currently, only about 1% of generated data is ever actually analyzed for value creation. There is a data gap where data is not explored due to the lack of data analytics infrastructure and the required data analytics skills. This study developed a decision support framework for data analytics by following Jabareen’s framework development methodology. The study focused on machine learning algorithms, which are a subset of data analytics. The developed framework is designed to assist data analysts with little experience in choosing the appropriate machine learning algorithm for the purpose of their application.
Keywords: Data analytics, Industrial engineering, Machine learning, Value creation
Procedia PDF Downloads 168
26080 Enhancing Large Language Models' Data Analysis Capability with Planning-and-Execution and Code Generation Agents: A Use Case for Southeast Asia Real Estate Market Analytics
Authors: Kien Vu, Jien Min Soh, Mohamed Jahangir Abubacker, Piyawut Pattamanon, Soojin Lee, Suvro Banerjee
Abstract:
Recent advances in Generative Artificial Intelligence (GenAI), in particular Large Language Models (LLMs), have shown promise to disrupt multiple industries at scale. However, LLMs also present unique challenges, notably so-called "hallucinations", the generation of outputs that are not grounded in the input data, which hinder their adoption in production. A common practice to mitigate the hallucination problem is to use a Retrieval Augmented Generation (RAG) system to ground LLMs' responses in ground truth. RAG converts the grounding documents into embeddings, retrieves the relevant parts by vector similarity between the user's query and the documents, and then generates a response based not only on the model's pre-trained knowledge but also on the specific information from the retrieved documents. However, the RAG system is not suitable for tabular data and subsequent data analysis tasks for multiple reasons, such as information loss, data format, and the retrieval mechanism. In this study, we explored a novel methodology that combines planning-and-execution and code generation agents to enhance LLMs' data analysis capabilities. The approach enables LLMs to autonomously dissect a complex analytical task into simpler sub-tasks and requirements, then convert them into executable segments of code. In the final step, it generates the complete response from the output of the executed code. When deployed as a beta version on DataSense, the property insight tool of PropertyGuru, the approach yielded promising results, as it was able to serve market insight and data visualization needs with high accuracy and extensive coverage by abstracting the complexities for real-estate agents and developers from non-programming backgrounds. In essence, the methodology not only refines the analytical process but also serves as a strategic tool for real estate professionals, aiding market understanding and enhancement without the need for programming skills. The implication extends beyond immediate analytics, paving the way for a new era in the real estate industry characterized by efficiency and advanced data utilization.
Keywords: large language model, reasoning, planning and execution, code generation, natural language processing, prompt engineering, data analysis, real estate, data sense, PropertyGuru
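To make the planning-and-execution idea concrete, the sketch below outlines a minimal agent loop in Python. It is an illustration only, not the DataSense/PropertyGuru implementation; the call_llm helper, the prompts, and the pandas-based execution step are assumptions introduced for the example.

```python
import pandas as pd

def call_llm(prompt: str) -> str:
    """Placeholder for an LLM call (assumed helper); a real system would
    query a hosted model and return its text completion."""
    raise NotImplementedError("plug in your LLM provider here")

def plan(question: str) -> list[str]:
    # Planning agent: break a complex analytical question into sub-tasks.
    reply = call_llm(f"Split this analytics question into numbered sub-tasks:\n{question}")
    return [line.split(". ", 1)[1] for line in reply.splitlines() if ". " in line]

def generate_code(sub_task: str, schema: str) -> str:
    # Code-generation agent: produce an executable pandas snippet per sub-task.
    return call_llm(
        f"Table schema: {schema}\n"
        f"Write Python/pandas code that stores its answer in `result` for: {sub_task}"
    )

def execute(code: str, df: pd.DataFrame):
    # Execution step: run the generated code against the tabular data.
    scope = {"df": df, "pd": pd}
    exec(code, scope)          # a production system would sandbox this call
    return scope.get("result")

def answer(question: str, df: pd.DataFrame) -> str:
    # Final step: summarise executed results into a grounded response.
    results = [execute(generate_code(t, str(df.dtypes)), df) for t in plan(question)]
    return call_llm(f"Question: {question}\nSub-task results: {results}\nWrite the answer.")
```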
Procedia PDF Downloads 87
26079 Knowledge Diffusion via Automated Organizational Cartography (Autocart)
Authors: Mounir Kehal
Abstract:
The post-globalization epoch has placed businesses everywhere in new and different competitive situations where knowledgeable, effective and efficient behavior has come to provide the competitive and comparative edge. Enterprises have turned to explicit - and even conceptualizing on tacit - knowledge management to elaborate a systematic approach to developing and sustaining the intellectual capital needed to succeed. To be able to do that, one has to be able to visualize the organization as consisting of nothing but knowledge and knowledge flows, presented in a graphical and visual framework referred to as automated organizational cartography. This creates the ability to further and actively classify existing organizational content evolving from and within data feeds, in an algorithmic manner, potentially giving insightful schemes and dynamics by which organizational know-how is visualized. We discuss and elaborate on the most recent and applicable definitions and classifications of knowledge management, representing a wide range of views from the mechanistic (systematic, data driven) to the more socially (psychologically, cognitive/metadata driven) orientated. More elaborate continuum models, for knowledge acquisition and reasoning purposes, are used to effectively represent the domain of information that an end user may draw on in their decision-making process when utilizing available organizational intellectual resources (i.e. Autocart). In this paper, we present an empirical research study conducted previously to explore knowledge diffusion in a specialist knowledge domain.
Keywords: knowledge management, knowledge maps, knowledge diffusion, organizational cartography
Procedia PDF Downloads 309
26078 Modeling Waiting and Service Time for Patients: A Case Study of Matawale Health Centre, Zomba, Malawi
Authors: Moses Aron, Elias Mwakilama, Jimmy Namangale
Abstract:
Spending long periods in queues for a basic service remains a common challenge in most developing countries, including Malawi. In the health sector in particular, the Out-Patient Department (OPD) experiences long queues, which puts the lives of patients at risk. However, by using queuing analysis to understand the nature of the problems and the efficiency of service systems, such problems can be abated. Depending on the kind of service, the literature proposes different possible queuing models. However, rather than using the generalized assumed models proposed in the literature, real case study data can give a deeper understanding of the particular problem's model and of how such a model can vary from one day to another and from one case to another. As such, this study uses data obtained from one urban HC for BP, Pediatric and General OPD cases to investigate the average queuing time for patients within the system. It seeks to identify the proper queuing model by investigating the distribution functions of patients' arrival time, inter-arrival time, waiting time and service time. Compared with the standard values set by the WHO, the study found that patients at this HC spend more time waiting than being served. On model investigation, different days presented different models, ranging from an assumed M/M/1 and M/M/2 to M/Er/2. Through sensitivity analysis, the commonly assumed M/M/1 model generally failed to fit the data, whereas an M/Er/2 model fit well. An M/Er/3 model appeared better in terms of resource utilization, suggesting a need to increase medical personnel at this HC, whereas an M/Er/4 model caused more idleness of human resources.
Keywords: health care, out-patient department, queuing model, sensitivity analysis
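As a worked illustration of the queuing quantities discussed (utilization, probability of waiting, mean queue time), and not the authors' model fitted to the Matawale data, the sketch below computes steady-state metrics of an assumed M/M/c queue with hypothetical arrival and service rates.

```python
from math import factorial

def mmc_metrics(lam: float, mu: float, c: int):
    """Steady-state metrics for an M/M/c queue.
    lam: arrival rate (patients/hour), mu: service rate per server, c: servers."""
    a = lam / mu                      # offered load
    rho = a / c                       # server utilization (must be < 1)
    if rho >= 1:
        raise ValueError("system is unstable: utilization >= 1")
    # Erlang-C probability that an arriving patient must wait
    inv = sum(a**k / factorial(k) for k in range(c)) + a**c / (factorial(c) * (1 - rho))
    p_wait = (a**c / (factorial(c) * (1 - rho))) / inv
    wq = p_wait / (c * mu - lam)      # mean time spent in the queue
    return rho, p_wait, wq

# hypothetical rates: 10 patients/hour arrive, each server handles 6/hour, 2 servers
rho, p_wait, wq = mmc_metrics(lam=10, mu=6, c=2)
print(f"utilization={rho:.2f}, P(wait)={p_wait:.2f}, mean queue time={60*wq:.1f} min")
```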
Procedia PDF Downloads 435
26077 Motherhood Practices and Symbolic Capital: A Study of Teen Mothers in Northeastern Thailand
Authors: Ampai Muensit, Maniemai Thongyou, Patcharin Lapanun
Abstract:
Teen mothers have been viewed as 'the powerless', facing numerous pressures including poverty, immaturity of motherhood, and especially social blame. This paper argues that, to endure as agents, they keep struggling to overcome difficulties in their everyday life by using certain symbols to negotiate the situations they encounter and to obtain a social position without surrendering to the dominating socio-cultural structure. Guided by Bourdieu’s theory of practice, this study looks at how teen mothers use symbolic capital in their motherhood practices. Although motherhood practices can be found in different contexts with various types of capital utilization, this paper focuses on the use of symbolic capital in teen mothers' practices within the context of the community. The study employs a qualitative methodology; data were collected from 12 informants through life histories, in-depth interviews and observation, and content analysis was employed for data analysis. The findings show that the child and motherhood were key symbolic capitals in motherhood practices. Employing such capitals, teen mothers can achieve acceptance from the community – particularly from a new community. These symbolic capitals were important sources of teen mothers' power to turn the tide by changing their status – from "the powerless" to "the agent". The use of symbolic capitals also related to the habitus of teen mothers in better negotiating an appropriate social position.
Keywords: teen mother, motherhood practice, symbolic capital, community
Procedia PDF Downloads 267
26076 Providing Security to Private Cloud Using Advanced Encryption Standard Algorithm
Authors: Annapureddy Srikant Reddy, Atthanti Mahendra, Samala Chinni Krishna, N. Neelima
Abstract:
In our present world, we are generating a lot of data and need a specific device to store all of it. Generally, we store data on pen drives, hard drives, etc. Sometimes we may lose the data due to the corruption of these devices. To overcome these issues, we implemented a cloud space for storing the data, which provides more security to the data. We can access the data using just the internet from anywhere in the world. We implemented all of this in Java using the NetBeans IDE. Once a user uploads data, they do not have any rights to change it. Users' uploaded files are stored in the cloud with the system time as the file name, and the directory is created with some random words. The cloud accepts the data only if the size of the file is less than 2 MB.
Keywords: cloud space, AES, FTP, NetBeans IDE
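As a hedged illustration of the idea rather than the authors' Java/NetBeans code, the Python sketch below encrypts a file with an AES-based recipe before upload and enforces the 2 MB limit mentioned in the abstract; the cryptography library's Fernet construction (AES-128-CBC with HMAC) is used here as an assumed stand-in.

```python
import os, time
from cryptography.fernet import Fernet  # AES-128-CBC + HMAC under the hood

MAX_SIZE = 2 * 1024 * 1024  # 2 MB limit, as in the abstract

def encrypt_for_upload(path: str, key: bytes) -> tuple[str, bytes]:
    """Encrypt a user file and name the stored object after the system time."""
    if os.path.getsize(path) > MAX_SIZE:
        raise ValueError("file larger than 2 MB is rejected")
    with open(path, "rb") as fh:
        ciphertext = Fernet(key).encrypt(fh.read())
    stored_name = str(int(time.time() * 1000))  # file name taken from system time
    return stored_name, ciphertext

key = Fernet.generate_key()            # held by the cloud owner, never the uploader
# name, blob = encrypt_for_upload("report.pdf", key)
# plaintext = Fernet(key).decrypt(blob)
```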
Procedia PDF Downloads 206
26075 Incorporating Lexical-Semantic Knowledge into Convolutional Neural Network Framework for Pediatric Disease Diagnosis
Authors: Xiaocong Liu, Huazhen Wang, Ting He, Xiaozheng Li, Weihan Zhang, Jian Chen
Abstract:
The utilization of electronic medical record (EMR) data to establish disease diagnosis models has become an important research topic in biomedical informatics. Deep learning can automatically extract features from massive data, which has brought about breakthroughs in the study of EMR data. The challenge is that deep learning lacks semantic knowledge, which limits its practicability in medical science. This research proposes a method for incorporating lexical-semantic knowledge from abundant entities into a convolutional neural network (CNN) framework for pediatric disease diagnosis. Firstly, medical terms are vectorized into Lexical Semantic Vectors (LSV), which are concatenated with the embedded word vectors of word2vec to enrich the feature representation. Secondly, the semantic distribution of medical terms serves as a Semantic Decision Guide (SDG) for the optimization of deep learning models. The study evaluates the performance of the LSV-SDG-CNN model on four kinds of Chinese EMR datasets. Additionally, CNN, LSV-CNN, and SDG-CNN are designed as baseline models for comparison. The experimental results show that the LSV-SDG-CNN model outperforms the baseline models on all four Chinese EMR datasets. The best configuration of the model yielded an F1 score of 86.20%. The results clearly demonstrate that the CNN has been effectively guided and optimized by lexical-semantic knowledge, and the LSV-SDG-CNN model improves disease classification accuracy with a clear margin.
Keywords: convolutional neural network, electronic medical record, feature representation, lexical semantics, semantic decision
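The feature-construction step described above, concatenating a Lexical Semantic Vector with a word2vec embedding for each token, can be sketched as follows. This is a schematic with random placeholder vectors, not the authors' LSV-SDG-CNN pipeline; the dimensions and token lists are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
EMB_DIM, LSV_DIM = 100, 20           # assumed embedding sizes

# placeholder lookup tables standing in for trained word2vec and LSV vectors
word2vec = {w: rng.normal(size=EMB_DIM) for w in ["fever", "cough", "rash"]}
lsv      = {w: rng.normal(size=LSV_DIM) for w in ["fever", "cough", "rash"]}

def encode_record(tokens, max_len=50):
    """Build the CNN input matrix: each row is [word2vec ; LSV] for one token."""
    rows = [np.concatenate([word2vec[t], lsv[t]]) for t in tokens if t in word2vec][:max_len]
    mat = np.zeros((max_len, EMB_DIM + LSV_DIM))   # zero-pad to a fixed length
    if rows:
        mat[:len(rows)] = rows
    return mat

x = encode_record(["fever", "cough"])
print(x.shape)   # (50, 120) -> fed to a Conv1D classifier over the token axis
```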
Procedia PDF Downloads 126
26074 Screening and Optimization of Pretreatments for Rice Straw and Their Utilization for Bioethanol Production Using Developed Yeast Strain
Authors: Ganesh Dattatraya Saratale, Min Kyu Oh
Abstract:
Rice straw is one of the most abundant lignocellulosic waste materials, and its annual production is about 731 Mt worldwide. This study addresses the effective utilization of this waste biomass for biofuel production. We present a comparative assessment of numerous pretreatment strategies for rice straw, comprising the major physical, chemical and physicochemical methods. Among the different methods employed, alkaline pretreatment in combination with sodium chlorite/acetic acid delignification was found to be an efficient pretreatment, with significant improvement in the enzymatic digestibility of rice straw. A cellulase dose of 20 filter paper units (FPU) released a maximum of 63.21 g/L of reducing sugar from rice straw, with a 94.45% hydrolysis yield and a 64.64% glucose yield. The effects of the different pretreatment methods on biomass structure and complexity were investigated by FTIR, XRD and SEM analytical techniques. Finally, the enzymatic hydrolysate of rice straw was used for ethanol production using the developed Saccharomyces cerevisiae SR8. The developed yeast strain enabled efficient fermentation of xylose and glucose and gave higher ethanol production. Thus, the development of bioethanol production from lignocellulosic waste biomass is a generic, applicable methodology and has great implications for using ‘green raw materials’ and producing ‘green products’, much needed today.
Keywords: rice straw, pretreatment, enzymatic hydrolysis, FPU, Saccharomyces cerevisiae SR8, ethanol fermentation
Procedia PDF Downloads 538
26073 Efficient Neural and Fuzzy Models for the Identification of Dynamical Systems
Authors: Aouiche Abdelaziz, Soudani Mouhamed Salah, Aouiche El Moundhe
Abstract:
The present paper addresses the utilization of Artificial Neural Networks (ANNs) and Fuzzy Inference Systems (FISs) for the identification and control of dynamical systems with some degree of uncertainty. Because ANNs and FISs have an inherent ability to approximate functions and to adapt to changes in inputs and parameters, they can be used to control systems too complex for linear controllers. In this work, we show how ANNs and FISs can be arranged to form networks that can learn from external data. We then present input structures that can be used along with ANNs and FISs to model non-linear systems. Four systems were used to test the identification and control of the proposed structures. The results show that the ANNs and FISs (trained with the back-propagation algorithm) were efficient in modeling and controlling the non-linear plants.
Keywords: non-linear systems, fuzzy set models, neural network, control law
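To illustrate the kind of input structure described, feeding delayed outputs and inputs to a learned model for identification, the sketch below trains a small neural network on data from an assumed first-order non-linear plant; it is a generic illustration rather than any of the four test systems used in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
u = rng.uniform(-1, 1, 600)                     # excitation input signal
y = np.zeros(600)
for t in range(1, 600):                         # assumed non-linear plant
    y[t] = 0.6 * np.tanh(y[t - 1]) + 0.4 * u[t - 1]

# NARX-style regressors: predict y(t) from [y(t-1), y(t-2), u(t-1), u(t-2)]
X = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
target = y[2:]

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000, random_state=0)
model.fit(X[:400], target[:400])
print("one-step-ahead R^2 on held-out data:", round(model.score(X[400:], target[400:]), 3))
```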
Procedia PDF Downloads 212
26072 Business Intelligence for Profiling of Telecommunication Customer
Authors: Rokhmatul Insani, Hira Laksmiwati Soemitro
Abstract:
Business intelligence is a methodology that exploits data to produce information and knowledge systematically; business intelligence can support the decision-making process. Some methods in business intelligence are the data warehouse and data mining. A data warehouse can store historical data from transactional data. For data modelling in the data warehouse, we apply dimensional modelling by Kimball. Data mining, in turn, is used to extract patterns from the data and gain insight from it. Data mining has many techniques, one of which is segmentation. For profiling telecommunication customers, we use customer segmentation according to customers' usage of services, customer invoices and customer payments. Customers can be grouped according to their characteristics, and profitable customers can be identified. We apply the K-Means clustering algorithm for segmentation, with the RFM (Recency, Frequency and Monetary) model as the input variables. All data mining processes are carried out with IBM SPSS Modeler.
Keywords: business intelligence, customer segmentation, data warehouse, data mining
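A minimal sketch of the segmentation step follows: K-Means over RFM features, here with scikit-learn and a toy table rather than the authors' IBM SPSS Modeler stream and real billing data.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# toy RFM table: days since last use, number of transactions, total bill amount
rfm = pd.DataFrame({
    "recency":   [5, 40, 3, 90, 12, 60],
    "frequency": [30, 4, 45, 1, 22, 3],
    "monetary":  [120.0, 15.5, 210.0, 5.0, 80.0, 12.0],
})

X = StandardScaler().fit_transform(rfm)          # put RFM features on one scale
rfm["segment"] = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(X)

# profile each segment to spot the profitable customers
print(rfm.groupby("segment").mean())
```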
Procedia PDF Downloads 484
26071 Feasibility Study of MongoDB and Radio Frequency Identification Technology in Asset Tracking System
Authors: Mohd Noah A. Rahman, Afzaal H. Seyal, Sharul T. Tajuddin, Hartiny Md Azmi
Abstract:
In real-world settings, whether higher academic institutions, small, medium or large companies, or the public and private sectors, organizations experience inventory or asset shrinkage due to theft, loss or even inventory tracking errors. This happens because of absent or poor security systems and measures being implemented in these organizations. Hence, implementing Radio Frequency Identification (RFID) technology in any manual or existing web-based system or web application can deter and eventually solve major issues and serve better data retrieval and data access. Moreover, such a manual or existing system can be enhanced into a mobile-based system or application, and the availability of internet connections can support better services of the system. The involvement of these technologies results in various benefits for individuals and organizations in terms of accessibility, availability, mobility, efficiency, effectiveness, real-time information and security. This paper looks deeper into the integration of mobile devices with RFID technologies for the purpose of asset tracking and control. This is followed by the development and utilization of MongoDB as the main database to store data and its association with RFID technology. Finally, a web-based system is developed that can be viewed in a mobile-based form with the aid of Hypertext Preprocessor (PHP), MongoDB, Hyper-Text Markup Language 5 (HTML5), Android, JavaScript and AJAX.
Keywords: RFID, asset tracking system, MongoDB, NoSQL
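To illustrate how MongoDB can back such an asset-tracking system, the sketch below stores and looks up RFID tag reads with pymongo; the connection string, database name, and document fields are assumptions, and the paper's own stack is PHP/HTML5/AJAX rather than Python.

```python
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # assumed local instance
assets = client["asset_tracking"]["assets"]

def record_read(tag_id: str, reader_location: str) -> None:
    """Store one RFID read event as a document (schema-less, so fields can evolve)."""
    assets.insert_one({
        "tag_id": tag_id,
        "location": reader_location,
        "seen_at": datetime.now(timezone.utc),
    })

def last_seen(tag_id: str):
    """Return the most recent read for a tag, e.g. for a mobile lookup screen."""
    return assets.find_one({"tag_id": tag_id}, sort=[("seen_at", -1)])

record_read("E200-3412-0123", "Library, Level 2")
print(last_seen("E200-3412-0123"))
```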
Procedia PDF Downloads 306
26070 Impact of Transitioning to Renewable Energy Sources on Key Performance Indicators and Artificial Intelligence Modules of Data Center
Authors: Ahmed Hossam ElMolla, Mohamed Hatem Saleh, Hamza Mostafa, Lara Mamdouh, Yassin Wael
Abstract:
Artificial intelligence (AI) is reshaping industries, and its potential to revolutionize renewable energy and data center operations is immense. By harnessing AI's capabilities, we can optimize energy consumption, predict fluctuations in renewable energy generation, and improve the efficiency of data center infrastructure. This convergence of technologies promises a future where energy is managed more intelligently, sustainably, and cost-effectively. The integration of AI into renewable energy systems unlocks a wealth of opportunities. Machine learning algorithms can analyze vast amounts of data to forecast weather patterns, solar irradiance, and wind speeds, enabling more accurate energy production planning. AI-powered systems can optimize energy storage and grid management, ensuring a stable power supply even during intermittent renewable generation. Moreover, AI can identify maintenance needs for renewable energy infrastructure, preventing costly breakdowns and maximizing system lifespan. Data centers, which consume substantial amounts of energy, are prime candidates for AI-driven optimization. AI can analyze energy consumption patterns, identify inefficiencies, and recommend adjustments to cooling systems, server utilization, and power distribution. Predictive maintenance using AI can prevent equipment failures, reducing energy waste and downtime. Additionally, AI can optimize data placement and retrieval, minimizing energy consumption associated with data transfer. As AI transforms renewable energy and data center operations, modified Key Performance Indicators (KPIs) will emerge. Traditional metrics like energy efficiency and cost-per-megawatt-hour will continue to be relevant, but additional KPIs focused on AI's impact will be essential. These might include AI-driven cost savings, predictive accuracy of energy generation and consumption, and the reduction of carbon emissions attributed to AI-optimized operations. By tracking these KPIs, organizations can measure the success of their AI initiatives and identify areas for improvement. Ultimately, the synergy between AI, renewable energy, and data centers holds the potential to create a more sustainable and resilient future. By embracing these technologies, we can build smarter, greener, and more efficient systems that benefit both the environment and the economy.
Keywords: data center, artificial intelligence, renewable energy, energy efficiency, sustainability, optimization, predictive analytics, energy consumption, energy storage, grid management, data center optimization, key performance indicators, carbon emissions, resiliency
Procedia PDF Downloads 33
26069 Public Environmental Investment Analysis of Japan
Authors: K. Y. Chen, H. Chua, C. W. Kan
Abstract:
Japan is a well-developed country, but environmental issues remain a pressing concern. In this study, we analyse how environmental investment affects sustainable development in Japan. This paper first describes the environmental policy of Japan and the effort put in by the Japanese government. We then collect yearly environmental data as well as information about environmental investment. Based on the data collected, we try to establish the relationship between environmental investment and sustainable development in Japan. In addition, we analyse the SWOT of environmental investment in Japan. Based on the economic information collected, Japan established a sound material-cycle society through changes in business and lifestyles. A comprehensive legal system for this kind of society was established in Japan. In addition, other supporting measures, such as financial measures, utilization of economic instruments, implementation of research, and promotion of education and science and technology, help Japan cope with recent environmental challenges. Japan's excellent environmental technologies have changed its socioeconomic system and are at the highest global standards. This is reflected in the number of patents registered in Japan, which has grown steadily. A country-by-country comparison of applications for patents on environmental technologies also indicates that Japan ranks high in such areas as atmospheric pollution and water quality management, solid waste management and renewable energy. This is a result of the large expenditure invested in research and development.
Keywords: Japan, environmental investment, sustainable development, analysis
Procedia PDF Downloads 268
26068 Effect of Ginger (Zingiber Officinale) And Garlic (Allium Sativum) Mixture on Growth Performance, Feed Utilization and Survival of Clarias Gariepinus Fingerlings
Authors: Maryam I. Abdullahi, Suleiman Aliyu, Armaya'u Hamisu Bichi
Abstract:
The study was conducted at the University Fish Farm, Federal University Dutsinma. The aim of the study was to determine the effects of dietary supplementation with an Allium sativum and Zingiber officinale mixture on the growth performance, feed utilization and survival of C. gariepinus fingerlings reared in a tank system. The experimental setup comprised four (4) treatment groups labeled T1, T2, T3 and T4, each treatment replicated 3 times with ten (10) fingerlings per replicate. Treatment 1 contained 0.5% Zingiber officinale and 0.5% Allium sativum (ZO-AS: 1.0%), Treatment 2 contained 0.75% Zingiber officinale and 0.75% Allium sativum (ZO-AS: 1.5%), while T3 contained 1% Zingiber officinale and 1% Allium sativum (ZO-AS: 2.0%). The experiment lasted twelve (12) weeks (84 days). The survival rate ranged from 90% to 100%, with a higher Final Mean Weight (893.10) and Percentage Mean Weight (942.65) compared to the control group and the other treatments. There was no significant difference (p > 0.05) between the FMW (893.10) of the fish fed 1.5 g/kg of the garlic and ginger diet and that of the control (687.00). The SGR (1.20) of fish fed the Zingiber officinale and Allium sativum fortified diets shows that there is no significant difference between the treatment fed 1.5 g/kg Zingiber officinale and Allium sativum and the control group. Generally, there was an increased survival rate in the experimental fish fed the Zingiber officinale and Allium sativum supplemented diets compared to the control.
Keywords: clarias gariepinus, zingiber officinale, allium sativum, fingerlings
Procedia PDF Downloads 68
26067 Fight against Money Laundering with Optical Character Recognition
Authors: Saikiran Subbagari, Avinash Malladhi
Abstract:
Anti-Money Laundering (AML) regulations are designed to prevent money laundering and terrorist financing activities worldwide. Financial institutions around the world are legally obligated to identify, assess and mitigate the risks associated with money laundering and to report any suspicious transactions to governing authorities. With increasing volumes of data to analyze, financial institutions seek to automate their AML processes. With the rise of financial crimes, optical character recognition (OCR), in combination with machine learning (ML) algorithms, serves as a crucial tool for automating AML processes by extracting data from documents and identifying suspicious transactions. In this paper, we examine the utilization of OCR for AML and delve into various OCR techniques employed in AML processes. These techniques encompass template-based, feature-based and neural network-based approaches, natural language processing (NLP), hidden Markov models (HMMs), conditional random fields (CRFs), binarization, pattern matching and the stroke width transform (SWT). We evaluate each technique, discussing its strengths and constraints. We also emphasize how OCR can improve the accuracy of customer identity verification by comparing the extracted text with the Office of Foreign Assets Control (OFAC) watchlist, and discuss how OCR helps to overcome language barriers in AML compliance. Finally, we address the implementation challenges that OCR-based AML systems may face and offer recommendations for financial institutions based on data from previous research studies, which illustrate the effectiveness of OCR-based AML.
Keywords: anti-money laundering, compliance, financial crimes, fraud detection, machine learning, optical character recognition
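A hedged sketch of the screening step described above: extract text from a scanned document with OCR and fuzzy-match the extracted lines against a watchlist. pytesseract and the standard-library difflib are used here as assumed stand-ins for the production OCR and matching components, and the watchlist entries are fictitious.

```python
import difflib
from PIL import Image
import pytesseract

WATCHLIST = ["JOHN DOE TRADING LLC", "ACME SHELL CORP"]   # fictitious entries

def extract_text(image_path: str) -> str:
    """OCR a scanned KYC/transaction document into plain text."""
    return pytesseract.image_to_string(Image.open(image_path))

def screen(text: str, threshold: float = 0.85):
    """Flag lines whose similarity to any watchlist entry exceeds the threshold."""
    hits = []
    for line in text.upper().splitlines():
        for entry in WATCHLIST:
            score = difflib.SequenceMatcher(None, line.strip(), entry).ratio()
            if score >= threshold:
                hits.append((line.strip(), entry, round(score, 2)))
    return hits

# hits = screen(extract_text("invoice_scan.png"))
# any hit would be escalated as a suspicious-transaction alert
```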
Procedia PDF Downloads 144
26066 Efficient Utilization of Unmanned Aerial Vehicle (UAV) for Fishing through Surveillance for Fishermen
Authors: T. Ahilan, V. Aswin Adityan, S. Kailash
Abstract:
UAVs are small remotely operated or automated aerial surveillance systems without a human pilot aboard. UAVs generally find use in military and special-operation applications; a recent growing trend sees UAVs applied in several civil and non-military roles such as the inspection of power lines or pipelines. The objective of this paper is the augmentation of a UAV in order to replace the existing expensive sonar (sound navigation and ranging) based equipment amongst small-scale fishermen, for whom access to sonar equipment is restricted due to limited economic resources. The surveillance equipment on the UAV relays data and GPS location to a receiver on the fishing boat using RF signals, from which the location of schools of fish can be found. In addition, an emergency beacon system is present for rescue operations and drone recovery.
Keywords: UAV, surveillance, RF signals, fishing, sonar, GPS, video stream, school of fish
Procedia PDF Downloads 457
26065 Imputation Technique for Feature Selection in Microarray Data Set
Authors: Younies Saeed Hassan Mahmoud, Mai Mabrouk, Elsayed Sallam
Abstract:
Analysing DNA microarray data sets is a great challenge for bioinformaticians due to the complexity of applying statistical and machine learning techniques. The challenge is doubled if the microarray data sets contain missing data, which happens regularly, because these techniques cannot deal with missing data. One of the most important data analysis processes on a microarray data set is feature selection. This process finds the most important genes that affect a certain disease. In this paper, we introduce a technique for imputing the missing data in microarray data sets while performing feature selection.
Keywords: DNA microarray, feature selection, missing data, bioinformatics
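As a small sketch of the combined idea, imputing missing expression values before selecting the most informative genes, the snippet below uses scikit-learn's KNNImputer and an ANOVA-based filter on synthetic data; it illustrates the workflow rather than the authors' exact technique.

```python
import numpy as np
from sklearn.impute import KNNImputer
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 500))                   # 60 samples x 500 genes (synthetic)
y = rng.integers(0, 2, size=60)                  # disease / control labels
X[rng.random(X.shape) < 0.05] = np.nan           # 5% missing expression values

X_filled = KNNImputer(n_neighbors=5).fit_transform(X)    # impute from similar samples
selector = SelectKBest(score_func=f_classif, k=20).fit(X_filled, y)
top_genes = selector.get_support(indices=True)           # indices of informative genes
print(top_genes)
```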
Procedia PDF Downloads 574
26064 PDDA: Priority-Based, Dynamic Data Aggregation Approach for Sensor-Based Big Data Framework
Authors: Lutful Karim, Mohammed S. Al-kahtani
Abstract:
Sensors are being used in various applications such as agriculture, health monitoring, air and water pollution monitoring, and traffic monitoring and control, and hence play a vital role in the growth of big data. However, sensors collect redundant data. Thus, aggregating and filtering sensor data is significantly important for designing an efficient big data framework. Current research does not focus on aggregating and filtering data at multiple layers of a sensor-based big data framework. Thus, this paper introduces (i) a three-layer data aggregation framework for big data and (ii) a priority-based, dynamic data aggregation scheme (PDDA) for the lowest layer at the sensors. Simulation results show that the PDDA outperforms existing tree- and cluster-based data aggregation schemes in terms of overall network energy consumption and end-to-end data transmission delay.
Keywords: big data, clustering, tree topology, data aggregation, sensor networks
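A minimal sketch of the lowest-layer idea, forwarding high-priority readings immediately while aggregating low-priority ones, written in plain Python with assumed priority levels and window size; the actual PDDA scheme is defined in the paper, not here.

```python
from statistics import mean

WINDOW = 5          # assumed aggregation window for low-priority readings
buffer = []         # low-priority readings awaiting aggregation

def transmit(packet):
    print("sending:", packet)          # stand-in for the radio transmission

def handle_reading(sensor_id: str, value: float, priority: str):
    """High-priority data is sent at once; low-priority data is averaged per window."""
    if priority == "high":
        transmit({"sensor": sensor_id, "value": value, "priority": "high"})
        return
    buffer.append(value)
    if len(buffer) >= WINDOW:                       # aggregate to cut redundant sends
        transmit({"sensor": sensor_id, "value": mean(buffer), "priority": "low"})
        buffer.clear()

for v in [21.1, 21.2, 21.1, 21.3, 21.2]:
    handle_reading("temp-7", v, "high" if v > 30 else "low")
```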
Procedia PDF Downloads 346
26063 Knowledge Diffusion via Automated Organizational Cartography: Autocart
Authors: Mounir Kehal, Adel Al Araifi
Abstract:
The post-globalisation epoch has placed businesses everywhere in new and different competitive situations where knowledgeable, effective and efficient behaviour has come to provide the competitive and comparative edge. Enterprises have turned to explicit - and even conceptualising on tacit - Knowledge Management to elaborate a systematic approach to developing and sustaining the Intellectual Capital needed to succeed. To be able to do that, one has to be able to visualize the organization as consisting of nothing but knowledge and knowledge flows, presented in a graphical and visual framework referred to as automated organizational cartography. This creates the ability to further and actively classify existing organizational content evolving from and within data feeds, in an algorithmic manner, potentially giving insightful schemes and dynamics by which organizational know-how is visualised. We discuss and elaborate on the most recent and applicable definitions and classifications of knowledge management, representing a wide range of views from the mechanistic (systematic, data driven) to the more socially (psychologically, cognitive/metadata driven) orientated. More elaborate continuum models, for knowledge acquisition and reasoning purposes, are used to effectively represent the domain of information that an end user may draw on in their decision-making process when utilising available organizational intellectual resources (i.e. Autocart). In this paper, we likewise present an empirical research study conducted previously to explore knowledge diffusion in a specialist knowledge domain.
Keywords: knowledge management, knowledge maps, knowledge diffusion, organizational cartography
Procedia PDF Downloads 417
26062 The Relationship between Knowledge Management Processes and Strategic Thinking at the Organization Level
Authors: Bahman Ghaderi, Hedayat Hosseini, Parviz Kafche
Abstract:
The role of knowledge management processes in achieving the strategic goals of organizations is crucial. To this end, the relationship between knowledge management processes and the different aspects of strategic thinking (followed by long-term organizational planning) should be understood. This research examines the relationship between each of the five knowledge management processes (creation, storage, transfer, audit, and deployment) and each dimension of strategic thinking (vision, creativity, thinking, communication and analysis) in one of the major sectors of the food industry in Iran. In this research, knowledge management and its dimensions (knowledge acquisition, knowledge storage, knowledge transfer, knowledge auditing, and finally knowledge utilization) are considered as the independent variables, and strategic thinking and its dimensions (creativity, systematic thinking, vision, strategic analysis, and strategic communication) are considered as the dependent variable. The statistical population of this study consisted of 245 managers and employees of the Minoo Food Industrial Group in Tehran. A simple random sampling method was used, and data were collected by a questionnaire designed by the research team. Data were analyzed using SPSS 21 software. LISREL software was also used for calculating and drawing models and graphs. Among the factors investigated in the present study, knowledge storage, with 0.78, had the greatest effect, and knowledge transfer, with 0.62, had the least effect on knowledge management and thus on strategic thinking.
Keywords: knowledge management, strategic thinking, knowledge management processes, food industry
Procedia PDF Downloads 170
26061 Analysis of Strategies to Reduce Patients’ Disposition Holding Time from Emergency Department to Ward
Authors: Kamonwat Suksumek, Seeronk Prichanont
Abstract:
Access block refers to the situation where Emergency Department (ED) patients requiring hospital admission spend an unreasonable holding time in the ED because their access to a ward is blocked by the full utilization of the ward's beds. Not only does it delay the proper treatment required by the patients, but access block is also a cause of ED overcrowding. Clearly, access block is an inter-departmental problem that needs to be brought to management's attention. This paper focuses on the analysis of strategies to address the access block problem at both the operational and intermediate levels. These strategies were analyzed through a simulation model with a real data set from a university hospital in Thailand. The paper suggests suitable variable levels for each strategy so that management can make the final decisions.
Keywords: access block, emergency department, health system analysis, simulation
Procedia PDF Downloads 409
26060 Iron Influx, Its Root-Shoot Relations and Utilization Efficiency in Wheat
Authors: Abdul Malik Dawlatzai, Shafiqullah Rahmani
Abstract:
Plant cultivars of the same species differ in their Fe efficiency. This paper studied Fe influx and the root-shoot relations of Fe at different growth stages in wheat. Four wheat cultivars (HD 2967, PDW 233, PBW 550 and PDW 291) were grown in pots at the Badam Bagh agricultural research farm, Kabul, under two Fe treatments: (i) 0 mg Fe kg⁻¹ soil (soil with 2.7 mg kg⁻¹ of DTPA-extractable Fe) and (ii) 50 mg Fe kg⁻¹ soil. Root length (RL), shoot dry matter (SDM), Fe uptake, and soil parameters were measured at tillering and anthesis. Application of Fe significantly increased RL, root surface area, SDM, and Fe uptake in all wheat cultivars. Under Fe deficiency, wheat cv. HD 2967 produced 90% of its maximum RL and 75% of its maximum SDM, whereas PDW 233 produced only 69% and 60%, respectively. Wheat cultivars HD 2967 and PDW 233 exhibited the highest and lowest values of root surface area and Fe uptake, respectively. The concentration difference in soil solution Fe between the bulk soil and the root surface (ΔCL) was greatest in wheat cultivar HD 2967, followed by PBW 550, PDW 291, and PDW 233. More depletion at the root surface causes steeper concentration gradients, which result in a higher influx and transport of Fe towards the root. Fe influx in all the wheat cultivars increased with Fe application, but the increase was greatest (4 times) in HD 2967 and smallest (2.8 times) in PDW 233. It can be concluded that wheat cultivars HD 2967 and PBW 550 utilized Fe more efficiently than the other cultivars. Additionally, the iron efficiency of wheat cultivars depends upon the uptake of each root segment, i.e., the influx, which in turn depends on the depletion of Fe in the rhizosphere during the vegetative phase, and on the higher utilization efficiency of acquired Fe during the reproductive phase, which governs the ultimate grain yield.
Keywords: Fe efficiency, Fe influx, Fe uptake, rhizosphere
Procedia PDF Downloads 132
26059 Wireless Information Transfer Management and Case Study of a Fire Alarm System in a Residential Building
Authors: Mohsen Azarmjoo, Mehdi Mehdizadeh Koupaei, Maryam Mehdizadeh Koupaei, Asghar Mahdlouei Azar
Abstract:
The increasing prevalence of wireless networks in our daily lives has made them indispensable. The aim of this research is to investigate the management of information transfer in wireless networks and the integration of renewable solar energy resources in a residential building. The focus is on the transmission of electricity and information through wireless networks, as well as the utilization of sensors and wireless fire alarm systems. The research employs a descriptive approach to examine the transmission of electricity and information on a wireless network with electric and optical telephone lines. It also investigates the transmission of signals from sensors and wireless fire alarm systems via radio waves. The methodology includes a detailed analysis of security, comfort conditions, and costs related to the utilization of wireless networks and renewable solar energy resources. The study reveals that it is feasible to transmit electricity on a network cable using two pairs of network cables without the need for separate power cabling. Additionally, the integration of renewable solar energy systems in residential buildings can reduce dependence on traditional energy carriers. The use of sensors and wireless remote information processing can enhance the safety and efficiency of energy usage in buildings and the surrounding spaces.
Keywords: renewable energy, intelligentization, wireless sensors, fire alarm system
Procedia PDF Downloads 54
26058 Bakla Po Ako (I Am Gay): A Case Study on the Communication Styles of Selected Filipino Gays in Disclosing Their Sexual Orientation to Their Parents
Authors: Bryan Christian Baybay, M. Francesca Ronario
Abstract:
This study is intended to answer the question “What are the communication styles of selected Filipino gays in breaking their silence on their sexual orientation to their parents?” In this regard, six cases of Filipino gay disclosures were examined through in-depth interviews. The participants were selected through purposive sampling and the snowball technique. The theories of Rhetorical Sensitivity by Roderick Hart and Communicator Style by Robert Norton were used to analyze the gathered data and to support the communication attitudes, message processing, message rendering and communication styles exhibited in each disclosure. As secondary data and validation, parents and experts in the fields of communication, sociology, and psychology were also interviewed and consulted. The study found that Filipino gays vary in the communication styles they use during disclosure to their parents. All communication styles: impression-leaving, contentious, open, dramatic, dominant, precise, relaxed, friendly, animated, and communicator image were observed, depending on the gays' motivation, relationship and thoughts contemplated. These results offer ideas for future researchers to look into the communication patterns and/or styles of lesbians, bisexuals, transgender people and queers, or to expand research on the same subject and the utilization of Social Judgment and Relational Dialectics theories in determining and analyzing LGBTQ communication.
Keywords: communication attitudes, communication styles, Filipino gays, self-disclosure, sexual orientation
Procedia PDF Downloads 523
26057 Studies of Lactose Utilization in Microalgal Isolate for Further Use in Dairy By-Product Bioconversion
Authors: Sergejs Kolesovs, Armands Vigants
Abstract:
The use of dairy industry by-products and wastewater as a cheap substrate for microalgal growth is gaining recognition. However, the mechanisms of lactose utilization remain understudied, limiting the potential for successful microalgal biomass production using various dairy by-products, such as whey and permeate. The necessity for microalgae to produce a specific enzyme, β-galactosidase, requires the selection of suitable strains. This study focuses on a freshwater microalgal isolate's ability to grow on a semi-synthetic medium supplemented with lactose. After 10 days of agitated cultivation, the axenic microalgal isolate achieved significantly higher biomass production under mixotrophic growth conditions (0.86 ± 0.07 g/L, dry weight) than under heterotrophic growth (0.46 ± 0.04 g/L). Moreover, mixotrophic cultivation gave significantly higher biomass production than photoautotrophic growth (0.67 ± 0.05 g/L). The activity of β-galactosidase was detected in both the supernatant and the microalgal biomass under mixotrophic and heterotrophic growth conditions, showing the potential of both extracellular and intracellular mechanisms of enzyme production. However, the main limiting factor in this study was the increase in pH values during cultivation, which significantly reduced the activity of the β-galactosidase enzyme after the 3rd day of cultivation. This highlights the need for stricter control of growth parameters to ensure the enzyme's activity. Further research will assess the isolate's suitability for dairy by-product bioconversion and its biomass composition.
Keywords: microalgae, lactose, whey, permeate, beta-galactosidase, mixotrophy, heterotrophy
Procedia PDF Downloads 65
26056 Resource Orchestration Based on Two-Sides Scheduling in Computing Network Control Systems
Authors: Li Guo, Jianhong Wang, Dian Huang, Shengzhong Feng
Abstract:
Computing networks, as a new network architecture, have shown great promise in boosting the utilization of different resources, such as computing, caching, and communications. To maximise the efficiency of resource orchestration in computing network control systems (CNCSs), this work proposes a dynamic orchestration strategy for different resources based on the task requirements of computing power requestors (CPRs). Specifically, computing power providers (CPPs) in CNCSs can share information with each other, especially their current idle resources, through communication channels on the basis of blockchain technology. This dynamic process is modeled as a cooperative game in which the CPPs have the same target of maximising long-term rewards by improving the resource utilization ratio. Meanwhile, the task requirements from CPRs, including size, deadline, and calculation, are simultaneously considered in this paper. According to the task requirements, the proposed orchestration strategy schedules the best-fitting resources in CNCSs, achieving the maximum long-term rewards for CPPs and the best quality of experience (QoE) for CPRs at the same time. Based on the EdgeCloudSim simulation platform, the efficiency of the proposed strategy is demonstrated from the sides of both CPRs and CPPs. Besides, experimental results show that the proposed strategy outperforms the other comparisons in all cases.
Keywords: computing network control systems, resource orchestration, dynamic scheduling, blockchain, cooperative game
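To make the scheduling idea more tangible, the sketch below picks the best-fitting provider for a task based on idle capacity and deadline; the data structures and fitness rule are assumptions for illustration and do not reproduce the paper's cooperative-game formulation.

```python
from dataclasses import dataclass

@dataclass
class Provider:            # a CPP advertising its idle resources
    name: str
    idle_cpu: float        # available cores
    speed: float           # operations per second per core

@dataclass
class Task:                # a CPR request
    size: float            # operations required
    deadline: float        # seconds
    cpu_needed: float

def best_fit(task: Task, providers: list[Provider]):
    """Choose the provider that meets the deadline with the least leftover capacity."""
    feasible = [
        p for p in providers
        if p.idle_cpu >= task.cpu_needed
        and task.size / (task.cpu_needed * p.speed) <= task.deadline
    ]
    return min(feasible, key=lambda p: p.idle_cpu - task.cpu_needed, default=None)

providers = [Provider("edge-1", 4, 1e9), Provider("cloud-1", 64, 2e9)]
print(best_fit(Task(size=3e9, deadline=2.0, cpu_needed=2), providers))
```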
Procedia PDF Downloads 114
26055 Big Data Applications for Transportation Planning
Authors: Antonella Falanga, Armando Cartenì
Abstract:
"Big data" refers to extremely vast and complex sets of data, encompassing extraordinarily large and intricate datasets that require specific tools for meaningful analysis and processing. These datasets can stem from diverse origins like sensors, mobile devices, online transactions, social media platforms, and more. The utilization of big data is pivotal, offering the chance to leverage vast information for substantial advantages across diverse fields, thereby enhancing comprehension, decision-making, efficiency, and fostering innovation in various domains. Big data, distinguished by its remarkable attributes of enormous volume, high velocity, diverse variety, and significant value, represent a transformative force reshaping the industry worldwide. Their pervasive impact continues to unlock new possibilities, driving innovation and advancements in technology, decision-making processes, and societal progress in an increasingly data-centric world. The use of these technologies is becoming more widespread, facilitating and accelerating operations that were once much more complicated. In particular, big data impacts across multiple sectors such as business and commerce, healthcare and science, finance, education, geography, agriculture, media and entertainment and also mobility and logistics. Within the transportation sector, which is the focus of this study, big data applications encompass a wide variety, spanning across optimization in vehicle routing, real-time traffic management and monitoring, logistics efficiency, reduction of travel times and congestion, enhancement of the overall transportation systems, but also mitigation of pollutant emissions contributing to environmental sustainability. Meanwhile, in public administration and the development of smart cities, big data aids in improving public services, urban planning, and decision-making processes, leading to more efficient and sustainable urban environments. Access to vast data reservoirs enables deeper insights, revealing hidden patterns and facilitating more precise and timely decision-making. Additionally, advancements in cloud computing and artificial intelligence (AI) have further amplified the potential of big data, enabling more sophisticated and comprehensive analyses. Certainly, utilizing big data presents various advantages but also entails several challenges regarding data privacy and security, ensuring data quality, managing and storing large volumes of data effectively, integrating data from diverse sources, the need for specialized skills to interpret analysis results, ethical considerations in data use, and evaluating costs against benefits. Addressing these difficulties requires well-structured strategies and policies to balance the benefits of big data with privacy, security, and efficient data management concerns. Building upon these premises, the current research investigates the efficacy and influence of big data by conducting an overview of the primary and recent implementations of big data in transportation systems. Overall, this research allows us to conclude that big data better provide to enhance rational decision-making for mobility choices and is imperative for adeptly planning and allocating investments in transportation infrastructures and services.Keywords: big data, public transport, sustainable mobility, transport demand, transportation planning
Procedia PDF Downloads 60
26054 Analysis of Technical Efficiency and Its Determinants among Cattle Fattening Enterprises in Kebbi State, Nigeria
Authors: Gona Ayuba, Isiaka Mohammed, Kotom Mohammed Baba, Mohammed Aabubakar Maikasuwa
Abstract:
The study examined the technical efficiency and its determinants of cattle fattening enterprises in Kebbi State, Nigeria. Data were collected from a sample of 160 fatteners between June 2010 and June 2011 using a multistage random sampling technique. A translog stochastic frontier production function was employed for the analysis. Results of the analysis show that technical efficiency indices varied from 0.74 to 0.98, with a mean of 0.90, indicating that there was no wide gap between the efficiency of the most technically efficient fatteners and that of the average fattener. The results also showed that fattening experience and herd size influenced the level of technical efficiency at the 1% level. It is recommended that credit agencies ensure that credit made available to the fatteners is monitored to ensure appropriate utilization.
Keywords: technical efficiency, determinants, cattle, fattening enterprises
Procedia PDF Downloads 451
26053 Traffic Forecasting for Open Radio Access Networks Virtualized Network Functions in 5G Networks
Authors: Khalid Ali, Manar Jammal
Abstract:
In order to meet the stringent latency and reliability requirements of the upcoming 5G networks, Open Radio Access Networks (O-RAN) have been proposed. The virtualization of O-RAN has allowed it to be treated as a Network Function Virtualization (NFV) architecture, while its components are considered Virtualized Network Functions (VNFs). Hence, intelligent Machine Learning (ML) based solutions can be utilized to apply different resource management and allocation techniques to O-RAN. However, intelligently allocating resources for O-RAN VNFs can prove challenging due to the dynamicity of traffic in mobile networks. Network providers need to dynamically scale the allocated resources in response to the incoming traffic. Elastically allocating resources can provide a higher level of flexibility in the network in addition to reducing the OPerational EXpenditure (OPEX) and increasing resource utilization. Most of the existing elastic solutions are reactive in nature, despite the fact that proactive approaches are more agile since they scale instances ahead of time by predicting the incoming traffic. In this work, we propose and evaluate traffic forecasting models based on ML algorithms. The algorithms aim at predicting future O-RAN traffic by using previous traffic data. A detailed analysis of the traffic data was carried out to validate the quality and applicability of the traffic dataset. Two ML models were then proposed and evaluated based on their prediction capabilities.
Keywords: O-RAN, traffic forecasting, NFV, ARIMA, LSTM, elasticity
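As a hedged sketch of the forecasting step (one of the two model families named in the keywords), the snippet below fits an ARIMA model to a synthetic traffic series with statsmodels and predicts the next few intervals; the model order and the data are assumptions, and an LSTM counterpart would follow the same fit-and-forecast pattern.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# synthetic hourly traffic load for one O-RAN VNF: daily cycle plus noise
hours = pd.date_range("2023-01-01", periods=14 * 24, freq="h")
load = 50 + 20 * np.sin(2 * np.pi * hours.hour / 24) \
       + np.random.default_rng(7).normal(0, 3, len(hours))
series = pd.Series(load, index=hours)

model = ARIMA(series, order=(2, 1, 2)).fit()      # assumed (p, d, q) order
forecast = model.forecast(steps=6)                # next 6 hours of traffic
print(forecast.round(1))
# a proactive scaler would convert this forecast into VNF instance counts ahead of time
```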
Procedia PDF Downloads 226
26052 To Ensure Maximum Voter Privacy in E-Voting Using Blockchain, Convolutional Neural Network, and Quantum Key Distribution
Authors: Bhaumik Tyagi, Mandeep Kaur, Kanika Singla
Abstract:
The advancement of blockchain has enabled scholars to remodel e-voting systems for future generations. Server-side attacks like SQL injection and DoS attacks are the most common attacks nowadays, where malicious code is injected into the system through user input fields by illicit users, which leads to data leakage in the worst scenarios. Besides, there are quantum attacks, which can manipulate transactional data. In order to deal with all the above-mentioned attacks, this research integrates blockchain, a convolutional neural network (CNN), and Quantum Key Distribution (QKD). The utilization of blockchain technology in e-voting applications is not a novel concept, but privacy and security issues remain in both public and private blockchains. To solve this, a hybrid blockchain is used in this research. This research proposes cryptographic signatures and blockchain algorithms to validate the origin and integrity of the votes. The convolutional neural network (CNN), a regularized version of the multilayer perceptron, is also applied in the system to analyze visual descriptions upon registration, with the aim of enhancing the privacy of voters and the e-voting system. Quantum Key Distribution is implemented in order to secure the blockchain-based e-voting system from quantum attacks using quantum algorithms. An e-voting blockchain DApp is implemented, providing a proposed solution for the privacy of voters in e-voting using blockchain, CNN, and Quantum Key Distribution.
Keywords: hybrid blockchain, secure e-voting system, convolutional neural networks, quantum key distribution, one-time pad
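A minimal sketch of the 'cryptographic signatures plus chained blocks' part of the design, not the hybrid-blockchain, CNN, or QKD components, using Ed25519 signatures from the cryptography library and a simple hash chain; everything here is illustrative.

```python
import hashlib, json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

chain = [{"index": 0, "prev_hash": "0" * 64, "vote": None}]   # genesis block

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def cast_vote(voter_key: Ed25519PrivateKey, candidate: str):
    """Sign a ballot, verify it, and append it to the hash chain."""
    ballot = candidate.encode()
    signature = voter_key.sign(ballot)
    voter_key.public_key().verify(signature, ballot)   # raises if tampered with
    chain.append({
        "index": len(chain),
        "prev_hash": block_hash(chain[-1]),            # links block to its predecessor
        "vote": candidate,
        "signature": signature.hex(),
    })

cast_vote(Ed25519PrivateKey.generate(), "candidate-A")
print(chain[-1]["prev_hash"], chain[-1]["vote"])
```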
Procedia PDF Downloads 94