Search results for: data driven and knowledge driven
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 29202

28932 The Effect of Tacit Knowledge for Intelligence Cycle

Authors: Bahadir Aydin

Abstract:

It is difficult to access accurate knowledge because of the sheer mass of data; this flood of data makes the environment more and more chaotic. Data are the main pillar of intelligence, and the affiliation between intelligence and knowledge is quite significant for understanding underlying truths. Data gathered from different sources can be modified, interpreted, and classified through the intelligence cycle process. This process is applied in order to progress towards wisdom as well as intelligence, and within it the effect of tacit knowledge is crucial. Knowledge, classified as explicit and tacit, is the key element for any purpose. In the classic iceberg metaphor, explicit knowledge is merely the visible tip; tacit knowledge accounts for far more than we assume throughout the intelligence cycle. If the concept of the intelligence cycle is scrutinized, it can be seen that it involves risks and threats as well as success. The main purpose of any organization is to succeed by eliminating risks and threats. Therefore, there is a need to connect, or fuse, existing information with the processes used to develop it. Thanks to this process, decision-makers can be presented with a clear, holistic understanding as early as possible in the decision-making process. Shifting from the current traditional reactive approach to a proactive intelligence cycle approach would reduce extensive duplication of work in the organization. By applying a new result-oriented cycle and tacit knowledge, intelligence can be procured and utilized more effectively and in a more timely manner.

Keywords: information, intelligence cycle, knowledge, tacit knowledge

Procedia PDF Downloads 486
28931 Algorithms used in Spatial Data Mining GIS

Authors: Vahid Bairami Rad

Abstract:

Extracting knowledge from spatial data such as GIS data is important for reducing the data volume and distilling information. The development of new techniques and tools that support humans in transforming data into useful knowledge has therefore been the focus of the relatively new and interdisciplinary research area of knowledge discovery in databases. We introduce a set of database primitives, or basic operations, for spatial data mining that are sufficient to express most of the spatial data mining algorithms in the literature. This approach has several advantages: much as the relational standard language SQL does, the use of standard primitives will speed up the development of new data mining algorithms and also make them more portable. We present a database-oriented framework for spatial data mining based on the concepts of neighborhood graphs and paths. A small set of basic operations on these graphs and paths is defined as database primitives for spatial data mining, and techniques to support these primitives efficiently on a commercial DBMS are presented.
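The flavor of such graph-and-path primitives can be sketched in a few lines of Python. The object ids, coordinates, and the distance-based neighborhood relation below are illustrative stand-ins, not the exact operations defined in the paper.

```python
from itertools import combinations

def build_neighborhood_graph(objects, relation):
    """Primitive: connect spatial objects that satisfy a neighborhood
    relation (here, a distance-based predicate)."""
    graph = {oid: set() for oid in objects}
    for a, b in combinations(objects, 2):
        if relation(objects[a], objects[b]):
            graph[a].add(b)
            graph[b].add(a)
    return graph

def neighbors(graph, oid):
    """Primitive: retrieve the direct neighbors of an object."""
    return graph[oid]

def extend_paths(graph, paths):
    """Primitive: extend each neighborhood path by one edge,
    avoiding cycles."""
    return [p + [n] for p in paths for n in graph[p[-1]] if n not in p]

# Toy spatial objects: id -> (x, y) coordinates
pts = {1: (0, 0), 2: (1, 0), 3: (5, 5), 4: (1, 1)}
close = lambda p, q: (p[0]-q[0])**2 + (p[1]-q[1])**2 <= 2
g = build_neighborhood_graph(pts, close)
```

A mining algorithm would then be expressed by composing these primitives, e.g. repeatedly calling extend_paths to enumerate neighborhood paths starting at a given object.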

Keywords: spatial database, knowledge discovery in databases, data mining, spatial relationship, predictive data mining

Procedia PDF Downloads 425
28930 Innovation and Performance of Very Small Agri-Food Enterprises in Cameroon

Authors: Ahmed Moustapha Mfokeu

Abstract:

Agri-food VSEs in Cameroon are facing a succession of crises: insecurity, particularly in the Far North, South West, and North West regions, the consequences of the Covid-19 crisis, and the war in Ukraine. These multiple crises have driven up the prices of raw materials. Moreover, competitive pressures are exacerbated by the technological acceleration of productive systems in emerging countries, which raises the demands imposed by the markets. Cameroonian VSEs must therefore be able to meet the new challenges of international competition, especially through innovation. The objective of this research is to contribute to the knowledge of the effects of innovation on the performance of very small agribusinesses in Cameroon. Methodologically, the data come from a sample of 153 companies in the cities of Douala and Yaoundé, and the research uses structural equation models with latent variables. The main results show a positive and significant link between innovation and the performance of very small agri-food companies; it is therefore important not only to encourage entrepreneurs to practice innovation but also to help them understand and embrace it in their strategic function.

Keywords: innovation, performance, very small enterprise, agrifood

Procedia PDF Downloads 73
28929 Acoustic Blood Plasmapheresis in Polymeric Resonators

Authors: Itziar Gonzalez, Pilar Carreras, Alberto Pinto, Roque Ruben Andres

Abstract:

Acoustophoretic separation of plasma from blood is based on a collection process of the blood cells driven by an acoustic radiation force. The number of cells, their concentration, and the sample hydrodynamics are all involved in this process, yet their influence on the acoustic response of blood has not yet been reported in the literature. Addressing this, the paper presents an experimental study of blood samples exposed to ultrasonic standing waves at different hematocrit levels and hydrodynamic conditions. The experiments were performed in a glass capillary (700 µm square cross section) actuated by a piezoelectric ceramic at 1 MHz, hosting 2D orthogonal half-wavelength resonances transverse to the channel length, with a single pressure node along the central axis where cells collect, driven by the acoustic radiation force. Four blood dilutions in PBS (1:20, 1:10, 1:5, and 1:2) were tested at eight flow rates, Q = 0–120 µL/min. The 1:5 dilution (H = 9%) proved optimal for plasmapheresis at all flow rates analyzed, requiring the shortest time to obtain plasma free of cells. The study opens new possibilities for optimizing ultrasonic plasmapheresis at different hematocrit conditions in future personalized diagnoses and treatments involving blood samples.

Keywords: ultrasounds, microfluidics, flow rate, acoustophoresis, polymeric resonators

Procedia PDF Downloads 112
28928 Machine Learning Framework: Competitive Intelligence and Key Drivers Identification of Market Share Trends among Healthcare Facilities

Authors: Anudeep Appe, Bhanu Poluparthi, Lakshmi Kasivajjula, Udai Mv, Sobha Bagadi, Punya Modi, Aditya Singh, Hemanth Gunupudi, Spenser Troiano, Jeff Paul, Justin Stovall, Justin Yamamoto

Abstract:

The necessity of data-driven decisions in healthcare strategy formulation is rapidly increasing. A reliable framework that helps identify factors impacting the market share of a healthcare provider facility or hospital (from here on termed a facility) is therefore of key importance. This pilot study aims at developing a data-driven machine learning regression framework that aids strategists in formulating key decisions to improve a facility's market share, which in turn helps improve the quality of healthcare services. The US (United States) healthcare business is chosen for the study; the data span 60 key facilities in Washington State and about 3 years of history. In the current analysis, market share is defined as the ratio of the facility's encounters to the total encounters among the group of potential competitor facilities. The study proposes a two-pronged approach: competitor identification, followed by regression to evaluate and predict market share. A model-agnostic technique, SHAP, is leveraged to quantify the relative importance of features impacting market share. Typical techniques in the literature quantify the degree of competitiveness among facilities with an empirical method that calculates a competitive factor to interpret the severity of competition. The proposed method instead identifies a pool of competitors, develops Directed Acyclic Graphs (DAGs) and feature-level word vectors, and evaluates the key connected components at the facility level. This technique is robust because it is data-driven, which minimizes the bias of empirical techniques. The DAGs factor in partial correlations at various segregations and key demographics of facilities, along with a placeholder for various business rules (for example, quantifying patient exchanges, provider referrals, and sister facilities). Multiple groups of competitors among facilities are thereby identified.
Leveraging the identified competitors, a Random Forest regression model is developed and fine-tuned to predict market share. To identify key drivers of market share at an overall level, permutation feature importance of the attributes is calculated. For relative quantification of features at the facility level, SHAP (SHapley Additive exPlanations), a model-agnostic explainer, is incorporated; this helps identify and rank the attributes impacting market share at each facility. The approach amalgamates two popular and efficient modeling practices, viz., machine learning with graphs and tree-based regression, to reduce bias, and with these we help drive strategic business decisions.
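The market share definition and the permutation-importance step can be illustrated with a minimal, dependency-free sketch. The toy facility features and the linear stand-in model below are hypothetical, not the study's fitted Random Forest.

```python
import random

def market_share(encounters, facility):
    """Market share = the facility's encounters divided by total
    encounters among its pool of competitor facilities."""
    return encounters[facility] / sum(encounters.values())

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Rank features by the increase in squared error when one
    feature column is shuffled, breaking its link to the target."""
    rng = random.Random(seed)
    base = sum((predict(row) - t) ** 2 for row, t in zip(X, y))
    importances = []
    for j in range(len(X[0])):
        losses = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)
            Xp = [row[:j] + [v] + row[j+1:] for row, v in zip(X, col)]
            losses.append(sum((predict(r) - t) ** 2 for r, t in zip(Xp, y)))
        importances.append(sum(losses) / n_repeats - base)
    return importances

# Hypothetical toy model: the target depends on feature 0 only,
# so shuffling feature 1 should not degrade the fit.
X = [[float(i), float(i % 2)] for i in range(20)]
y = [2.0 * row[0] for row in X]
predict = lambda row: 2.0 * row[0]
imp = permutation_importance(predict, X, y)
```

In the study's setting, `predict` would be the fitted Random Forest and SHAP would refine this overall ranking down to per-facility attributions.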

Keywords: competition, DAGs, facility, healthcare, machine learning, market share, random forest, SHAP

Procedia PDF Downloads 55
28927 Market Solvency Capital Requirement Minimization: How Non-linear Solvers Provide Portfolios Complying with Solvency II Regulation

Authors: Abraham Castellanos, Christophe Durville, Sophie Echenim

Abstract:

In this article, a portfolio optimization problem is solved in a Solvency II context, illustrating how advanced optimization techniques can help tackle complex operational pain points around the monitoring, control, and stability of the Solvency Capital Requirement (SCR). The market SCR of a portfolio is calculated as a combination of SCR sub-modules. These sub-modules are the results of stress tests on interest rate, equity, property, credit, and FX factors, as well as concentration on counterparties. The market SCR is non-convex and non-differentiable, which does not make it a natural candidate as an optimization criterion. In the SCR formulation, correlations between sub-modules are fixed, whereas risk-driven portfolio allocation is usually driven by the dynamics of the actual correlations. Implementing a portfolio construction approach that is efficient from both a regulatory and an economic standpoint is not straightforward. Moreover, the challenge for insurance portfolio managers is not only to achieve a minimal SCR to reduce non-invested capital but also to ensure the stability of the SCR. Some optimizations have already been performed in the literature by simplifying the standard formula into a quadratic function, but to our knowledge, this is the first time that the standard formula of the market SCR has been used in an optimization problem. Two solvers are combined: a bundle algorithm for convex non-differentiable problems, and a BFGS (Broyden-Fletcher-Goldfarb-Shanno)-SQP (Sequential Quadratic Programming) algorithm to cope with non-convex cases. A market SCR minimization is then performed on historical data. This approach results in a significant reduction of the capital requirement, compared to a classical Markowitz approach based on historical volatility.
A comparative analysis of different optimization models (equi-risk-contribution portfolio, minimum-volatility portfolio, and minimum-value-at-risk portfolio) is performed, and the impact of these strategies on risk measures, including the market SCR and its sub-modules, is evaluated. A lack of diversification of the market SCR is observed, especially for equities. This was expected, since the market SCR strongly penalizes this type of financial instrument. It is shown that this direct effect of the regulation can be attenuated by implementing constraints in the optimization process, or by minimizing the market SCR together with the historical volatility, proving the value of a portfolio construction approach that can incorporate such features. The results are further explained by the market SCR modelling.
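The standard-formula aggregation at the heart of the problem, combining sub-module SCRs through a fixed correlation matrix, can be sketched as follows. The sub-module values and the correlation coefficient below are illustrative only, not the regulatory calibration.

```python
from math import sqrt, isclose

def market_scr(scr_sub, corr):
    """Solvency II standard-formula aggregation: the market SCR
    combines sub-module SCRs through a fixed correlation matrix,
    SCR_mkt = sqrt( sum_ij Corr_ij * SCR_i * SCR_j )."""
    n = len(scr_sub)
    total = sum(corr[i][j] * scr_sub[i] * scr_sub[j]
                for i in range(n) for j in range(n))
    return sqrt(total)

# Illustrative numbers: two sub-modules (say, interest rate and
# equity) with a 0.5 correlation.
sub = [100.0, 200.0]
corr = [[1.0, 0.5],
        [0.5, 1.0]]
scr = market_scr(sub, corr)
```

The square root is what makes the criterion non-smooth at the origin of each sub-module, and since each sub-module SCR is itself a max over stress scenarios, the overall function is non-convex and non-differentiable, hence the bundle and BFGS-SQP solvers above.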

Keywords: financial risk, numerical optimization, portfolio management, solvency capital requirement

Procedia PDF Downloads 93
28926 Investigating the Interaction of Individuals' Knowledge Sharing Constructs

Authors: Eugene Okyere-Kwakye

Abstract:

Knowledge sharing is a practice whereby individuals exchange both tacit and explicit knowledge to jointly create new knowledge. The knowledge management literature clearly states that knowledge sharing is the keystone, and perhaps the most important aspect, of knowledge management. To enhance understanding of the knowledge sharing domain, this study investigates some factors that could influence employees' attitudes and behaviour towards sharing their knowledge. The researchers employed social exchange theory as the theoretical foundation for the study. Three essential factors that could influence knowledge sharing behaviour, namely trust, mutual reciprocity, and perceived enjoyment, have been incorporated into a research model. To empirically validate this model, data were collected from one hundred and twenty respondents and analysed using multiple regression analysis. The results indicate that perceived enjoyment and trust have a significant influence on knowledge sharing; surprisingly, mutual reciprocity does not. The paper concludes by highlighting the practical implications of the findings and areas for future research.
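A compact way to see how such a model is estimated is ordinary least squares regression of the sharing score on the predictors. The sketch below uses two predictors and fabricated, noise-free scores purely to show the mechanics; it is not the study's data or full three-factor model.

```python
def fit_ols(X, y):
    """Ordinary least squares for multiple regression: solve the
    normal equations (X'X) b = X'y by Gaussian elimination."""
    rows = [[1.0] + list(r) for r in X]          # prepend intercept
    k = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)]
         for i in range(k)]
    b = [sum(r[i] * t for r, t in zip(rows, y)) for i in range(k)]
    M = [A[i] + [b[i]] for i in range(k)]
    for i in range(k):                            # forward elimination
        piv = max(range(i, k), key=lambda r: abs(M[r][i]))
        M[i], M[piv] = M[piv], M[i]
        for r in range(i + 1, k):
            f = M[r][i] / M[i][i]
            M[r] = [a - f * c for a, c in zip(M[r], M[i])]
    beta = [0.0] * k
    for i in reversed(range(k)):                  # back substitution
        beta[i] = (M[i][k] - sum(M[i][j] * beta[j]
                                 for j in range(i + 1, k))) / M[i][i]
    return beta

# Hypothetical scores: sharing = 1 + 2*trust + 0.5*enjoyment
X = [(t, e) for t in range(1, 5) for e in range(1, 4)]
y = [1 + 2 * t + 0.5 * e for t, e in X]
beta = fit_ols(X, y)    # -> [intercept, trust coeff, enjoyment coeff]
```

In practice a statistics package would also report the significance tests (p-values) that led the authors to retain trust and perceived enjoyment but reject mutual reciprocity.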

Keywords: perceived enjoyment, trust, knowledge sharing, knowledge management

Procedia PDF Downloads 408
28925 Women-Hating Masculinities: How the Demand for Prostitution Fuels Sex Trafficking

Authors: Rosa M. Senent

Abstract:

Over the centuries, prostitution has been problematized from many sides, with women always at the center of the debate. However, prostitution is a gendered, demand-driven phenomenon; a focus must therefore be put on the men who demand it, as an increasing number of studies have done in the last few decades. The purpose of this paper is to expose how men's discourse online reveals the link between their demand for paid sex in prostitution and sex trafficking. The methodological tool employed was Critical Discourse Analysis (CDA). A critical analysis of sex buyers' discourse online showed that online communities of sex buyers are a useful tool for researching their behavior towards women, that their knowledge of sex trafficking and exploitation does not deter them from buying sex, and that the type of masculinity sex buyers endorse is characterized by attitudes linked to the perpetuation of violence against women.

Keywords: masculinities, prostitution, sex trafficking, violence

Procedia PDF Downloads 112
28924 Actionable Personalised Learning Strategies to Improve a Growth-Mindset in an Educational Setting Using Artificial Intelligence

Authors: Garry Gorman, Nigel McKelvey, James Connolly

Abstract:

This study will evaluate a growth mindset intervention with Junior Cycle Coding and Senior Cycle Computer Science students in Ireland, where gamification will be used to incentivise growth mindset behaviour. An artificial intelligence (AI) driven personalised learning system will be developed to present computer programming learning tasks in the manner best suited to the individual's own learning preferences, while incentivising and rewarding the growth mindset behaviours of persistence, mastery response to challenge, and challenge seeking. The research endeavours to measure mindset with before-and-after surveys (conducted nationally) and by recording growth mindset behaviour whilst playing a digital game. The study will harness the capabilities of AI and aims to determine how a personalised learning (PL) experience can impact the mindset of a broad range of students, with a particular focus on how personalising the learning experience influences female and disadvantaged students' sense of belonging in the computer science classroom. Whole Brain Learning will underpin this research and will be used as a framework to guide it in identifying key areas such as thinking and learning styles, cognitive potential, motivators and fears, and emotional intelligence. The research will be conducted in multiple school types over one academic year. Digital games will be played multiple times over this period, and the data gathered will be used to inform the AI algorithm. The three data sets are as follows: (i) before-and-after survey data to determine the grit scores and mindsets of the participants; (ii) the growth mindset data from the game, which will measure multiple growth mindset behaviours, such as persistence, response to challenge, and use of strategy; (iii) the AI data to guide PL.
This study will highlight the effectiveness of an AI-driven personalised learning experience. The data will position AI within the Irish educational landscape, with a specific focus on the teaching of CS. These findings will benefit coding and computer science teachers by providing a clear pedagogy for the effective delivery of personalised learning strategies for computer science education. This pedagogy will help prevent students from developing a fixed mindset while helping pupils to exhibit persistence of effort, use of strategy, and a mastery response to challenges.

Keywords: computer science education, artificial intelligence, growth mindset, pedagogy

Procedia PDF Downloads 60
28923 The Social Enterprise Model And Its Beneficiaries

Authors: Lorryn Williams

Abstract:

This study will explore how the introduction of the for-profit social enterprise model affects the real lives of the individuals and communities that the model aims to help in South Africa. Key to this question is the congruence between organisational need construction and the real needs of beneficiaries, and whether the adoption of a profit-driven model such as social entrepreneurship supports or discards those needs. Using qualitative methods, the study aims to collect empirical evidence that either supports the social entrepreneurship approach when compared to other programs, such as vocational training, or rejects it as less beneficial. The objective of this research is to answer the question of whether the social enterprise model of conducting charity leaves the beneficiaries of non-profit organisations generally better or worse off. The study will specifically explore the underlying assumptions the social entrepreneurship model makes, since the assumptions made about its uplifting effects may produce either real or merely assumed change for beneficiaries. The meaning of social cohesion and social capital for these organisations, the construction of beneficiary dependence and independence, the formal and informal economies beneficiaries engage in, and the extent to which sustainability is used as a brand will all be investigated. By engaging the relevant literature, experts in the field of non-profit donorship and need implementation, organisations that have and have not adopted social enterprise programs, and, most importantly, the beneficiaries themselves, it will be possible to answer the questions this study poses.

Keywords: social enterprise, beneficiaries, profit driven model, non-profit organizations

Procedia PDF Downloads 102
28922 Enterprise Risk Management, Human Capital and Organizational Performance: Insights from Public Listed Companies

Authors: Omar Moafaq Saleh Aljanabi, Noradiva Hamzah, Ruhanita Maelah

Abstract:

In today’s challenging global economy, which is driven by information and knowledge, risk management is undergoing a great change as organizations shift from traditional, compartmentalized risk management to an enterprise-wide approach. Enterprise risk management (ERM), which aims at increasing the sustainability of an organization and achieving competitive advantage, is gaining global attention and fast becoming an essential concern in all industries. Furthermore, to be effective, ERM should be managed by managers with high-level skills and knowledge. Despite the importance of the knowledge embedded in human capital, there remains a paucity of evidence concerning how human capital influences an organization's ERM. Responses from 116 public listed companies (PLCs) on the main market of Bursa Malaysia were analyzed using Structural Equation Modelling (SEM). The study found a significant association between ERM and organizational performance. The results also indicate that human capital has a positive moderating effect on the relationship between ERM and performance. The study contributes to the ERM literature by providing empirical evidence on the relationship between ERM, human capital, and organizational performance, and the findings provide guidelines for managers, policy makers, and regulatory bodies in evaluating ERM practices in PLCs.

Keywords: enterprise risk management, human capital, organizational performance, Malaysian public listed companies

Procedia PDF Downloads 157
28921 A Study for Area-level Mosquito Abundance Prediction by Using Supervised Machine Learning Point-level Predictor

Authors: Theoktisti Makridou, Konstantinos Tsaprailis, George Arvanitakis, Charalampos Kontoes

Abstract:

In the literature, data-driven approaches to mosquito abundance prediction rely on supervised machine learning models trained with historical in situ measurements. The drawback of this approach is that once the model is trained on point-level (specific x, y coordinate) measurements, its predictions again refer to the point level. These point-level predictions reduce the applicability of such solutions, since many early warning and mitigation applications need predictions at an area level, such as a municipality or village. In this study, we apply a data-driven predictive model that relies on public, open satellite Earth Observation and geospatial data and is trained with historical point-level in situ measurements of mosquito abundance. We then propose a methodology to extend a point-level predictive model to a broader area-level prediction. Our methodology relies on randomly sampling locations in the area of interest (similar to a Poisson hard-core process), obtaining the EO and geomorphological information for each sample, making a point-wise prediction for each sample, and aggregating the predictions to represent the average mosquito abundance of the area. We quantify the performance of the transformation from point-level to area-level predictions and analyze it to understand which parameters have a positive or negative impact. The goal of this study is to propose a methodology that predicts the mosquito abundance of a given area from point-level predictions, and to provide qualitative insights into the expected performance of the area-level prediction. We applied the methodology to historical data (for Culex pipiens) from two areas of interest (the Veneto region of Italy and Central Macedonia in Greece), and in both cases the results were consistent.
The mean mosquito abundance of a given area can be estimated with accuracy similar to that of the point-level predictor, sometimes even better. The density of the samples used to represent an area has a positive effect on performance, in contrast to the raw number of sampling points, which says nothing about performance without the size of the area. Additionally, the distance between the sampling points and the real in situ measurements used for training did not strongly affect performance.
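The point-to-area aggregation above can be sketched as follows. The plain uniform sampling and the linear stand-in predictor are simplifications: the study uses a Poisson hard-core-like spatial sampling and a trained model fed with EO and geomorphological features.

```python
import random

def area_abundance(predict, x_range, y_range, n_samples, seed=42):
    """Estimate area-level mosquito abundance by randomly sampling
    point locations inside the area, running the point-level
    predictor at each sample, and averaging the predictions."""
    rng = random.Random(seed)
    preds = []
    for _ in range(n_samples):
        x = rng.uniform(*x_range)
        y = rng.uniform(*y_range)
        preds.append(predict(x, y))   # point-level prediction
    return sum(preds) / n_samples

# Hypothetical point-level predictor: abundance rises with x,
# so the true area mean over the unit square is 11.
point_model = lambda x, y: 10.0 + 2.0 * x
est = area_abundance(point_model, (0.0, 1.0), (0.0, 1.0), 500)
```

The sample density (n_samples relative to the area) controls how well this average represents the area, which matches the paper's observation that density, not the raw point count, drives performance.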

Keywords: mosquito abundance, supervised machine learning, Culex pipiens, spatial sampling, West Nile virus, Earth Observation data

Procedia PDF Downloads 107
28920 Physics-Informed Convolutional Neural Networks for Reservoir Simulation

Authors: Jiangxia Han, Liang Xue, Keda Chen

Abstract:

Despite the significant progress in reservoir simulation using numerical discretization over the last decades, meshing remains complex, and the high degree of freedom of the space-time flow field makes the solution process very time-consuming. We therefore present Physics-Informed Convolutional Neural Networks (PICNN) as a hybrid scientific-theory-and-data method for reservoir modeling. Besides labeled data, the model is driven by the scientific theory of the underlying problem: governing equations, boundary conditions, and initial conditions. PICNN integrates the governing equations and boundary conditions into the network architecture in the form of a customized convolution kernel. The loss function is composed of data matching, initial conditions, and other measurable prior knowledge. By customizing the convolution kernel and minimizing the loss function, the neural network parameters not only fit the data but also honor the governing equation. PICNN provides a methodology to model and history-match flow and transport problems in porous media. Numerical results demonstrate that the proposed PICNN can provide an accurate physical solution from a limited dataset. We show how the method can be applied in the context of a forward simulation for continuous problems, and several complex scenarios are tested, including data noise, different work schedules, and different well patterns.
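The structure of such a physics-informed loss can be illustrated in one dimension. The [1, -2, 1] kernel, the steady-state pressure equation, and the weights below are a minimal sketch under assumed simplifications, not the paper's actual network or reservoir equations.

```python
def laplacian_residual(p, dx=1.0):
    """Apply a fixed 1D 'convolution kernel' [1, -2, 1]/dx^2, the
    discrete form of d2p/dx2, so a governing equation (here the
    steady-state pressure equation d2p/dx2 = 0) enters the loss."""
    return [(p[i-1] - 2*p[i] + p[i+1]) / dx**2
            for i in range(1, len(p) - 1)]

def physics_informed_loss(p, observed, w_data=1.0, w_pde=1.0):
    """Loss = data mismatch at measured cells + squared residual of
    the governing equation, mirroring the composite loss above."""
    data_term = sum((p[i] - v) ** 2 for i, v in observed.items())
    pde_term = sum(r ** 2 for r in laplacian_residual(p))
    return w_data * data_term + w_pde * pde_term

# A linear pressure profile satisfies d2p/dx2 = 0 exactly and here
# matches the two observed boundary values, so the loss vanishes.
p_linear = [0.0, 0.25, 0.5, 0.75, 1.0]
loss = physics_informed_loss(p_linear, observed={0: 0.0, 4: 1.0})
```

Training would adjust the network output p to drive both terms down simultaneously, which is how the parameters end up fitting the data while honoring the governing equation.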

Keywords: convolutional neural networks, deep learning, flow and transport in porous media, physics-informed neural networks, reservoir simulation

Procedia PDF Downloads 102
28919 Change in Value System: The Way Forward for Africa

Authors: Awe Ayodeji Samson, Adeuja Yetunde Omowunmi

Abstract:

Corruption is a 'monster' that can consume a whole nation, a continent, and even the world if it is not destroyed while still immature; it grows in the minds of people, takes over their thinking, and guides their decision-making, snowballing into socio-economic catastrophe that may be difficult to deal with. Corruption, a disease of the mind, can be alleviated in Africa and the world at large by transforming a corruption-prone mind into a corruption-immune mind, and to achieve this we have to change our value system, because the use of anti-graft agencies alone is not enough; we have to fight corruption from both the inside and the outside. A value system is the set of principles of right and wrong accepted by an individual or a social group. The reviewing and reordering of our value system is the solution to the problem of corruption proposed by this research, because African society has become a 'money and power driven society' in which the problematic 'I am worth' concept has created an 'aggressive society' of grasping, money-grabbing individuals. We place more priority on money and the display of opulence; this has led to a 'triangular society' where a minority lavishes in plenty while the majority gasps for little. The get-rich-quick syndrome, the ethnicity syndrome, and a weakened educational system are all signs of the prevalence of corruption in Africa. This research analyzes the role and impact of a change in our value system in the fight against corruption in Africa and proposes that change as the way forward.

Keywords: corruption-prone mind, corruption-immune mind, triangular society, aggressive society, money and power-driven society

Procedia PDF Downloads 279
28918 Modeling and Analysis of Solar Assisted Adsorption Cooling System Using TRNSYS

Authors: M. Wajahat, M. Shoaib, A. Waheed

Abstract:

As a result of the increase in world energy demand, as well as the demand for heating, refrigeration, and air conditioning, energy engineers are now more inclined towards renewable energy, especially solar-based, thermally driven refrigeration and air conditioning systems. This research focuses on a solar-assisted adsorption refrigeration system providing comfort conditions for a building in Islamabad. The adsorption chiller can be driven by low-grade heat in a low temperature range (50–80 °C), lower than that required by the generator of an absorption refrigeration system, and this heat can be supplied by common flat plate solar collectors (FPCs). The aim is to offset the total energy required for the building's heating and cooling demand by using FPCs, thus reducing dependency on the primary energy source and saving energy. TRNSYS is a dynamic modeling and simulation tool that can simulate the operation of a complete solar-based adsorption chiller meeting the desired cooling and heating demand during the summer and winter seasons, respectively. Modeling and detailed parametric analysis of the whole system are carried out to determine the optimal system configuration under various design constraints. The main focus of the study is the solar thermal loop of the adsorption chiller, with the goal of reducing the contribution of the auxiliary devices.

Keywords: flat plate collector, energy saving, solar assisted adsorption chiller, TRNSYS

Procedia PDF Downloads 617
28917 Innovation and Economic Growth Model of East Asian Countries: The Adaptability of the Model in Ethiopia

Authors: Khalid Yousuf Ahmed

Abstract:

During their growth period, East Asian countries achieved impressive economic growth for decades. They transformed from agricultural economies toward industrialization and underwent dynamic structural transformation. These achievements were driven by government-led development policies that implemented effective innovation policy to boost the technological capability of local firms. Recently, most Sub-Saharan African countries have been showing sustainable growth; exceptionally, Ethiopia has been recording double-digit growth for a decade and has claimed to follow in the footsteps of the East Asian development model. This study examines whether Ethiopia can replicate the innovation and economic growth model of East Asia, using Japan, Taiwan, South Korea, and China as cases to illustrate the model. The research is based on empirical data gathering and an extended theory of the national innovation system and economic growth; the methodology draws on the Knowledge Assessment Methodology (KAM) and employs cross-country regression analysis. The results show a significant relationship between innovation indicators and economic growth in East Asian countries, while no such relationship yet exists for Ethiopia, despite its similar policies and growth trend. To replicate the East Asian model, Ethiopia therefore needs to introduce inclusive policies that give priority to improving human capital and to invest in the knowledge-based economy.

Keywords: economic growth, FDI, endogenous growth theory, East Asia model

Procedia PDF Downloads 225
28916 Materials for Electrically Driven Aircrafts: Highly Conductive Carbon-Fiber Reinforced Epoxy Composites

Authors: Simon Bard, Martin Demleitner, Florian Schonl, Volker Altstadt

Abstract:

For an electrically driven aircraft whose engine is based on semiconductors, alternative materials are needed: to avoid hotspots in the materials, thermally conductive polymers are necessary. Nevertheless, the mechanical properties of these materials should be retained. Herein, three years of work in a project with Airbus and Siemens is presented. Different strategies have been pursued to achieve conductive fiber-reinforced composites: metal-coated carbon fibers, pitch-based fibers, and particle-loaded matrices have been investigated. In addition, a combination of copper-coated fibers and a conductive matrix has been successfully tested for its conductivity and mechanical properties. First, prepregs were produced with a laboratory-scale prepreg line, which can handle materials with a maximum width of 300 mm. These materials were then processed into fiber-reinforced laminates. For the PAN-fiber-reinforced laminates, a strong dependency between fiber volume content and thermal conductivity could be shown: laminates with 50 vol% of carbon fiber offer a conductivity of 0.6 W/mK, and those with 66 vol% of fiber a thermal conductivity of 1 W/mK. With pitch-based fiber, the conductivity rises to 1.5 W/mK at 61 vol% of fiber, compared to 0.81 W/mK with the same amount of PAN-based fiber (an 85% increase in conductivity). The thermal conductivity of PAN-based composites with 50 vol% of fiber is 0.6 W/mK; their nickel-coated counterparts with the same fiber volume content offer a conductivity of 1 W/mK, an increase of 66%.

Keywords: carbon, electric aircraft, polymer, thermal conductivity

Procedia PDF Downloads 135
28915 Predictive Modelling of Aircraft Component Replacement Using Imbalanced Learning and Ensemble Method

Authors: Dangut Maren David, Skaf Zakwan

Abstract:

Adequate monitoring of vehicle components in order to obtain high uptime is the goal of predictive maintenance. The major challenge faced by businesses in industry is the significant cost associated with delays in service delivery due to system downtime. Most of these businesses are interested in predicting such problems and proactively preventing them before they occur, which is the core advantage of Prognostic Health Management (PHM) applications. The recent emergence of Industry 4.0, or the industrial internet of things (IIoT), has led to the need to monitor system activities and to enhance system-to-system or component-to-component interactions; this has resulted in the generation of large volumes of data, known as big data. Analysis of big data is an increasingly important task; however, due to complexities inherent in the datasets, such as imbalanced classes, it becomes extremely difficult to build models with high precision. Data-driven predictive modeling for condition-based maintenance (CBM) has recently drawn growing research interest from both academia and industry. The large data generated by industrial processes inherently come with varying degrees of complexity, which poses a challenge for analytics. Thus, the imbalanced classification problem exists pervasively in industrial datasets and can affect the performance of learning algorithms, yielding poor classifier accuracy in model development. Misclassification of faults can result in unplanned breakdowns, leading to economic loss.
In this paper, an advanced approach for handling the imbalanced classification problem is proposed, and a prognostic model for predicting aircraft component replacement in advance is developed by exploring aircraft historical data. The approach is based on a hybrid ensemble method that improves the prediction of the minority class during learning; we also investigate the impact of our approach on the multiclass imbalance problem. We validate its feasibility and effectiveness using real-world aircraft operation and maintenance datasets spanning over 7 years. Our approach shows better performance compared to similar approaches, and the results also demonstrate its strength in handling multiclass imbalanced datasets compared to other baseline classifiers.
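As a minimal, hypothetical sketch of the resampling idea behind imbalance handling (not the paper's hybrid ensemble itself), the snippet below shows random oversampling of the minority class so that a downstream classifier trains on balanced classes:

```python
import numpy as np

# Random oversampling: duplicate minority-class rows until every class
# matches the size of the largest class.
def random_oversample(X, y, seed=0):
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    X_parts, y_parts = [], []
    for c, n in zip(classes, counts):
        idx = np.flatnonzero(y == c)
        extra = rng.choice(idx, size=n_max - n, replace=True)  # resampled duplicates
        keep = np.concatenate([idx, extra])
        X_parts.append(X[keep])
        y_parts.append(y[keep])
    return np.concatenate(X_parts), np.concatenate(y_parts)

# 95:5 imbalance, e.g. "no replacement" vs "replacement needed"
X = np.random.default_rng(1).normal(size=(100, 4))
y = np.array([0] * 95 + [1] * 5)
X_bal, y_bal = random_oversample(X, y)
print(np.bincount(y_bal))  # both classes now have 95 samples
```

An ensemble-based variant would instead resample independently for each base learner; oversampling is shown here only because it is the simplest instance of the idea.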

Keywords: prognostics, data-driven, imbalance classification, deep learning

Procedia PDF Downloads 149
28914 Application of Data Driven Based Models as Early Warning Tools of High Stream Flow Events and Floods

Authors: Mohammed Seyam, Faridah Othman, Ahmed El-Shafie

Abstract:

The early warning of high stream flow events (HSF) and floods is an important aspect of the management of surface water and river systems. This process can be performed using either process-based models or data-driven models such as artificial intelligence (AI) techniques. The main goal of this study is to develop an efficient AI-based model for predicting real-time hourly stream flow (Q) and to apply it as an early warning tool for HSF and floods in the downstream area of the Selangor River basin, taken here as a paradigm of humid tropical rivers in Southeast Asia. The performance of the AI-based models has been improved through the integration of lag time (Lt) estimation into the modelling process. A total of 8753 patterns of hourly Q, water level, and rainfall (RF) records representing a one-year period (2011) were utilized in the modelling process. Six hydrological scenarios were arranged through hypothetical cases of input variables to investigate how changes in RF intensity at upstream stations can lead to the formation of floods. The initial stream flow was changed for each scenario in order to include a wide range of hydrological situations in this study. The performance evaluation of the developed AI-based model shows that a high correlation coefficient (R) between the observed and predicted Q is achieved. The AI-based model has been successfully employed in early warning through the advance detection of hydrological conditions that could lead to the formation of HSF and floods, represented by three levels of severity (i.e., alert, warning, and danger). Based on the results of the scenarios, reaching the danger level in the downstream area required high RF intensity in at least two upstream areas. It can be concluded that AI-based models are beneficial tools for local authorities for flood control and awareness.
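The three-level severity logic described above can be sketched as a simple thresholding of the predicted hourly flow; the threshold values here are hypothetical, not those calibrated for the Selangor River:

```python
# Hypothetical severity thresholds on predicted hourly stream flow, m3/s
ALERT, WARNING, DANGER = 150.0, 250.0, 350.0

def warning_level(q_predicted):
    """Map a predicted stream flow to one of the severity labels."""
    if q_predicted >= DANGER:
        return "danger"
    if q_predicted >= WARNING:
        return "warning"
    if q_predicted >= ALERT:
        return "alert"
    return "normal"

print([warning_level(q) for q in (90.0, 180.0, 300.0, 400.0)])
```

In the study itself the input to such a rule would be the AI model's Q forecast, issued ahead of time thanks to the estimated lag between upstream rainfall and downstream flow.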

Keywords: floods, stream flow, hydrological modelling, hydrology, artificial intelligence

Procedia PDF Downloads 220
28913 Biophysical Study of the Interaction of Harmalol with Nucleic Acids of Different Motifs: Spectroscopic and Calorimetric Approaches

Authors: Kakali Bhadra

Abstract:

Binding of small molecules to DNA, and recently to RNA, continues to attract considerable attention for developing effective therapeutic agents for the control of gene expression. This work focuses on understanding the interaction of harmalol, a dihydro beta-carboline alkaloid, with different nucleic acid motifs, viz. double-stranded CT DNA, single-stranded A-form poly(A), double-stranded A-form poly(C)·poly(G), and clover leaf tRNAphe, by different spectroscopic, calorimetric and molecular modeling techniques. The results of this study converge to suggest that (i) the binding constant varied in the order CT DNA > poly(C)·poly(G) > tRNAphe > poly(A), (ii) binding of harmalol to poly(C)·poly(G) and poly(A) was non-cooperative, while binding to CT DNA and tRNAphe was cooperative, (iii) significant structural changes occurred in CT DNA, poly(C)·poly(G) and tRNAphe with concomitant induction of optical activity in the bound achiral alkaloid molecules, while with poly(A) no intrinsic CD perturbation was observed, (iv) the binding was predominantly exothermic, enthalpy driven and entropy favoured with CT DNA and poly(C)·poly(G), while it was entropy driven with tRNAphe and poly(A), (v) there was a hydrophobic contribution and a comparatively large role of non-polyelectrolytic forces in the Gibbs energy changes with CT DNA, poly(C)·poly(G) and tRNAphe, and (vi) harmalol was intercalated with the CT DNA and poly(C)·poly(G) structures, as revealed by molecular docking and supported by the viscometric data. Furthermore, a competition dialysis assay showed that harmalol prefers hetero GC sequences. All these findings indicate that harmalol prefers binding with ds CT DNA, followed by ds poly(C)·poly(G), clover leaf tRNAphe, and least with ss poly(A). The results highlight the importance of the structural elements of these natural beta-carboline alkaloids in stabilizing DNA and RNA of various motifs for developing better nucleic acid-based therapeutic agents.
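The enthalpy- versus entropy-driven classification above rests on standard binding thermodynamics, ΔG = −RT ln K = ΔH − TΔS. A hedged numeric sketch with an illustrative (not measured) binding constant:

```python
import math

# Standard binding thermodynamics; K and dH below are illustrative
# placeholders, not the measured values for harmalol.
R = 8.314    # gas constant, J mol^-1 K^-1
T = 298.15   # temperature, K

K = 1.0e5                             # hypothetical binding constant, M^-1
dG = -R * T * math.log(K) / 1000.0    # Gibbs energy change, kJ/mol
dH = -30.0                            # hypothetical calorimetric enthalpy, kJ/mol
TdS = dH - dG                         # T*dS from dG = dH - T*dS
print(round(dG, 1), round(TdS, 1))    # dH dominating dG => enthalpy driven
```

When |ΔH| accounts for most of ΔG (as in this sketch), the binding is termed enthalpy driven; when TΔS dominates, it is entropy driven, as reported above for tRNAphe and poly(A).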

Keywords: calorimetry, docking, DNA/RNA-alkaloid interaction, harmalol, spectroscopy

Procedia PDF Downloads 207
28912 Talent Management through Integration of Talent Value Chain and Human Capital Analytics Approaches

Authors: Wuttigrai Ngamsirijit

Abstract:

Talent management in today’s organizations has become data-driven due to the demand for objective human resource decision making and the development of analytics technologies. HR managers have faced several obstacles in exploiting data and information to make effective talent management decisions. These include process-based data and records; insufficient human capital-related measures and metrics; a lack of capability to model data in strategic ways; and the time consumed in aggregating numbers and making decisions. This paper proposes a framework for talent management through the integration of talent value chain and human capital analytics approaches. It encompasses key data, measures, and metrics for strategic talent management decisions along the organizational and talent value chain. Moreover, specific predictive and prescriptive models incorporating these data and information are recommended to help managers understand the state of talent, the gaps in managing talent and the organization, and the ways to develop optimized talent strategies.

Keywords: decision making, human capital analytics, talent management, talent value chain

Procedia PDF Downloads 146
28911 Knowledge Management to Develop the Graduate Study Programs

Authors: Chuen-arom Janthimachai-amorn, Chirawadee Harnrittha

Abstract:

This study aims to identify the factors facilitating knowledge management for developing the graduate study programs successfully, and to identify approaches to developing the graduate study programs at Rajbhat Suansunantha University. The 10 respondents were administrators, faculty, and personnel of its Graduate School. The research methodology was based on the Pla-too Model of the Knowledge Management Institute (KMI), covering knowledge identification, knowledge creation and search, knowledge systematization, knowledge processing and filtering, knowledge access, knowledge sharing and exchange, and learning. The results revealed that the major success factors were knowledge indicators, clear knowledge management planning, knowledge exchange and strong solidarity of the team, and systematic and persistent access to knowledge. The approaches allowing the researchers to develop the graduate study programs critically were analyses of environmental data, local needs and the general situation, analyses of data on the previous programs, cost analyses of resources, and identification of the structure and purposes of the new programs.

Keywords: program development, knowledge management, graduate study programs, Rajbhat Suansunantha University

Procedia PDF Downloads 281
28910 Preliminary Results on a Maximum Mean Discrepancy Approach for Seizure Detection

Authors: Boumediene Hamzi, Turky N. AlOtaiby, Saleh AlShebeili, Arwa AlAnqary

Abstract:

We introduce a data-driven method for seizure detection, drawing on recent progress in machine learning. The method is based on embedding probability measures in a high- (or infinite-) dimensional reproducing kernel Hilbert space (RKHS), where the Maximum Mean Discrepancy (MMD) is computed. The MMD is a metric between probability measures, computed as the distance between the means of the measures after they are embedded in an RKHS. Working in an RKHS provides a convenient and general functional-analytic framework for the theoretical understanding of data. We apply this approach to the problem of seizure detection.
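A minimal sketch of the biased empirical MMD² with a Gaussian RBF kernel, assuming two equal-length sample sets (an illustration of the statistic itself, not the paper's seizure-detection pipeline):

```python
import numpy as np

# Biased empirical MMD^2: compare the means of two sample sets after
# implicit embedding into the RKHS induced by a Gaussian RBF kernel.
def rbf_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(X, Y, sigma=1.0):
    return (rbf_kernel(X, X, sigma).mean()
            - 2 * rbf_kernel(X, Y, sigma).mean()
            + rbf_kernel(Y, Y, sigma).mean())

rng = np.random.default_rng(0)
same = mmd2(rng.normal(0, 1, (200, 2)), rng.normal(0, 1, (200, 2)))
diff = mmd2(rng.normal(0, 1, (200, 2)), rng.normal(3, 1, (200, 2)))
print(same, diff)  # near zero for samples from the same distribution, larger otherwise
```

In a detection setting, one window of EEG features would play the role of X and a reference (non-seizure) window the role of Y, with a threshold on MMD² flagging a change.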

Keywords: kernel methods, maximum mean discrepancy, seizure detection, machine learning

Procedia PDF Downloads 207
28909 Macroeconomic Implications of Artificial Intelligence on Unemployment in Europe

Authors: Ahmad Haidar

Abstract:

Modern economic systems are characterized by growing complexity, and addressing their challenges requires innovative approaches. This study examines the implications of artificial intelligence (AI) on unemployment in Europe from a macroeconomic perspective, employing data modeling techniques to understand the relationship between AI integration and labor market dynamics. To understand the AI-unemployment nexus comprehensively, this research considers factors such as sector-specific AI adoption, skill requirements, workforce demographics, and geographical disparities. The study utilizes a panel data model, incorporating data from European countries over the last two decades, to explore the potential short-term and long-term effects of AI implementation on unemployment rates. In addition to investigating the direct impact of AI on unemployment, the study also delves into the potential indirect effects and spillover consequences. It considers how AI-driven productivity improvements and cost reductions might influence economic growth and, in turn, labor market outcomes. Furthermore, it assesses the potential for AI-induced changes in industrial structures to affect job displacement and creation. The research also highlights the importance of policy responses in mitigating potential negative consequences of AI adoption on unemployment. It emphasizes the need for targeted interventions such as skill development programs, labor market regulations, and social safety nets to enable a smooth transition for workers affected by AI-related job displacement. Additionally, the study explores the potential role of AI in informing and transforming policy-making to ensure more effective and agile responses to labor market challenges. 
In conclusion, this study provides a comprehensive analysis of the macroeconomic implications of AI on unemployment in Europe, highlighting the importance of understanding the nuanced relationships between AI adoption, economic growth, and labor market outcomes. By shedding light on these relationships, the study contributes valuable insights for policymakers, educators, and researchers, enabling them to make informed decisions in navigating the complex landscape of AI-driven economic transformation.

Keywords: artificial intelligence, unemployment, macroeconomic analysis, European labor market

Procedia PDF Downloads 48
28908 Brainbow Image Segmentation Using Bayesian Sequential Partitioning

Authors: Yayun Hsu, Henry Horng-Shing Lu

Abstract:

This paper proposes a data-driven, biology-inspired neural segmentation method for 3D Drosophila Brainbow images. We use the Bayesian Sequential Partitioning algorithm for probabilistic modeling, which can be used to detect somas and to eliminate cross-talk effects. This work attempts to develop an automatic methodology for neuron image segmentation, which still lacks a complete solution due to the complexity of the images. The proposed method does not need any predetermined, risk-prone thresholds, since biological information is inherently included in the image processing procedure. Therefore, it is less sensitive to variations in neuron morphology; meanwhile, its flexibility is beneficial for tracing the intertwining structure of neurons.

Keywords: brainbow, 3D imaging, image segmentation, neuron morphology, biological data mining, non-parametric learning

Procedia PDF Downloads 458
28907 Inclined Convective Instability in a Porous Layer Saturated with Non-Newtonian Fluid

Authors: Rashmi Dubey

Abstract:

The study aims at investigating the onset of thermal convection in an inclined porous layer saturated with a non-Newtonian fluid. The layer is infinitely extended and has a finite width confined between two boundaries with constant pressure conditions, where the lower one is maintained at a higher temperature. Over the years, this area of research has attracted many scientists and researchers, for it has a plethora of applications in science and engineering, such as in civil engineering, geothermal sites, and petroleum industries. Considering the possibilities in practical scenarios, an inclined porous layer is considered, which can be used to develop a generalized model applicable to any inclination. Using the isobaric boundaries, the hydrodynamic boundary conditions are derived for the power-law model and are used to obtain the basic-state flow. The convection in the basic-state flow is driven by thermal buoyancy in the flow system and is further shaped by the hydrodynamic boundaries. A linear stability analysis followed by a normal-mode analysis is carried out to investigate the onset of convection in the buoyancy-driven flow. The analysis shows that the convective instability is always initiated by the non-traveling modes for a Newtonian fluid, but prevails in the form of oscillatory modes up to a certain inclination of the porous layer. However, different behavior is observed for dilatant and pseudoplastic fluids.
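For reference, the power-law (Ostwald–de Waele) constitutive relation underlying such an analysis is conventionally written as

```latex
\tau = K\,\dot{\gamma}^{\,n}, \qquad \mu_{\mathrm{eff}} = K\,\dot{\gamma}^{\,n-1},
```

where $\tau$ is the shear stress, $\dot{\gamma}$ the shear rate, $K$ the consistency index, and $n$ the power-law index, with $n < 1$ for pseudoplastic (shear-thinning) fluids, $n = 1$ for Newtonian fluids, and $n > 1$ for dilatant (shear-thickening) fluids.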

Keywords: thermal convection, linear stability, porous media flow, inclined porous layer

Procedia PDF Downloads 99
28906 Implementation of a Lattice Boltzmann Method for Pulsatile Flow with Moment Based Boundary Condition

Authors: Zainab A. Bu Sinnah, David I. Graham

Abstract:

The Lattice Boltzmann Method has been developed and used to simulate both steady and unsteady fluid flow problems such as turbulent flows, multiphase flow and flows in the vascular system. As an example, the study of blood flow and its properties can give a greater understanding of atherosclerosis and the flow parameters which influence this phenomenon. The blood flow in the vascular system is driven by a pulsating pressure gradient which is produced by the heart. As a very simple model of this, we simulate plane channel flow under periodic forcing. This pulsatile flow is essentially the standard Poiseuille flow except that the flow is driven by the periodic forcing term. Moment boundary conditions, where various moments of the particle distribution function are specified, are applied at solid walls. We used a second-order single relaxation time model and investigated grid convergence using two distinct approaches. In the first approach, we fixed both Reynolds and Womersley numbers and varied relaxation time with grid size. In the second approach, we fixed the Womersley number and relaxation time. The expected second-order convergence was obtained for the second approach. For the first approach, however, the numerical method converged, but not necessarily to the appropriate analytical result. An explanation is given for these observations.
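A hedged sketch of one ingredient of such a single-relaxation-time scheme: the D2Q9 equilibrium distribution (standard BGK conventions assumed, not necessarily the paper's exact formulation), checked by recovering its density and momentum moments:

```python
import numpy as np

# D2Q9 lattice: weights and discrete velocities (lattice sound speed c_s^2 = 1/3)
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])

def f_eq(rho, u):
    """Second-order BGK equilibrium distribution for density rho, velocity u."""
    cu = c @ u
    usq = u @ u
    return rho * w * (1 + 3 * cu + 4.5 * cu ** 2 - 1.5 * usq)

rho, u = 1.0, np.array([0.05, 0.0])
f = f_eq(rho, u)
print(f.sum(), c.T @ f)  # zeroth moment = rho, first moment = rho * u
```

In a full simulation this equilibrium enters the relaxation step f ← f − (f − f_eq)/τ, with the periodic forcing and the moment-based wall conditions imposed on top of it.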

Keywords: Lattice Boltzmann method, single relaxation time, pulsatile flow, moment based boundary condition

Procedia PDF Downloads 209
28905 Internet Economy: Enhancing Information Communication Technology Adaptation, Service Delivery, Content and Digital Skills for Small Holder Farmers in Uganda

Authors: Baker Ssekitto, Ambrose Mbogo

Abstract:

The study reveals that agriculture employs over 70% of Uganda’s population, the majority of whom are youth and women. It further reveals that over 70% of the farmers are smallholder farmers based in rural areas, whose operations are greatly affected by climate change, weak digital skills, limited access to productivity knowledge along value chains, limited access to quality farm inputs, weak logistics systems, limited access to quality extension services, weak business intelligence, and limited access to quality markets, among other challenges. It finds that the emerging fourth industrial revolution, powered by artificial intelligence, 5G and data science, offers possibilities for addressing some of these challenges. Furthermore, the study finds that despite the rapid development of ICT4Agric innovations, their uptake is constrained by a number of factors, including limited awareness of these innovations, low internet and smartphone penetration (especially in rural areas), a lack of appropriate digital skills, inappropriate programme implementation models that are project- and donor-driven, and limited articulation of the value added for various stakeholders. The majority of farmers and other value chain actors lack the knowledge and skills to harness the power of ICTs, especially their application in monitoring and evaluating quality of service in the extension system and in farm-level processes.

Keywords: artificial intelligence, productivity, ICT4agriculture, value chain, logistics

Procedia PDF Downloads 57
28904 Success Factors for Innovations in SME Networks

Authors: J. Gochermann

Abstract:

Due to complex markets and products and an increasing need to innovate, forms of cooperation between small and medium-sized enterprises have arisen during the last decades that are not primarily driven by process optimization or sales enhancement. Small and medium-sized enterprises (SMEs) in particular collaborate increasingly in innovation and knowledge networks to enhance their knowledge and innovation potential and to find strategic partners for product and market development. These networks are characterized by dual objectives: the superordinate goal of the network as a whole and the specific objectives of the network members, which can cause target conflicts. Moreover, most SMEs do not have structured innovation processes, and they are not accustomed to collaborating on complex innovation projects in an open network structure. On the other hand, SMEs have characteristics well suited to networking: they are flexible and spontaneous, they have flat hierarchies, and the acting people are not anonymous. These characteristics distinguish them from larger corporations. German SME networks have been investigated to identify success factors for SME innovation networks. The fundamental network principles of donation-return and confidence could be confirmed and identified as basic success factors. Further factors are voluntariness, an adequate number of network members, quality of communication, neutrality and competence of the network management, and reliability and obligingness of the network services. Innovation and knowledge networks with an appreciable number of members from science and technology institutions also need active sense-making to bring different disciplines into successful collaboration. It has also been investigated whether and how involvement in an innovation network impacts the innovation structure and culture inside the member companies; the degree of reaction grows with the time and intensity of commitment.

Keywords: innovation and knowledge networks, SME, success factors, innovation structure and culture

Procedia PDF Downloads 257
28903 A Hybrid Traffic Model for Smoothing Traffic Near Merges

Authors: Shiri Elisheva Decktor, Sharon Hornstein

Abstract:

Highway merges and unmarked junctions are key components of any urban road network that can act as bottlenecks and create traffic disruption. Inefficient highway merges may trigger traffic instabilities such as stop-and-go waves, pose safety risks, and lead to longer journey times. These phenomena occur spontaneously if the average vehicle density exceeds a certain critical value. This study focuses on modeling the traffic using a microscopic traffic flow model. A hybrid traffic model, which combines human-driven and controlled vehicles, is assumed. The controlled vehicles obey different driving policies when approaching the merge or in the vicinity of other vehicles. We developed a co-simulation model in SUMO (Simulation of Urban Mobility), in which the human-driven cars are modeled using the IDM model and the controlled cars are modeled using a dedicated controller. The scenario chosen for this study is a closed track with one merge and one exit, which could later be implemented using scaled infrastructure in our lab setup. This will enable us to benchmark the simulation results of this study against comparable results obtained under similar conditions in the lab. The metrics chosen to compare the performance of our algorithm on the overall traffic conditions include the average speed, the wait time near the merge, and the throughput after the merge, measured under different travel demand conditions (low, medium, and heavy traffic).
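The IDM car-following rule used for the human-driven cars can be sketched as follows; the parameter values are typical textbook choices, not those of this study:

```python
import math

# Intelligent Driver Model (IDM) acceleration. Typical parameters:
# desired speed V0 (m/s), time headway T (s), max acceleration A (m/s^2),
# comfortable braking B (m/s^2), minimum gap S0 (m), exponent DELTA.
V0, T, A, B, S0, DELTA = 30.0, 1.5, 1.0, 1.5, 2.0, 4

def idm_acceleration(v, dv, s):
    """v: own speed; dv: approach rate (v - v_leader); s: gap to leader (m)."""
    s_star = S0 + max(0.0, v * T + v * dv / (2 * math.sqrt(A * B)))  # desired gap
    return A * (1 - (v / V0) ** DELTA - (s_star / s) ** 2)

# Free road (huge gap): acceleration approaches A * (1 - (v/V0)^DELTA)
print(idm_acceleration(20.0, 0.0, 1e9))
```

SUMO ships IDM as a built-in car-following model; a sketch like this is useful mainly for checking the controller-vehicle policies against the human-driver baseline in the co-simulation.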

Keywords: highway merges, traffic modeling, SUMO, driving policy

Procedia PDF Downloads 76