Search results for: community-based approach
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 37268

32468 Design and Implementation a Platform for Adaptive Online Learning Based on Fuzzy Logic

Authors: Budoor Al Abid

Abstract:

Educational systems are increasingly provided as open online services that offer guidance and support to individual learners. To adapt a learning system, a proper evaluation must be made. This paper builds an evaluation model, the Fuzzy C-Means Adaptive System (FCMAS), based on data mining techniques to assess the difficulty of questions. The following steps are implemented. First, a dataset from an international online learning system (slepemapy.cz) is used; it contains over 1,300,000 records with 9 features covering student, question, and answer information together with feedback evaluation. Next, normalization is applied as a preprocessing step. Then the Fuzzy C-Means (FCM) clustering algorithm is used to adapt the difficulty of the questions. The result is data labeled with three clusters (easy, intermediate, difficult) according to the highest membership weight; the FCM algorithm assigns a label to each question in turn. A Random Forest (RF) classifier is then constructed on the clustered dataset, using 70% of the data for training and 30% for testing; the model achieves a 99.9% accuracy rate. This approach improves adaptive e-learning systems because it depends on student behavior and gives more accurate evaluation results than systems that rely on feedback alone.
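The clustering step described above can be sketched in a few lines. The fuzzy c-means implementation below is a generic, minimal version (not the authors' code), and the one-dimensional "difficulty" features are illustrative stand-ins for the slepemapy.cz records:

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means: returns cluster centers and a membership matrix U."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)          # memberships of each sample sum to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))          # standard FCM membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

# Illustrative 1-D "question difficulty" features (e.g. error rates), not real data
X = np.array([[0.05], [0.10], [0.12], [0.50], [0.55], [0.90], [0.95]])
centers, U = fuzzy_c_means(X, c=3)
labels = U.argmax(axis=1)  # hard labels standing in for easy / intermediate / difficult
```

In the paper's pipeline, the resulting hard labels would then serve as training targets for a Random Forest classifier with a 70/30 train/test split.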

Keywords: machine learning, adaptive, fuzzy logic, data mining

Procedia PDF Downloads 177
32467 Voluntary Work Monetary Value and Cost-Benefit Analysis with 'Value Audit and Voluntary Investment' Technique: Case Study of Yazd Red Crescent Society Youth Members Voluntary Work in Health and Safety Plan for New Year's Passengers

Authors: Hamed Seddighi Khavidak

Abstract:

Voluntary work has many economic and social benefits for a country, but its economic value is often ignored because the work is unpaid. The aim of this study is to review methods for estimating the monetary value of voluntary work and to compare the opportunity cost and replacement cost methods, both in theory and in practice. Besides monetary value, we discuss a cost-benefit analysis of the New Year health and safety plan conducted by young volunteers of the Red Crescent Society of Iran. Method: We discuss eight methods for valuing voluntary work: the Alternative-Employment Wage Approach, Leisure-Adjusted OCA, Volunteer Judgment OCA, Replacement Wage Approach, Volunteer Judgment RWA, Supervisor Judgment RWA, Cost of Counterpart Goods and Services, and Beneficiary Judgment. For the cost-benefit analysis, we draw on the 'Volunteer Investment and Value Audit' (VIVA) technique, which is widely used in voluntary organizations such as the International Federation of Red Cross and Red Crescent Societies. Findings: Using the replacement cost approach, the voluntary work of 1,034 youth volunteers was valued at 938,000,000 Riyals; using the Replacement Wage Approach, it was valued at 2,268,713,232 Riyals. Yazd Red Crescent Society spent 212,800,000 Riyals on food and other costs for these volunteers. Discussion and conclusion: The VIVA rate showed that for every Riyal the Red Crescent Society invested in the health and safety of New Year's travelers through its volunteer project, four Riyals returned; using the wage replacement approach, 11 Riyals returned. The New Year travelers' health and safety project was therefore successful and economically worthwhile for the Red Crescent Society, because the output was much larger than the input costs.
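The VIVA rate reported in the findings is a simple ratio of the monetary value of the volunteers' work to the organization's investment in them. Using the figures quoted in the abstract:

```python
# VIVA rate = monetary value of volunteer work / organization's investment in volunteers
replacement_cost_value = 938_000_000      # Riyals, replacement cost approach
replacement_wage_value = 2_268_713_232    # Riyals, replacement wage approach
investment = 212_800_000                  # Riyals spent by the society on volunteers

viva_cost = replacement_cost_value / investment   # about 4.4
viva_wage = replacement_wage_value / investment   # about 10.7
```

Rounding gives the "four Riyals returned per Riyal invested" figure for the replacement cost approach, and roughly 11 for the replacement wage approach.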

Keywords: voluntary work, monetary value, youth, red crescent society

Procedia PDF Downloads 203
32466 An Assessment of Financial Viability and Sustainability of Hydroponics Using Reclaimed Water Using LCA and LCC

Authors: Muhammad Abdullah, Muhammad Atiq Ur Rehman Tariq, Faraz Ul Haq

Abstract:

In developed countries, sustainability measures are widely accepted and acknowledged as crucial for addressing environmental concerns. Hydroponics, a soilless cultivation technique, has emerged as a potentially sustainable solution because it can reduce water consumption, land use, and environmental impacts. However, hydroponics may not be economically viable, especially when using reclaimed water, which may entail additional costs and risks. This study addresses the critical question of whether hydroponics using reclaimed water can balance sustainability and financial viability. Life Cycle Assessment (LCA) and Life Cycle Cost (LCC) analysis will be integrated to assess whether hydroponics is environmentally sustainable and economically viable. LCA is a methodology for assessing the environmental impacts associated with all stages of the life cycle of a commercial product, process, or service, while LCC assesses the total cost of an asset over its life cycle, including initial capital costs and maintenance costs. The expected benefits of this study include supporting evidence-based decision-making for policymakers, farmers, and stakeholders involved in agriculture. By quantifying environmental impacts and economic costs, this research will facilitate informed choices regarding the adoption of hydroponics with reclaimed water. The outcomes are expected to support a sustainable approach to agricultural production that aligns with sustainability goals while accounting for economic factors.
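As a rough illustration of the LCC side of the analysis, life cycle cost can be computed as upfront capital plus discounted operating costs. The figures below (setup cost, annual operation and maintenance, discount rate, lifetime) are hypothetical, not taken from the study:

```python
def life_cycle_cost(capital, annual_costs, rate):
    """Total LCC: upfront capital plus discounted annual O&M costs."""
    return capital + sum(c / (1 + rate) ** t
                         for t, c in enumerate(annual_costs, start=1))

# Hypothetical hydroponic unit: 10,000 setup, 1,200/yr O&M for 10 years, 5% discount rate
lcc = life_cycle_cost(10_000, [1_200] * 10, 0.05)  # roughly 19,266
```

Comparing such an LCC figure against the LCA-quantified environmental burden is what lets the study weigh financial viability against sustainability.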

Keywords: hydroponic, life cycle assessment, life cycle cost, sustainability

Procedia PDF Downloads 61
32465 Optimization of Thermopile Sensor Performance of Polycrystalline Silicon Film

Authors: Li Long, Thomas Ortlepp

Abstract:

A theoretical model for the optimization of thermopile sensor performance is developed for thermoelectric-based infrared radiation detection. It is shown that the performance of a polycrystalline silicon film thermopile sensor can be optimized according to the thermoelectric quality factor, the sensor layer structure factor, and the sensor layout geometrical form factor. Based on the properties of electrons, phonons, grain boundaries, and their interactions, the thermoelectric quality factor of polycrystalline silicon is analyzed with the relaxation time approximation of the Boltzmann transport equation. The model includes the effects of grain structure, grain boundary trap properties, and doping concentration. The layer structure factor is analyzed with respect to the infrared absorption coefficient. The optimization of the layout design is characterized by the form factor, which is calculated for different sensor designs. A double-layer polycrystalline silicon thermopile infrared sensor on a suspended membrane has been designed and fabricated with a CMOS-compatible process. The theoretical approach is confirmed by measurement results.

Keywords: polycrystalline silicon, relaxation time approximation, specific detectivity, thermal conductivity, thermopile infrared sensor

Procedia PDF Downloads 118
32464 Learning on the Go: Practicing Vocabulary with Mobile Apps

Authors: Shoba Bandi-Rao

Abstract:

The lack of college readiness is one of the major contributors to low graduation rates at community colleges, especially among educationally and financially disadvantaged students. About 45% of underprepared high school graduates are required to complete ‘remedial’ reading/writing courses before they can begin taking college-level courses. Mobile apps present ‘bite-size’ learning materials that can be useful for practicing certain literacy skills, such as vocabulary learning. The convenience of mobile phones is ideal for the majority of students at community colleges who hold full- or part-time jobs. Mobile apps allow students to learn during the small ‘chunks’ of time available to them outside of class: during the subway commute, between classes, etc. Learning with mobile apps is a relatively new area of research, and findings on their effectiveness for learning new words have been inconclusive. Using Mishra & Koehler’s TPCK theoretical framework, this study explored the effectiveness of the mobile app Quizlet for learning one hundred common college-level words in a ‘remedial’ writing class over one semester. Each week, before coming to class, students studied a list of 10-15 words presented in context within sentences. Students then encountered these words in the article they read in class, making their learning more meaningful. Pre- and post-tests measured the number of words students knew, learned, and remembered. Statistical analysis shows that students performed 41% better on the post-test, indicating that the mobile app was helpful for learning words. Students also completed a short survey each week that sought to determine the amount of time they spent on the vocabulary app. A positive correlation was found between the amount of time spent on the mobile app and the number of words learned.
The goal of this research is to capitalize on the convenience of smartphones to (1) better prepare students for college-level course work, and (2) contribute to the current literature on mobile learning.

Keywords: mobile learning, vocabulary learning, literacy skills, Quizlet

Procedia PDF Downloads 212
32463 Dispersion Effects in Waves Reflected by Lossy Conductors: The Optics vs. Electromagnetics Approach

Authors: Oibar Martinez, Clara Oliver, Jose Miguel Miranda

Abstract:

The study of dispersion phenomena in electromagnetic waves reflected by conductors at infrared and lower frequencies finds a number of applications. In this work, we explain the most relevant ones and how this phenomenon is modeled from both the optics and the electromagnetics points of view. We also explain how the amplitude of an electromagnetic wave reflected by a lossy conductor depends both on the frequency of the incident wave and on the electrical properties of the conductor, and we illustrate this phenomenon with a practical example. The mathematical analysis made by a specialist in electromagnetics or a microwave engineer is apparently very different from the one made by a specialist in optics. We show how both approaches lead to the same physical result and which key concepts make clear that, despite the differences in the equations, the solution to the problem is the same. Our study starts with an analysis based on the complex refractive index and the reflectance parameter. We show how this reflectance depends on the square root of the frequency when the reflecting material is a good conductor and the frequency of the wave is low enough. We then analyze the same problem with a less well-known approach based on the reflection coefficient of the electric field, a parameter most commonly used in electromagnetics and microwave engineering. In summary, this paper presents a mathematical study, illustrated with a worked example, which unifies the modeling of dispersion effects by specialists in optics and by specialists in electromagnetics. The main finding of this work is that the dependence of the Fresnel reflectance on frequency can be reproduced from the intrinsic impedance of the reflecting medium.
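The equivalence of the two viewpoints can be checked numerically. The sketch below computes the normal-incidence reflectance of a good conductor (copper at 10 GHz is assumed here as a worked example; the paper's own example may differ) once via the complex refractive index (optics) and once via the intrinsic impedance (electromagnetics):

```python
import numpy as np

eps0 = 8.8541878128e-12       # vacuum permittivity, F/m
mu0 = 4e-7 * np.pi            # vacuum permeability, H/m
eta0 = np.sqrt(mu0 / eps0)    # impedance of free space, ~377 ohm

sigma = 5.8e7                 # conductivity of copper, S/m
f = 10e9                      # assumed frequency: 10 GHz
w = 2 * np.pi * f

# Optics route: complex relative permittivity -> complex refractive index -> Fresnel R
eps_r = 1 - 1j * sigma / (w * eps0)        # exp(+jwt) convention
n = np.sqrt(eps_r)
R_optics = abs((1 - n) / (1 + n)) ** 2

# Electromagnetics route: intrinsic impedance of the conductor -> reflection coefficient
eta = np.sqrt(1j * w * mu0 / (sigma + 1j * w * eps0))
R_em = abs((eta - eta0) / (eta + eta0)) ** 2
```

Both routes give the same reflectance because η = η₀/n, so the reflection coefficients (η − η₀)/(η + η₀) and (1 − n)/(1 + n) have the same magnitude; for a good conductor at low enough frequency the result follows the square-root-of-frequency (Hagen-Rubens) behavior described in the abstract.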

Keywords: dispersion, electromagnetic waves, microwaves, optics

Procedia PDF Downloads 117
32462 Measuring Output Multipliers of Energy Consumption and Manufacturing Sectors in Malaysia during the Global Financial Crisis

Authors: Hussain Ali Bekhet, Tuan Ab. Rashid Bin Tuan Abdullah, Tahira Yasmin

Abstract:

The strong relationship between energy consumption and economic growth is widely recognised. Most countries’ energy demand declined during the economic depression known as the Global Financial Crisis (GFC) of 2008–2009. The objective of the current study is to investigate the energy consumption and performance of Malaysia’s manufacturing sectors during the GFC. We applied the output multiplier approach, which is based on the input-output model, using two input-output tables of Malaysia covering 2005 and 2010. The results indicate significant changes in the output multipliers of the manufacturing sectors between 2005 and 2010. Moreover, the energy-to-manufacturing output multipliers also decreased during the GFC, due to a decline in export-oriented industries during the crisis. The increasing importance of the manufacturing sector to the development of Malaysian trade resulted in a noticeable decrease in the consumption of each energy sector’s output, especially that of the electricity and gas sector. In response to these findings, the Malaysian government introduced several stimulus packages to enhance these sectors’ performance and improve the Malaysian economy generally.
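In the input-output framework, a sector's output multiplier is the corresponding column sum of the Leontief inverse (I − A)⁻¹, where A is the technical-coefficient matrix. The toy three-sector matrix below is hypothetical, not the Malaysian table:

```python
import numpy as np

# Hypothetical 3-sector technical-coefficient matrix A
# (rows/columns standing in for energy, manufacturing, services)
A = np.array([
    [0.10, 0.25, 0.05],
    [0.20, 0.15, 0.10],
    [0.05, 0.10, 0.08],
])

leontief_inverse = np.linalg.inv(np.eye(3) - A)    # (I - A)^-1
# Column sums: total economy-wide output generated per unit of final demand in each sector
output_multipliers = leontief_inverse.sum(axis=0)
```

Comparing such multipliers computed from the 2005 and 2010 tables is what reveals the sectoral changes the study reports.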

Keywords: global financial crisis, input-output model, manufacturing, output multipliers, energy, Malaysia

Procedia PDF Downloads 714
32461 De-Securitizing Identity: Narrative (In)Consistency in Periods of Transition

Authors: Katerina Antoniou

Abstract:

When examining conflicts around the world, it is evident that the majority of intractable conflicts are steeped in identity. Identity seems to be not only a causal variable for conflict, but also a catalytic parameter for the process of reconciliation that follows a ceasefire. This paper focuses on the process of identity securitization that occurs between rival groups of heterogeneous collective identities, whether ethnic, national, or religious, as well as on the relationship between identity securitization and the ability of the groups involved to reconcile. Are securitized identities obstacles to the process of reconciliation, able to hinder any prospects of peace? If the level to which an identity is securitized is catalytic to a conflict’s discourse and settlement, then which factors act as indicators of identity de-securitization? The level of an in-group’s identity securitization can be estimated through a number of indicators, one of which is narrative. The stories, views, and stances each in-group adopts in relation to its history of conflict and its relation with the rival out-group can clarify whether that in-group feels victimized and threatened or safe and ready to reconcile. Accordingly, this study discusses identity securitization through narrative in relation to intractable conflicts. Are there conflicts around the world that, despite having been identified as intractable, stagnant, or insoluble, show signs of identity de-securitization through narrative? This inquiry uses the case of the Cyprus conflict and its partitioned societies to present official narratives from the two communities and assess whether these narratives have transformed, indicating a less securitized in-group identity for the Greek and Turkish Cypriots.
Specifically, the study compares the official historical overviews presented on each community’s Ministry of Foreign Affairs website and discusses the extent to which the two official narratives present a securitized collective identity. In addition, the study observes whether official stances of the two communities, as adopted by community leaders, have transformed to depict less securitization over time. The leaders’ reflection of popular opinion is also evaluated through recent opinion polls from each community. Cyprus is currently experiencing renewed optimism for reunification, with the leaders of its two communities engaging in rigorous negotiations, and with rumors that a reunification referendum could take place as early as 2016. Although the leaders have shown a shift in their rhetoric and have moved away from narratives of victimization, this is not the case for the official narratives used by their respective ministries of foreign affairs. The study’s findings explore whether this narrative inconsistency shows that Cyprus is transitioning towards reunification, or whether the leaders risk sending a securitized population to the polls to reject a potential reunification. More broadly, this study suggests that in the event that intractable conflicts are moving towards viable peace, in-group narratives, and official narratives in particular, can act as indicators of the extent to which rival entities have managed to reconcile.

Keywords: conflict, identity, narrative, reconciliation

Procedia PDF Downloads 311
32460 Methodology to Achieve Non-Cooperative Target Identification Using High Resolution Range Profiles

Authors: Olga Hernán-Vega, Patricia López-Rodríguez, David Escot-Bocanegra, Raúl Fernández-Recio, Ignacio Bravo

Abstract:

Non-Cooperative Target Identification has become a key research domain in the defense industry since it provides the ability to recognize targets at long distance and under any weather condition. High Resolution Range Profiles, one-dimensional radar images in which the reflectivity of a target is projected onto the radar line of sight, are widely used for the identification of flying targets. Accordingly, an approach to Non-Cooperative Target Identification based on applying Singular Value Decomposition (SVD) to a matrix of range profiles is presented. Target identification based on one-dimensional radar images compares a collection of profiles of a given target, the test set, with the profiles included in a pre-loaded database, the training set. Classification is improved by using SVD since it allows each aircraft to be modeled as a subspace and recognition to be accomplished in a transformed domain where the main features are easier to extract, thus reducing unwanted information such as noise. SVD permits the definition of a signal subspace, which contains the highest percentage of the energy, and a noise subspace, which is discarded. This way, only the valuable information about each target is used in the recognition process. The identification algorithm is based on finding the target that minimizes the angle between subspaces and takes place in a transformed domain. Two SVD-based metrics, F1 and F2, are used in the identification process. In the case of F2, the angle is weighted, since the top vectors set the importance of the contribution to the formation of a target signal; F1, by contrast, simply tracks the unweighted angle.
In order to build a wide database of radar signatures and evaluate performance, range profiles are obtained through numerical simulation of seven civil aircraft on defined trajectories taken from an actual measurement. Given the nature of the datasets, the main drawback of using simulated profiles instead of actual measured profiles is that the former imply an ideal identification scenario: measured profiles suffer from noise, clutter, and other unwanted information, while simulated profiles do not. In this case, the test and training samples have a similar nature and usually a similarly high signal-to-noise ratio, so to assess the feasibility of the approach, noise is added before the creation of the test set. The identification results obtained with the unweighted and weighted metrics are analyzed to determine which algorithm provides the best robustness against noise in a realistic scenario. To confirm the validity of the methodology, identification experiments on profiles from electromagnetic simulations are conducted, revealing promising results. Considering the dissimilarities between the test and training sets when noise is added, recognition performance improves when weighting is applied. Future experiments with larger sets are expected to be conducted, with the aim of finally using actual profiles as test sets in a real hostile situation.
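The subspace idea can be sketched as follows: stack each target's training profiles as columns, keep the top left singular vectors as its signal subspace, and assign a test profile to the target whose subspace it makes the smallest angle with. The data here are synthetic low-rank stand-ins, not simulated aircraft profiles, and the unweighted angle plays the role of the F1 metric:

```python
import numpy as np

def signal_subspace(profiles, k):
    """SVD of a (range_bins x num_profiles) matrix; keep the top-k left singular vectors."""
    U, s, Vt = np.linalg.svd(profiles, full_matrices=False)
    return U[:, :k]

def subspace_angle(x, basis):
    """Angle between a test profile and the span of `basis` (orthonormal columns)."""
    x = x / np.linalg.norm(x)
    proj = basis @ (basis.T @ x)
    return np.arccos(np.clip(np.linalg.norm(proj), -1.0, 1.0))

rng = np.random.default_rng(1)
# Synthetic "targets": each class lives in a distinct 2-D subspace of R^16
basis_a = np.linalg.qr(rng.standard_normal((16, 2)))[0]
basis_b = np.linalg.qr(rng.standard_normal((16, 2)))[0]
train_a = basis_a @ rng.standard_normal((2, 30))
train_b = basis_b @ rng.standard_normal((2, 30))

S_a = signal_subspace(train_a, k=2)
S_b = signal_subspace(train_b, k=2)
test = basis_a @ rng.standard_normal(2)   # a profile generated by target A
pred = "A" if subspace_angle(test, S_a) < subspace_angle(test, S_b) else "B"
```

The weighted F2 metric would additionally scale each singular vector's contribution by its singular value; the sketch above keeps only the unweighted case.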

Keywords: HRRP, NCTI, simulated/synthetic database, SVD

Procedia PDF Downloads 344
32459 Continuous FAQ Updating for Service Incident Ticket Resolution

Authors: Kohtaroh Miyamoto

Abstract:

As enterprise computing becomes more and more complex, the costs and technical challenges of IT system maintenance and support are increasing rapidly. One popular approach to managing IT system maintenance is to prepare and use an FAQ (Frequently Asked Questions) system to manage and reuse systems knowledge. Such an FAQ system can help reduce the resolution time for each service incident ticket. However, there is a major problem: over time, the knowledge in such FAQs tends to become outdated. Much of the knowledge captured in the FAQ requires periodic updates, in response to new insights or new trends in the problems addressed, in order to maintain its usefulness for problem resolution. These updates require a systematic approach to define the exact portion of the FAQ to change and its content. Therefore, we are working on a novel method to hierarchically structure the FAQ and automate the updates of its structure and content. We use the structured information and the unstructured text information, together with the timelines of the information, in the service incident tickets. We cluster the tickets by structured category information, and by keywords and keyword modifiers for the unstructured text information. We also calculate an urgency score based on trends, resolution times, and priorities. We carefully studied the tickets of one of our projects over a 2.5-year period. After the first 6 months, we started to create FAQs and confirmed that they improved resolution times. We continued observing over the next 2 years to assess the ongoing effectiveness of our method for automatic FAQ updates. We improved the ratio of tickets covered by the FAQ from 32.3% to 68.9% during this time, and the average reduction in ticket resolution time was between 31.6% and 43.9%. Subjective analysis showed that more than 75% of respondents reported the FAQ system was useful in reducing ticket resolution times.
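The abstract does not give the exact formula for the urgency score, so the following is a hypothetical weighted scheme combining the three named ingredients (trend, resolution time, priority), with made-up weights and inputs assumed pre-normalized to [0, 1]:

```python
def urgency_score(trend, avg_resolution, priority, w=(0.5, 0.3, 0.2)):
    """Hypothetical urgency score: weighted mix of normalized trend,
    average resolution time, and priority. Weights are illustrative,
    not taken from the paper."""
    return w[0] * trend + w[1] * avg_resolution + w[2] * priority

# Rank two ticket clusters: a rising-trend, slow-to-resolve cluster
# should outrank a stable, quickly resolved one.
hot = urgency_score(trend=0.9, avg_resolution=0.8, priority=0.7)
cold = urgency_score(trend=0.2, avg_resolution=0.3, priority=0.4)
```

Such a score would decide which FAQ sections get regenerated first in each update cycle.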

Keywords: FAQ system, resolution time, service incident tickets, IT system maintenance

Procedia PDF Downloads 325
32458 Fe3O4 Decorated ZnO Nanocomposite Particle System for Waste Water Remediation: An Absorptive-Photocatalytic Based Approach

Authors: Prateek Goyal, Archini Paruthi, Superb K. Misra

Abstract:

Contamination of water resources has been a major concern, drawing attention to the need to develop new material models for the treatment of effluents. Existing conventional wastewater treatment methods are sometimes ineffective and uneconomical in remediating contaminants such as heavy metal ions (mercury, arsenic, lead, cadmium, and chromium), organic matter (dyes, chlorinated solvents), and high salt concentrations, which make water unfit for consumption. We believe that a nanotechnology-based strategy, in which nanoparticles are used as a tool to remediate a class of pollutants, would prove effective owing to their high surface-area-to-volume ratio, higher selectivity, sensitivity, and affinity. In recent years, scientific advances have been made in studying the application of photocatalytic (ZnO, TiO2, etc.) and magnetic nanomaterials in remediating contaminants (such as heavy metals and organic dyes) from water and wastewater. Our study focuses on the synthesis and remediation efficiency of ZnO, Fe3O4, and Fe3O4-coated ZnO nanoparticulate systems for the simultaneous removal of heavy metals and dyes. A multitude of ZnO nanostructures (spheres, rods, and flowers), obtained via multiple routes (microwave and hydrothermal approaches), offers a wide range of light-active photocatalytic properties. The phase purity, morphology, size distribution, zeta potential, surface area, and porosity, in addition to the magnetic susceptibility of the particles, were characterized by XRD, TEM, CPS, DLS, BET, and VSM measurements, respectively. Furthermore, the introduction of crystalline defects into ZnO nanostructures can assist in light activation for improved dye degradation. The band gap of a material and its absorbance are concrete indicators of its photocatalytic activity.
Due to their high surface area, high porosity, affinity toward metal ions, and availability of active surface sites, iron oxide nanoparticles show promising application in the adsorption of heavy metal ions. An additional advantage of a magnetic nanocomposite is that it offers magnetic-field-responsive separation and recovery of the catalyst. We therefore believe that a ZnO-linked Fe3O4 nanosystem would be efficient and reusable. Combining improved photocatalytic efficiency with adsorption for environmental remediation has been a long-standing challenge, and the nanocomposite system offers the best features of the two individual metal oxides for nanoremediation.

Keywords: adsorption, nanocomposite, nanoremediation, photocatalysis

Procedia PDF Downloads 232
32457 Dilution of Saline Irrigation Based on Plant's Physiological Responses to Salt Stress Following by Re-Watering

Authors: Qaiser Javed, Ahmad Azeem

Abstract:

Salinity and water scarcity are major environmental problems that limit agricultural production. This research was conducted to construct a model for finding an appropriate regime to dilute saline irrigation water, based on the physiological and electrophysiological properties of Brassica napus L. and Orychophragmus violaceus (L.). Plants were treated with salt-stress concentrations of NaCl (NL₁: 2.5, NL₂: 5, NL₃: 10; gL⁻¹), Na₂SO₄ (NO₁: 2.5, NO₂: 5, NO₃: 10; gL⁻¹), and mixed salts (MX₁: NL₁ + NO₃; MX₂: NL₃ + NO₁; MX₃: NL₂ + NO₂; gL⁻¹), with 0 as control, followed by re-watering. Growth, physiological, and electrophysiological traits were highly restricted under the high salt concentration levels NL₃, NO₃, MX₁, and MX₂. During the re-watering phase, however, growth, electrophysiological, and physiological parameters recovered well. Consequently, increases in net photosynthetic rate were noted under moderate stress conditions: 44.13%, 37.07%, and 43.01% in Orychophragmus violaceus (L.), and 44.94%, 53.45%, and 63.04% in Brassica napus L. According to the results, the best dilution point was 5–2.5% for NaCl and Na₂SO₄ alternatively, whereas it was 10–0.0% for the mixture of salts. The effect of salinity on O. violaceus and B. napus may therefore be reduced effectively by diluting the saline irrigation. It is a better approach to use diluted saline water for irrigation instead of applying saline water to plants directly. This study provides new insight for agricultural engineering in planning irrigation scheduling that considers the crop's salt tolerance and irrigation water use efficiency, by applying a specific quantity of irrigation calculated from the salt dilution point. It would help balance the irrigation amount against optimum crop water consumption in salt-affected regions, and would allow saline water to be utilized in order to conserve freshwater resources.
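The dilution regime itself reduces to a salt mass balance. The helper below is a generic mixing calculation (not the authors' fitted model), shown for the abstract's reported best NaCl dilution point of 5 down to 2.5 in the study's concentration units:

```python
def fresh_water_needed(v_saline, c_saline, c_target):
    """Volume of fresh water to blend with saline water to hit a target
    salt concentration. Mass balance:
        c_saline * v_saline = c_target * (v_saline + v_fresh)."""
    if not 0 < c_target <= c_saline:
        raise ValueError("target concentration must be positive and not exceed source")
    return v_saline * (c_saline / c_target - 1.0)

# Dilute 100 L of 5 g/L NaCl water down to 2.5 g/L (the study's best NaCl dilution point)
v_fresh = fresh_water_needed(100.0, 5.0, 2.5)   # equal volume of fresh water is needed
```

The fitted model in the paper goes further by choosing the target concentration from the measured plant responses rather than fixing it in advance.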

Keywords: dilution model, plant growth traits, re-watering, salt stress

Procedia PDF Downloads 146
32456 Unsupervised Echocardiogram View Detection via Autoencoder-Based Representation Learning

Authors: Andrea Trevino Gavito, Diego Klabjan, Sanjiv Shah

Abstract:

Echocardiograms serve as pivotal resources for clinicians in diagnosing cardiac conditions, offering non-invasive insights into a heart’s structure and function. When echocardiographic studies are conducted, no standardized labeling of the acquired views is performed. Employing machine learning algorithms for automated echocardiogram view detection has emerged as a promising solution to enhance efficiency in echocardiogram use for diagnosis. However, existing approaches predominantly rely on supervised learning, necessitating labor-intensive expert labeling. In this paper, we introduce a fully unsupervised echocardiographic view detection framework that leverages convolutional autoencoders to obtain lower dimensional representations and the K-means algorithm for clustering them into view-related groups. Our approach focuses on discriminative patches from echocardiographic frames. Additionally, we propose a trainable inverse average layer to optimize the decoding of average operations. By integrating both public and proprietary datasets, we obtain a marked improvement in model performance when compared to utilizing a proprietary dataset alone. Our experiments show boosts of 15.5% in accuracy and 9.0% in the F-1 score for frame-based clustering, 25.9% in accuracy, and 19.8% in the F-1 score for view-based clustering. Our research highlights the potential of unsupervised learning methodologies and the utilization of open-sourced data in addressing the complexities of echocardiogram interpretation, paving the way for more accurate and efficient cardiac diagnoses.
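The view-clustering stage can be sketched as K-means on low-dimensional representations. In this minimal stand-in, synthetic latent vectors replace the convolutional autoencoder's encoder outputs, and a small hand-rolled K-means replaces a library implementation:

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Minimal K-means on (n_samples, n_features) data."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) if (labels == j).any() else centers[j]
                            for j in range(k)])
    return labels, centers

rng = np.random.default_rng(0)
# Stand-in for encoder outputs: two synthetic "view" groups in an 8-D latent space
latents = np.vstack([rng.normal(0.0, 0.1, (40, 8)),
                     rng.normal(1.0, 0.1, (40, 8))])
labels, _ = kmeans(latents, k=2)   # clusters correspond to the two view groups
```

In the paper's pipeline, the latent vectors would come from the trained autoencoder over discriminative patches, and the clusters would be interpreted as echocardiographic views.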

Keywords: artificial intelligence, machine learning, unsupervised learning, self-supervised representation learning, echocardiography, echocardiographic view detection

Procedia PDF Downloads 4
32455 A Comprehensive Approach to Mitigate Return-Oriented Programming Attacks: Combining Operating System Protection Mechanisms and Hardware-Assisted Techniques

Authors: Zhang Xingnan, Huang Jingjia, Feng Yue, Burra Venkata Durga Kumar

Abstract:

This paper proposes a comprehensive approach to mitigate ROP (Return-Oriented Programming) attacks by combining internal operating system protection mechanisms and hardware-assisted techniques. Through extensive literature review, we identify the effectiveness of ASLR (Address Space Layout Randomization) and LBR (Last Branch Record) in preventing ROP attacks. We present a process involving buffer overflow detection, hardware-assisted ROP attack detection, and the use of Turing detection technology to monitor control flow behavior. We envision a specialized tool that views and analyzes the last branch record, compares control flow with a baseline, and outputs differences in natural language. This tool offers a graphical interface, facilitating the prevention and detection of ROP attacks. The proposed approach and tool provide practical solutions for enhancing software security.
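A toy version of the LBR-versus-baseline comparison described above looks like the following: recorded (source, target) branch pairs are checked against a whitelist of control-flow edges, and any edge outside the baseline, such as a return into an unexpected address, is flagged. All addresses here are hypothetical:

```python
# Hypothetical last-branch-record check: flag branch edges absent from the baseline CFG.
baseline_edges = {
    (0x401000, 0x401200),  # call: main -> helper
    (0x401250, 0x401010),  # ret: helper -> back to caller
}

observed_lbr = [
    (0x401000, 0x401200),  # legitimate call
    (0x401250, 0x402F3A),  # ret into an unexpected address: possible ROP gadget
]

anomalies = [edge for edge in observed_lbr if edge not in baseline_edges]
```

A real detector would read the LBR from hardware MSRs and compare against a CFG recovered from the binary; this sketch only illustrates the set-difference logic.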

Keywords: operating system, ROP attacks, return-oriented programming attacks, ASLR, LBR, CFI, DEP, code randomization, hardware-assisted CFI

Procedia PDF Downloads 81
32454 Your First Step to Understanding Research Ethics: Psychoneurolinguistic Approach

Authors: Sadeq Al Yaari, Ayman Al Yaari, Adham Al Yaari, Montaha Al Yaari, Aayah Al Yaari, Sajedah Al Yaari

Abstract:

Objective: This research aims at investigating research ethics in the field of science. Method: This is an exploratory study in which the researchers attempted to cover the phenomenon at hand from all specialists’ viewpoints. Results: The discussion is based upon the findings resulting from the analysis the researchers undertook. Concerning the prediction of results, the researcher first needs to seek highly qualified people in the field of research, as well as in the field of statistics, who share the philosophy of the research. Then s/he should make sure that s/he is adequately trained in the specific techniques, methods, and statistical programs used in the study. S/he should also commit to continually analyzing the data with the most current methods.

Keywords: research ethics, legal, rights, psychoneurolinguistics

Procedia PDF Downloads 22
32453 Interaction between Space Syntax and Agent-Based Approaches for Vehicle Volume Modelling

Authors: Chuan Yang, Jing Bie, Panagiotis Psimoulis, Zhong Wang

Abstract:

Modelling and understanding vehicle volume distribution over the urban network are essential for urban design and transport planning. The space syntax approach has been widely applied as the main conceptual and methodological framework for contemporary vehicle volume models, with the help of the statistical method of multiple regression analysis (MRA). However, the MRA model with space syntax variables shows a limitation in predicting vehicle volume when accounting for the crossed effect of urban configurational characters and socio-economic factors. The aim of this paper is to construct models that capture the combined impact of street network structure and socio-economic factors. We present a multilevel linear (ML) and an agent-based (AB) vehicle volume model at an urban scale, both grounded in the space syntax theoretical framework. The ML model allows random effects of urban configurational characteristics in different urban contexts, while the AB model incorporates transformed space syntax components of the MRA models into the agents’ spatial behaviour. Three models were implemented in the same urban environment. The ML model exhibits superiority over the original MRA model in identifying the relative impacts of the configurational characters and macro-scale socio-economic factors that shape vehicle movement distribution over the city. Compared with the ML model, the suggested AB model demonstrates the ability to estimate vehicle volume in the urban network considering the combined effects of configurational characters and land-use patterns at the street segment level.
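The agent-based idea of embedding space-syntax measures in agents' spatial behaviour can be sketched as follows. Agents traverse a toy street graph, choosing the next segment with probability proportional to a syntax-derived weight, and traversals accumulate into segment volumes; the graph, weights, and parameters are invented for illustration and are not the authors' model.

```python
# Illustrative sketch: agents walk a toy street graph, picking the next
# segment with probability proportional to a space-syntax-derived weight.
import random
from collections import defaultdict

graph = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
syntax_weight = {"A": 1.0, "B": 2.0, "C": 3.0}  # e.g. transformed integration values

def run_agents(n_agents, n_steps, seed=0):
    rng = random.Random(seed)
    volume = defaultdict(int)  # simulated vehicle volume per segment
    for _ in range(n_agents):
        node = rng.choice(list(graph))
        for _ in range(n_steps):
            nbrs = graph[node]
            weights = [syntax_weight[n] for n in nbrs]
            node = rng.choices(nbrs, weights=weights)[0]
            volume[node] += 1
    return volume

vol = run_agents(n_agents=100, n_steps=50)
print(vol["C"] > vol["A"])  # higher-weight segments attract more traversals
```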

Keywords: space syntax, vehicle volume modeling, multilevel model, agent-based model

Procedia PDF Downloads 125
32452 Understanding Talent Management in French Small and Medium-Sized Enterprises: Towards Multi-Level Modeling

Authors: Abid Kousay

Abstract:

Having appeared and developed essentially in large companies and multinationals, Talent Management (TM) in Small and Medium-Sized Enterprises (SMEs) has remained an under-explored subject to this day. Although the literature on TM in the Anglo-Saxon context is developing, European contexts, and France in particular, remain under-studied. Therefore, this article aims to address these shortcomings by contributing to TM issues through a multilevel approach, with the goal of reaching a global, holistic vision of the interactions between the various levels at which TM is applied. A qualitative study carried out within 12 SMEs in France, built on the methodological perspective of grounded theory, is used in order to go beyond description and generate or discover a theory, or even a unified theoretical explanation. Our theoretical contributions are the results of the grounded theory, the fruit of contextual considerations and of the dynamic of the multilevel approach. We aim first to determine the perception of talent and TM in SMEs. Second, we formalize TM in SMEs through the empowerment of all three levels in the organization (individual, collective, and organizational), and we generate a multilevel dynamic system model, highlighting the institutionalization dimension in SMEs and a managerial conviction characterized by the dominant role of the leader. Third, this first study sheds light on the importance of rigorous implementation of TM in SMEs in France by directing CEOs and HR and TM managers to focus on the elements that are upstream of TM implementation and influence the system internally. Indeed, our systematic multilevel approach reminds them of the importance of strategic alignment when translating TM policy into strategies and practices in SMEs.

Keywords: French context, multilevel approach, talent management, TM system

Procedia PDF Downloads 205
32451 Design Approach for the Development of Format-Flexible Packaging Machines

Authors: G. Götz, P. Stich, J. Backhaus, G. Reinhart

Abstract:

The rising demand for format-flexible packaging machines is caused by current market changes. Increasing the format-flexibility is a new goal for the packaging machine manufacturers’ product development process. There are no methodical or design-orientated tools for a comprehensive consideration of this target. This paper defines the term format-flexibility in the context of packaging machines and shows the state-of-the-art for improving the changeover of production machines. The requirements for a new approach and the concept itself will be introduced, and the method elements will be explained. Finally, the use of the concept and the result of the development of a format-flexible packaging machine will be shown.

Keywords: packaging machine, format-flexibility, changeover, design method

Procedia PDF Downloads 421
32450 A Comparative Study of GTC and PSP Algorithms for Mining Sequential Patterns Embedded in Database with Time Constraints

Authors: Safa Adi

Abstract:

This paper considers the problem of mining sequential patterns embedded in a database while handling the time constraints as defined in the GSP algorithm (a level-wise algorithm). We compare two previous approaches, GTC and PSP, which take up the general principles of GSP. Furthermore, this paper discusses the PG-hybrid algorithm, which combines PSP and GTC. The results show that PSP and GTC are more efficient than GSP. On the other hand, the GTC algorithm performs better than PSP. The PG-hybrid algorithm uses the PSP algorithm for the first two passes on the database and the GTC approach for the following scans. Experiments show that the hybrid approach is very efficient for short, frequent sequences.
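The GSP-style time constraints that all the compared algorithms must handle can be sketched minimally: given the timestamps at which the successive elements of a candidate sequence occur, check the minimum and maximum gap between consecutive elements, plus an overall window bound. Parameter names (`min_gap`, `max_gap`, `window_size`) are illustrative, and the overall-span check is a simplification of the full GSP windowing semantics.

```python
# Simplified sketch of GSP-style time constraints on a candidate occurrence.
def satisfies_time_constraints(timestamps, min_gap, max_gap, window_size):
    """timestamps: occurrence times of the successive sequence elements."""
    for earlier, later in zip(timestamps, timestamps[1:]):
        gap = later - earlier
        if gap < min_gap or gap > max_gap:
            return False  # consecutive elements too close or too far apart
    # Simplified overall bound on the whole occurrence:
    return (timestamps[-1] - timestamps[0]) <= window_size

print(satisfies_time_constraints([1, 3, 6], min_gap=1, max_gap=3, window_size=10))  # True
print(satisfies_time_constraints([1, 2, 9], min_gap=1, max_gap=3, window_size=10))  # False
```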

Keywords: database, GTC algorithm, PSP algorithm, sequential patterns, time constraints

Procedia PDF Downloads 373
32449 Re-Integrating Historic Lakes into the City Fabric in the Case of Vandiyur Lake, Madurai

Authors: Soumya Pugal

Abstract:

The traditional lake system of an ancient town is a network of water-holding blue spaces, built more than 2,000 years ago by the rulers of ancient cities and maintained for centuries by the original communities. These blue spaces form a micro-watershed wherein each individual tank has its own catchment, tank bed area, and command area. The lakes are connected by sluices, so that each upstream tank feeds the tank downstream. The lakes used to be of socio-economic significance, but the rapid growth of the city, as well as changes in the systems of lake ownership, have turned them into the backyard of urban development. Madurai is one such historic city facing the issue of balancing the social, ecological, and economic requirements of the people with respect to the traditional lake system. To find a solution to the problems caused by the neglect of these vital ecological systems, the theory of transformative placemaking through water sensitive urban design has been explored. This approach re-invents the relationship between people and urban lakes to suit modern aspirations while respecting the environment. The thesis aims to develop strategies to guide development along the major urban lake of Vandiyur, to equip the lake to meet the growing recreational requirements of the megacity and give a renewed connection between people and water. The intent of the design is to understand the ecological and social structures of the lake and find ways to use it to produce social cohesion within the community and balance the city's economic and ecological requirements through transformative placemaking and water sensitive urban design.

Keywords: urban lakes, urban blue spaces, placemaking, revitalisation of lakes, urban cohesion

Procedia PDF Downloads 61
32448 The Strategic Importance of Technology in the International Production: Beyond the Global Value Chains Approach

Authors: Marcelo Pereira Introini

Abstract:

The global value chains (GVC) approach contributes to a better understanding of the organization of international production amid globalization’s second unbundling from the 1970s onwards. Mainly due to the tools that help to understand the importance of critical competences, technological capabilities, and functions performed by each player, GVC research flourished in recent years, rooted in discussing the possibilities of integration and repositioning along regional and global value chains. In this context, part of the literature endorsed a more optimistic view that engaging in fragmented production networks could represent learning opportunities for developing countries’ firms, since the relationship with transnational corporations could allow them to build skills and competences. Increasing recognition that GVCs are based on asymmetric power relations, however, provided another view of the benefits, costs, and development possibilities. Since leading companies tend to restrict the replication of their technologies and capabilities by their suppliers, alternative strategies beyond functional specialization, seen as a way to integrate value chains, began to be broadly highlighted. This paper organizes a coherent narrative about the shortcomings of the GVC analytical framework while recognizing its multidimensional contributions and recent developments. We adopt two different and complementary perspectives to explore the idea of integration in international production. On one hand, we emphasize obstacles beyond production components, analyzing the role played by intangible assets and intellectual property regimes. On the other hand, we consider the importance of domestic production and innovation systems for technological development.
In order to provide a deeper understanding of the restrictions on technological learning of developing countries’ firms, we first build on the notion of intellectual monopoly to analyze how flagship companies can prevent subordinated firms from improving their positions in fragmented production networks. Based on intellectual property protection regimes, we discuss the increasing asymmetries between these players and the decreasing access of part of them to strategic intangible assets. Second, we debate the role of productive-technological ecosystems and of interactive and systemic technological development processes, as concepts of the Innovation Systems approach. Supporting the idea that not only are endogenous advantages important for the international competitiveness of developing countries’ firms, but also that the building of these advantages can itself be a source of technological learning, we focus on local efforts as a crucial element, not replaceable by technology imported from abroad. Finally, the paper contributes to the discussion about technological development as a two-dimensional dynamic. If GVC analysis tends to underline a company-based perspective, stressing the learning opportunities associated with GVC integration, the historical involvement of national States brings up the debate about technology as a central aspect of interstate disputes. In this sense, technology is seen as part of military modernization before being also used in civil contexts, which presupposes its role for national security and productive autonomy strategies. From this outlook, it is important to consider it as an asset that, incorporated in sophisticated machinery, can be the target of state policies beyond the protection provided by intellectual property regimes, such as export controls and inward-investment restrictions.

Keywords: global value chains, innovation systems, intellectual monopoly, technological development

Procedia PDF Downloads 67
32447 Designing Mobile Application to Motivate Young People to Visit Cultural Heritage Sites

Authors: Yuko Hiramatsu, Fumihiro Sato, Atsushi Ito, Hiroyuki Hatano, Mie Sato, Yu Watanabe, Akira Sasaki

Abstract:

This paper presents a mobile phone application developed for sightseeing in Nikko, one of the cultural World Heritage Sites in Japan, using BLE (Bluetooth Low Energy) beacons. Based on our pre-research, we decided to design the application for young people who walk around the area actively but know little about the tradition and culture of Nikko. One solution would be to construct many information boards; however, it is difficult to build new guide plates in cultural World Heritage Sites. The smartphone is a good way to deliver such information to these visitors. The application was designed using a combination of the smartphone and beacons set in the area, so that when a tourist passes near a beacon, the application displays information about the surroundings, including a map, historical and cultural information about the temples and shrines, and local shops nearby, as well as a bus timetable. It is useful for foreign visitors, too. In addition, we developed quizzes relating to the culture and tradition of Nikko to provide information based on the Zeigarnik effect, a psychological effect. According to the results of our trials, tourists evaluated the basic information positively, and young people who used the quiz function were able to learn the historical and cultural points. The application helped young visitors to Nikko understand the cultural elements of the site. In addition, the application has a function to send notifications about the local community, such as shops, local transportation companies, and the information office. We hope the application will also encourage people living in the area; such cooperation from local people will make the application vivid and inspire young visitors to feel that the cultural heritage site is still alive today. It is a gateway for young people to learn about a traditional place and understand the importance of preserving such areas.
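The beacon-triggered lookup at the heart of such an application can be sketched in a few lines: when the phone observes a beacon above an RSSI threshold, the app shows the content bound to that beacon id. The beacon ids, content strings, and threshold below are invented for illustration and are not taken from the paper.

```python
# Hypothetical sketch of beacon-triggered content delivery.
CONTENT = {
    "beacon-toshogu": "Toshogu Shrine: history, map, nearby shops, bus times",
    "beacon-shinkyo": "Shinkyo Bridge: legend of the sacred bridge",
}
RSSI_THRESHOLD = -70  # dBm; closer beacons report higher (less negative) RSSI

def content_for_sighting(beacon_id, rssi):
    """Return the content to display, or None if the beacon is too far away."""
    if rssi < RSSI_THRESHOLD:
        return None
    return CONTENT.get(beacon_id)

print(content_for_sighting("beacon-shinkyo", rssi=-60))  # in range: show content
print(content_for_sighting("beacon-shinkyo", rssi=-90))  # None: out of range
```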

Keywords: BLE beacon, smartphone application, Zeigarnik effect, world heritage site, school trip

Procedia PDF Downloads 310
32446 River Network Delineation from Sentinel 1 Synthetic Aperture Radar Data

Authors: Christopher B. Obida, George A. Blackburn, James D. Whyatt, Kirk T. Semple

Abstract:

In many regions of the world, especially in developing countries, river network data are outdated or completely absent, yet such information is critical for supporting important functions such as flood mitigation efforts, land use and transportation planning, and the management of water resources. In this study, a method was developed for delineating river networks using Sentinel 1 imagery. Unsupervised classification was applied to multi-temporal Sentinel 1 data to discriminate water bodies from other land covers, then the outputs were combined to generate a single persistent water bodies product. A thinning algorithm was then used to delineate river centre lines, which were converted into vector features and built into a topologically structured geometric network. The complex river system of the Niger Delta was used to compare the performance of the Sentinel-based method against alternative freely available water body products from the United States Geological Survey, the European Space Agency and OpenStreetMap, and a river network derived from a Shuttle Radar Topography Mission Digital Elevation Model. From both raster-based and vector-based accuracy assessments, it was found that the Sentinel-based river network products were superior to the comparator data sets by a substantial margin. The geometric river network that was constructed permitted a flow routing analysis, which is important for a variety of environmental management and planning applications. The extracted network will potentially be applied to modelling the dispersion of hydrocarbon pollutants in Ogoniland, a part of the Niger Delta. The approach developed in this study holds considerable potential for generating up-to-date, detailed river network data for the many countries where such data are deficient.
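The multi-temporal combination step can be sketched as follows: per-date binary water masks are merged into a single persistent-water product by keeping pixels classified as water in at least a given fraction of the acquisition dates. This is an assumed reading of "combined to generate a single persistent water bodies product", not the authors' code, and the grids and threshold are illustrative.

```python
# Sketch: combine per-date binary water masks into a persistent-water mask.
def persistent_water(masks, min_fraction=0.8):
    """masks: list of equally sized 2D 0/1 grids, one per acquisition date."""
    n_dates = len(masks)
    rows, cols = len(masks[0]), len(masks[0][0])
    return [[1 if sum(m[r][c] for m in masks) / n_dates >= min_fraction else 0
             for c in range(cols)] for r in range(rows)]

dates = [
    [[1, 0], [1, 1]],  # classification for date 1
    [[1, 0], [0, 1]],  # date 2
    [[1, 1], [1, 1]],  # date 3
]
print(persistent_water(dates, min_fraction=0.8))  # [[1, 0], [0, 1]]
```

Only pixels flagged as water on all three dates survive the 0.8 threshold, filtering out seasonal flooding and classification noise before the thinning step.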

Keywords: Sentinel 1, image processing, river delineation, large scale mapping, data comparison, geometric network

Procedia PDF Downloads 126
32445 The Importance of Changing the Traditional Mode of Higher Education in Bangladesh: Creating Huge Job Opportunities for Home and Abroad

Authors: M. M. Shahidul Hassan, Omiya Hassan

Abstract:

Bangladesh has set its goal to reach upper middle-income country status by 2024. To attain this status, the country must satisfy the World Bank requirement of achieving a minimum Gross National Income (GNI). The number of young job seekers in the country is increasing, and university graduates are looking for decent jobs. So, the vital issue for the country is to understand how GNI and jobs can be increased. The objective of this paper is to address these issues and find ways to create more job opportunities for youths at home and abroad, which will increase the country’s GNI. The paper studies the proportions of different goods Bangladesh exports, as well as the percentage of employment in different sectors. The data used for the analysis have been collected from the available literature, then plotted and analyzed. Through these studies, it is concluded that growth in sectors like agriculture, ready-made garments (RMG), jute industries, and fisheries is declining, and that the business community is not interested in setting up capital-intensive industries. Under this situation, the country needs to explore other business opportunities for a higher economic growth rate. Knowledge can substitute for physical resources. Since the country has a large youth population, higher education will play a key role in economic development. It now needs graduates with higher-order skills and innovative quality. Such dispositions demand changes in university curricula and in teaching and assessment methods, which will let young generations function as active learners and creators. By bringing these changes to higher education, a knowledge-based society can be created. The application of such knowledge and creativity will then become a commodity of Bangladesh, helping it reach its goal of becoming an upper middle-income country.

Keywords: Bangladesh, economic sectors, economic growth, higher education, knowledge-based economy, massification of higher education, teaching and learning, universities’ role in society

Procedia PDF Downloads 151
32444 A Generalisation of Pearson's Curve System and Explicit Representation of the Associated Density Function

Authors: S. B. Provost, Hossein Zareamoghaddam

Abstract:

A univariate density approximation technique is introduced whereby the derivative of the logarithm of a density function is assumed to be expressible as a rational function. This approach, which extends Pearson’s curve system, is solely based on the moments of a distribution up to a determinable order. Upon solving a system of linear equations, the coefficients of the polynomial ratio can readily be identified. An explicit solution to the integral representation of the resulting density approximant is then obtained. It will be explained that, when utilised in conjunction with sample moments, this methodology lends itself to the modelling of ‘big data’. Applications to sets of univariate and bivariate observations will be presented.
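Since the method is driven entirely by moments up to a determinable order, the first ingredient in a sample-based application is the computation of raw sample moments, which would then feed the linear system for the polynomial-ratio coefficients. The sketch below shows only that moment-computation step, under the assumption that raw (non-central) moments are used; the setup and solution of the linear system are omitted.

```python
# Sketch of the moment-computation step that feeds the linear system.
def sample_moments(data, max_order):
    """Return raw sample moments m_1, ..., m_{max_order}."""
    n = len(data)
    return [sum(x ** k for x in data) / n for k in range(1, max_order + 1)]

data = [1.0, 2.0, 3.0, 4.0]
print(sample_moments(data, 3))  # [2.5, 7.5, 25.0]
```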

Keywords: density estimation, log-density, moments, Pearson's curve system

Procedia PDF Downloads 263
32443 A Numerical Study on Micromechanical Aspects in Short Fiber Composites

Authors: I. Ioannou, I. M. Gitman

Abstract:

This study focuses on the contribution of micro-mechanical parameters to the macro-mechanical response of short fiber composites, namely a polypropylene matrix reinforced by glass fibers. In the framework of this paper, attention has been given to glass fiber length as a micromechanical parameter that influences the overall macroscopic material behavior. Three-dimensional numerical models were developed and analyzed through the concept of a Representative Volume Element (RVE). Results of the RVE-based approach were compared with the analytical Halpin-Tsai model.
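For reference, the analytical comparator cited here, the Halpin-Tsai model, has a standard textbook form for the longitudinal modulus of a short fiber composite, with the shape factor taken as twice the fiber aspect ratio. The sketch below uses illustrative glass/polypropylene values, not the paper's data.

```python
# Standard Halpin-Tsai estimate of the longitudinal modulus of a short
# fiber composite; xi = 2 * (fiber length / diameter) for this direction.
def halpin_tsai(E_f, E_m, v_f, aspect_ratio):
    """E_f, E_m: fiber and matrix moduli; v_f: fiber volume fraction."""
    xi = 2.0 * aspect_ratio
    eta = (E_f / E_m - 1.0) / (E_f / E_m + xi)
    return E_m * (1.0 + xi * eta * v_f) / (1.0 - eta * v_f)

# Glass fiber in polypropylene, illustrative values (moduli in GPa):
E = halpin_tsai(E_f=72.0, E_m=1.5, v_f=0.2, aspect_ratio=20)
print(round(E, 2))  # ~8.85 GPa
```

The aspect-ratio dependence of `xi` is exactly why fiber length matters in such models: longer fibers raise the shape factor and push the estimate toward the rule-of-mixtures upper bound.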

Keywords: effective properties, homogenization, representative volume element, short fiber reinforced composites

Procedia PDF Downloads 253
32442 Virtual Container Yard: Assessing the Perceived Impact of Legal Implications to Container Carriers

Authors: L. Edirisinghe, P. Mukherjee, H. Edirisinghe

Abstract:

Virtual Container Yard (VCY) is a modern concept that helps reduce the empty container repositioning cost of carriers. The concept of VCY is based on container interchange between shipping lines. Although this mechanism has been theoretically accepted by the shipping community as a feasible solution, it has not yet achieved the necessary momentum among container shipping lines (CSLs). This paper investigates whether there is any legal influence behind this industry myopia about the VCY. It is believed that this is the first publication that focuses on the legal aspects of container exchange between carriers, as not much literature on the subject is available. This study establishes with statistical evidence that a phobia prevails in the shipping industry that exchanging containers with other carriers may lead to various legal implications. The complexity of the exchange is two-faceted: CSLs assume that offering a container to another carrier (obviously a competitor in the commercial context), or using a container offered by another carrier, may lead to undue legal implications. This research reveals that this fear is reflected through four types of perceived components, namely: shipping associate, warehouse associate, network associate, and trading associate. These components carry eighteen subcomponents that comprehensively cover the entire process of a container shipment. The statistical explanation has been supported through regression analysis; INCO terms were used to illustrate the shipping process.

Keywords: virtual container yard, legal, maritime law, inventory

Procedia PDF Downloads 153
32441 Short-Term Forecast of Wind Turbine Production with Machine Learning Methods: Direct Approach and Indirect Approach

Authors: Mamadou Dione, Eric Matzner-lober, Philippe Alexandre

Abstract:

The Energy Transition Act defined by the French State has precise implications for renewable energies, in particular for their remuneration mechanism. Until now, a purchase obligation contract permitted the sale of wind-generated electricity at a fixed rate. Tomorrow, it will be necessary to sell this electricity on the market (at variable rates) before obtaining additional compensation intended to reduce the risk. Selling on the market requires announcing in advance (about 48 hours before) the production that will be delivered to the network, hence the need to predict this production in the short term. The fundamental problem remains the variability of the wind, accentuated by the geographical situation. The objective of the project is to provide, every day, short-term forecasts (48-hour horizon) of wind production using weather data. The predictions of the GFS model and those of the ECMWF model are used as explanatory variables. The variable to be predicted is the production of a wind farm. We follow two approaches: a direct approach that predicts wind generation directly from weather data, and an indirect approach that estimates wind speed from weather data and converts it into wind power through power curves. We used machine learning techniques to predict this production. The models tested are random forests, CART + Bagging, CART + Boosting, and SVM (Support Vector Machines). The application is made on a wind farm of 22 MW (11 wind turbines) belonging to the Compagnie du Vent (now Engie Green France). Our results are very conclusive compared to the literature.
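The conversion step of the indirect approach can be sketched as a power-curve lookup: predicted wind speed is mapped to power by interpolating between tabulated curve points. The curve values below (cut-in near 3 m/s, rated near 12 m/s, cut-out at 25 m/s) are generic illustrative numbers, not the actual specification of the turbines in the study.

```python
# Sketch of the indirect approach's conversion step: wind speed -> power
# via linear interpolation of a tabulated turbine power curve.
from bisect import bisect_left

# (wind speed m/s, power kW) points, illustrative only:
CURVE = [(0, 0), (3, 0), (6, 400), (9, 1400), (12, 2000), (25, 2000)]

def power_from_speed(speed):
    speeds = [s for s, _ in CURVE]
    if speed <= speeds[0]:
        return 0.0
    if speed >= speeds[-1]:
        return 0.0  # beyond cut-out, the turbine shuts down
    i = bisect_left(speeds, speed)
    (s0, p0), (s1, p1) = CURVE[i - 1], CURVE[i]
    return p0 + (p1 - p0) * (speed - s0) / (s1 - s0)

print(power_from_speed(7.5))   # halfway between 400 and 1400 -> 900.0
print(power_from_speed(30.0))  # 0.0 beyond cut-out
```

In the direct approach, by contrast, a learner such as a random forest maps the weather features straight to production, with no explicit power-curve step.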

Keywords: forecast aggregation, machine learning, spatio-temporal dynamics modeling, wind power forecast

Procedia PDF Downloads 202
32440 A Kierkegaardian Reading of Iqbal's Poetry as a Communicative Act

Authors: Sevcan Ozturk

Abstract:

The overall aim of this paper is to present a Kierkegaardian approach to Iqbal’s use of literature as a form of communication. Despite belonging to different historical, cultural, and religious backgrounds, the philosophical approaches of Soren Kierkegaard, ‘the father of existentialism,' and Muhammad Iqbal, ‘the spiritual father of Pakistan,’ present certain parallels. Both Kierkegaard and Iqbal take human existence as the starting point for their reflections, emphasise the subject of becoming genuine religious personalities, and develop a notion of the self. While doing so, they both adopt parallel methods, employ literary techniques and poetical forms, and use their literary works as a form of communication. The problem is that Iqbal does not provide a clear account of his method as Kierkegaard does in his works. As a result, Iqbal’s literary approach appears to be a collection of contradictions. This is mainly because, despite writing most of his works in poetical form, he condemns all kinds of art, including poetry. Moreover, while attacking Islamic mysticism, he at the same time uses classical literary forms and a number of traditional mystical, poetic symbols. This paper will argue that the contradictions found in Iqbal’s approach are actually a significant part of his way of communicating with his readers. It is the contention of this paper that, with the help of the parallels between the literary and philosophical theories of Kierkegaard and Iqbal, the application of Kierkegaard’s method to Iqbal’s use of poetry as a communicative act will make it possible to dispel the seeming ambiguities in Iqbal’s literary approach. The application of Kierkegaard’s theory to Iqbal’s literary method will include an analysis of the main principles of Kierkegaard’s own literary technique of ‘indirect communication,' which is a crucial term of his existentialist philosophy.
Second, the clash between what Iqbal says about art and poetry and what he does will be highlighted in the light of the Kierkegaardian theory of indirect communication. It will be argued that Iqbal’s literary technique can be considered a form of ‘indirect communication,' and that reading his technique in this way helps dispel the contradictions in his approach. It is hoped that this paper will cultivate a dialogue between those who work in the fields of comparative philosophy, Kierkegaard studies, existentialism, contemporary Islamic thought, Iqbal studies, and literary criticism.

Keywords: comparative philosophy, existentialism, indirect communication, intercultural philosophy, literary communication, Muhammad Iqbal, Soren Kierkegaard

Procedia PDF Downloads 314
32439 Evaluation of the CRISP-DM Business Understanding Step: An Approach for Assessing the Predictive Power of Regression versus Classification for the Quality Prediction of Hydraulic Test Results

Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter

Abstract:

Digitalisation in production technology is a driver for the application of machine learning methods. Through predictive quality, the considerable potential for saving quality-control effort can be exploited via the data-based prediction of product quality and states. However, the serial use of machine learning applications is often prevented by various problems. Fluctuations occur in real production data sets, which are reflected in trends and systematic shifts over time. To counteract these problems, data preprocessing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets to extract stable features. Successful process control of the target variables aims to centre the measured values around a mean and to minimise variance. Competitive leaders claim to have mastered their processes; as a result, much of the real data has a relatively low variance. For the training of prediction models, the highest possible generalisability is required, which this data situation makes all the more difficult. The implementation of a machine learning application can itself be interpreted as a production process. The CRoss Industry Standard Process for Data Mining (CRISP-DM) is a process model with six phases that describes the life cycle of data science. As in any process, the cost of eliminating errors increases significantly with each advancing process phase. For the quality prediction of hydraulic test steps of directional control valves, the question arises in the initial phase whether a regression or a classification is more suitable. In the context of this work, the initial phase of CRISP-DM, the business understanding, is critically compared for the use case at Bosch Rexroth with regard to regression and classification.
The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predicting the quality characteristics of workpieces. Suitable methods for leakage volume flow regression and for classification of the inspection decision are applied. Classification proves clearly superior to regression and achieves promising accuracies.
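The framing choice weighed in the business-understanding phase can be made concrete: the same leakage volume flow target can feed a regression (predict the value) or a classification (predict pass/fail against a tolerance). The tolerance and measurements below are invented for illustration, not Bosch Rexroth data.

```python
# Sketch: converting a continuous regression target (leakage volume flow)
# into binary inspection labels for the classification framing.
TOLERANCE = 5.0  # maximum admissible leakage volume flow, arbitrary units

def to_inspection_labels(leakage_values, tolerance=TOLERANCE):
    """Map measured leakage values to pass/fail inspection decisions."""
    return ["pass" if v <= tolerance else "fail" for v in leakage_values]

measured = [1.2, 4.9, 5.1, 7.3]
print(to_inspection_labels(measured))  # ['pass', 'pass', 'fail', 'fail']
```

Framed this way, the classifier only has to learn the decision boundary around the tolerance, which is one plausible reason the classification outperforms the full-value regression on low-variance production data.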

Keywords: classification, CRISP-DM, machine learning, predictive quality, regression

Procedia PDF Downloads 132