Search results for: student academic performance
887 Groundwater Potential Mapping using Frequency Ratio and Shannon’s Entropy Models in Lesser Himalaya Zone, Nepal
Authors: Yagya Murti Aryal, Bipin Adhikari, Pradeep Gyawali
Abstract:
The Lesser Himalaya zone of Nepal consists of thrusting and folding belts, which play an important role in the sustainable management of groundwater in the Himalayan regions. The study area is located in the Dolakha and Ramechhap Districts of Bagmati Province, Nepal. Geologically, these districts are situated in the Lesser Himalayas and partly encompass the Higher Himalayan rock sequence, which includes low-grade to high-grade metamorphic rocks. Following the Gorkha Earthquake in 2015, numerous springs dried up, and many others are currently experiencing depletion due to the distortion of the natural groundwater flow. The primary objective of this study is to identify potential groundwater areas and determine suitable sites for artificial groundwater recharge. Two distinct statistical approaches were used to develop models: the Frequency Ratio (FR) and Shannon's Entropy (SE) methods. The study utilized both primary and secondary datasets and incorporated significant causative and controlling factors derived from fieldwork and literature review. Field data collection involved spring inventory, soil analysis, lithology assessment, and hydro-geomorphology study. Additionally, slope, aspect, drainage density, and lineament density were extracted from a Digital Elevation Model (DEM) using GIS and transformed into thematic layers. For training and validation, the 114 inventoried springs were divided in a 70/30 ratio, with an equal number of non-spring pixels. After assigning weights to each class based on the two proposed models, a groundwater potential map was generated using GIS, classifying the area into five levels: very low, low, moderate, high, and very high. The models' outcomes reveal that over 41% of the area falls into the low and very low potential categories, while only 30% of the area demonstrates a high probability of groundwater potential. To evaluate model performance, accuracy was assessed using the Area Under the Curve (AUC).
The success rate AUC values for the FR and SE methods were determined to be 78.73% and 77.09%, respectively. Additionally, the prediction rate AUC values for the FR and SE methods were calculated as 76.31% and 74.08%, respectively. The results indicate that the FR model exhibits greater prediction capability than the SE model in this case study.
Keywords: groundwater potential mapping, frequency ratio, Shannon’s Entropy, Lesser Himalaya Zone, sustainable groundwater management
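For readers unfamiliar with the two weighting schemes, the class weights can be sketched as follows. This is an illustrative computation with made-up pixel counts, not the paper's data: the FR of a class is the ratio of its spring density to the overall spring density, and Shannon's entropy turns the spread of FR values across a factor's classes into a factor weight.

```python
# Illustrative sketch (hypothetical counts): FR and SE weights for one
# thematic factor (e.g. slope, reclassified into 4 classes).
import math

spring_pixels = [30, 40, 20, 10]          # spring occurrences per class
class_pixels  = [1000, 2000, 3000, 4000]  # total pixels per class

total_springs = sum(spring_pixels)
total_pixels = sum(class_pixels)

# Frequency Ratio: spring density in a class relative to overall density;
# FR > 1 marks classes with above-average spring occurrence.
fr = [(s / total_springs) / (c / total_pixels)
      for s, c in zip(spring_pixels, class_pixels)]

# Shannon's Entropy weighting for the factor as a whole:
p = [f / sum(fr) for f in fr]                       # normalized FR
h = -sum(pi * math.log2(pi) for pi in p if pi > 0)  # entropy of the factor
h_max = math.log2(len(fr))                          # max entropy (uniform)
info = (h_max - h) / h_max                          # information coefficient
weight = info * (sum(fr) / len(fr))                 # SE weight of the factor
```

A factor whose classes discriminate springs strongly has low entropy and therefore a high information coefficient, so it contributes more to the final potential map.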
Procedia PDF Downloads 81
886 Profiling Risky Code Using Machine Learning
Authors: Zunaira Zaman, David Bohannon
Abstract:
This study explores the application of machine learning (ML) for detecting security vulnerabilities in source code. The research aims to assist organizations with large application portfolios and limited security testing capabilities in prioritizing security activities. ML-based approaches offer benefits such as confidence scoring, tuning of false positives and negatives, and automated feedback. The initial approach, using natural language processing techniques to extract features, achieved 86% accuracy during the training phase but suffered from overfitting and performed poorly on unseen datasets during testing. To address these issues, the study proposes using the abstract syntax tree (AST) for Java and C++ codebases to capture code semantics and structure and to generate path-context representations for each function. The Code2Vec model architecture is used to learn distributed representations of source code snippets for training a machine-learning classifier for vulnerability prediction. The study evaluates the performance of the proposed methodology using two datasets and compares the results with existing approaches. The Devign dataset yielded 60% accuracy in predicting vulnerable code snippets and helped resist overfitting, while the Juliet Test Suite enabled prediction of specific vulnerabilities such as OS command injection, cryptographic, and cross-site scripting vulnerabilities. The Code2Vec model achieved 75% accuracy and a 98% recall rate in predicting OS command injection vulnerabilities. The study concludes that even partial AST representations of source code can be useful for vulnerability prediction. The approach has the potential for automated intelligent analysis of source code, including vulnerability prediction on unseen source code. State-of-the-art models using natural language processing techniques and CNN models with ensemble modelling techniques did not generalize well on unseen data and faced overfitting issues.
However, predicting vulnerabilities in source code using machine learning poses challenges such as high dimensionality and complexity of source code, imbalanced datasets, and identifying specific types of vulnerabilities. Future work will address these challenges and expand the scope of the research.
Keywords: code embeddings, neural networks, natural language processing, OS command injection, software security, code properties
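The path-context representation described in the abstract can be illustrated with Python's standard `ast` module, used here as a stand-in for the Java/C++ front ends of the study: each context is a pair of AST leaves joined by the syntactic path between them, which Code2Vec then embeds and aggregates with attention (the embedding step is omitted in this sketch).

```python
# Minimal sketch of Code2Vec-style path-context extraction (not the
# study's pipeline): leaves are identifiers/constants, and the path is
# the chain of AST node types through their lowest common ancestor.
import ast
import itertools

def leaf_tokens(tree):
    """Collect (token, path-from-root) pairs for identifier/constant leaves."""
    leaves = []
    def walk(node, path):
        path = path + [type(node).__name__]
        if isinstance(node, ast.Name):
            leaves.append((node.id, path))
        elif isinstance(node, ast.Constant):
            leaves.append((repr(node.value), path))
        for child in ast.iter_child_nodes(node):
            walk(child, path)
    walk(tree, [])
    return leaves

def path_contexts(source):
    """Yield (leaf_a, connecting_path, leaf_b) triples for each leaf pair."""
    leaves = leaf_tokens(ast.parse(source))
    for (tok_a, pa), (tok_b, pb) in itertools.combinations(leaves, 2):
        i = 0                      # length of the shared root prefix
        while i < min(len(pa), len(pb)) and pa[i] == pb[i]:
            i += 1
        # go up from leaf_a, through the lowest common ancestor, down to leaf_b
        path = pa[i:][::-1] + pa[i - 1:i] + pb[i:]
        yield tok_a, "^".join(path), tok_b

contexts = list(path_contexts("def f(x):\n    return x + 1"))
```

For `return x + 1` this yields a single context linking `x` and `1` through the `BinOp` node, mirroring how even partial AST structure carries the semantic signal the abstract reports.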
Procedia PDF Downloads 106
885 A Geometric Based Hybrid Approach for Facial Feature Localization
Authors: Priya Saha, Sourav Dey Roy Jr., Debotosh Bhattacharjee, Mita Nasipuri, Barin Kumar De, Mrinal Kanti Bhowmik
Abstract:
Biometric face recognition technology (FRT) has gained a lot of attention due to its extensive variety of applications from both security and non-security perspectives. It has emerged as a secure solution for the identification and verification of a person's identity. Although other biometric methods such as fingerprint and iris scans are available, FRT has proven to be an efficient technology owing to its user-friendliness and contactless operation. Accurate facial feature localization plays an important role in many facial analysis applications, including biometrics and emotion recognition. However, certain factors make facial feature localization a challenging task. On the human face, expressions arise from the subtle movements of facial muscles and are influenced by internal emotional states. These non-rigid facial movements cause noticeable alterations in the locations of facial landmarks and their usual shapes, sometimes creating occlusions in facial feature areas that make face recognition a difficult problem. The paper proposes a new hybrid technique for automatic landmark detection in both neutral and expressive frontal and near-frontal face images. The method uses the concepts of thresholding, sequential searching and other image processing techniques for locating the landmark points on the face. In addition, a Graphical User Interface (GUI) based software tool is designed that can automatically detect 16 landmark points around the eyes, nose and mouth that are most affected by changes in the facial muscles. The proposed system has been tested on the widely used JAFFE and Cohn-Kanade databases. The system is also tested on the DeitY-TU face database, which was created in the Biometrics Laboratory of Tripura University under a research project funded by the Department of Electronics & Information Technology, Govt. of India. The performance of the proposed method has been evaluated in terms of error measure and accuracy.
The method has a detection rate of 98.82% on the JAFFE database, 91.27% on the Cohn-Kanade database and 93.05% on the DeitY-TU database. We have also carried out a comparative study of the proposed method against techniques developed by other researchers. In future work, this research will focus on emotion-oriented systems through action unit (AU) detection based on the located features.
Keywords: biometrics, face recognition, facial landmarks, image processing
Procedia PDF Downloads 412
884 Issues of Accounting of Lease and Revenue according to International Financial Reporting Standards
Authors: Nadezhda Kvatashidze, Elena Kharabadze
Abstract:
It is broadly known that lease is a flexible means of funding enterprises. Lease reduces the risk related to access and possession of assets, as well as the obtainment of funding. Therefore, it is important to refine lease accounting. The lease accounting regulations under the applicable standard (International Accounting Standard 17) make concealment of liabilities possible. As a result, information users get inaccurate and incomprehensive information and have to resort to an additional assessment of the off-balance-sheet lease liabilities. In order to address the problem, the International Accounting Standards Board decided to change the approach to lease accounting. With the deficiencies of the applicable standard taken into account, the new standard (IFRS 16 ‘Leases’) aims at supplying appropriate and fair lease-related information to the users. Save for certain exclusions, the lessee is obliged to recognize all lease agreements in its financial report. The approach was determined by the fact that, under the lease agreement, rights and obligations arise by way of assets and liabilities. Immediately upon conclusion of the lease agreement, the lessee takes an asset into its disposal and assumes the obligation to effect the lease-related payments, which meets the recognition criteria defined by the Conceptual Framework for Financial Reporting; the payments are therefore to be entered into the financial report. The new lease accounting standard secures the supply of quality and comparable information to financial information users. The International Accounting Standards Board and the US Financial Accounting Standards Board jointly developed IFRS 15 ‘Revenue from Contracts with Customers’.
The standard establishes detailed practical criteria for revenue recognition, such as the identification of the performance obligations in the contract, the determination of the transaction price and its components, especially variable consideration and other important components, as well as the passage of control over the asset to the customer. IFRS 15 ‘Revenue from Contracts with Customers’ is very similar to the relevant US standards and includes requirements more specific and consistent than those of the standards previously in place. The new standard is going to change recognition terms and techniques in industries such as construction, telecommunications (mobile and cable networks), licensing (media, science, franchising), real property, software, etc.
Keywords: assessment of the lease assets and liabilities, contractual liability, division of contract, identification of contracts, contract price, lease identification, lease liabilities, off-balance sheet, transaction value
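The balance-sheet effect the abstract attributes to IFRS 16 can be made concrete with a small present-value computation. The figures below (a 5-year lease of 10,000 per year discounted at 6%) are hypothetical, not taken from the paper.

```python
# Hedged illustration of the IFRS 16 lessee model: the lease liability is
# the present value of the remaining lease payments, discounted at the
# rate implicit in the lease (or the incremental borrowing rate).

def lease_liability(payment, rate, years):
    """Present value of an ordinary annuity of fixed lease payments."""
    return sum(payment / (1 + rate) ** t for t in range(1, years + 1))

liability = lease_liability(payment=10_000, rate=0.06, years=5)
# Under IAS 17 an operating lease left this amount off the balance sheet;
# under IFRS 16 both a right-of-use asset and this liability are recognized.
```

The gap between the undiscounted payments (50,000) and this present value (about 42,124) is exactly the kind of off-balance-sheet exposure the abstract says users previously had to estimate themselves.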
Procedia PDF Downloads 320
883 Clean Sky 2 – Project PALACE: Aeration’s Experimental Sound Velocity Investigations for High-Speed Gerotor Simulations
Authors: Benoît Mary, Thibaut Gras, Gaëtan Fagot, Yvon Goth, Ilyes Mnassri-Cetim
Abstract:
A Gerotor pump is composed of an external and an internal gear with conjugate cycloidal profiles. From the suction to the delivery ports, the fluid is transported inside cavities formed by the teeth and driven by the shaft. From a geometric and conceptual standpoint, it is worth noting that the internal gear has one tooth less than the external one. Simcenter Amesim v.16 includes a new submodel for modelling the behavior of hydraulic Gerotor pumps (THCDGP0). This submodel considers leakages between teeth tips using Poiseuille and Couette flow contributions. From the 3D CAD model of the studied pump, the “CAD import” tool extracts the main geometrical characteristics, and the submodel THCDGP0 computes the evolution of each cavity volume and its relative position with respect to the suction or delivery areas. This module, based on international publications, gives robust results up to 6 000 rpm for pressures above atmospheric level. For higher rotational speeds or lower pressures, oil aeration and cavitation effects are significant and sharply degrade the pump’s performance. The liquid used in hydraulic systems always contains some gas, which is dissolved in the liquid at high pressure and tends to be released in free form (i.e. undissolved, as bubbles) when the pressure drops. In addition to gas release and dissolution, the liquid itself may vaporize due to cavitation. To model the relative density of the equivalent fluid, a modified Henry’s law is applied in Simcenter Amesim v.16 to predict the fraction of undissolved gas or vapor. Three parietal pressure sensors were set up upstream of the pump to estimate the sound speed in the oil. Analytical models were compared with the experimental sound speed to estimate the occluded gas content. The Simcenter Amesim v.16 model was fed with the results of these analyses, which successfully improved the simulation results up to 14 000 rpm.
This work provides a sound foundation for designing the next generation of Gerotor pumps, reaching rotation speeds above 25 000 rpm. The results of this improved module will be compared with tests on this new pump demonstrator.
Keywords: gerotor pump, high speed, numerical simulations, aeronautic, aeration, cavitation
Procedia PDF Downloads 133
882 A Green Process for Drop-In Liquid Fuels from Carbon Dioxide, Water, and Solar Energy
Authors: Jian Yu
Abstract:
Carbon dioxide (CO2) from fossil fuel combustion is a primary greenhouse gas emission. It can be mitigated by microalgae through conventional photosynthesis. Algal oil is a feedstock of biodiesel, a carbon-neutral liquid fuel for transportation. Conventional CO2 fixation, however, is quite slow and is affected by intermittent solar irradiation. It is also a technical challenge to reform the bio-oil into a drop-in liquid fuel that can be used directly in modern combustion engines with the expected performance. Here, an artificial photosynthesis system is presented to produce a biopolyester and liquid fuels from CO2, water, and solar power. In this green process, solar energy is captured using photovoltaic modules and converted into hydrogen, a stable energy source, via water electrolysis. The solar hydrogen is then used to fix CO2 by Cupriavidus necator, a hydrogen-oxidizing bacterium. Under autotrophic conditions, CO2 is reduced to glyceraldehyde-3-phosphate (G3P), which is further utilized for cell growth and biosynthesis of polyhydroxybutyrate (PHB). The maximum cell growth rate reached 10.1 g L-1 day-1, about 25 times faster than that of a typical bio-oil-producing microalga (Neochloris oleoabundans) under stable indoor conditions. With nitrogen nutrient limitation, a large portion of the reduced carbon is stored in PHB (C4H6O2)n, accounting for 50-60% of dry cell mass. PHB is a biodegradable thermoplastic that can find a variety of environmentally friendly applications. It is also a platform material from which small chemicals can be derived. At high temperature (240-290 °C), the biopolyester is degraded into crotonic acid (C4H6O2). On a solid phosphoric acid catalyst, PHB is deoxygenated via decarboxylation into a hydrocarbon oil (C6-C18) at around 240 °C. Aromatics and alkenes are the major compounds, depending on the reaction conditions.
A gasoline-grade liquid fuel (77 wt% of the oil) and a biodiesel-grade fuel (23 wt% of the oil) were obtained from the hydrocarbon oil via distillation. The formation routes of the hydrocarbon oil from crotonic acid, the major PHB degradation intermediate, are revealed and discussed. This work demonstrates a novel green process by which biodegradable plastics and high-grade liquid fuels can be produced directly from carbon dioxide, water and solar power. The productivity of the green polyester (5.3 g L-1 d-1) is much higher than that of microalgal oil (0.13 g L-1 d-1). Other technical merits of the new green process may include continuous operation under intermittent solar irradiation and convenient outdoor scale-up.
Keywords: bioplastics, carbon dioxide fixation, drop-in liquid fuels, green process
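A quick way to appreciate the productivity comparison the abstract makes is to compute the speed-up factors directly from the reported rates; the numbers below are the abstract's own figures, and the helper function is purely illustrative.

```python
# Productivity comparison using the rates reported in the abstract
# (g per litre per day); the ratios below reproduce the stated ~25x
# growth-rate advantage and the polyester/oil productivity gap.

def speedup(new_rate, reference_rate):
    """Ratio of a new process rate to a reference rate."""
    return new_rate / reference_rate

growth_speedup = speedup(10.1, 0.4)       # 0.4 g/L/day assumed for the alga
polyester_vs_oil = speedup(5.3, 0.13)     # PHB vs microalgal oil productivity
```

The reference algal growth rate (0.4 g/L/day) is an assumption back-derived from the abstract's "about 25 times faster" claim; the 5.3 and 0.13 g/L/day figures are quoted directly, giving a roughly 40-fold productivity advantage for the polyester route.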
Procedia PDF Downloads 189
881 Friction and Wear Characteristics of Diamond Nanoparticles Mixed with Copper Oxide in Poly Alpha Olefin
Authors: Ankush Raina, Ankush Anand
Abstract:
Plyometric training is a form of specialised strength training that uses fast muscular contractions to improve power and speed, and is used in sports conditioning by coaches and athletes. Despite its useful role in sports conditioning programmes, the effects of plyometric training on athletes' cardiovascular health, especially the electrocardiogram (ECG), have not been established in the literature. The purpose of the study was to determine the effects of lower- and upper-body plyometric training on the ECG of athletes. The study was guided by three null hypotheses. A quasi-experimental research design was adopted for the study. Seventy-two university male athletes constituted the population of the study. Thirty male athletes aged 18 to 24 years volunteered to participate in the study, but only twenty-three completed it. The volunteer athletes were apparently healthy, physically active, free of any lower- and upper-extremity bone injuries for the past year, and had no medical or orthopedic injuries that might affect their participation in the study. Ten subjects were purposively assigned to each of the three groups: lower-body plyometric training (LBPT), upper-body plyometric training (UBPT), and control (C). Training consisted of six plyometric exercises: lower-body (ankle hops, squat jumps, tuck jumps) and upper-body (push-ups, medicine-ball chest throws and side throws) with moderate intensity. The general data were collated and analysed using the Statistical Package for Social Science (SPSS version 22.0). The research questions were answered using mean and standard deviation, while a paired-samples t-test was used to test the hypotheses. The results revealed that athletes who were trained using LBPT had reduced ECG parameters compared with those in the control group.
The results also revealed that athletes trained using both LBPT and UBPT showed no significant differences from the control group in the ECG parameters following ten weeks of plyometric training, except in the Q-wave, R-wave and S-wave (QRS) complex. Based on the findings of the study, it was recommended, among others, that coaches should include both LBPT and UBPT as part of athletes' overall training programme from primary to tertiary institutions to optimise performance, reduce the risk of cardiovascular disease and promote a healthy lifestyle.
Keywords: boundary lubrication, copper oxide, friction, nano diamond
Procedia PDF Downloads 123
880 Predicting Recessions with Bivariate Dynamic Probit Model: The Czech and German Case
Authors: Lukas Reznak, Maria Reznakova
Abstract:
Recession of an economy has a profound negative effect on all involved stakeholders. It follows that timely prediction of recessions has been of utmost interest both in theoretical research and in practical macroeconomic modelling. The current mainstream of recession prediction is based on standard OLS models of continuous GDP using macroeconomic data. This approach is not suitable for two reasons: the standard continuous models are proving to be obsolete, and the macroeconomic data are unreliable, often revised many years retroactively. The aim of the paper is to explore a different branch of recession forecasting research theory and verify the findings on real data of the Czech Republic and Germany. In the paper, the authors present a family of discrete-choice probit models with parameters estimated by the method of maximum likelihood. In the basic form, the probits model a univariate series of recessions and expansions in the economic cycle for a given country. The majority of the paper deals with more complex model structures, namely dynamic and bivariate extensions. The dynamic structure models the autoregressive nature of recessions, taking into consideration previous economic activity to predict the development in subsequent periods. Bivariate extensions utilize information from a foreign economy by incorporating correlation of error terms and thus modelling the dependencies of the two countries. Bivariate models predict a bivariate time series of economic states in both economies and thus enhance the predictive performance. A vital enabler of timely and successful recession forecasting is reliable and readily available data. Leading indicators, namely the yield curve and stock market indices, represent an ideal data base, as this information is available in advance and does not undergo any retroactive revisions.
As importantly, the combination of the yield curve and stock market indices reflects a range of macroeconomic and financial-market investors' trends which influence the economic cycle. These theoretical approaches are applied to real data from the Czech Republic and Germany. Two models were identified for each country: one for in-sample and one for out-of-sample predictive purposes. All four followed a bivariate structure, while three contained a dynamic component.
Keywords: bivariate probit, leading indicators, recession forecasting, Czech Republic, Germany
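The univariate dynamic probit at the core of the model family above can be sketched as P(y_t = 1) = Φ(b0 + b1·spread_{t-1} + g·y_{t-1}), where y_t is the recession indicator and spread is the yield-curve term spread. The coefficients and data below are made up for illustration; in the paper they are estimated by maximum likelihood (and extended to a bivariate system with correlated errors).

```python
# Illustrative univariate dynamic probit (hypothetical coefficients):
# the lagged recession state y_{t-1} carries the autoregressive dynamics,
# and the lagged yield-curve spread acts as the leading indicator.
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def recession_probs(spread, y, b0=-0.8, b1=-0.9, g=1.5):
    """One-step-ahead recession probabilities for t = 1..T-1."""
    return [phi(b0 + b1 * spread[t - 1] + g * y[t - 1])
            for t in range(1, len(y))]

def log_likelihood(spread, y, **coef):
    """Bernoulli log-likelihood maximized during estimation."""
    probs = recession_probs(spread, y, **coef)
    return sum(math.log(p if yt else 1.0 - p)
               for p, yt in zip(probs, y[1:]))

spread = [1.2, 0.4, -0.3, -0.5, 0.1, 0.9]  # yield-curve term spreads
y      = [0,   0,   0,    1,    1,   0]    # recession indicator
probs = recession_probs(spread, y)
```

With b1 < 0, a flattening or inverted yield curve raises the implied recession probability, which is the mechanism the leading indicators capture.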
Procedia PDF Downloads 247
879 Complaint Management Mechanism: A Workplace Solution in Development Sector of Bangladesh
Authors: Nusrat Zabeen Islam
Abstract:
Partnership between local non-government organizations (NGOs) and international development organizations has become an important feature of the development sector of Bangladesh. Working with local NGOs under proper HR practice is an important challenge for international development organizations. Local NGOs often lack a quality working environment, which affects employees' work experiences and overall performance at the individual, partnership and organizational levels. Owing to their size and scope, many local development organizations do not have a human resource (HR) unit. Inadequate human resource policies, skills and leadership, and the lack of an effective strategy, are now a common scenario in the non-government organization sector of Bangladesh. As a result, corruption, nepotism, fraud, the risk of political contribution in the workspace, sexual and gender-based abuse, and insecurity occur in development-sector workplaces. A Complaint Management Mechanism (CMM) in human resource management could be one way to improve human resource competence in these organizations. The responsibility of the Complaint Management Unit (CMU) of an international development organization is to make the workplace free of maltreatment and discrimination. The information on the impact of the CMM was collected through a case study of an international organization and some of its partner national organizations in Bangladesh that are engaged in different projects/programs. In this mechanism, international development organizations collect complaints from beneficiaries and staff through a complaint management unit and investigate them, segregating the type and nature of each complaint, to find a solution that improves the situation within a very short period. A complaint management committee is formed jointly with HR and management personnel. A concerned focal point collects complaints and shares them with the CM unit.
Through investigation, review of findings, replies back to the CM unit, and implementation of resolutions, this mechanism establishes a successful bridge of communication and feedback among beneficiaries, staff and upper management. The overall results of applying the complaint management mechanism indicate that it can significantly increase the accountability and transparency of the workplace and workforce in development organizations. Evaluations based on outcomes, measuring indicators such as productivity, satisfaction, retention, gender equity and proper judgment, will guide organizations in building a healthy workforce, and will also clearly articulate the return on investment and justify any need for further funding.
Keywords: human resource management in NGOs, challenges in human resource, workplace environment, complaint management mechanism
Procedia PDF Downloads 322
878 Bayesian Estimation of Hierarchical Models for Genotypic Differentiation of Arabidopsis thaliana
Authors: Gautier Viaud, Paul-Henry Cournède
Abstract:
Plant growth models have been used extensively for the prediction of the phenotypic performance of plants. However, they most often remain calibrated for a given genotype and therefore do not take into account genotype-by-environment interactions. One way of achieving such an objective is to consider Bayesian hierarchical models. Three levels can be identified in such models: the first level describes how a given growth model describes the phenotype of the plant as a function of individual parameters; the second level describes how these individual parameters are distributed within a plant population; the third level corresponds to the attribution of priors on the population parameters. Thanks to the Bayesian framework, choosing appropriate priors for the population parameters makes it possible to derive analytical expressions for the full conditional distributions of these population parameters. As plant growth models are of a nonlinear nature, individual parameters cannot be sampled explicitly, and a Metropolis step must be performed. This allows for the use of a hybrid Gibbs-Metropolis sampler. A generic approach was devised for the implementation of both general state-space models and estimation algorithms within a programming platform. It was designed using the Julia language, which combines an elegant syntax, metaprogramming capabilities and high efficiency. Results were obtained for Arabidopsis thaliana on both simulated and real data. An organ-scale Greenlab model for the latter is thus presented, in which the surface area of each individual leaf can be simulated. It is assumed that the error made on the measurement of leaf areas is proportional to the leaf area itself; multiplicative normal noises for the observations are therefore used.
Real data were obtained via image analysis of zenithal images of Arabidopsis thaliana over a period of 21 days using a two-step segmentation and tracking algorithm which notably takes advantage of the Arabidopsis thaliana phyllotaxy. Since the model formulation is rather flexible, there is no need for the data of a single individual to be available at all times, nor for the observation times to be the same across individuals. This makes it possible to discard data from image analysis when they are not considered reliable enough, thereby providing low-biased data in large quantity for leaf areas. The proposed model precisely reproduces the dynamics of Arabidopsis thaliana's growth while accounting for the variability between genotypes. In addition to the estimation of the population parameters, the level of variability is an interesting indicator of the genotypic stability of model parameters. A promising perspective is to test whether some of the latter should be considered as fixed effects.
Keywords: bayesian, genotypic differentiation, hierarchical models, plant growth models
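The hybrid sampler described above can be sketched in miniature: individual parameters theta_i ~ N(mu, tau²) get a Metropolis step (the growth model being nonlinear, their full conditionals are not available in closed form), while the population mean mu has a conjugate normal full conditional and gets a Gibbs step. Everything below is a deliberately toy stand-in, with a one-parameter "growth model" in place of Greenlab and synthetic leaf-area data.

```python
# Toy Metropolis-within-Gibbs sketch (synthetic model and data):
# theta_i are per-plant log-scale growth parameters, mu the population mean.
import math, random

random.seed(1)

def log_lik(theta, obs, sigma=0.5):
    # stand-in nonlinear "growth model": predicted leaf area = exp(theta)
    return sum(-0.5 * ((o - math.exp(theta)) / sigma) ** 2 for o in obs)

data = [[2.6, 2.8], [1.4, 1.5], [2.0, 2.1]]  # per-plant leaf-area series
tau = 0.3                                    # population sd (held fixed here)
thetas = [0.5] * len(data)
mu = 0.5

for _ in range(2000):
    # Metropolis step for each individual parameter
    for i, obs in enumerate(data):
        prop = thetas[i] + random.gauss(0, 0.1)
        log_a = (log_lik(prop, obs) - log_lik(thetas[i], obs)
                 - 0.5 * ((prop - mu) ** 2 - (thetas[i] - mu) ** 2) / tau ** 2)
        if random.random() < math.exp(min(0.0, log_a)):
            thetas[i] = prop
    # Gibbs step for mu (flat prior): mu | thetas ~ N(mean(thetas), tau^2/n)
    mu = random.gauss(sum(thetas) / len(thetas), tau / math.sqrt(len(thetas)))
```

After burn-in, each theta_i settles near the log of its plant's mean leaf area, and mu tracks the population-level mean, which is the quantity whose spread the abstract uses as an indicator of genotypic stability.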
Procedia PDF Downloads 303
877 Educating through Design: Eco-Architecture as a Form of Public Awareness
Authors: Carmela Cucuzzella, Jean-Pierre Chupin
Abstract:
Eco-architecture today is increasingly assessed and judged on the basis of its environmental performance and its dedication to urgent stakes of sustainability. Architects have responded to environmental imperatives in novel ways since the 1960s. In the last two decades, however, different forms of eco-architecture practice have emerged that seem to be as dedicated to the issues of sustainability as to their ability to 'communicate' their ecological features. The hypothesis is that some contemporary eco-architecture has been developing a characteristic 'explanatory discourse', which can be identified in buildings around the world. Some eco-architecture practices do not simply demonstrate their alignment with pressing ecological issues; rather, these buildings also seem to be driven by the urgent need to explain their 'greenness'. The design specifically aims to teach visitors about the building's eco-qualities. These types of architectural practices are referred to in this paper as eco-didactic. The aim of this paper is to identify and assess this distinctive form of environmental architecture practice that aims to teach. These buildings constitute an entirely new form of design practice that places eco-messages squarely in the public realm. These eco-messages appear to have a variety of purposes: (i) to raise awareness of unsustainable quotidian habits, (ii) to become means of behavioral change, (iii) to publicly announce their responsibility through the designed eco-features, or (iv) to engage the patrons of the building in some form of sustainable interaction. To do this, a comprehensive review of Canadian eco-architecture since 1998 is conducted.
Their potential eco-didactic aspects are analysed through a lens of three vectors: (1) cognitive visitor experience, between the desire to inform and the poetics of form (are parts of the design dedicated to informing visitors of the environmental aspects?); (2) formal architectural qualities, between the visibility and the invisibility of environmental features (are these eco-features clearly visible to visitors?); and (3) communicative method for delivering the eco-message, where the transmission of knowledge is accomplished somewhere between consensus and dissensus (do visitors question the eco-features, or do they accept them as environmental features?). These architectural forms distinguish themselves in their crossing of disciplines, specifically architecture, environmental design, and art. They also differ from other architectural practices in terms of how they aim to mobilize different publics within various urban landscapes. The diversity of such buildings, from how and what they aim to communicate to the audiences they wish to engage, provides key parameters for better understanding their means of knowledge transfer. Cases from major cities across Canada are analysed, aiming to illustrate this increasing worldwide phenomenon.
Keywords: eco-architecture, public awareness, community engagement, didacticism, communication
Procedia PDF Downloads 124
876 A Conceptual Model of the 'Driver – Highly Automated Vehicle' System
Authors: V. A. Dubovsky, V. V. Savchenko, A. A. Baryskevich
Abstract:
The current trend in the automotive industry towards automated vehicles is creating new challenges related to human factors. This is because the driver is increasingly relieved of the need to be constantly involved in driving the vehicle, which can negatively impact his/her situation awareness when manual control is required, and decrease driving skills and abilities. These new problems need to be studied in order to ensure road safety during the transition towards self-driving vehicles. For this purpose, it is important to develop an appropriate conceptual model of the interaction between the driver and the automated vehicle, which could serve as a theoretical basis for the development of mathematical and simulation models to explore different aspects of driver behaviour in different road situations. Well-known driver behaviour models describe the impact of different stages of the driver's cognitive process on driving performance but do not describe how the driver controls and adjusts his actions. A more complete description of the driver's cognitive process, including the evaluation of the results of his/her actions, will make it possible to model various aspects of the human factor in different road situations more accurately. This paper presents a conceptual model of the 'driver – highly automated vehicle' system based on P. K. Anokhin's theory of functional systems, a theoretical framework for describing internal processes in purposeful living systems based on such notions as the goal, and the desired and actual results, of purposeful activity. A central feature of the proposed model is a dynamic coupling mechanism between the driver's decision to perform a particular action and changes in road conditions due to the driver's actions. This mechanism is based on the stage-by-stage evaluation of the deviations of the actual values of the driver's action-result parameters from the expected values.
The overall functional structure of the highly automated vehicle in the proposed model includes a driver/vehicle/environment state analyzer to coordinate the interaction between driver and vehicle. The proposed conceptual model can be used as a framework to investigate different aspects of human factors in transitions between automated and manual driving for future improvements in driving safety, and for understanding how the driver-vehicle interface should be designed for comfort and safety. A major finding of this study is the demonstration that the theory of functional systems is promising and has the potential to describe the interaction of the driver with the vehicle and the environment.
Keywords: automated vehicle, driver behavior, human factors, human-machine system
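As a toy illustration (our own sketch, not part of the authors' model), the stage-by-stage evaluation of deviations between expected and actual action-result parameters described above might look like the following, where an out-of-tolerance deviation at any stage triggers a correction:

```python
from dataclasses import dataclass

@dataclass
class StageResult:
    """One stage of a purposeful action: expected vs. actual result parameter."""
    expected: float
    actual: float

def evaluate_action(stages, tolerance=0.1):
    """Stage-by-stage evaluation: return the index of the first stage whose
    actual result deviates from the expected value by more than the
    tolerance (signalling that the driver must adjust the action), or
    None if the action met its goal at every stage."""
    for i, stage in enumerate(stages):
        if abs(stage.actual - stage.expected) > tolerance:
            return i
    return None
```

The tolerance and the scalar result parameters are purely illustrative stand-ins for the richer goal/result structures of Anokhin's framework.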
Procedia PDF Downloads 145
875 Comparative Study on Fire Safety Evaluation Methods for External Cladding Systems: ISO 13785-2 and BS 8414
Authors: Kyungsuk Cho, H. Y. Kim, S. U. Chae, J. H. Choi
Abstract:
Technological development has led to the construction of super-tall buildings, and insulators are increasingly used as exterior finishing materials to save energy. However, insulators are usually combustible and vulnerable to fire. The fire at the Wooshin Golden Suite Building in Busan, Korea in 2010 and the fire at the CCTV Building in Beijing, China are major examples of fire spread accelerated by combustible insulators. The exterior finishing materials of a high-rise building are not made of insulators alone; they are integrated into the building's external cladding system. There is a limit to evaluating the fire safety of a cladding system with a small-scale, single-material test such as the cone calorimeter. Therefore, countries provide codes to evaluate the fire safety of exterior finishing materials using full-scale tests. This study compares the two full-scale test methods, ISO 13785-2 and BS 8414, for evaluating the fire safety of external cladding systems.
Keywords: external cladding systems, fire safety evaluation, ISO 13785-2, BS 8414
Procedia PDF Downloads 242
874 Assessment of Milk Quality in Vehari: Evaluation of Public Health Concerns
Authors: Muhammad Farhan Saeed, Waheed Aslam Khan, Muhammad Nadeem, Iftikhar Ahmad, Zakir Ali
Abstract:
Milk is an important and fundamental nutrition source in the human diet. In Pakistan, the milk used by consumers is of low quality and is often contaminated due to the lack of quality controls. Mycotoxins are produced by molds that contaminate agricultural commodities used in animal feed; they are poisons that affect animals when the animals consume contaminated feed. Aflatoxin M1 (AFM1), a naturally occurring mycotoxin found in milk, is carcinogenic. To assess public awareness regarding aflatoxin contamination of milk, a population-based survey using a questionnaire was carried out among the general public and farmers of both rural and urban areas. The data revealed that people in the rural area were more satisfied with the quality of available milk, but the awareness level about milk contamination was low in both areas. A total of 297 milk samples were collected from rural (n=156) and urban (n=141) areas of district Vehari during June-July 2015. Milk samples were collected from three different point sources: farmer, milkman and milk shop. Each point source supplied three types of dairy milk: cow milk, buffalo milk and mixed milk. After ELISA screening, 18 samples with positive ELISA results were retained per source for further analysis for aflatoxin M1 (AFM1) by High Performance Liquid Chromatography (HPLC). A higher percentage of samples exceeding the permissible limit was found in the urban area: about 15% of samples from the rural area and about 35% from the urban area exceeded the permissible AFM1 limit of 0.05 µg/kg set by the European Union. Among urban samples, about 55% of buffalo, 33% of cow and 17% of mixed milk samples exceeded the permissible AFM1 level, compared with 17%, 11% and 17% respectively for milk samples from rural areas. 
Among samples from urban areas, 33%, 44% and 28% exceeded the permissible AFM1 level for the farmer, milkman and milk shop sources respectively, compared with 28% and 17% of the farmer and milkman samples from rural areas. The presence of AFM1 in milk samples demands the implementation of strict regulations and also urges the need for continuous monitoring of milk and milk products in order to minimize health hazards. Regulations regarding aflatoxin contamination and adulteration should be strictly imposed to prevent health problems related to milk quality. Permissible limits for aflatoxins should be enforced strongly in Pakistan so that economic losses due to aflatoxin contamination can be reduced.
Keywords: Vehari, aflatoxins AFM1, milk, HPLC
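For illustration, the exceedance percentages reported above are simply the share of samples whose AFM1 concentration lies above the EU limit of 0.05 µg/kg; a minimal sketch (with made-up concentrations, not the study's data) is:

```python
def pct_exceeding(concentrations, limit=0.05):
    """Percentage of samples whose AFM1 concentration (µg/kg) exceeds
    the permissible limit (EU default: 0.05 µg/kg)."""
    over = sum(1 for c in concentrations if c > limit)
    return 100.0 * over / len(concentrations)

# Hypothetical HPLC readings for four samples:
share = pct_exceeding([0.02, 0.06, 0.10, 0.04])  # two of four exceed the limit
```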
Procedia PDF Downloads 374
873 The Effect of Using Universal Design for Learning to Improve the Quality of Vocational Programme with Intellectual Disabilities and the Challenges Facing This Method from the Teachers' Point of View
Authors: Ohud Adnan Saffar
Abstract:
This study aims to determine the effect of using universal design for learning (UDL) to improve the quality of a vocational programme for students with intellectual disabilities (SID) and the challenges facing this method from the teachers' point of view. The significance of the study: there are comparatively few published studies on UDL in emerging nations, so this study should encourage researchers to consider new teaching approaches, and its development will contribute significant information on the cognitively disabled community on a universal scope. To collect and evaluate the data and to verify the results, this study used a mixed research method with a two-group comparison design. To answer the study questions, we used a questionnaire, observation lists, open questions, and pre- and post-tests. Thus, the study explored the advantages and drawbacks of the UDL method and its impact on integrating SID with students without special education needs in the same classroom. These aims were realized by developing a workshop to explain the three principles of UDL and to train 16 teachers in how to apply this method to teach 12 students without special education needs and 12 SID in the same classroom, and then gathering their opinions through the questionnaire and open questions. Finally, this research explores the effects of UDL on the teaching of professional photography skills to SID in Saudi Arabia. To achieve this goal, the research compared the performance of SID taught using the UDL method with that of female students facing the same challenges but taught by teachers applying other strategies, in control and experimental groups, using the observation lists and pre- and post-tests. 
Initial results: it is clear from the participants' responses that most answers confirmed that the use of UDL achieves the principle of inclusion between SID and students without special education needs, at 93.8%. In addition, the results show that the majority of the sample consider the most important advantage of using UDL in teaching to be the creation of an interactive environment using new and varied teaching methods, at 56.2%. Following this result, UDL is seen as useful for integrating students into general education, at 31.2%. Moreover, the findings indicate improved understanding through the use of new technology and the replacement of traditional teaching methods with new ones, at 25%. Regarding the sample's opinions on financial obstacles, the majority consider that the cost is high and that no computer maintenance is available, at 50%, and that there are no smart devices in schools to help implement and apply the programme, at 43.8%.
Keywords: universal design for learning, intellectual disabilities, vocational programme, the challenges facing this method
Procedia PDF Downloads 129
872 A Novel Approach to 3D Thrust Vectoring CFD via Mesh Morphing
Authors: Umut Yıldız, Berkin Kurtuluş, Yunus Emre Muslubaş
Abstract:
Thrust vectoring, especially in military aviation, is a concept that sees much use to improve maneuverability in already agile aircraft. As this concept is fairly new and cost-intensive to design and test, computational methods are useful in easing the preliminary design process. Computational Fluid Dynamics (CFD) can be utilized in many forms to simulate nozzle flow, and there exist various CFD studies of both 2D mechanical and 3D injection-based thrust vectoring; yet 3D mechanical thrust vectoring analyses, at this point in time, are lacking in variety. Additionally, the freely available test data are constrained to limited pitch angles and geometries. In this study, based on a test case provided by NASA, both steady and unsteady 3D CFD simulations are conducted to examine the aerodynamic performance of a mechanical thrust vectoring nozzle model and to validate the utilized numerical model. Steady analyses are performed to verify the flow characteristics of the nozzle at pitch angles of 0, 10 and 20 degrees, and the results are compared with experimental data. It is observed that the pressure data obtained on the inner surface of the nozzle at each specified pitch angle and under different flow conditions with pressure ratios of 1.5, 2 and 4, as well as at azimuthal angles of 0, 45, 90, 135, and 180 degrees, exhibited a high level of agreement with the corresponding experimental results. To validate the CFD model, the insights from the steady analyses are utilized, followed by unsteady analyses covering a wide range of pitch angles from 0 to 20 degrees. Throughout the simulations, a mesh morphing method using a carefully calculated mathematical shape deformation model that reproduces the vectored nozzle shape exactly at each point of its travel is employed to dynamically alter the divergent part of the nozzle over time within this pitch angle range. 
The mesh-morphing-based vectored nozzle shapes were compared with the drawings provided by NASA, ensuring a complete match was achieved. This computational approach allowed for the creation of a comprehensive database of results without the need to generate separate solution domains. The database contains results at every 0.01° increment of nozzle pitch angle. The unsteady analyses, generated using the morphing method, are found to be in excellent agreement with experimental data, further confirming the accuracy of the CFD model.
Keywords: thrust vectoring, computational fluid dynamics, 3D mesh morphing, mathematical shape deformation model
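The abstract does not give the authors' shape deformation model itself; as a hedged sketch of the general idea of morphing a divergent section, the nodes downstream of an assumed hinge station can be rotated rigidly about the hinge line by the pitch angle (a simplified stand-in, not the paper's mathematical model):

```python
import numpy as np

def morph_divergent_nodes(nodes, hinge_x, pitch_deg):
    """Rotate mesh nodes downstream of the hinge station (x > hinge_x)
    about the hinge line (taken here as the y-axis at x = hinge_x) by
    the pitch angle, leaving upstream nodes untouched.
    nodes: (N, 3) array of x, y, z coordinates."""
    nodes = np.asarray(nodes, dtype=float).copy()
    a = np.radians(pitch_deg)
    mask = nodes[:, 0] > hinge_x
    x = nodes[mask, 0] - hinge_x
    z = nodes[mask, 2]
    nodes[mask, 0] = hinge_x + x * np.cos(a) - z * np.sin(a)
    nodes[mask, 2] = x * np.sin(a) + z * np.cos(a)
    return nodes
```

In an unsteady run, calling this per time step with a slowly varying pitch angle yields the continuously morphing mesh; a practical deformation model would additionally blend the displacement smoothly into the upstream mesh to preserve cell quality.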
Procedia PDF Downloads 83
871 Invasive Asian Carp Fish Species: A Natural and Sustainable Source of Methionine for Organic Poultry Production
Authors: Komala Arsi, Ann M. Donoghue, Dan J. Donoghue
Abstract:
Methionine is an essential dietary amino acid necessary to promote growth and health in poultry. Synthetic methionine is commonly used as a supplement in conventional poultry diets and is temporarily allowed in organic poultry feed for lack of natural, organically approved sources of methionine. It has been a challenge to find a natural, sustainable and cost-effective source of methionine, which reiterates the pressing need to explore potential alternatives for organic poultry production. Fish have high concentrations of methionine, but wild-caught fish are expensive and adversely impact wild fish populations. Asian carp (AC) is an invasive species, and its utilization has the potential to provide a natural methionine source. However, to the best of our knowledge, there is no proven technology to utilize this fish as a methionine source. In this study, we co-extruded Asian carp and soybean meal to form a dry-extruded, methionine-rich AC meal. In order to formulate rations with the novel extruded carp meal, the product was tested on cecectomized roosters for its amino acid digestibility and total metabolizable energy (TMEn). Excreta were collected, and their gross energy and protein content were determined to calculate the total metabolizable energy (TME). The methionine content, digestibility and TME values were greater for the extruded AC meal than for control diets. Carp meal was subsequently tested as a methionine source in feeds formulated for broilers, and production performance (body weight gain and feed conversion ratio) was assessed in comparison with broilers fed standard commercial diets supplemented with synthetic methionine. In this study, broiler chickens were fed either a control diet with synthetic methionine or a treatment diet with extruded AC meal (8 replicates/treatment; n=30 birds/replicate) from day 1 to 42 days of age. 
At the end of the trial, data for body weights, feed intake and feed conversion ratio (FCR) were analyzed using one-way ANOVA with Fisher's LSD test for multiple comparisons. Results revealed that birds on the AC diet had body weight gains and feed intake comparable to those on diets containing synthetic methionine (P > 0.05). Results from the study suggest that invasive AC-derived fish meal could potentially be an effective and inexpensive source of sustainable natural methionine for organic poultry farmers.
Keywords: Asian carp, methionine, organic, poultry
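The one-way ANOVA used above reduces to comparing between-group and within-group variance; a minimal sketch of the F statistic (illustrative only, with hypothetical numbers rather than the study's data) is:

```python
import numpy as np

def one_way_anova_F(*groups):
    """F statistic for a one-way ANOVA: ratio of between-group mean
    square to within-group mean square across treatment groups."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    all_data = np.concatenate(groups)
    grand = all_data.mean()
    k = len(groups)           # number of treatments
    n = all_data.size         # total observations
    ss_between = sum(g.size * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical weight gains (kg) for control vs. AC-meal birds:
F = one_way_anova_F([2.1, 2.3, 2.2], [2.2, 2.4, 2.1])
```

The resulting F is compared against the F distribution with (k-1, n-k) degrees of freedom; Fisher's LSD post hoc comparisons would then be run only if this omnibus test is significant.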
Procedia PDF Downloads 158
870 Bidirectional Pendulum Vibration Absorbers with Homogeneous Variable Tangential Friction: Modelling and Design
Authors: Emiliano Matta
Abstract:
Passive resonant vibration absorbers are among the most widely used dynamic control systems in civil engineering. They typically consist of a single-degree-of-freedom mechanical appendage of the main structure, tuned to one structural target mode through frequency and damping optimization. One classical scheme is the pendulum absorber, whose mass is constrained to move along a curved trajectory and is damped by viscous dashpots. Even though the principle is well known, the search for improved arrangements is still under way. In recent years this investigation inspired a type of bidirectional pendulum absorber (BPA), consisting of a mass constrained to move along an optimal three-dimensional (3D) concave surface. For such a BPA, the surface principal curvatures are designed to ensure a bidirectional tuning of the absorber to both principal modes of the main structure, while damping is produced either by horizontal viscous dashpots or by vertical friction dashpots connecting the BPA to the main structure. In this paper, a variant of the BPA is proposed, where damping originates from the variable tangential friction force which develops between the pendulum mass and the 3D surface as a result of a spatially varying friction coefficient pattern. Namely, a friction coefficient is proposed that varies along the pendulum surface in proportion to the modulus of the 3D surface gradient. Under this assumption, the dissipative model of the absorber can be proven to be nonlinear homogeneous in the small-displacement domain. The resulting homogeneous BPA (HBPA) has a fundamental advantage over conventional friction-type absorbers, because its equivalent damping ratio is independent of the amplitude of oscillations, and therefore its optimal performance does not depend on the excitation level. On the other hand, the HBPA is more compact than viscously damped BPAs because it does not need the installation of dampers. 
This paper presents the analytical model of the HBPA and an optimal methodology for its design. Numerical simulations of single- and multi-story building structures under wind and earthquake loads are presented to compare the HBPA with classical viscously damped BPAs. It is shown that the HBPA is a promising alternative to existing BPA types and that homogeneous tangential friction is an effective means to realize systems provided with amplitude-independent damping.
Keywords: amplitude-independent damping, homogeneous friction, pendulum nonlinear dynamics, structural control, vibration resonant absorbers
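The amplitude independence claimed above can be illustrated with a back-of-the-envelope check (our own sketch for an axisymmetric paraboloidal surface, not the paper's derivation):

```latex
% Paraboloidal surface of radius of curvature R:
%   z(r) = r^2 / (2R), so |\nabla z| = r/R,
% and the proposed friction law reads \mu(r) = \mu_0 |\nabla z| = \mu_0 r / R.
% With normal force N \approx mg, the energy dissipated over one cycle
% of amplitude A (four quarter-strokes) is
\Delta E \approx 4 \int_0^A \mu_0 \frac{r}{R}\, m g \, \mathrm{d}r
         = \frac{2 \mu_0 m g A^2}{R},
% while the small-displacement pendulum stiffness is k = mg/R, giving a
% stored energy E = \tfrac{1}{2} k A^2 = \tfrac{1}{2}\frac{mg}{R} A^2.
% The equivalent damping ratio is therefore
\zeta_{\mathrm{eq}} = \frac{\Delta E}{4\pi E}
  = \frac{2 \mu_0 m g A^2 / R}{2\pi\, m g A^2 / R}
  = \frac{\mu_0}{\pi},
% which is independent of the amplitude A, unlike constant-coefficient
% Coulomb friction, whose \zeta_{\mathrm{eq}} scales as 1/A.
```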
Procedia PDF Downloads 148
869 A Methodology to Virtualize Technical Engineering Laboratories: MastrLAB-VR
Authors: Ivana Scidà, Francesco Alotto, Anna Osello
Abstract:
Due to the importance given today to innovation, the education sector is evolving thanks to digital technologies. Virtual Reality (VR) is a potential teaching tool offering many advantages in the field of training and education, as it allows students to acquire theoretical knowledge and practical skills through an immersive experience in less time than the traditional educational process. These assumptions lay the foundations for a new educational environment that is engaging and stimulating for students. Starting from the objective of strengthening the innovative teaching offer and the learning processes, the case study of the research concerns the digitalization of MastrLAB, a High Quality Laboratory (HQL) belonging to the Department of Structural, Building and Geotechnical Engineering (DISEG) of the Polytechnic of Turin, a centre specialized in experimental mechanical tests on traditional and innovative building materials and on the structures made with them. MastrLAB-VR has been developed, an innovative training tool designed with the aim of educating the class, in total safety, on the techniques of use of machinery, thus reducing the dangers arising from the performance of potentially dangerous activities. The virtual laboratory, dedicated to the students of the Building and Civil Engineering Courses of the Polytechnic of Turin, has been designed to simulate in a realistic way the experimental approach to the structural tests foreseen in their courses of study: from tensile tests to relaxation tests, from steel qualification tests to resilience tests on elements at environmental conditions or at characterizing temperatures. The research work proposes a methodology for the virtualization of technical laboratories through the application of Building Information Modelling (BIM), starting from the creation of a digital model. 
The process includes the creation of a stand-alone application, which, with Oculus Rift technology, allows the user to explore the environment and interact with objects through the use of joypads. The application has been tested as a prototype on volunteers; the acquisition of the educational notions presented in the experience was assessed through a multiple-choice virtual quiz, yielding an overall evaluation report. The results have shown that MastrLAB-VR is suitable for both beginners and experts and will be adopted experimentally for other laboratories of the University departments.
Keywords: building information modelling, digital learning, education, virtual laboratory, virtual reality
Procedia PDF Downloads 131
868 Changing Employment Relations Practices in Hong Kong: Cases of Two Multinational Retail Banks since 1997
Authors: Teresa Shuk-Ching Poon
Abstract:
This paper sets out to examine the changing employment relations practices in Hong Kong's retail banking sector over a period of more than 10 years. The major objective of the research is to examine whether, and to what extent, local institutional influences have overshadowed global market forces in shaping strategic management decisions and employment relations practices in Hong Kong, with a view to drawing implications for comparative employment relations studies. Examining the changing pattern of employment relations, this paper finds the industrial relations strategic choice model (Kochan, McKersie and Cappelli, 1984) an appropriate framework for the study. Four broad aspects of employment relations are examined, including work organisation and job design; staffing and labour adjustment; performance appraisal, compensation and employee development; and labour unions and employment relations. Changes in the employment relations practices in two multinational retail banks operating in Hong Kong are examined in detail. The retail banking sector in Hong Kong is chosen as a case to examine because it is a highly competitive segment of the financial service industry, very much susceptible to global market influences. This is well illustrated by the fact that Hong Kong was hit hard by both the Asian and the Global Financial Crises. This sector is also subject to increasing institutional influences, especially after the return of Hong Kong's sovereignty to the People's Republic of China (PRC) in 1997. The case study method is used as it is a research design able to capture the complex institutional and environmental context which is the subject matter examined in this paper. The paper concludes that the operation of the retail banks in Hong Kong has been subject to both institutional and global market changes at different points in time. 
Information obtained from the two cases examined tends to support the conclusion that the relative significance of institutional as against global market factors in influencing retail banks' operation and their employment relations practices depends very much on the time at which these influences emerged and on their scale and intensity. This case study highlights the importance of placing comparative employment relations studies within a context where employment relations practices in different countries, or in different regions/cities within the same country, can be examined and compared over a longer period of time to make the comparison more meaningful.
Keywords: employment relations, institutional influences, global market forces, strategic management decisions, retail banks, Hong Kong
Procedia PDF Downloads 402
867 Private Coded Computation of Matrix Multiplication
Authors: Malihe Aliasgari, Yousef Nejatbakhsh
Abstract:
The era of Big Data and the immensity of real-life datasets compel computation tasks to be performed in a distributed fashion, where the data is dispersed among many servers that operate in parallel. However, massive parallelization leads to computational bottlenecks due to faulty servers and stragglers. Stragglers refer to a few slow or delay-prone processors that can bottleneck the entire computation, because one has to wait for all the parallel nodes to finish. The problem of straggling processors has been well studied in the context of distributed computing. Recently, it has been pointed out that, for the important case of linear functions, it is possible to improve over repetition strategies in terms of the tradeoff between performance and latency by carrying out linear precoding of the data prior to processing. The key idea is that, by employing suitable linear codes operating over fractions of the original data, a function may be completed as soon as a sufficient number of processors, depending on the minimum distance of the code, have completed their operations. Matrix-matrix multiplication over practically large data sets faces computational and memory-related difficulties, which makes it necessary to carry out such operations on distributed computing platforms. In this work, we study the problem of distributed matrix-matrix multiplication W = XY under storage constraints, i.e., when each server is allowed to store a fixed fraction of each of the matrices X and Y. This operation is a fundamental building block of many science and engineering fields, such as machine learning, image and signal processing, wireless communication, and optimization. Both non-secure and secure matrix multiplication are studied. 
We study the setup in which the identity of the matrix of interest should be kept private from the workers, and we obtain the recovery threshold of the colluding model, that is, the number of workers that need to complete their tasks before the master server can recover the product W. We also consider the problem of secure and private distributed matrix multiplication W = XY, in which the matrix X is confidential, while the matrix Y is selected in a private manner from a library of public matrices. We present the best currently known trade-off between communication load and recovery threshold. In other words, we design an achievable PSGPD scheme for any arbitrary privacy level by concatenating a robust PIR scheme for arbitrary colluding workers and private databases with the proposed SGPD code, which provides a smaller computational complexity at the workers.
Keywords: coded distributed computation, private information retrieval, secret sharing, stragglers
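As a self-contained illustration of the coded-computation idea (a standard polynomial code with m = n = 2 blocks, shown for intuition; it is not the PSGPD/SGPD construction of the paper and carries no privacy guarantee), the product W = XY can be recovered from any mn = 4 of the workers' encoded products, so two of six workers may straggle:

```python
import numpy as np

def polynomial_code_matmul(X, Y, n_workers=6):
    """Straggler-tolerant W = X @ Y via a polynomial code.
    X is split into m = 2 row blocks, Y into n = 2 column blocks;
    worker i evaluates the encoded blocks at point x_i and returns
    h(x_i), where h has degree m*n - 1 = 3, so any 4 results suffice."""
    m, n = 2, 2
    X0, X1 = np.vsplit(X, m)
    Y0, Y1 = np.hsplit(Y, n)
    pts = np.arange(1, n_workers + 1, dtype=float)
    results = {}
    for i, x in enumerate(pts):
        Xe = X0 + X1 * x        # degree-1 encoding of the X blocks
        Ye = Y0 + Y1 * x**m     # degree-m encoding of the Y blocks
        results[i] = Xe @ Ye    # h(x) = X0Y0 + X1Y0 x + X0Y1 x^2 + X1Y1 x^3
    # Pretend the last two workers straggle: keep the first m*n results.
    done = sorted(results)[: m * n]
    V = np.vander(pts[done], m * n, increasing=True)   # interpolation matrix
    stacked = np.stack([results[i] for i in done])     # shape (4, rows, cols)
    coeffs = np.einsum('ij,jrc->irc', np.linalg.inv(V), stacked)
    # Coefficients of h, in order: [X0Y0, X1Y0, X0Y1, X1Y1]
    top = np.hstack([coeffs[0], coeffs[2]])
    bot = np.hstack([coeffs[1], coeffs[3]])
    return np.vstack([top, bot])
```

The recovery threshold mn = 4 here mirrors the role of the minimum distance mentioned above: block sizes and evaluation points are illustrative choices, and a production scheme would work over a finite field rather than floats.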
Procedia PDF Downloads 122
866 Governance Models of Higher Education Institutions
Authors: Zoran Barac, Maja Martinovic
Abstract:
Higher Education Institutions (HEIs) are a special kind of organization, with a unique purpose and combination of actors. From a societal point of view, they are central institutions involved in the activities of education, research, and innovation. At the same time, their societal function gives rise to complex relationships between the involved actors, ranging from students, faculty and administration, through the business community and corporate partners, to government agencies and the general public. HEIs are particularly interesting as objects of governance research because of their unique public purpose and combination of stakeholders. Furthermore, they are a special type of institution from an organizational viewpoint: HEIs are often described as 'loosely coupled systems' or 'organized anarchies', which implies the challenging nature of their governance models. Governance models of HEIs describe the roles, constellations, and modes of interaction of the involved actors in the process of strategic direction and holistic control of institutions, taking each particular context into account. Many governance models of HEIs are primarily based on the balance of power among the involved actors. Besides the actors' power and influence, leadership style and environmental contingency can shape the governance model of an HEI. Analyzed through the frameworks of institutional and contingency theories, HEI governance models originate as outcomes of institutional and contingency adaptation. HEIs tend to fit the institutional context comprised of formal and informal institutional rules. By fitting the institutional context, HEIs converge towards each other in terms of their structures, policies, and practices. On the other hand, the contingency framework implies that there is no governance model that is suitable for all situations. 
Consequently, the contingency approach begins with identifying the contingency variables that might impact a particular governance model. In order to be effective, the governance model should fit these contingency variables. While the institutional context creates converging forces on HEI governance actors and approaches, contingency variables are the causes of divergence of actors' behavior and governance models. Finally, an HEI governance model is a balanced adaptation of the HEI to the institutional context and the contingency variables. It also encompasses the roles, constellations, and modes of interaction of involved actors influenced by institutional and contingency pressures. The actors' adaptation to the institutional context brings benefits of legitimacy and resources; their adaptation to the contingency variables brings high performance and effectiveness. The HEI governance models outlined and analyzed in this paper are the collegial, bureaucratic, entrepreneurial, network, professional, political, anarchical, cybernetic, trustee, stakeholder, and amalgam models.
Keywords: governance, governance models, higher education institutions, institutional context, situational context
Procedia PDF Downloads 336
865 A New Method Separating Relevant Features from Irrelevant Ones Using Fuzzy and OWA Operator Techniques
Authors: Imed Feki, Faouzi Msahli
Abstract:
Selection of relevant parameters from a high-dimensional process operation setting space is a problem frequently encountered in industrial process modelling. This paper presents a method for selecting the most relevant fabric physical parameters for each sensory quality feature. The proposed relevancy criterion has been developed using two approaches. The first utilizes a fuzzy sensitivity criterion, exploiting from experimental data the relationship between physical parameters and all the sensory quality features for each evaluator; an OWA aggregation procedure is then applied to aggregate the ranking lists provided by the different evaluators. In the second approach, another panel of experts provides ranking lists of physical features according to their professional knowledge. By again applying OWA and a fuzzy aggregation model, the data-sensitivity-based ranking list and the knowledge-based ranking list are combined using our proposed percolation technique to determine the final ranking list. The key issue of the proposed percolation technique is to filter the relevant features automatically and objectively by creating a gap between the scores of relevant and irrelevant parameters. It permits the automatic generation of a threshold, which can effectively reduce the human subjectivity and arbitrariness involved in manually choosing thresholds. For a specific sensory descriptor, the threshold is defined systematically by iteratively aggregating (n times) the ranking lists generated by the OWA and fuzzy models, according to a specific algorithm. Having applied the percolation technique to a real example, a well-known finished textile product, stonewashed denim, usually considered among the most important quality criteria in jeans' evaluation, we separate the relevant physical features from the irrelevant ones for each sensory descriptor. 
The originality and performance of the proposed relevant feature selection method are shown by the variability in the number of physical features in the set of selected relevant parameters. Instead of selecting identical numbers of features with a predefined threshold, the proposed method can be adapted to the specific natures of the complex relations between sensory descriptors and physical features, in order to propose lists of relevant features of different sizes for different descriptors. In order to obtain more reliable results for the selection of relevant physical features, the percolation technique has been applied to combine the fuzzy global relevancy and OWA global relevancy criteria, so as to clearly distinguish the scores of the relevant physical features from those of irrelevant ones.
Keywords: data sensitivity, feature selection, fuzzy logic, OWA operators, percolation technique
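An OWA operator of the kind used above weights values by their rank rather than by their source, which is what makes it suitable for aggregating different evaluators' scores; a minimal sketch (the weight vector is illustrative, not the paper's) is:

```python
import numpy as np

def owa(values, weights):
    """Ordered weighted averaging: sort the values in descending order,
    then take the inner product with position weights summing to one.
    Weights attach to ranks, not to the evaluators who produced the values."""
    v = np.sort(np.asarray(values, dtype=float))[::-1]
    w = np.asarray(weights, dtype=float)
    assert v.shape == w.shape and np.isclose(w.sum(), 1.0)
    return float(v @ w)

# Aggregate three evaluators' relevancy scores for one physical feature,
# with weights that emphasize the highest scores:
score = owa([0.2, 0.9, 0.5], [0.5, 0.3, 0.2])
```

Repeating this per physical feature, and then ranking features by the aggregated score, yields one of the ranking lists that the percolation technique combines.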
Procedia PDF Downloads 605
864 Festival Gamification: Conceptualization and Scale Development
Authors: Liu Chyong-Ru, Wang Yao-Chin, Huang Wen-Shiung, Tang Wan-Ching
Abstract:
Although gamification has attracted attention and been applied in the tourism industry, limited literature can be found in tourism academia. Therefore, to contribute knowledge on festival gamification, it is essential to start by establishing a Festival Gamification Scale (FGS). This study defines festival gamification as the extent to which a festival involves game elements and game mechanisms. Based on self-determination theory, this study developed an FGS. Using a multi-study method, in study one, five FGS dimensions were sorted through a literature review, followed by twelve in-depth interviews. A total of 296 statements were extracted from the interviews and later narrowed down to 33 items under six dimensions. In study two, 226 survey responses were collected from a cycling festival for exploratory factor analysis, resulting in twenty items under five dimensions. In study three, 253 survey responses were obtained from a marathon festival for confirmatory factor analysis, resulting in the final sixteen items under five dimensions. Then, results of criterion-related validity confirmed the positive effects of these five dimensions on flow experience. In study four, to examine the model extension of the developed five-dimensional, 16-item FGS, which includes the dimensions of relatedness, mastery, competence, fun, and narratives, a cross-validation analysis was performed using 219 survey responses from a religious festival. For the tourism academy, the FGS could further be applied in other sub-fields such as destinations, theme parks, cruise trips, or resorts. The FGS serves as a starting point for examining the mechanism of festival gamification in changing tourists' attitudes and behaviors. Future studies could build on the FGS by testing outcomes of festival gamification or examining moderating effects that enhance those outcomes. 
On the other hand, although the FGS has been tested in cycling, marathon, and religious festivals, the research settings are all in Taiwan. Cultural differences in the FGS are another direction for contributing knowledge on festival gamification. This study also offers several valuable practical implications. First, the FGS could be used in tourist surveys to evaluate the extent of gamification of a festival. Based on the results of a performance assessment with the FGS, festival management organizations and festival planners could learn the relative scores among the FGS dimensions and plan future improvements in gamifying the festival. Second, the FGS could be applied in positioning a gamified festival. Festival management organizations and festival planners could first consider the features and type of their festival, and then gamify it by investing resources in the key FGS dimensions.
Keywords: festival gamification, festival tourism, scale development, self-determination theory
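The exploratory factor analysis step described in study two can be sketched in Python. The data below are synthetic stand-ins for the 226 cycling-festival responses to 33 Likert-type items, and the 0.40 loading cutoff is an illustrative assumption, not the authors' reported retention criterion:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_resp, n_items, n_factors = 226, 33, 5

# Synthetic Likert-style responses driven by 5 latent factors (illustrative only)
latent = rng.normal(size=(n_resp, n_factors))
loadings = rng.uniform(0.4, 0.9, size=(n_factors, n_items))
items = latent @ loadings + rng.normal(scale=0.5, size=(n_resp, n_items))

# Extract 5 factors with a varimax rotation, as in a typical EFA step
fa = FactorAnalysis(n_components=n_factors, rotation="varimax")
scores = fa.fit_transform(items)

# Retain items whose strongest absolute loading clears the (assumed) 0.40 cutoff
keep = np.abs(fa.components_).max(axis=0) > 0.40
```

The confirmatory step in study three would then refit only the retained items on the independent marathon-festival sample.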
863 Building on Previous Microvalving Approaches for Highly Reliable Actuation in Centrifugal Microfluidic Platforms
Authors: Ivan Maguire, Ciprian Briciu, Alan Barrett, Dara Kervick, Jens Ducrèe, Fiona Regan
Abstract:
With the ever-increasing myriad of applications of which microfluidic devices are capable, reliable fluidic actuation has remained fundamental to the success of these platforms. A number of approaches can be taken to integrate liquid actuation on microfluidic platforms, usually split into two primary categories: active microvalves and passive microvalves. Active microvalves are microfluidic valves that require a physical parameter change by external, or separate, interaction for actuation to occur. Passive microvalves are microfluidic valves that do not require external interaction, because actuation relies on the valve’s natural physical parameters, which can be overcome through sample interaction. The purpose of this paper is to illustrate how further improvements to past microvalve solutions can greatly enhance system reliability and performance, with both novel active and passive microvalves demonstrated. Covered within this scope are two alternative and novel microvalve solutions for centrifugal microfluidic platforms: a revamped pneumatic dissolvable-film active microvalve (PAM) strategy and a spray-on sol-gel based hydrophobic passive microvalve (HPM) approach. Both the PAM and HPM mechanisms were demonstrated on a centrifugal microfluidic platform consisting of alternating layers of 1.5 mm poly(methyl methacrylate) (PMMA) sheets (for reagent storage) and ~150 μm pressure-sensitive adhesive (PSA) sheets (for microchannel fabrication). The PAM approach differs from previous SOLUBON™ dissolvable-film methods by introducing a more reliable and predictable liquid delivery mechanism to the microvalve site, thus significantly reducing premature activation. The approach has also shown excellent synchronicity when performed in multiplexed form. The HPM method utilises a new spray-on, low-curing-temperature (70 °C) sol-gel material.
The resultant double-layer coating comprises a PMMA-adherent sol-gel as the bottom layer and an ultra-hydrophobic silica nanoparticle (SNP) film as the top layer. The optimal coating was integrated into microfluidic channels of varying cross-sectional area to assess the consistency of the microvalve burst frequencies. It is hoped that these microvalving solutions, which can easily be added to centrifugal microfluidic platforms, will significantly improve automation reliability.
Keywords: centrifugal microfluidics, hydrophobic microvalves, lab-on-a-disc, pneumatic microvalves
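For a hydrophobic passive valve on a spinning disc, the burst condition equates the centrifugally induced pressure ρω²r̄Δr with the capillary counter-pressure −4σcosθ/D_h of the hydrophobic constriction. A minimal sketch of that balance follows; all numeric inputs (channel depth on the order of the ~150 μm PSA layer, radial position, contact angle) are illustrative assumptions, not the authors' measured values:

```python
import math

def burst_rpm(sigma, theta_deg, d_h, rho, r_mean, delta_r):
    """Critical disc speed (RPM) at which liquid bursts a hydrophobic valve.

    sigma: surface tension (N/m); theta_deg: contact angle (>90 deg on a
    hydrophobic patch); d_h: hydraulic diameter (m); rho: density (kg/m^3);
    r_mean: mean radial position of the liquid plug (m); delta_r: plug length (m).
    """
    # Capillary counter-pressure; positive for theta > 90 degrees
    p_burst = -4.0 * sigma * math.cos(math.radians(theta_deg)) / d_h
    # Centrifugal pressure rho*omega^2*r_mean*delta_r equals p_burst at bursting
    omega = math.sqrt(p_burst / (rho * r_mean * delta_r))  # rad/s
    return omega * 60.0 / (2.0 * math.pi)

# Water plug in a ~150 um channel, 3 cm from the disc center (assumed values)
rpm = burst_rpm(sigma=0.072, theta_deg=120.0, d_h=150e-6,
                rho=1000.0, r_mean=0.03, delta_r=0.01)
```

Sweeping `d_h` in such a model is one way to anticipate how burst frequency should vary with the channel cross-section before measuring it on the disc.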
862 Mechanical Characterization and CNC Rotary Ultrasonic Grinding of Crystal Glass
Authors: Ricardo Torcato, Helder Morais
Abstract:
The manufacture of crystal glass parts is based on obtaining the rough geometry by blowing and/or injection, generally followed by a set of manual finishing operations using cutting and grinding tools. The forming techniques used do not allow parts with complex shapes to be obtained repeatably, and the finishing operations use intensive specialized labor, resulting in high cycle times and production costs. This work aims to explore the digital manufacture of crystal glass parts by investigating new subtractive techniques for the automated, flexible finishing of these parts. Finishing operations are essential to respond to customer demands in terms of crystal feel and shine. It is intended to investigate the applicability to crystal processing of different computerized finishing technologies, namely milling and grinding in a CNC machining center with or without ultrasonic assistance. Research in the field of grinding hard and brittle materials, while not extensive, has increased in recent years, and scientific knowledge about the machinability of crystal glass is still very limited. It can nevertheless be said that the unique properties of glass, such as high hardness and very low toughness, make any glass machining technology a very challenging process. This work measures the performance improvement brought by the use of ultrasound compared with conventional crystal grinding. This presentation focuses on the mechanical characterization and the analysis of cutting forces in CNC machining of superior crystal glass (Pb ≥ 30%). For the mechanical characterization, the Vickers hardness test provides an estimate of the material hardness (Hv) and of the fracture toughness, based on the cracks that appear around the indentation. The mechanical impulse excitation test estimates the Young’s modulus, shear modulus and Poisson’s ratio of the material. For the cutting forces, a dynamometer was used to measure the forces in the face grinding process.
The tests were designed according to the Taguchi method to correlate the input parameters (feed rate, tool rotation speed and depth of cut) with the output parameters (surface roughness and cutting forces), using ANOVA to optimize the process (the best roughness achievable with cutting forces that compromise neither the material structure nor the tool life). This study was conducted for conventional grinding and for the ultrasonic grinding process with the same cutting tools. It was possible to determine the optimum cutting parameters for minimum cutting forces and for minimum surface roughness in both grinding processes. Ultrasonic-assisted grinding provides a better surface roughness than conventional grinding.
Keywords: CNC machining, crystal glass, cutting forces, hardness
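The indentation-based characterization described above reduces to two standard relations: the Vickers hardness Hv = 1.8544·P/d² from the load and mean diagonal, and an indentation fracture toughness estimate of the Anstis form Kc = 0.016·(E/H)^½·P/c^3/2 from the radial crack length. A sketch with purely illustrative numbers (the load, diagonal and crack values below are assumptions, not the study's measurements):

```python
import math

def vickers_hardness_gpa(load_n, diag_m):
    """Vickers hardness (GPa): Hv = 1.8544 * P / d^2, load P in N,
    mean indent diagonal d in m."""
    return 1.8544 * load_n / diag_m**2 / 1e9

def indentation_toughness(e_gpa, h_gpa, load_n, crack_m):
    """Anstis-type estimate Kc = 0.016 * (E/H)^0.5 * P / c^1.5 in MPa*m^0.5,
    with c the radial crack length measured from the indent center."""
    return 0.016 * math.sqrt(e_gpa / h_gpa) * load_n / crack_m**1.5 / 1e6

# Illustrative inputs: 1 kgf load, 60 um mean diagonal, 150 um radial cracks,
# Young's modulus of 70 GPa as a typical glass value (all assumed)
hv = vickers_hardness_gpa(9.81, 60e-6)
kc = indentation_toughness(70.0, hv, 9.81, 150e-6)
```

The very low Kc values typical of glass relative to its hardness are what make the grinding process so sensitive to cutting forces.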
861 Critical Core Skills Profiling in the Singaporean Workforce
Authors: Bi Xiao Fang, Tan Bao Zhen
Abstract:
Soft skills, core competencies, and generic competencies are interchangeable terms often used to represent a similar concept. In the Singapore context, such skills are currently referred to as Critical Core Skills (CCS). In 2019, SkillsFuture Singapore (SSG) reviewed the Generic Skills and Competencies (GSC) framework first introduced in 2016, culminating in the development of the Critical Core Skills (CCS) framework, which comprises 16 soft skills classified into three clusters. The CCS framework is part of the Skills Framework, whose stated purpose is to create a common skills language for individuals, employers and training providers. It was also developed with the objectives of building deep skills for a lean workforce, enhancing business competitiveness, and supporting employment and employability. This further helps to facilitate skills recognition and support the design of training programs for skills and career development. According to SSG, every job role requires a set of technical skills and a set of Critical Core Skills to perform well at work, where technical skills refer to the skills required to perform the key tasks of the job. There has been an increasing emphasis on soft skills for the future of work. A recent study involving approximately 80 organizations across 28 sectors in Singapore revealed that more enterprises are beginning to recognize that soft skills support their employees’ performance and business competitiveness. Though CCS are highly important to the development of the workforce’s employability, little attention has been paid to CCS use and profiling across occupations. A better understanding of how CCS are distributed across the economy would thus significantly enhance SSG’s career guidance services, as well as training providers’ services to graduates and workers, and would guide organizations in hiring for soft skills. This CCS profiling study sought to understand how CCS are demanded in different occupations.
To achieve its research objectives, this study adopted a quantitative method to measure CCS use across different occupations in the Singaporean workforce. Based on the CCS framework developed by SSG, the research team took a formative approach to developing a CCS profiling tool that measures the importance of, and self-efficacy in, the use of CCS among the Singaporean workforce. Drawing on survey results from 2,500 participants, the study profiled them into seven occupation groups based on differing patterns of importance and confidence in the use of CCS. Each occupation group is labeled according to its most salient and demanded CCS. At the same time, the CCS in each occupation group that may need further strengthening were also identified. The profiling of CCS use has significant implications for different stakeholders; for example, employers could leverage the profiling results to hire staff with the soft skills their jobs demand.
Keywords: employability, skills profiling, skills measurement, soft skills
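One plausible way to derive such occupation groups from importance/confidence ratings is cluster analysis. The sketch below is an assumption about the mechanics, not the authors' actual procedure, and uses random data in place of the 2,500 real responses (two ratings per respondent for each of the 16 CCS):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Hypothetical survey: 2500 respondents rate importance and confidence (1-5)
# for each of the 16 Critical Core Skills -> 32 columns per respondent
ratings = rng.integers(1, 6, size=(2500, 32)).astype(float)

# Standardize the ratings, then partition respondents into seven profile groups
X = StandardScaler().fit_transform(ratings)
km = KMeans(n_clusters=7, n_init=10, random_state=0).fit(X)

# Each row of the centers is one group's mean (standardized) rating pattern;
# its largest entries would point to that group's most salient CCS
profiles = km.cluster_centers_
```

Labeling each group by the CCS with the highest center values mirrors the study's naming of groups after their most salient and demanded skills.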
860 The Significance of Picture Mining in the Fashion and Design as a New Research Method
Authors: Katsue Edo, Yu Hiroi
Abstract:
Increasing attention has been paid to the use of pictures and photographs in social science research since the beginning of the 21st century. Meanwhile, we have been studying the usefulness of Picture Mining, one of the new methods for such picture-based research. Picture Mining is an exploratory analysis method that extracts useful information from pictures, photographs and static or moving images; it is often compared with text mining. The Picture Mining concept includes observational research in the broad sense, because it also aims to analyze moving images (Ochihara and Edo 2013). In the recent literature, studies and reports using pictures are increasing due to environmental changes, identified as technological and social changes (Edo et al. 2013). Low-priced digital cameras and smartphones, high information transmission speed, low costs of information transfer, and the high performance and resolution of mobile phone cameras have changed people’s photographing behavior. Consequently, most people in developing countries feel little resistance to taking and processing photographs. In these studies, the method of collecting data from respondents is often called ‘participant-generated photography’ or ‘respondent-generated visual imagery’, with a focus on the collection of data and its analysis (Pauwels 2011, Snyder 2012). But there are few systematic and conceptual studies that support the significance of these methods. In recent years, we have worked to conceptualize these picture-based research methods and formalize theoretical findings (Edo et al. 2014). Inductively and through case studies, we have identified the fields where Picture Mining is most efficient: 1) research in consumer and customer lifestyles; 2) new product development; 3) research in fashion and design.
Though we have found that Picture Mining should be useful in these fields and areas, we must verify these assumptions. In this study we focus on the field of fashion and design to determine whether Picture Mining methods are really reliable in this area. To do so, we conducted empirical research on respondents’ attitudes and behavior concerning pictures and photographs. We compared attitudes and behavior toward pictures of fashion with those toward pictures of meals, and found that taking pictures of fashion is not as easy as taking pictures of meals and food. Compared to meals and food, respondents do not often take pictures of fashion and upload them online, for example to Facebook and Instagram, because of the difficulty of taking them. We conclude that pictures in the fashion area should be analyzed more carefully, for some bias may still exist even though the environment for pictures has changed drastically in recent years.
Keywords: empirical research, fashion and design, Picture Mining, qualitative research
859 Radical Scavenging Activity of Protein Extracts from Pulse and Oleaginous Seeds
Authors: Silvia Gastaldello, Maria Grillo, Luca Tassoni, Claudio Maran, Stefano Balbo
Abstract:
Antioxidants are nowadays attractive not only for their countless benefits to human and animal health, but also for their prospective use as food preservatives in place of synthetic chemical molecules. In this study, the radical scavenging activity of six protein extracts from pulse and oleaginous seeds was evaluated. The selected matrices are Pisum sativum (yellow pea from two different origins), Carthamus tinctorius (safflower), Helianthus annuus (sunflower), Lupinus luteus cv. Mister (lupin) and Glycine max (soybean), since they are economically interesting for both human and animal nutrition. The seeds were ground and proteins extracted from 20 mg of powder with a specific vegetal-extraction kit. Proteins were quantified by the Bradford protocol, and scavenging activity was revealed using the DPPH assay, based on the absorbance decrease of the DPPH radical (2,2-diphenyl-1-picrylhydrazyl) in the presence of antioxidant molecules. Different concentrations of the protein extract (1, 5, 10, 50, 100, 500 µg/ml) were mixed with DPPH solution (DPPH 0.004% in ethanol 70% v/v). Ascorbic acid was used as a scavenging activity reference standard at the same six concentrations, while DPPH solution alone was used as the control. Samples and standard were prepared in triplicate and incubated for 30 minutes in the dark at room temperature, and the absorbance was read at 517 nm (ABS30). The average and standard deviation of the absorbance values were calculated for each concentration of samples and standard. Statistical analysis using Student’s t-test and p-values was performed to assess the statistical significance of the difference in scavenging activity between the samples (or standard) and the control (ABSctrl). The percentage of antioxidant activity was calculated using the formula [(ABSctrl − ABS30)/ABSctrl] × 100. The results demonstrate that all matrices showed antioxidant activity. Ascorbic acid, used as the standard, exhibits 96% scavenging activity at a concentration of 500 µg/ml.
Under the same conditions, sunflower, safflower and yellow peas revealed the highest antioxidant performance among the matrices analyzed, with activities of 74%, 68% and 70% respectively (p < 0.005). Although lupin and soybean exhibit lower antioxidant activity than the other matrices, they showed percentages of 46 and 36 respectively. These data suggest the possibility of using undervalued edible matrices as a source of antioxidants. However, further studies are necessary to investigate a possible synergistic effect of several matrices, as well as the impact of industrial processes, for a large-scale approach.
Keywords: antioxidants, DPPH assay, natural matrices, vegetal proteins
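The abstract's activity formula is a direct computation from the two absorbance readings at 517 nm. A one-function sketch; the example absorbances are hypothetical triplicate means, not the study's raw data:

```python
def scavenging_activity_pct(abs_ctrl, abs_30):
    """Radical scavenging activity (%) per the abstract's formula:
    [(ABSctrl - ABS30) / ABSctrl] * 100, absorbances read at 517 nm
    for the control and for the sample after 30 min incubation."""
    return (abs_ctrl - abs_30) / abs_ctrl * 100.0

# Hypothetical readings: DPPH control vs. a 500 ug/ml extract after 30 min
activity = scavenging_activity_pct(abs_ctrl=1.00, abs_30=0.26)  # approximately 74
```

A stronger antioxidant bleaches more of the purple DPPH radical, so a lower ABS30 yields a higher activity percentage.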
858 Parallel Fuzzy Rough Support Vector Machine for Data Classification in Cloud Environment
Authors: Arindam Chaudhuri
Abstract:
Classification of data has been actively used as one of the most effective and efficient means of conveying knowledge and information to users. The primary focus has always been on techniques for extracting useful knowledge from data such that returns are maximized. With the emergence of huge datasets, existing classification techniques often fail to produce desirable results. The challenge lies in analyzing and understanding the characteristics of massive data sets by retrieving useful geometric and statistical patterns. We propose a supervised parallel fuzzy rough support vector machine (PFRSVM) for data classification in a cloud environment. The classification is performed by PFRSVM using a hyperbolic tangent kernel. The fuzzy rough set model takes care of the sensitivity to noisy samples and handles imprecision in the training samples, bringing robustness to the results. The membership function is a function of the center and radius of each class in feature space and is represented with the kernel. It plays an important role in sampling the decision surface. The success of PFRSVM is governed by choosing appropriate parameter values. The training samples are either linearly or nonlinearly separable. Different input points make unique contributions to the decision surface. The algorithm is parallelized with a view to reducing training times. The system is built on a support vector machine library using the Hadoop implementation of MapReduce. The algorithm is tested on large data sets to check its feasibility and convergence. The performance of the classifier is also assessed in terms of the number of support vectors. The challenges encountered in implementing big data classification in machine learning frameworks are also discussed. The experiments were done in the cloud environment available at the University of Technology and Management, India. The results are illustrated for Gaussian RBF and Bayesian kernels.
The effect of variability in prediction and generalization of PFRSVM is examined with respect to values of the parameter C. It effectively resolves outlier effects, class imbalance and overlapping-class problems, generalizes to unseen data, and relaxes the dependency between features and labels. The average classification accuracy of PFRSVM is better than that of other classifiers for both Gaussian RBF and Bayesian kernels. The experimental results on both synthetic and real data sets clearly demonstrate the superiority of the proposed technique.
Keywords: FRSVM, Hadoop, MapReduce, PFRSVM
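A toy, single-machine sketch of the fuzzy-membership idea (not the authors' PFRSVM or its Hadoop/MapReduce parallelization): each training sample is weighted by its closeness to its class center, so noisy outliers contribute less to the decision surface. scikit-learn's SVC stands in for the SVM library, and the RBF kernel mirrors the Gaussian RBF experiments reported here; the data are synthetic:

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

def fuzzy_memberships(X, y, delta=1e-6):
    """FSVM-style weights: samples far from their class center in feature
    space (likely noise or outliers) receive small membership values."""
    m = np.empty(len(y), dtype=float)
    for c in np.unique(y):
        idx = y == c
        center = X[idx].mean(axis=0)
        d = np.linalg.norm(X[idx] - center, axis=1)
        m[idx] = 1.0 - d / (d.max() + delta)
    return m

# Synthetic two-class data standing in for the paper's large data sets
X, y = make_blobs(n_samples=400, centers=2, cluster_std=1.0, random_state=0)
w = fuzzy_memberships(X, y)

# Memberships enter the optimization as per-sample weights on the
# misclassification penalty C; kernel choice mirrors the RBF experiments
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X, y, sample_weight=w)
acc = clf.score(X, y)
```

In the paper's setting, the membership computation and the SVM training would be the steps distributed across MapReduce workers rather than run serially as here.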