Search results for: time scale

4580 Can Exams Be Shortened? Using a New Empirical Approach to Test in Finance Courses

Authors: Eric S. Lee, Connie Bygrave, Jordan Mahar, Naina Garg, Suzanne Cottreau

Abstract:

Marking exams is universally detested by lecturers. Final exams in many higher education courses often last 3.0 hrs. Do exams really need to be so long? Can we justifiably reduce the number of questions on them? Surprisingly few have researched these questions, arguably because of the complexity and difficulty of using traditional methods. To answer these questions empirically, we used a new approach based on three key elements: Use of an unusual variation of a true experimental design, equivalence hypothesis testing, and an expanded set of six psychometric criteria to be met by any shortened exam if it is to replace a current 3.0-hr exam (reliability, validity, justifiability, number of exam questions, correspondence, and equivalence). We compared student performance on each official 3.0-hr exam with that on five shortened exams having proportionately fewer questions (2.5, 2.0, 1.5, 1.0, and 0.5 hours) in a series of four experiments conducted in two classes in each of two finance courses (224 students in total). We found strong evidence that, in these courses, shortening of final exams to 2.0 hrs was warranted on all six psychometric criteria. Shortening these exams by one hour should result in a substantial one-third reduction in lecturer time and effort spent marking, lower student stress, and more time for students to prepare for other exams. Our approach provides a relatively simple, easy-to-use methodology that lecturers can use to examine the effect of shortening their own exams.
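
The equivalence hypothesis testing at the heart of this approach can be illustrated with a two one-sided tests (TOST) procedure. The sketch below is illustrative only: the score samples and the equivalence margin `delta` are hypothetical, not the study's data.

```python
# Sketch of equivalence testing via TOST (two one-sided t-tests), as could
# be used to ask whether scores on a shortened exam are equivalent to those
# on the 3.0-hr exam. Scores and the margin `delta` are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
full_exam = rng.normal(72, 10, 112)    # percent scores, 3.0-hr exam
short_exam = rng.normal(71, 10, 112)   # percent scores, 2.0-hr exam
delta = 5.0                            # equivalence margin, percentage points

def tost(x, y, delta):
    """TOST p-value: the larger of the two one-sided t-test p-values."""
    n1, n2 = len(x), len(y)
    diff = x.mean() - y.mean()
    sp2 = ((n1 - 1) * x.var(ddof=1) + (n2 - 1) * y.var(ddof=1)) / (n1 + n2 - 2)
    se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
    df = n1 + n2 - 2
    p_lower = stats.t.sf((diff + delta) / se, df)   # H0: diff <= -delta
    p_upper = stats.t.cdf((diff - delta) / se, df)  # H0: diff >= +delta
    return max(p_lower, p_upper)

p = tost(full_exam, short_exam, delta)
print(f"TOST p = {p:.4f}")  # p < 0.05 -> scores equivalent within +/- delta
```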

Keywords: Exam length, psychometric criteria, synthetic experimental designs, test length.

4579 An Integrative Bayesian Approach to Supporting the Prediction of Protein-Protein Interactions: A Case Study in Human Heart Failure

Authors: Fiona Browne, Huiru Zheng, Haiying Wang, Francisco Azuaje

Abstract:

Recent years have seen a growing trend towards the integration of multiple information sources to support large-scale prediction of protein-protein interaction (PPI) networks in model organisms. Despite advances in computational approaches, the combination of multiple "omic" datasets representing the same type of data, e.g. different gene expression datasets, has not been rigorously studied. Furthermore, there is a need to further investigate the inference capability of powerful approaches, such as fully connected Bayesian networks, in the context of the prediction of PPI networks. This paper addresses these limitations by proposing a Bayesian approach to integrate multiple datasets, some of which encode the same type of "omic" data, to support the identification of PPI networks. The case study reported involved the combination of three gene expression datasets relevant to human heart failure (HF). In comparison with two traditional methods, the Naive Bayesian and maximum likelihood ratio approaches, the proposed technique can accurately identify known PPIs and can be applied to infer potentially novel interactions.
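
The Naive Bayesian baseline mentioned here combines independent evidence sources by multiplying likelihood ratios; a minimal sketch with made-up numbers for three gene expression datasets:

```python
# Sketch of likelihood-ratio evidence combination for one candidate protein
# pair, in the spirit of the Naive Bayesian baseline. The per-dataset
# likelihood ratios and the prior odds are hypothetical numbers.

prior_odds = 1 / 600           # assumed prior odds that a random pair interacts
lr_expression_1 = 3.2          # LR from gene-expression dataset 1
lr_expression_2 = 1.8          # LR from gene-expression dataset 2
lr_expression_3 = 0.9          # LR from gene-expression dataset 3

# Naive Bayes assumes conditional independence, so evidence multiplies:
posterior_odds = prior_odds * lr_expression_1 * lr_expression_2 * lr_expression_3
posterior_prob = posterior_odds / (1 + posterior_odds)
print(f"posterior P(interaction) = {posterior_prob:.4f}")
```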

Keywords: Bayesian network, Classification, Data integration, Protein interaction networks.

4578 Defining a Semantic Web-based Framework for Enabling Automatic Reasoning on CIM-based Management Platforms

Authors: Fernando Alonso, Rafael Fernandez, Sonia Frutos, Javier Soriano

Abstract:

CIM is the standard formalism for modeling management information, developed by the Distributed Management Task Force (DMTF) in the context of its WBEM proposal and designed to provide a conceptual view of the managed environment. In this paper, we propose the inclusion of formal knowledge representation techniques, based on Description Logics (DLs) and the Web Ontology Language (OWL), in CIM-based conceptual modeling, and then we examine the benefits of such a decision. The proposal is specified as a mapping from the CIM metamodel level to a highly expressive subset of DLs capable of capturing all the semantics of the models. The paper shows how the proposed mapping provides CIM diagrams with precise semantics and can be used for automatic reasoning about the management information models, as a design aid, by means of new-generation CASE tools, thanks to the use of state-of-the-art automatic reasoning systems that support the proposed logic and use algorithms that are sound and complete with respect to the semantics. Such a CASE tool framework has been developed by the authors, and its architecture is also introduced. The proposed formalization is useful not only at design time but also at run time, through the use of rational autonomous agents, in response to a need recently recognized by the DMTF.
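
As a rough illustration of the kind of output such a mapping produces (the class names and namespace below are hypothetical, not the authors' actual metamodel mapping), a CIM-like inheritance relation can be emitted as OWL triples with rdflib:

```python
# Illustrative sketch: expressing a CIM-like class hierarchy as OWL triples
# with rdflib. Class names and the namespace are hypothetical placeholders,
# not the authors' actual CIM metamodel mapping.
from rdflib import Graph, Namespace, RDF, RDFS, OWL

CIM = Namespace("http://example.org/cim#")
g = Graph()
g.bind("cim", CIM)

# CIM_ManagedElement and CIM_LogicalDevice become OWL classes, with the
# CIM inheritance relation mapped to rdfs:subClassOf.
g.add((CIM.ManagedElement, RDF.type, OWL.Class))
g.add((CIM.LogicalDevice, RDF.type, OWL.Class))
g.add((CIM.LogicalDevice, RDFS.subClassOf, CIM.ManagedElement))

print(g.serialize(format="turtle"))
```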

Keywords: CIM, Knowledge-based Information Models, Ontology Languages, OWL, Description Logics, Integrated Network Management, Intelligent Agents, Automatic Reasoning Techniques.

4577 The Search of Anomalous Higgs Boson Couplings at the Large Hadron Electron Collider and Future Circular Electron Hadron Collider

Authors: Ilkay Turk Cakir, Murat Altinli, Zekeriya Uysal, Abdulkadir Senol, Olcay Bolukbasi Yalcinkaya, Ali Yilmaz

Abstract:

The Higgs boson was discovered by the ATLAS and CMS experimental groups in 2012 at the Large Hadron Collider (LHC). Production and decay properties of the Higgs boson, Standard Model (SM) couplings, and limits on the effective scale of the Higgs boson's couplings with other bosons are investigated at particle colliders. Deviations from SM estimates are parametrized by effective Lagrangian terms to investigate Higgs couplings. This is a model-independent method for describing new physics. In this study, sensitivity to neutral gauge boson anomalous couplings with the Higgs boson is investigated using the parameters of the Large Hadron electron Collider (LHeC) and the Future Circular electron-hadron Collider (FCC-eh) with a model-independent approach. By using the MadGraph5_aMC@NLO multi-purpose event generator with the parameters of the LHeC and FCC-eh, the bounds on the anomalous Hγγ, HγZ, and HZZ couplings in the e−p → e−qH process are obtained. Detector simulations are also taken into account in the calculations.

Keywords: Anomalous Couplings, Effective Lagrangian, Electron-Proton Colliders, Higgs Boson.

4576 Consumer Perception of 3D Body Scanning While Online Shopping for Clothing

Authors: A. Grilec, S. Petrak, M. Mahnic Naglic

Abstract:

Technological development and the globalization of clothing production and sales in the last decade have significantly influenced changes in consumers' relationships with industrially fashioned apparel and in the way clothing is purchased. Internet sales of clothing are constantly and significantly increasing in the global market, but the possibilities offered by modern computing technologies in the customization segment are not yet fully exploited, especially with regard to individual customer requirements and body sizes. Considering the growing trend of online shopping, the main goal of this paper is to investigate differences in customer perceptions of online apparel shopping and, in particular, to discover the main differences in perceptions between customers of three different body sizes. To meet this research goal, a quantitative study on a sample of 85 Croatian consumers was conducted in 2017 in Zagreb, Croatia. Respondents were asked to indicate their level of agreement on a five-point Likert scale ranging from strongly disagree (1) to strongly agree (5). Simple and descriptive statistics were used to analyze the respondents' attitudes. The main findings highlight differences, by body size, in respondents' perceptions of 3D body scanning, of using 3D body scanning in Internet shopping, and in their online apparel shopping habits.

Keywords: Consumer behavior, online shopping, 3D body scanning.

4575 An Improved Tie Force Method for Progressive Collapse Resistance of Precast Concrete Cross Wall Structures

Authors: M. Tohidi, J. Yang, C. Baniotopoulos

Abstract:

Progressive collapse of buildings typically occurs when abnormal loading conditions cause local damage, which leads to a chain reaction of failure and ultimately catastrophic collapse. The tie force (TF) method is one of the main design approaches for progressive collapse. As the TF method is a simplified method, further investigation of its reliability is necessary. This study aims to develop an improved TF method to design cross wall structures for progressive collapse. To this end, the pullout behavior of strands in grout was first analyzed; then, by considering the tie force-slip relationship in the friction stage together with the catenary action mechanism, a comprehensive analytical method was developed. The reliability of this approach is verified against the experimental results of concrete block pullout tests and full-scale floor-to-floor joint tests undertaken by the Portland Cement Association (PCA). Discrepancies in the tie force between the analytical results and codified specifications suggest a deficiency in the TF method, hence an improved model based on the analytical results has been proposed to address this concern.


Keywords: Cross wall, progressive collapse, tie force method, catenary, analytical.

4574 Scale Development for Measuring E-Service Quality in Banking

Authors: Vivek Agrawal, Vikas Tripathi, Nitin Seth

Abstract:

This study examines several critical dimensions of e-service quality overlooked in the existing literature and proposes a model and instrument framework for measuring customer-perceived e-service quality in the banking sector. The initial design was derived from a pool of instrument dimensions and items drawn from the existing literature by content analysis. Based on focused group discussion, nine dimensions were extracted. An exploratory factor analysis approach was applied to data from a survey of 323 respondents. The instrument has been designed specifically for the banking sector. Research data were collected from bank customers who use electronic banking in a developing economy. A nine-factor instrument has been proposed to measure e-service quality, and the instrument has been checked for reliability. The validity checks and sampling location limit the applicability of the instrument across economies and service categories; future research must be conducted to check its validity. This instrument can help bankers in developing economies like India to measure e-service quality and make improvements. The present study offers a systematic procedure that provides insights into the conceptual and empirical comprehension of customer-perceived e-service quality and its constituents.
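
The reliability check and the exploratory factor analysis step can be sketched as follows; the 323×36 response matrix below is random placeholder data, not the survey:

```python
# Sketch of the instrument-validation steps: Cronbach's alpha for
# reliability, then an exploratory factor analysis with a nine-factor
# solution. The response matrix is random placeholder data.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
responses = rng.integers(1, 6, size=(323, 36)).astype(float)  # Likert 1-5

def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

print(f"alpha = {cronbach_alpha(responses):.3f}")

fa = FactorAnalysis(n_components=9, random_state=0)  # nine-factor solution
scores = fa.fit_transform(responses)
print("factor loadings shape:", fa.components_.shape)  # (9, 36)
```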

Keywords: Testing, instrument, e-service quality, factor analysis.

4573 Modeling of Nitrogen Solubility in Stainless Steel

Authors: Saeed Ghali, Hoda El-Faramawy, Mamdouh Eissa, Michael Mishreky

Abstract:

Scale-resistant austenitic stainless steel, X45CrNiW 18-9, has been developed, and modified steels produced through partial and total nickel replacement by nitrogen. These modified steels were produced in a 10 kg induction furnace under different nitrogen pressures and were cast into ingots. The produced modified stainless steels were forged, followed by air cooling. The phases of the modified stainless steels have been investigated using the Schaeffler diagram, dilatometry, and microstructure observations. Both partial and total replacement of nickel using 0.33-0.50% nitrogen are effective in producing fully austenitic stainless steels. The nitrogen contents were determined and compared with those calculated using the Institute of Metal Science (IMS) equation. The results showed large deviations between the actual nitrogen contents and the values predicted by the IMS equation. Consequently, an equation has been derived based on chemical composition, pressure, and temperature at 1600 °C: [N%] = 0.0078 + 0.0406·X, where X is a function of chemical composition and nitrogen pressure. The derived equation has been used to calculate the nitrogen content of different steels using published data. The results reveal the difficulty of deriving a general equation for the prediction of nitrogen content covering different steel compositions, so it is necessary to use a narrow composition range.
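
As a sketch, the derived relation is a one-line function; the definition of the composite variable X belongs to the paper and is left as a stub here:

```python
# Sketch of the derived nitrogen-solubility relation at 1600 °C:
#   [N%] = 0.0078 + 0.0406 * X
# where X lumps together chemical composition and nitrogen pressure.
# The exact form of X is defined in the paper; compute_x below is a stub.

def nitrogen_content(x: float) -> float:
    """Predicted nitrogen content [N%] from the composite variable X."""
    return 0.0078 + 0.0406 * x

def compute_x(composition: dict, n2_pressure_atm: float) -> float:
    """Placeholder for the paper's definition of X (not reproduced here)."""
    raise NotImplementedError

# Example with an assumed X value:
print(f"[N%] = {nitrogen_content(10.0):.3f}")  # -> 0.414 for X = 10
```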

Keywords: Solubility, nitrogen, stainless steel, Schaeffler.

4572 Prediction on Housing Price Based on Deep Learning

Authors: Li Yu, Chenlu Jiao, Hongrun Xin, Yan Wang, Kaiyang Wang

Abstract:

In order to study the impact of various factors on housing prices, we propose to build different prediction models based on deep learning over existing real estate data in order to more accurately predict the housing price or its future trend. Considering that the factors which affect the housing price vary widely, the proposed prediction models fall into two categories. The first is based on multiple characteristic factors of the real estate. We built a Convolutional Neural Network (CNN) prediction model and a Long Short-Term Memory (LSTM) neural network prediction model based on deep learning, and a logistic regression model was implemented for comparison among these three models. The other category of prediction model is the time series model. Based on deep learning, we proposed an LSTM-1 model based purely on the time series, then implemented and compared the LSTM model and the Auto-Regressive and Moving Average (ARMA) model. In this paper, a comprehensive study of second-hand housing prices in Beijing has been conducted from three aspects: data crawling and analysis, housing price prediction, and result comparison. Ultimately, the best-performing model was identified, which is of great significance to the evaluation and prediction of housing prices in the real estate industry.
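
A minimal version of the pure time-series LSTM idea is sketched below; the window length, architecture, and synthetic price series are assumptions, not the paper's configuration:

```python
# Minimal sketch of an LSTM time-series model for housing prices. Window
# length, layer sizes, and training settings are assumptions, and `prices`
# is a synthetic placeholder series, not the Beijing dataset.
import numpy as np
from tensorflow import keras

prices = np.cumsum(np.random.randn(200)) + 100.0  # synthetic price series
window = 12

X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
y = prices[window:]
X = X[..., np.newaxis]  # shape: (samples, timesteps, features)

model = keras.Sequential([
    keras.layers.LSTM(32, input_shape=(window, 1)),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, batch_size=16, verbose=0)

next_price = model.predict(prices[-window:].reshape(1, window, 1), verbose=0)
print("predicted next value:", float(next_price))
```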

Keywords: Deep learning, convolutional neural network, LSTM, housing prediction.

4571 A Review on Factors Influencing Implementation of Secure Software Development Practices

Authors: Sri Lakshmi Kanniah, Mohd Naz’ri Mahrin

Abstract:

More and more businesses and services depend on software to run their daily operations and business services. At the same time, cyber-attacks are becoming more covert and sophisticated, posing threats to software. Vulnerabilities exist in software due to the lack of security practices during the phases of software development. Implementation of secure software development practices can improve the resistance to attacks. Many methods, models, and standards for secure software development have been developed. However, despite these efforts, they still come up against difficulties in their deployment, and the processes are not institutionalized. There is a set of factors that influence the successful deployment of secure software development processes. In this study, the methodology and results of a systematic literature review of factors influencing the implementation of secure software development practices are described. A total of 44 primary studies were analysed in the review, from which a list of twenty factors was identified. Some of the factors that affect implementation of secure software development practices are: involvement of the security expert, integration between security and development teams, developers' skill and expertise, development time, and communication between stakeholders. The factors were further classified into four categories: institutional context, people and action, project content, and system development process. The results show that it is important to take into account organizational, technical, and people issues in order to implement secure software development initiatives.

Keywords: Secure software development, software development, software security, systematic literature review.

4570 Development of an Automatic Calibration Framework for Hydrologic Modelling Using Approximate Bayesian Computation

Authors: A. Chowdhury, P. Egodawatta, J. M. McGree, A. Goonetilleke

Abstract:

Hydrologic models are increasingly used as tools to predict stormwater quantity and quality from urban catchments. However, due to a range of practical issues, most models produce gross errors in simulating complex hydraulic and hydrologic systems. Difficulty in finding a robust approach for model calibration is one of the main issues. Though automatic calibration techniques are available, they are rarely used in common commercial hydraulic and hydrologic modelling software, e.g. MIKE URBAN. This is partly due to the need for a large number of parameters and large datasets in the calibration process. To overcome this practical issue, a framework for automatic calibration of a hydrologic model was developed on the R platform and is presented in this paper. The model was developed based on the time-area conceptualization. Four calibration parameters, namely initial loss, reduction factor, time of concentration, and time-lag, were considered as the primary set of parameters. Using these parameters, automatic calibration was performed using Approximate Bayesian Computation (ABC). ABC is a simulation-based technique for performing Bayesian inference when the likelihood is intractable or computationally expensive to compute. To test its performance and usefulness, the technique was used to simulate three small catchments in Gold Coast, Australia. For comparison, simulation outcomes for the same three catchments from the commercial modelling software MIKE URBAN were used. The graphical comparison shows strong agreement of the MIKE URBAN results within the upper and lower 95% credible intervals of posterior predictions obtained via ABC. Statistical validation of the posterior runoff predictions using the coefficient of determination (CD), root mean square error (RMSE), and maximum error (ME) was found reasonable for the three study catchments. The main benefit of using ABC over MIKE URBAN is that ABC provides a posterior distribution for the runoff flow prediction, so the associated uncertainty in predictions can be obtained; in contrast, MIKE URBAN just provides a point estimate. Based on the results of the analysis, the developed ABC framework appears to perform well for automatic calibration.
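
The core of ABC rejection sampling is compact enough to sketch; the one-parameter toy below stands in for the four-parameter time-area model, and the prior, simulator, and tolerance are illustrative assumptions:

```python
# Sketch of ABC rejection sampling for one calibration parameter (a toy
# stand-in for the four-parameter time-area model; the prior, simulator,
# and tolerance are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(2)
observed_runoff = 4.2  # observed summary statistic, e.g. peak flow (m3/s)

def simulate_runoff(reduction_factor: float) -> float:
    """Toy simulator standing in for the hydrologic model."""
    return 5.0 * reduction_factor + rng.normal(0, 0.1)

tolerance = 0.2
accepted = []
for _ in range(20000):
    theta = rng.uniform(0.0, 1.0)  # prior: reduction factor in [0, 1]
    if abs(simulate_runoff(theta) - observed_runoff) < tolerance:
        accepted.append(theta)     # keep draws whose simulation matches

posterior = np.array(accepted)     # a full posterior, not a point estimate
print(f"posterior mean = {posterior.mean():.3f}, "
      f"95% interval = [{np.percentile(posterior, 2.5):.3f}, "
      f"{np.percentile(posterior, 97.5):.3f}]")
```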

Keywords: Automatic calibration framework, approximate Bayesian computation, hydrologic and hydraulic modelling, MIKE URBAN software, R platform.

4569 Conflation Methodology Applied to Flood Recovery

Authors: E. L. Suarez, D. E. Meeroff, Y. Yong

Abstract:

Current flooding risk modeling focuses on resilience, defined as the probability of recovery from a severe flooding event. However, the long-term damage to property and well-being by nuisance flooding and its long-term effects on communities are not typically included in risk assessments. An approach was developed to address the probability of recovering from a severe flooding event combined with the probability of community performance during a nuisance event. A consolidated model, namely the conflation flooding recovery (CFR) model, evaluates risk-coping mitigation strategies for communities based on the recovery time from catastrophic events, such as hurricanes or extreme surges, and from everyday nuisance flooding events. The CFR model assesses the variation contribution of each independent input and generates a weighted output that favors the distribution with minimum variation. This approach is especially useful if the input distributions have dissimilar variances. The CFR is defined as a single distribution resulting from the product of the individual probability density functions. The resulting conflated distribution resides between the parent distributions, and it infers the recovery time required by a community to return to basic functions, such as power, utilities, transportation, and civil order, after a flooding event. The CFR model is more accurate than averaging individual observations before calculating the mean and variance or averaging the probabilities evaluated at the input values, which assigns the same weighted variation to each input distribution. The main disadvantage of these traditional methods is that the resulting measure of central tendency is exactly equal to the average of the input distribution's means without the additional information provided by each individual distribution variance. When dealing with exponential distributions, such as resilience from severe flooding events and from nuisance flooding events, conflation results are equivalent to the weighted least squares method or best linear unbiased estimation. The combination of severe flooding risk with nuisance flooding improves flood risk management for highly populated coastal communities, such as in South Florida, USA, and provides a method to estimate community flood recovery time more accurately from two different sources, severe flooding events and nuisance flooding events.
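
The conflation operation itself is just a normalized product of densities; a sketch for two exponential recovery-time distributions with hypothetical rates:

```python
# Sketch of conflation: the normalized product of probability density
# functions. The two exponential scales (severe-event vs. nuisance-event
# recovery) are hypothetical numbers.
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

t = np.linspace(0.01, 30, 2000)             # recovery time, e.g. days
f_severe = stats.expon(scale=10).pdf(t)     # severe-flood recovery pdf
f_nuisance = stats.expon(scale=4).pdf(t)    # nuisance-flood recovery pdf

product = f_severe * f_nuisance
conflated = product / trapezoid(product, t)  # normalize to integrate to 1

mean_recovery = trapezoid(t * conflated, t)
print(f"conflated mean recovery time: {mean_recovery:.2f}")
```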

Keywords: Community resilience, conflation, flood risk, nuisance flooding.

4568 Seismic Performance of Slopes Subjected to Earthquake Mainshock Aftershock Sequences

Authors: Alisha Khanal, Gokhan Saygili

Abstract:

It is commonly observed that aftershocks follow a mainshock. Aftershocks continue over a period of time with decreasing frequency, and typically there is not sufficient time for repair and retrofit between a mainshock-aftershock sequence. Usually, aftershocks are smaller in magnitude; however, aftershock ground motion characteristics such as intensity and duration can be greater than those of the mainshock due to changes in the earthquake mechanism and location with respect to the site. The seismic performance of slopes is typically evaluated based on the sliding displacement predicted to occur along a critical sliding surface. Various empirical models are available that predict sliding displacement as a function of seismic loading parameters, ground motion parameters, and site parameters, but these models do not include aftershocks. The seismic risks associated with post-mainshock slopes ('damaged slopes') subjected to aftershocks are significant. This paper extends the empirical sliding displacement models to flexible slopes subjected to earthquake mainshock-aftershock sequences (a multi-hazard approach). A dataset was developed using 144 pairs of as-recorded mainshock-aftershock sequences from the Pacific Earthquake Engineering Research Center (PEER) database. The results reveal that the combination of mainshock and aftershock increases the seismic demand on slopes relative to the mainshock alone; thus, seismic risks are underestimated if aftershocks are neglected.
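
The underlying demand measure, Newmark-type sliding displacement, accumulates whenever ground acceleration exceeds the slope's yield acceleration. A simplified rigid-block sketch over a concatenated mainshock-aftershock record follows; the acceleration traces and the yield value are synthetic assumptions, and real analyses of flexible slopes involve more than this rigid-block integration:

```python
# Simplified rigid-block (Newmark-type) sliding-displacement sketch over a
# concatenated mainshock-aftershock record. Accelerations and the yield
# acceleration are synthetic placeholders.
import numpy as np

dt = 0.01                                        # time step, s
rng = np.random.default_rng(3)
mainshock = 0.30 * rng.standard_normal(3000)     # acceleration in g, synthetic
aftershock = 0.20 * rng.standard_normal(2000)    # acceleration in g, synthetic
record = np.concatenate([mainshock, np.zeros(1000), aftershock])

a_yield = 0.10   # yield acceleration, g (assumed slope property)
g = 9.81
vel = disp = 0.0
for a in record:
    # the block slides while ground accel exceeds yield or it still has velocity
    if vel > 0.0 or a > a_yield:
        vel = max(vel + (a - a_yield) * g * dt, 0.0)  # no upslope sliding
        disp += vel * dt

print(f"cumulative sliding displacement: {disp * 100:.1f} cm")
```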

Keywords: Seismic slope stability, sliding displacement, mainshock, aftershock, landslide, earthquake.

4567 Near Shore Wave Manipulation for Electricity Generation

Authors: K. D. R. Jagath-Kumara, D. D. Dias

Abstract:

The sea waves carry thousands of GWs of power globally. Although there are a number of different approaches to harnessing offshore energy, they are likely to be expensive, practically challenging, and vulnerable to storms. Therefore, this paper considers using near-shore waves for generating mechanical and electrical power. It introduces two new approaches, wave manipulation and the use of a variable-duct turbine, for intercepting very wide wave fronts and for coping with fluctuations of the wave height and the sea level, respectively. The first approach effectively allows capturing much more energy with a much narrower turbine rotor. The second approach allows using a rotor with a smaller radius that captures the energy of higher wave fronts at higher sea levels while preventing it from totally submerging. To illustrate the effectiveness of the first approach, the paper contains a description and the simulation results of a scale model of a wave manipulator. It then includes the results of testing a physical model of the manipulator and a single-duct, axial-flow turbine in a wave flume in the laboratory. The paper also includes comparisons of theoretical predictions, simulation results, and wave flume tests with respect to the incident energy, loss in wave manipulation, minimal loss, brake torque, and angular velocity.

Keywords: Near-shore sea waves, Renewable energy, Wave energy conversion, Wave manipulation.

4566 A Stochastic Analytic Hierarchy Process Based Weighting Model for Sustainability Measurement in an Organization

Authors: Faramarz Khosravi, Gokhan Izbirak

Abstract:

A weighted, statistically grounded stochastic Analytic Hierarchy Process (AHP) model for modeling the potential barriers and enablers of sustainability, for measuring and assessing the sustainability level, is proposed. For context-dependent potential barriers and enablers, the proposed model takes as its basis the properties of the variables describing the sustainability functions and was developed into a realistic analytical model of the sustainable behavior of an organization, thus serving as a means for measuring the sustainability of the organization. The main focus of this paper is the application of the AHP tool in a statistically based model for measuring sustainability; hence, a strongly weighted stochastic AHP-based procedure was achieved. A case study scenario of a widely reported major Canadian electric utility was adopted to demonstrate the applicability of the developed model, and its results were compared with those of an equal-weighted model. Variations (fluctuations) in the sustainability of the company over time were identified. In the results obtained, the sustainability index for successive years changed from 73.12%, 79.02%, 74.31%, 76.65%, 80.49%, 79.81%, and 79.83% to the more exact values 73.32%, 77.72%, 76.76%, 79.41%, 81.93%, 79.72%, and 80.45%, respectively, according to the priorities of factors obtained from expert views. By obtaining the relatively necessary informative measurement indicators, the model can practically and effectively evaluate the sustainability extent of any organization and determine fluctuations in the organization over time.
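
The deterministic core of AHP, priority weights from the principal eigenvector of a pairwise-comparison matrix, can be sketched as below; the 3×3 judgments are made up, and the paper's stochastic variant would resample such judgments from distributions:

```python
# Sketch of the deterministic AHP core: priority weights as the principal
# eigenvector of a pairwise-comparison matrix. Comparison values are
# illustrative placeholders.
import numpy as np

# A[i, j] = how much more important criterion i is than criterion j
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
principal = eigvecs[:, np.argmax(eigvals.real)].real
weights = principal / principal.sum()
print("priority weights:", np.round(weights, 3))

# Consistency check (Saaty's random index RI = 0.58 for n = 3)
lam_max = eigvals.real.max()
ci = (lam_max - 3) / (3 - 1)
print(f"consistency ratio: {ci / 0.58:.3f}")  # < 0.1 is conventionally acceptable
```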

Keywords: AHP, sustainability fluctuation, environmental indicators, performance measurement, environmental sustainability.

4565 Towards the Use of Renewable Energy Sources in the Home

Authors: Adriana Alexandru, Elena Jitaru, Rayner Mayer

Abstract:

The paper presents the results of the European EIE project "Realising the potential for small scale renewable energy sources in the home – Kyotointhehome". The project's global aim is to inform and educate teachers, students, and their families so that they can realise the need for, and can assess the potential of, energy efficiency (EE) measures and renewable energy sources (RES) in their homes. The project resources were translated and trialled by 16 partners in 10 European countries. A web-based methodology enabling families to assess how RES can be incorporated into energy-efficient homes was accomplished. The web application "KYOTOINHOME" will help citizens to identify what they can do to help their community meet the Kyoto target for greenhouse gas reductions and prevent global warming. This application provides useful information on how citizens can use renewable energy sources in their homes to provide space heating and cooling, hot water, and electricity. A methodology for assessing heat loss in a dwelling and the application of a heat pump system was elaborated and will be implemented this year. For schools, we developed a set of practical activities concerned with preventing climate change through the use of renewable energy sources. Complementary resources will also be developed in the Romanian research project "Romania Contribution to the European Targets Regarding the Development of Renewable Energy Sources" (PROMES).

Keywords: Education, energy policy, Internet, renewable energy sources.

4564 Solving Part Type Selection and Loading Problem in Flexible Manufacturing System Using Real Coded Genetic Algorithms – Part II: Optimization

Authors: Wayan F. Mahmudy, Romeo M. Marian, Lee H. S. Luong

Abstract:

This paper presents the modeling and optimization of two NP-hard problems in flexible manufacturing systems (FMS): the part type selection problem and the loading problem. Due to the complexity and extent of the problems, the paper was split into two parts. The first part discussed the modeling of the problems and showed how a real-coded genetic algorithm (RCGA) can be applied to solve them. This second part discusses the effectiveness of the RCGA, which uses an array of real numbers as its chromosome representation. The novel proposed chromosome representation produces only feasible solutions, minimizing the computational time the GA would otherwise need to push its population toward the feasible search space or to repair infeasible chromosomes. The proposed RCGA improves FMS performance by considering two objectives: maximizing system throughput and maintaining the balance of the system (minimizing system unbalance). The resulting objective values are compared to the optimum values produced by the branch-and-bound method. The experiments show that the proposed RCGA can reach near-optimum solutions in a reasonable amount of time.
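
A toy illustration of the real-coded chromosome idea follows; the fitness function is a placeholder, not the paper's throughput and system-unbalance objectives:

```python
# Toy sketch of a real-coded GA: chromosomes are arrays of real numbers,
# evolved with tournament selection, blend crossover, and Gaussian
# mutation. The fitness function is a placeholder objective.
import numpy as np

rng = np.random.default_rng(4)
POP, GENES, GENS = 40, 8, 100

def fitness(chrom):
    return -np.sum((chrom - 0.5) ** 2)  # placeholder: optimum at all 0.5

pop = rng.random((POP, GENES))
for _ in range(GENS):
    scores = np.array([fitness(c) for c in pop])
    # tournament selection between random pairs
    idx = rng.integers(0, POP, (POP, 2))
    parents = pop[np.where(scores[idx[:, 0]] > scores[idx[:, 1]],
                           idx[:, 0], idx[:, 1])]
    # arithmetic (blend) crossover between consecutive parents
    alpha = rng.random((POP, 1))
    children = alpha * parents + (1 - alpha) * np.roll(parents, 1, axis=0)
    # Gaussian mutation on ~10% of genes, clipped to the feasible box [0, 1]
    children += rng.normal(0, 0.05, children.shape) * (rng.random(children.shape) < 0.1)
    pop = np.clip(children, 0.0, 1.0)

best = pop[np.argmax([fitness(c) for c in pop])]
print("best chromosome:", np.round(best, 2))
```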

Keywords: Flexible manufacturing system, production planning, part type selection problem, loading problem, real-coded genetic algorithm.

4563 Design and Application of NFC-Based Identity and Access Management in Cloud Services

Authors: Shin-Jer Yang, Kai-Tai Yang

Abstract:

In response to a changing world and the fast growth of the Internet, more and more enterprises are replacing web-based services with cloud-based ones. Multi-tenancy technology is becoming more important, especially with Software as a Service (SaaS). This in turn leads to a greater focus on the application of Identity and Access Management (IAM). Conventional Near-Field Communication (NFC) based verification relies on a computer browser and a card reader to access an NFC tag; this type of verification does not support mobile device login or user-based access management functions. This study designs an NFC-based third-party cloud identity and access management scheme (NFC-IAM) addressing this shortcoming. Data from simulation tests analyzed with Key Performance Indicators (KPIs) suggest that the NFC-IAM not only takes less time in identity identification but also cuts that time by 80% in two-factor authentication and improves verification accuracy to 99.9% or better. In functional performance analyses, the NFC-IAM performed better in scalability and portability. The NFC-IAM app and back-end system, developed and deployed on mobile devices, support IAM features and also offer users a more user-friendly experience and stronger security protection. In the future, the NFC-IAM can be employed in different environments, including identification for mobile payment systems and permission management for remote equipment monitoring, among other applications.

Keywords: Cloud service, multi-tenancy, NFC, IAM, mobile device.

4562 Calibration of Syringe Pumps Using Interferometry and Optical Methods

Authors: E. Batista, R. Mendes, A. Furtado, M. C. Ferreira, I. Godinho, J. A. Sousa, M. Alvares, R. Martins

Abstract:

Syringe pumps are commonly used for drug delivery in hospitals and clinical environments. These instruments are critical in neonatology and oncology, where any variation in the flow rate and drug dosing quantity can lead to severe incidents and even the death of the patient. Therefore, it is very important to determine the accuracy and precision of these devices using suitable calibration methods. The Volume Laboratory of the Portuguese Institute for Quality (LVC/IPQ) uses two different methods to calibrate syringe pumps from 16 nL/min up to 20 mL/min. The interferometric method uses an interferometer to monitor the distance travelled by the pusher block of the syringe pump in order to determine the flow rate. Knowing the internal diameter of the syringe with very high precision, the travelled distance, and the time needed for that travel, it is possible to calculate the flow rate of the fluid inside the syringe and its uncertainty. As an alternative to the gravimetric and interferometric methods, a methodology based on optical technology was also developed to measure flow rates; this method mainly relies on measuring the increase in the volume of a drop over time. The objective of this work is to compare the results of the calibration of two syringe pumps using the different methodologies described above. The obtained results were consistent across the three methods. The uncertainty values were very similar for all three methods, being higher for the optical drop method due to setup limitations.
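
The interferometric calculation reduces to flow rate = cross-sectional area × pusher speed; a sketch with illustrative numbers:

```python
# Sketch of the interferometric flow-rate calculation: flow rate equals the
# syringe's internal cross-section times the pusher-block speed,
#   Q = (pi * d^2 / 4) * (dx / dt).
# The diameter, distance, and time below are illustrative numbers.
import math

d = 4.61e-3    # internal syringe diameter, m (illustrative)
dx = 120e-6    # distance travelled by the pusher block, m (interferometer)
dt = 60.0      # elapsed time, s

area = math.pi * d**2 / 4
q = area * dx / dt            # flow rate in m^3/s
q_ul_per_min = q * 1e9 * 60   # 1 m^3 = 1e9 uL; convert to uL/min
print(f"flow rate: {q_ul_per_min:.3f} uL/min")
```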

Keywords: Calibration, interferometry, syringe pump, optical method, uncertainty.

4561 Animal-Assisted Therapy for Persons with Disabilities Based on Canine Tail Language Interpretation via Gaussian-Trapezoidal Fuzzy Emotional Behavior Model

Authors: W. Phanwanich, O. Kumdee, P. Ritthipravat, Y. Wongsawat

Abstract:

In order to alleviate the mental and physical problems of persons with disabilities, animal-assisted therapy (AAT) is one of the possible modalities that employs the merit of human-animal interaction. Nevertheless, to achieve the purpose of AAT for persons with severe disabilities (e.g. spinal cord injury, stroke, and amyotrophic lateral sclerosis), real-time animal language interpretation is desirable. Since canine behaviors can be visually observed from the tail, this paper proposes automatic real-time interpretation of canine tail language for human-canine interaction in the case of persons with severe disabilities. Canine tail language is captured via two 3-axis accelerometers, with directions and frequencies selected as the features of interest. Novel fuzzy rules based on a Gaussian-trapezoidal model and a center of gravity (COG)-based defuzzification method are proposed in order to interpret the features into four canine emotional behaviors, i.e., agitated, happy, scared, and neutral, as well as their blended emotional behaviors. The emotional behavior model was first exercised on a simulated dog and has also been evaluated on a real dog, with a perfect recognition rate.
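
The Gaussian and trapezoidal membership functions and COG defuzzification can be sketched as follows; the membership parameters and rule firing strengths are placeholders, not the paper's tuned model:

```python
# Sketch of Gaussian and trapezoidal membership functions with discrete
# center-of-gravity (COG) defuzzification. Parameters and the aggregation
# below are placeholders, not the paper's tuned model.
import numpy as np

def gaussian_mf(x, mean, sigma):
    return np.exp(-0.5 * ((x - mean) / sigma) ** 2)

def trapezoid_mf(x, a, b, c, d):
    # ramps up on [a, b], flat at 1 on [b, c], ramps down on [c, d]
    return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0)

x = np.linspace(0, 10, 1001)  # output universe (e.g., an emotion score)

# Aggregated output: two rule consequents clipped at their firing strengths
agg = np.maximum(np.minimum(gaussian_mf(x, 3, 1), 0.4),
                 np.minimum(trapezoid_mf(x, 5, 6, 8, 9), 0.7))

cog = np.sum(x * agg) / np.sum(agg)  # center-of-gravity defuzzification
print(f"defuzzified output: {cog:.2f}")
```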

Keywords: Animal-assisted therapy (AAT), Persons with disabilities, Canine tail language, Fuzzy emotional behavior model.

4560 Experimental Investigation on Effect of Different Heat Treatments on Phase Transformation and Superelasticity of NiTi Alloy

Authors: Erfan Asghari Fesaghandis, Reza Ghaffari Adli, Abbas Kianvash, Hossein Aghajani, Homa Homaie

Abstract:

NiTi alloys possess magnificent superelastic, shape memory, high-strength, and biocompatible properties. To improve the mechanical properties, foremost the superelastic behavior, a heat treatment process is carried out. In this paper, two different heat treatment methods were undertaken: (1) solid solution treatment and (2) aging. The effect of each treatment at a constant time is investigated. Five samples were prepared to study the structure and optimize the mechanical properties under different times and temperatures. Tensile tests were carried out to measure the upper plateau stress, lower plateau stress, and residual strain. The samples were aged at two different temperatures to observe the difference between aging temperatures. The sample aged at 500 °C has a larger crystallite size and a lower amount of Ni, which causes this sample to exhibit poorer pseudoelastic behaviour than the other aged sample. The sample aged at 460 °C showed remarkable superelastic properties: its upper plateau stress is 580 MPa with the lowest residual strain (0.17%), while the other samples exhibited higher residual strains. X-ray diffraction was used to investigate the produced phases.

Keywords: Heat treatment, phase transformation, superelasticity, NiTi alloy.

4559 The Appropriate Time Required for Newborn Calf Camel to Get Optimal Amount of Colostrums Immunoglobulin (IgG) with Relation to Levels of Cortisol and Thyroxin

Authors: Amina M. Bishr, Ahmed B. Magdub, Abdul-Baset R. Abuzweda

Abstract:

A major challenge in camel productivity is the high mortality rate of camel calves at an early stage due to the lack of colostrum. This study investigates the time required for calves to obtain the optimum amount of immunoglobulin (IgG). Eleven pregnant female camels (Camelus dromedarius), varying in age and gestation, were selected randomly. After delivery, 7 calves were obtained and used for this investigation. Colostrum samples were collected from the mothers immediately after parturition. Blood samples were obtained from the calves as follows: day 0 (before suckling); 24, 48, 72, 96, 120, and 144 hours; and the 2nd, 3rd, and 4th weeks post suckling. Blood serum and colostrum whey were separated and used to determine the IgG concentration, total protein, and concentrations of cortisol and thyroxin. The results showed high levels of IgG in camel colostrum (328.8 ± 4.5 mg/ml). The IgG concentration in the serum of calves was highest within the first 24 h after suckling (140.75 mg/ml) and then declined gradually, reaching a lower level at 144 h (41.97 mg/ml). The average turnover time (t1/2) of serum IgG across all cases was 3.22 days, ranging from 2.56 days for calves with IgG values above the average to 7.7 days for those with values below the average. In spite of very high levels of thyroxin in the sera of newborns, the results showed no correlation of cortisol or thyroxin with IgG levels.

Keywords: Camel, cortisol, IgG, thyroxin, turn-over rate.

4558 A Simulation Tool for Projection Mapping Based on Mapbox and Unity

Authors: Noriko Hanakawa, Masaki Obana

Abstract:

A simulation tool is proposed for large-scale projection mapping events. The tool has four main functions based on Mapbox and Unity utilities. The first function builds three-dimensional models of real cities using Mapbox. The second function projects movies onto buildings in these city models using Unity. The third is a movie-sending function from a PC to a virtual projector. The fourth function maps movies so that they fit the target buildings. The simulation tool was applied to a real projection mapping event held in 2019. The event was completed, but it faced a severe problem in the movie projection onto the target building: extra tents were set up in front of the building and became obstacles to the projection. The simulation tool developed herein could reconstruct the problems of the event. Therefore, if the simulation tool had been developed before the 2019 projection mapping event, the problem of the tents as obstacles could have been avoided using the tool. Moreover, we confirmed that the simulation tool is useful for planning future projection mapping events so as to avoid various obstacles, such as utility poles, planted trees, and monument towers.

Keywords: Avoiding obstacles, projection mapping, projector position, real 3D map.

4557 Self-Perceived Employability of Students of International Relations of University of Warmia and Mazury in Poland

Authors: Marzena Świgoń

Abstract:

Nowadays, graduates should be prepared for serious challenges in the internal and external labor markets. The notion that a degree is a "passport to employment" has been relegated to the past. In the last few years, increasing unemployment of highly educated young people has been observed in EU countries, including Poland. Empirical studies were conducted among Polish students in the scope of the so-called self-perceived employability review. In this study, a special scale was used which consisted of 19 statements regarding five components: the student's perception of the university; the field of study; self-belief; the state of the external labor market; and personal knowledge management. The respondent group consisted of final-year master's students of International Relations at the University of Warmia and Mazury in Olsztyn, Poland. The findings of the empirical studies were compiled using statistical methods: descriptive statistics and inferential statistics. In general, in light of the conducted studies, the self-perceived employability of the Polish students was not high. Limitations of the studies are discussed, as well as the implications for future research on students' employability.

Keywords: Self-perceived employability, students of international relations, university education.

4556 The Relationship between Conceptual Organizational Culture and the Level of Tolerance in Employees

Authors: M. Sadoughi, R. Ehsani

Abstract:

The aim of the present study is to examine the relationship between conceptual organizational culture and the level of tolerance in employees of the Islamic Azad University of Shahre Ghods. This research is a correlational, analytic-descriptive study. The sample included 144 individuals. The 24-item standard organizational culture questionnaire by Cameron and Quinn was used in this study. This questionnaire has six criteria, and each criterion includes four items, each indicating one cultural dimension. The reliability coefficient of this questionnaire, computed as Cronbach's alpha, was 0.91. Also, the 25-item tolerance questionnaire by Connor and Davidson was used. This questionnaire takes the form of a five-point Likert scale; it has seven criteria and is designed to measure the power of coping with pressure and threat. It has the needed content reliability, and its reliability coefficient, computed as Cronbach's alpha, was 0.87. Data were analyzed using the Pearson correlation coefficient and multivariable regression. The results showed that, among the various dimensions of organizational culture, there is a positive significant relationship between three dimensions (family, adhocracy, bureaucracy) and tolerance; there is a negative significant relationship between the market dimension and tolerance; and the components of organizational culture have the power to predict and explain tolerance. In this explanation, the family component is the most effective and the best predictor of tolerance.

Keywords: Adhocracy, bureaucracy, organizational culture, tolerance.

4555 Experimental Investigations on the Mechanism of Stratified Liquid Mixing in a Cylinder

Authors: Chai Mingming, Li Lei, Lu Xiaoxia

Abstract:

In this paper, the mechanism of stratified liquids' mixing in a cylinder is investigated, focusing on the effects of Rayleigh-Taylor Instability (RTI) and rotation of the cylinder on liquid interface mixing. For miscible liquids, the Planar Laser Induced Fluorescence (PLIF) technique is applied to record the concentration field of one liquid, and the Intensity of Segregation (IOS) is used to describe the mixing status. For immiscible liquids, a high-speed camera is adopted to record the development of the interface. The RTI experiment indicates that RTI plays a great role in the mixing process: large-scale mixing is triggered, and subsequently the span of the stripes decreases, showing that mesoscale mixing is coming into being. The rotation experiments show that the spin-down process plays a great role in liquid mixing, during which the upper liquid falls down rapidly along the wall and crashes into the lower liquid; during this process, many interface instabilities are excited, and the liquids mix rapidly. It can be concluded that, whatever means are adopted to speed up liquid mixing, the fundamental mechanism is the interface instabilities, which increase the area of the interface between the liquids and increase the relative velocity of the two liquids.
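
The IOS measure normalizes the concentration variance (Danckwerts' definition); a sketch computing it from a PLIF-like field (synthetic data):

```python
# Sketch of Danckwerts' intensity of segregation (IOS) computed from a
# PLIF-like normalized concentration field: IOS = var(c) / (cbar*(1-cbar)).
# IOS = 1 means fully segregated; IOS = 0 means perfectly mixed.
# The concentration field below is synthetic placeholder data.
import numpy as np

rng = np.random.default_rng(5)
c = np.clip(rng.normal(0.5, 0.15, (256, 256)), 0.0, 1.0)  # synthetic field

cbar = c.mean()
ios = c.var() / (cbar * (1.0 - cbar))
print(f"intensity of segregation: {ios:.3f}")
```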

Keywords: Interface instability, liquid mixing, Rayleigh-Taylor Instability, spin-down process, spin-up process.

4554 Artificial Intelligent Approach for Machining Titanium Alloy in a Nonconventional Process

Authors: Md. Ashikur Rahman Khan, M. M. Rahman, K. Kadirgama

Abstract:

Artificial neural networks (ANNs) are used in many research fields and professions and are developed through the cooperation of scientists in fields such as computer engineering, electronics, structural engineering, biology, and many other branches of science. Many models have been built correlating the parameters and outputs of electrical discharge machining (EDM) for different types of materials. Until now, however, no model of EDM performance characteristics has been developed for the Ti-5Al-2.5Sn alloy. Therefore, the present work attempts to generate a model of material removal rate (MRR) for Ti-5Al-2.5Sn by means of an artificial neural network. The experimentation is performed according to a design of experiment (DOE) based on response surface methodology (RSM). The DOE considers four parameters, peak current, pulse-on time, pulse-off time, and servo voltage, and one output, the MRR. The Ti-5Al-2.5Sn alloy is machined with a positive-polarity copper electrode. Finally, the developed model is tested with a confirmation test, which yields an error within the acceptable limit. To investigate the effect of the parameters on performance, a sensitivity analysis is also carried out, revealing that peak current has the greatest effect on EDM performance.
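
A minimal MLP-regression sketch in the spirit of such an ANN model is shown below; the architecture and the synthetic training data are assumptions, not the paper's experimental design:

```python
# Minimal MLP-regression sketch mapping the four EDM parameters (peak
# current, pulse-on time, pulse-off time, servo voltage) to MRR. The
# network architecture and synthetic training data are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
# columns: peak current (A), pulse-on (us), pulse-off (us), servo voltage (V)
X = rng.uniform([5, 10, 10, 40], [30, 400, 300, 120], size=(60, 4))
# made-up response surface plus noise, standing in for measured MRR
mrr = 0.02 * X[:, 0] * np.log(X[:, 1]) - 0.001 * X[:, 2] + rng.normal(0, 0.05, 60)

scaler = StandardScaler()
Xs = scaler.fit_transform(X)

model = MLPRegressor(hidden_layer_sizes=(10, 10), max_iter=5000, random_state=0)
model.fit(Xs, mrr)

test_point = scaler.transform([[20, 200, 100, 80]])
print(f"predicted MRR: {model.predict(test_point)[0]:.3f} mm^3/min")
```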

Keywords: Ti-5Al-2.5Sn, material removal rate, copper tungsten, positive polarity, artificial neural network, multi-layer perceptron.

4553 A Multiple-Objective Environmental Rationalization and Optimization for Material Substitution in the Production of Stone-Washed Jeans-Garments

Authors: Nabil A. Ibrahim, Nabil M. Abdel Moneim, Mohamed A. Ramadan, Marwa M. Hosni

Abstract:

As the textile industry is the second largest industry in Egypt, and as small and medium-sized enterprises (SMEs) make up a great portion of this industry, it is essential to apply the concept of Cleaner Production to reduce pollution. To achieve this goal, a case study concerned with eco-friendly stone-washing of jeans garments was investigated. A raw-material-substitution option was adopted whereby the toxic potassium permanganate and sodium sulfide were replaced by the environmentally compatible hydrogen peroxide and glucose, respectively, and the concentrations of both replacement chemicals together with the operating time were optimized. In addition, a process-rationalization option involving four additional processes was investigated. By means of criteria such as product quality, effluent analysis, mass and heat balance, and cost analysis, with the aid of a statistical model, a process optimization treatment revealed that the superior process optima were 50%, 0.15%, and 50 min for H2O2 concentration, glucose concentration, and time, respectively. With these values, the superior process ought to reduce the annual cost by about EGP 10⁵ relative to the currently used conventional method.

Keywords: Cleaner Production, Eco-friendly of jeans garments, Stone washing, Textile Industry, Textile Wet Processing.

4552 Research of Strong-Column-Weak-Beam Criteria of Reinforced Concrete Frames Subjected to Biaxial Seismic Excitation

Authors: Chong Zhang, Mu-Xuan Tao

Abstract:

In several earthquakes, numerous reinforced concrete (RC) frames subjected to seismic excitation demonstrated a collapse pattern characterized by column hinges, even though they were designed according to the Strong-Column-Weak-Beam (S-C-W-B) criteria. The effect of biaxial seismic excitation on this disparity between design and actual performance is carefully investigated in this article. First, a modified load contour method is proposed to derive a closed-form equation of biaxial bending moment strength, which is verified by numerical and experimental tests. Afterwards, a group of time history analyses of a simple frame, modeled with fiber beam-column elements and subjected to biaxial seismic excitation, is conducted to verify that the current S-C-W-B criteria are not adequate to prevent the occurrence of column hinges. A biaxial over-strength factor is developed based on the proposed equation, and the reinforcement of the columns is amplified accordingly to prevent the occurrence of column hinges under biaxial excitation, which is proved to be effective by another group of time history analyses.
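
For orientation, the classical load contour idea expresses biaxial capacity as an interaction surface of the form (Mx/Mux)^α + (My/Muy)^α ≤ 1; the sketch below uses this classical form with made-up values, whereas the paper derives a modified contour equation:

```python
# Sketch of a classical load-contour biaxial capacity check,
#   (Mx/Mux)^alpha + (My/Muy)^alpha <= 1,
# shown only to illustrate the idea. The exponent alpha and the uniaxial
# strengths below are made-up values; the paper's modified equation differs.

def biaxial_ok(mx, my, mux, muy, alpha=1.5):
    """True if the biaxial moment demand pair lies inside the contour."""
    return (mx / mux) ** alpha + (my / muy) ** alpha <= 1.0

# Example: column with assumed uniaxial strengths 420 and 360 kN*m
print(biaxial_ok(mx=250, my=180, mux=420, muy=360))  # -> True (inside contour)
```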

Keywords: Biaxial bending moment strength, biaxial seismic excitation, fiber beam-column model, load contour method, strong-column-weak-beam.

4551 CFD Parametric Study of Mixers Performance

Authors: Mikhail Strongin

Abstract:

The mixing of two or more liquids is very common in many industrial applications, from automotive to food processing. CFD simulations of these processes require comparison with test results, which in many cases is practically impossible, so the comparison must rely on scalable tests. Parameterization of the problem is then sufficient to capture the performance of the mixer.

However, the influence of geometrical and thermo-physical parameters on the mixing is not well understood.

In this work, the influence of geometrical and thermal parameters was studied. It was shown that for fully developed turbulent flows (Re > 10⁴), Pe_t ≈ const and the concentration of the secondary fluid ~ F(r/l).

In other words, the mixing is practically independent of the total flow rate and scale for a given geometry and ratio of the mixing flow rates. This statement was proved in the present work for different geometries and mixtures, such as EGR and water-urea mixtures.

The present study has shown that the best way to improve mixing is to establish a geometry with the lowest possible Pe_t number by intensifying the turbulence in the domain. This is achievable by using a step geometry, impinging the EGR flow on a wall or using EGR jets with a strong change in the flow direction, using a swirler-like flow in the domain, or a combination of all these factors. All of these results are applicable to any mixtures of incompressible fluids.

Keywords: CFD, mixing, fluids, parameterization, scalability.
