Search results for: commuter line vending machine
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5281

1591 Grey Wolf Optimization Technique for Predictive Analysis of Products in E-Commerce: An Adaptive Approach

Authors: Shital Suresh Borse, Vijayalaxmi Kadroli

Abstract:

E-commerce companies now apply the latest AI and machine learning techniques to improve their performance and prediction accuracy, which helps them earn substantial profits in the online market. Ant Colony Optimization, Genetic Algorithms, Particle Swarm Optimization, Neural Networks, and GWO have helped many e-commerce businesses upgrade their predictive performance. These algorithms provide near-optimum results in various applications, such as stock price prediction, prediction of drug-target interactions, and prediction of user ratings of similar products on e-commerce sites. In this study, customer reviews play an important role in the prediction analysis: people show much more interest in buying services and products recommended by other customers, which ultimately increases net profit. In this work, a convolutional neural network (CNN) is proposed to optimize the prediction accuracy of an e-commerce website; the CNN is used to optimize the hyperparameters of the GWO algorithm using an appropriate coding scheme. Model results are verified by comparing them with PSO results, whose hyperparameters have likewise been optimized by the CNN, on Amazon's customer review dataset. The experimental outcome shows that the proposed system using the GWO algorithm achieves superior performance in terms of accuracy, precision, recall, and related measures in prediction analysis compared to existing systems.
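Although the abstract pairs GWO with a CNN for hyperparameter tuning, the optimizer at its core can be illustrated compactly. Below is a minimal sketch of grey wolf optimization minimizing a generic objective; the sphere objective, bounds, population size, and iteration count are illustrative placeholders, not values from the paper.

```python
# Minimal sketch of Grey Wolf Optimization (GWO) for a generic minimization
# problem. The sphere objective, bounds, and population settings are
# illustrative placeholders, not taken from the paper.
import numpy as np

def gwo(objective, bounds, n_wolves=20, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    low, high = bounds
    dim = low.shape[0]
    wolves = rng.uniform(low, high, size=(n_wolves, dim))

    for t in range(n_iter):
        # Rank the pack: alpha, beta, delta are the three best solutions so far.
        fitness = np.apply_along_axis(objective, 1, wolves)
        order = np.argsort(fitness)
        alpha, beta, delta = wolves[order[:3]]

        a = 2.0 * (1.0 - t / n_iter)  # exploration coefficient decays 2 -> 0
        for i in range(n_wolves):
            candidates = []
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2.0 * a * r1 - a
                C = 2.0 * r2
                D = np.abs(C * leader - wolves[i])
                candidates.append(leader - A * D)
            # New position is the average of the moves toward the three leaders.
            wolves[i] = np.clip(np.mean(candidates, axis=0), low, high)

    fitness = np.apply_along_axis(objective, 1, wolves)
    best = wolves[np.argmin(fitness)]
    return best, objective(best)

if __name__ == "__main__":
    sphere = lambda x: float(np.sum(x ** 2))
    bounds = (np.full(5, -10.0), np.full(5, 10.0))
    best, value = gwo(sphere, bounds)
    print("best solution:", best, "objective:", value)
```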

Keywords: prediction analysis, e-commerce, machine learning, grey wolf optimization, particle swarm optimization, CNN

Procedia PDF Downloads 96
1590 Modeling Breathable Particulate Matter Concentrations over Mexico City Retrieved from Landsat 8 Satellite Imagery

Authors: Rodrigo T. Sepulveda-Hirose, Ana B. Carrera-Aguilar, Magnolia G. Martinez-Rivera, Pablo de J. Angeles-Salto, Carlos Herrera-Ventosa

Abstract:

In order to diminish health risks, it is of major importance to monitor air quality. However, this process entails high costs in physical and human resources. In this context, this research is carried out with the main objective of developing a predictive model for concentrations of inhalable particles (PM10-2.5) using remote sensing. To develop the model, satellite images, mainly from Landsat 8, of Mexico City's Metropolitan Area were used. Using historical PM10 and PM2.5 measurements of the RAMA (Automatic Environmental Monitoring Network of Mexico City) and processing the available satellite images, a preliminary model was generated in which it was possible to identify critical opportunity areas that will allow the generation of a robust model. Through the preliminary model applied to the scenes of Mexico City, three areas of particular interest were identified due to their presumed high concentration of PM: zones with high plant density, bodies of water, and bare soil without constructions or vegetation. To date, work continues on improving the preliminary model that has been proposed. In addition, a brief analysis was made of six models presented in articles developed in different parts of the world, in order to identify the optimal bands for the generation of a model suited to Mexico City. It was found that infrared bands have helped to model PM in other cities, but the effectiveness that these bands could provide under the geographic and climatic conditions of Mexico City is still being evaluated.
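As a rough illustration of the type of empirical model described, the sketch below regresses ground-station PM10 readings against Landsat 8 surface reflectances; the synthetic match-up table, column names, and choice of bands are hypothetical placeholders rather than the authors' actual data pipeline.

```python
# Sketch of a band-reflectance regression for PM10. The data frame below is a
# synthetic stand-in: in a real workflow each row would hold Landsat 8 surface
# reflectances extracted at a RAMA station together with the measured PM10.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
bands = ["B2", "B3", "B4", "B5", "B6", "B7"]          # visible, NIR, SWIR bands
df = pd.DataFrame(rng.uniform(0.0, 0.4, (200, 6)), columns=bands)
# Toy relation used only to make the example runnable.
df["PM10"] = 80 + 300 * df["B4"] - 200 * df["B5"] + rng.normal(0, 10, 200)

X_train, X_test, y_train, y_test = train_test_split(
    df[bands], df["PM10"], test_size=0.3, random_state=0
)
model = LinearRegression().fit(X_train, y_train)
print("R^2 on held-out match-ups:",
      round(r2_score(y_test, model.predict(X_test)), 3))
```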

Keywords: air quality, modeling pollution, particulate matter, remote sensing

Procedia PDF Downloads 139
1589 Consumers of Counterfeit Goods and the Role of Context: A Behavioral Perspective of the Process

Authors: Carla S. C. da Silva, Cristiano Coelho, Junio Souza

Abstract:

The universe of luxury has charmed and seduced consumers for centuries. Since the Middle Ages, its symbols have been displayed as objects of power and status, arousing desire and provoking social covetousness. Against this background, the counterfeit market grows every day, offering a group of consumers the opportunity to enter a distinct social position, where the beautiful and shiny brand logo signals an inclusion passport to everything this group wants. This work sought to investigate how context and the social environment can influence consumers to choose products of symbolic brands even when they are not legitimate, and how this behavior is accepted in society. The study proposed: a) to evaluate measures of knowledge and quality for a set of brands presented under two manipulated contexts (luxury vs. academic) among buyers and non-buyers of forgeries, both for original products and for their counterfeit counterparts; b) to measure the effect of layout on the verbal responses of buyers and non-buyers regarding their assessment of the behavior of buyers of counterfeits. The present study, in addition to measuring the level of knowledge and quality attributed to each brand investigated, also verified consumers' willingness to pay for a counterfeit good of their preferred brands compared to the original products. These data can serve as a parameter for luxury brand managers in their counterfeit-coping strategies. The investigation of purchase frequency showed that those who buy counterfeit goods do so regularly, and there is a propensity to repeat the purchase. It was noted that a significant majority of buyers of counterfeits are prone to invest in illegality to meet their expectations of conforming to the standards of their interest groups.

Keywords: luxury, consumers, counterfeits, context, behaviorism

Procedia PDF Downloads 264
1588 Exploring the Process of Cultivating Tolerance: The Case of a Pakistani University

Authors: Uzma Rashid, Mommnah Asad

Abstract:

As more and more people fall victim to the intolerance that has become a global plague, academicians are faced with the herculean task of sowing the seeds for more tolerant individuals. Being the multilayered task that it is, promoting an acceptance of diversity and pushing back against hate requires efforts on multiple levels. Not only does the curriculum need to be in line with such goals, but teachers also need to be trained to cater to the sensitivities surrounding conversations about tolerance and diversity. In addition, institutional support needs to be in place to provide conducive conditions for a diversity-driven learning process to take place. In reality, teachers have to struggle to put forward ideas about diversity and tolerance which do not sound particularly risky to share but which, given the current socio-political and religious milieu, can put the teacher in a difficult position and make the task exponentially challenging. This paper is based on an auto-ethnographic account of teaching undergraduate and graduate courses at a private university in Pakistan. These courses were aimed at teaching tolerance to adult learners through classes focused on key notions pertaining to religion, culture, gender, and society. The authors' classroom experiences with the students in these courses indicate a marked heightening of religious sensitivities that can potentially threaten a teacher's life chances and become a hindrance to deep, meaningful conversations, thus lending a superficiality to the whole endeavor. The paper will discuss in detail the challenges that the teacher dealt with in the process, how those were addressed, and locate them in the larger picture of how tolerance can be materialized in current times in universities in Pakistan and in similar contexts elsewhere.

Keywords: tolerance, diversity, gender, Pakistani Universities

Procedia PDF Downloads 142
1587 Synergistic Effect of Doxorubicin-Loaded Silver Nanoparticles – Polymeric Conjugates on Breast Cancer Cells

Authors: Nancy M. El-Baz, Laila Ziko, Rania Siam, Wael Mamdouh

Abstract:

Cancer is one of the most devastating diseases, with more than 10 million new cases annually worldwide. Despite the effectiveness of chemotherapeutic agents, their systemic toxicity and non-selective anticancer action represent the main obstacles to curing cancer. Owing to the enhanced permeability and retention (EPR) effect, nanoparticles (NPs) have been used as drug nanocarriers providing targeted cancer drug delivery systems. In addition, several inorganic nanoparticles, such as silver nanoparticles (AgNPs), have demonstrated potent anticancer activity against different cancers. The present study aimed at formulating a core-shell inorganic NPs-based combinatorial therapy that combines the anticancer activity of AgNPs with doxorubicin (DOX), and at evaluating its cytotoxicity on MCF-7 breast cancer cells. These inorganic NPs-based combinatorial therapies were designed to (i) target and kill cancer cells with high selectivity, (ii) have an improved efficacy/toxicity balance, and (iii) have an enhanced therapeutic index compared to the original non-modified DOX at a much lower dosage. The in-vitro cytotoxicity studies demonstrated that the NPs-based combinatorial therapy achieved the same efficacy as non-modified DOX on the breast cancer cell line, but at a 96% reduced dose. Such a reduction in DOX dose reveals that the combination of DOX and NPs possesses synergistic anticancer activity against breast cancer. We believe that this is the first report of a synergistic anticancer effect at a very low dose of DOX against MCF-7 cells. Future studies on NPs-based combinatorial therapy may aid in formulating novel and significantly more effective cancer therapeutics.

Keywords: nanoparticles-based combinatorial therapy, silver nanoparticles, doxorubicin, breast cancer

Procedia PDF Downloads 418
1586 An Evaluation of Neural Network Efficacies for Image Recognition on Edge-AI Computer Vision Platform

Authors: Jie Zhao, Meng Su

Abstract:

Image recognition, one of the most critical technologies in computer vision, helps machines such as robots understand a scene and, if deployed appropriately, can trigger a revolution in remote sensing and industrial automation. With the development of AI technologies, many sophisticated neural networks have been developed for image recognition. However, the computer vision platforms, i.e., the hardware that supports neural networks for image recognition, are as crucial as the neural network technologies themselves and need to be addressed more thoroughly as research subjects, since different platforms largely determine how well a given network performs. In this paper, three different computer vision platforms, the Jetson Nano (with 4 GB), a standalone laptop (with an RTX 3000-series GPU, using CUDA), and Google Colab (web-based, using a GPU), are explored, and four prominent neural network architectures (AlexNet, VGG-16/19, GoogLeNet, and ResNet-18/34/50) are investigated. Performance is evaluated for each pairing of computer vision platform and neural network in terms of recognition accuracy and time efficiency. In a case study using public ImageNet data, our findings provide a nuanced perspective on optimizing image recognition tasks across Edge-AI platforms, offering guidance on selecting appropriate neural network structures to maximize performance under hardware constraints.
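A minimal sketch of the kind of cross-platform benchmark described above: timing forward passes of several of the listed architectures on whatever device (CUDA GPU or CPU) a platform exposes. The batch size, input resolution, use of randomly initialized weights, and iteration count are assumptions for illustration; accuracy evaluation on ImageNet would additionally require pretrained weights and the validation set.

```python
# Latency benchmark sketch: same script runs on Jetson Nano, a CUDA laptop,
# or Google Colab, and reports mean forward-pass time per batch.
import time
import torch
import torchvision.models as models

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
architectures = {
    "alexnet": models.alexnet,
    "vgg16": models.vgg16,
    "googlenet": models.googlenet,
    "resnet50": models.resnet50,
}

batch = torch.randn(8, 3, 224, 224, device=device)  # dummy ImageNet-sized batch
for name, ctor in architectures.items():
    net = ctor(weights=None).to(device).eval()      # random weights: latency only
    with torch.no_grad():
        net(batch)                                  # warm-up pass
        if device.type == "cuda":
            torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(20):
            net(batch)
        if device.type == "cuda":
            torch.cuda.synchronize()
        elapsed = (time.perf_counter() - start) / 20
    print(f"{name:10s} mean latency per batch: {elapsed * 1000:.1f} ms")
```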

Keywords: AlexNet, VGG, GoogLeNet, ResNet, Jetson Nano, CUDA, COCO-NET, CIFAR-10, ImageNet Large Scale Visual Recognition Challenge (ILSVRC), Google Colab

Procedia PDF Downloads 61
1585 Intensification of Process Kinetics for Conversion of Organic Volatiles into Syngas Using Non-Thermal Plasma

Authors: Palash Kumar Mollick, Leire Olazar, Laura Santamaria, Pablo Comendador, Manomita Mollick, Gartzen Lopez, Martin Olazar

Abstract:

The entire world remains skeptical of the silver-lining technology of converting plastic waste into valuable synthesis gas. At this juncture, alongside the adequately studied conventional catalytic process for steam reforming, a non-thermal plasma is being introduced. Organic volatiles are produced in the first step by pyrolysing the plastic materials. The resultant lightweight olefins and carbon monoxide are the major components that undergo a steam reforming process to yield syngas. A non-thermal plasma consists of ionized gases and free electrons with an electron temperature as high as 10³ K. Organic volatiles are, in general, rather unreactive and thus demand huge bond-breaking energy. A conventional catalyst is incapable of providing the required activation energy, leading to poor thermodynamic equilibrium, whereas a non-thermal plasma can actively collide with reactants to produce a rich mix of reactive species, including vibrationally or electronically excited molecules, radicals, atoms, and ions. In addition, a non-thermal plasma provides non-equilibrium conditions, producing an electric discharge only in certain degrees of freedom without affecting the intrinsic chemical conditions of the participating reactants and products. In this work, we report thermodynamic and kinetic aspects of the conversion of organic volatiles into syngas using a non-thermal plasma. Detailed characteristics of the plasma and its effect on the overall yield of the process will be presented.

Keywords: non thermal plasma, plasma catalysis, steam reforming, syngas, plastic waste, green energy

Procedia PDF Downloads 44
1584 Machine Learning and Deep Learning Approach for People Recognition and Tracking in Crowd for Safety Monitoring

Authors: A. Degale Desta, Cheng Jian

Abstract:

Deep learning applications in computer vision are advancing rapidly, making it possible to monitor the public and quickly identify potentially anomalous behaviour in crowd scenes. The purpose of the current work is therefore to improve the safety of people in crowd events affected by panic behaviour by introducing the idea of Aggregation of Ensembles (AOE), which makes use of pre-trained ConvNets and a pool of classifiers to find anomalies in video data of packed scenes. Because algorithms and architectures such as K-means, KNN, CNN, SVD, Faster R-CNN, and YOLOv5 learn different levels of semantic representation from crowd videos, the proposed approach leverages an ensemble of fine-tuned convolutional neural networks (CNNs), allowing enriched feature sets to be extracted. In addition to the above algorithms, a long short-term memory neural network is used to forecast future feature values, together with a handcrafted feature that takes into consideration the peculiarities of the crowd in order to understand human behavior. Experiments are run on well-known datasets of panic situations to assess the effectiveness and precision of the suggested method. The results reveal that, compared to state-of-the-art methodologies, the system produces better and more promising results in terms of accuracy and processing speed.
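The ensemble idea can be sketched roughly as averaging the class probabilities of several fine-tuned backbones to score frames as normal or anomalous; the choice of backbones, the two-class head, and the random stand-in frames below are assumptions for illustration, not the AOE configuration used in the paper.

```python
# Sketch: average the softmax outputs of several CNN backbones to obtain an
# anomaly score per video frame.
import torch
import torch.nn as nn
import torchvision.models as models

def two_class_backbone(ctor):
    net = ctor(weights=None)
    # Replace the final layer with a 2-way head (normal vs. panic/anomaly).
    if hasattr(net, "fc"):
        net.fc = nn.Linear(net.fc.in_features, 2)
    else:
        net.classifier[-1] = nn.Linear(net.classifier[-1].in_features, 2)
    return net.eval()

ensemble = [two_class_backbone(c) for c in (models.resnet18, models.vgg11)]

frames = torch.randn(4, 3, 224, 224)  # placeholder batch of video frames
with torch.no_grad():
    probs = torch.stack([torch.softmax(m(frames), dim=1) for m in ensemble])
    anomaly_score = probs.mean(dim=0)[:, 1]  # averaged probability of anomaly
print(anomaly_score)
```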

Keywords: action recognition, computer vision, crowd detecting and tracking, deep learning

Procedia PDF Downloads 141
1583 Defining a Reference Architecture for Predictive Maintenance Systems: A Case Study Using the Microsoft Azure IoT-Cloud Components

Authors: Walter Bernhofer, Peter Haber, Tobias Mayer, Manfred Mayr, Markus Ziegler

Abstract:

Current preventive maintenance measures are cost-intensive and inefficient. With the sensor data available from state-of-the-art Internet of Things devices, new possibilities for automated data processing emerge. Current advances in data science and machine learning enable new, so-called predictive maintenance technologies, which empower data scientists to forecast possible system failures. The goal of this approach is to cut expenses in preventive maintenance by automating the detection of possible failures and to improve the efficiency and quality of maintenance measures. Additionally, this approach allows the sensor data monitoring to be centralized. This paper describes the approach of three students to define a reference architecture for a predictive maintenance solution in the Internet of Things domain, with a connected smartphone app for service technicians. The reference architecture is validated by a case study implemented with current Microsoft Azure cloud technologies. The results of the case study show that the reference architecture is valid and can be used to build a system for predictive maintenance with the cloud components of Microsoft Azure. The concepts used are technology-platform agnostic and can be reused on many different cloud platforms. The reference architecture is valid and can be applied to many use cases, such as gas station maintenance, elevator maintenance, and many more.
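A platform-agnostic sketch of the analytics core such a predictive maintenance system would run: training a classifier on historical sensor telemetry to flag imminent failures. The feature names, labelling rule, and synthetic data below are assumptions for illustration; in the described architecture the telemetry would arrive through the Azure IoT cloud components rather than be generated locally.

```python
# Failure-prediction sketch on synthetic machine telemetry.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
telemetry = pd.DataFrame({
    "vibration_rms": rng.normal(1.0, 0.3, n),
    "temperature": rng.normal(60.0, 8.0, n),
    "runtime_hours": rng.uniform(0, 5000, n),
})
# Toy labelling rule: hot, strongly vibrating machines tend to fail soon.
risk = (telemetry["vibration_rms"] > 1.3) & (telemetry["temperature"] > 65)
telemetry["failure_within_24h"] = (risk & (rng.random(n) < 0.8)).astype(int)

features = ["vibration_rms", "temperature", "runtime_hours"]
X_train, X_test, y_train, y_test = train_test_split(
    telemetry[features], telemetry["failure_within_24h"],
    test_size=0.3, random_state=42, stratify=telemetry["failure_within_24h"]
)
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```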

Keywords: case study, internet of things, predictive maintenance, reference architecture

Procedia PDF Downloads 228
1582 Design of Microwave Building Block by Using Numerical Search Algorithm

Authors: Haifeng Zhou, Tsungyang Liow, Xiaoguang Tu, Eujin Lim, Chao Li, Junfeng Song, Xianshu Luo, Ying Huang, Lianxi Jia, Lianwee Luo, Qing Fang, Mingbin Yu, Guoqiang Lo

Abstract:

With the development of technology, countries have gradually allocated more and more frequency spectrum for civil and commercial usage, especially the high radio frequency bands that offer high information capacity. Field effects become more and more prominent in microwave components as the frequency increases, which invalidates transmission line theory and complicates the design of microwave components. Here, a modeling approach based on a numerical search algorithm is proposed to design various building blocks for microwave circuits while avoiding complicated impedance matching and equivalent electrical circuit approximation. Concretely, a microwave component is discretized into a set of segments along the microwave propagation path. Each segment is initialized with random dimensions, which constructs a multi-dimensional parameter space. Numerical search algorithms (e.g., the pattern search algorithm) are then used to find the ideal geometrical parameters. The optimal parameter set is obtained by evaluating the fitness of the S-parameters over a number of iterations. We have adopted this approach in our current projects and designed many microwave components, including sharp bends, T-branches, Y-branches, and microstrip-to-stripline converters. For example, a stripline 90° bend was designed within a 2.54 mm x 2.54 mm space for dual-band operation (Ka band and Ku band) with < 0.18 dB insertion loss and < -55 dB reflection. We expect that this approach can enrich the toolkit of microwave designers.
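A minimal sketch of the compass/pattern search idea described above: polling each segment dimension and keeping moves that improve a fitness built from the S-parameters. The fitness function here is a stand-in; in practice it would call an electromagnetic solver and penalize reflection and insertion loss.

```python
# Pattern (compass) search over segment dimensions, with a placeholder fitness.
import numpy as np

def fitness(dims):
    # Placeholder: pretend the "ideal" segment widths are 0.3 mm each and
    # penalize deviation, mimicking reflection/insertion-loss penalties.
    return float(np.sum((dims - 0.3) ** 2))

def pattern_search(x0, step=0.1, tol=1e-4, max_iter=500):
    x, best = x0.copy(), fitness(x0)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):          # poll each coordinate direction
            for delta in (+step, -step):
                trial = x.copy()
                trial[i] += delta
                f = fitness(trial)
                if f < best:
                    x, best, improved = trial, f, True
        if not improved:                 # shrink the mesh when polling fails
            step *= 0.5
            if step < tol:
                break
    return x, best

dims0 = np.random.default_rng(1).uniform(0.1, 1.0, size=6)  # random init (mm)
print(pattern_search(dims0))
```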

Keywords: microwave component, microstrip and stripline, bend, power division, numerical search algorithm

Procedia PDF Downloads 362
1581 Effect of Kinesio Taping on Anaerobic Power and Maximum Oxygen Consumption after Eccentric Exercise

Authors: Disaphon Boobpachat, Nuttaset Manimmanakorn, Apiwan Manimmanakorn, Worrawut Thuwakum, Michael J. Hamlin

Abstract:

Objectives: To evaluate the effect of kinesio tape, compared with placebo tape and static stretching, on the recovery of anaerobic power and maximal oxygen uptake (Vo₂max) after intensive exercise. Methods: Thirty-nine untrained healthy volunteers were randomized to three intervention groups: elastic tape, placebo tape, and stretching. The participants performed intensive exercise of the dominant quadriceps using an isokinetic dynamometer. The recovery process was evaluated by creatine kinase (CK), pressure pain threshold (PPT), muscle soreness scale (MSS), maximum voluntary contraction (MVC), jump height, anaerobic power, and Vo₂max at baseline, immediately post-exercise, and on days 1, 2, 3, and 7 post-exercise. Results: The kinesio tape, placebo tape, and stretching groups showed significant changes in PPT, MVC, and jump height immediately post-exercise compared to baseline (p < 0.05), and changes in MSS, CK, anaerobic power, and Vo₂max at day 1 post-exercise compared to baseline (p < 0.05). There was no significant difference in those outcomes among the three groups. Additionally, all experimental interventions had little effect on anaerobic power and Vo₂max compared to baseline and compared among the three groups (p > 0.05). Conclusion: Kinesio tape and stretching did not improve the recovery of anaerobic power and Vo₂max after eccentric exercise compared to placebo tape.

Keywords: stretching, eccentric exercise, Wingate test, muscle soreness

Procedia PDF Downloads 116
1580 Developing a Cloud Intelligence-Based Energy Management Architecture Facilitated with Embedded Edge Analytics for Energy Conservation in Demand-Side Management

Authors: Yu-Hsiu Lin, Wen-Chun Lin, Yen-Chang Cheng, Chia-Ju Yeh, Yu-Chuan Chen, Tai-You Li

Abstract:

Demand-Side Management (DSM) has the potential to reduce the electricity costs and carbon emissions associated with electricity use in modern society. A home Energy Management System (EMS), commonly used by residential consumers in the downstream sector of a smart grid to monitor, control, and optimize the energy efficiency of domestic appliances, is a system of computer-aided functionalities serving as an energy audit for residential DSM. Implementing fault detection and classification for the domestic appliances monitored, controlled, and optimized is one of the most important steps toward preventive maintenance, such as preventative maintenance of residential air conditioning and heating, in residential/industrial DSM. In this study, a cloud intelligence-based green EMS built on an Internet of Things (IoT) technology stack for residential DSM is developed. In the EMS, Arduino MEGA Ethernet communication-based smart sockets, which include a Real Time Clock chip to keep track of the current time as timestamps via the Network Time Protocol, are designed and implemented to read load phenomena reflected in the sensed voltage and current signals. Also, a Network-Attached Storage providing data access to a heterogeneous group of IoT clients via Hypertext Transfer Protocol (HTTP) methods is configured to store the parsed sensor readings. Lastly, a desktop computer with a WAMP software bundle (the Microsoft® Windows operating system, Apache HTTP Server, MySQL relational database management system, and PHP programming language) serves as the data science analytics engine for a dynamic web app and REpresentational State Transfer-ful web service of the residential DSM with Artificial Intelligence (AI)/Computational Intelligence. An abstract computing machine, the Java Virtual Machine, enables the desktop computer to run Java programs, and a mash-up of Java, the R language, and Python is well suited and configured for the AI in this study. Having the ability to send real-time push notifications to IoT clients, the desktop computer uses Google-maintained Firebase Cloud Messaging to engage IoT clients across Android/iOS devices and provide a mobile notification service for residential/industrial DSM. In order to realize edge intelligence, whereby edge devices, avoiding network latency and the heavy connectivity requirements of Internet of Services applications, can support secure access to data stores and provide immediate analytical and real-time actionable insights at the edge of the network, we upgrade the designed and implemented smart sockets to embedded AI Arduino ones (called embedded AIduino). To realize edge analytics on the proposed embedded AIduino, an Arduino Ethernet shield (WizNet W5100) with a micro SD card connector is used; the SD library is included for reading parsed data from and writing parsed data to an SD card, and an Artificial Neural Network library for the Arduino MEGA, ArduinoANN, is imported and used for the locally embedded AI implementation. The embedded AIduino in this study can be developed further for applications in manufacturing-industry energy management and sustainable energy management, where, in sustainable energy management, rotating machinery diagnostics identifies energy loss from gross misalignment and unbalance of rotating machines in power plants, as an example.
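As a rough sketch (in NumPy rather than the Arduino C used with the ArduinoANN library on the embedded AIduino), the forward pass of the kind of tiny feedforward network such edge analytics could run locally might look as follows; the network size, the untrained random weights, and the voltage/current feature pair are illustrative assumptions only.

```python
# Forward pass of a tiny 2-8-1 sigmoid network classifying an appliance's
# on/off state from RMS voltage and current. Weights are random stand-ins;
# a deployed model would use trained coefficients.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # 2 inputs -> 8 hidden units
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # 8 hidden -> 1 output

def predict(features):
    """Forward pass with sigmoid activations."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    hidden = sigmoid(features @ W1 + b1)
    return sigmoid(hidden @ W2 + b2)

sample = np.array([[230.1, 4.7]])   # e.g. RMS voltage (V) and current (A)
print("appliance-on probability:", predict(sample).item())
```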

Keywords: demand-side management, edge intelligence, energy management system, fault detection and classification

Procedia PDF Downloads 236
1579 Electroencephalography (EEG) Analysis of Alcoholic and Control Subjects Using Multiscale Permutation Entropy

Authors: Lal Hussain, Wajid Aziz, Sajjad Ahmed Nadeem, Saeed Arif Shah, Abdul Majid

Abstract:

Brain electrical activity, as reflected in electroencephalography (EEG), has been analyzed and used for diagnosis with various techniques. Among them, measures of complexity, nonlinearity, disorder, and unpredictability play a vital role, owing to the nonlinear interconnections between functional and anatomical subsystems that emerge in the brain in the healthy state and during various diseases. Alcohol abuse has many social and economic consequences, such as memory weakness and impairments in decision-making and concentration. Alcoholism not only damages the brain but is also associated with emotional, behavioral, and cognitive impairments, damaging the white and gray brain matter. A recently developed signal analysis method, Multiscale Permutation Entropy (MPE), is applied to estimate the complexity of the long-range temporally correlated EEG time series of alcoholic and control subjects, acquired from the University of California Machine Learning Repository, and the results are compared with Multiscale Sample Entropy (MSE). Using MPE, coarse-grained series are first generated and the permutation entropy (PE) is computed for each coarse-grained time series for the electrodes O1, O2, C3, C4, F2, F3, F4, F7, F8, Fp1, Fp2, P3, P4, T7, and T8. The results computed for each electrode using MPE give more significant values and larger mean rank differences than MSE. Likewise, the ROC and the area under the ROC curve show greater separation for each electrode using MPE in comparison to MSE.
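A minimal sketch of multiscale permutation entropy as described above: coarse-grain the signal at each scale, then compute the normalized permutation entropy of the ordinal patterns. The embedding dimension, scale range, and the random stand-in signal are illustrative choices, not the settings used in the study.

```python
# Multiscale permutation entropy (MPE) sketch.
import numpy as np
from math import factorial

def coarse_grain(signal, scale):
    """Average non-overlapping windows of length `scale`."""
    n = len(signal) // scale
    return signal[:n * scale].reshape(n, scale).mean(axis=1)

def permutation_entropy(signal, m=3, delay=1):
    """Normalized permutation entropy of order m (0 = regular, 1 = random)."""
    n = len(signal) - (m - 1) * delay
    patterns = {}
    for i in range(n):
        window = signal[i:i + m * delay:delay]
        key = tuple(np.argsort(window))        # ordinal pattern of the window
        patterns[key] = patterns.get(key, 0) + 1
    p = np.array(list(patterns.values()), dtype=float) / n
    return float(-(p * np.log(p)).sum() / np.log(factorial(m)))

def multiscale_permutation_entropy(signal, m=3, scales=range(1, 6)):
    return [permutation_entropy(coarse_grain(signal, s), m) for s in scales]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    eeg_like = rng.standard_normal(4096)       # placeholder for one EEG channel
    print(multiscale_permutation_entropy(eeg_like))
```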

Keywords: electroencephalogram (EEG), multiscale permutation entropy (MPE), multiscale sample entropy (MSE), permutation entropy (PE), mann whitney test (MMT), receiver operator curve (ROC), complexity measure

Procedia PDF Downloads 473
1578 On Lie-Central Derivations and Almost Inner Lie-Derivations of Leibniz Algebras

Authors: Natalia Pacheco Rego

Abstract:

The Liezation functor is a map from the category of Leibniz algebras to the category of Lie algebras, which assigns to a Leibniz algebra the Lie algebra given by the quotient of the Leibniz algebra by the ideal spanned by its square elements. This functor is left adjoint to the inclusion functor that regards a Lie algebra as a Leibniz algebra. This environment fits into the framework of central extensions and commutators in semi-abelian categories with respect to a Birkhoff subcategory, where classical or absolute notions are relative to the abelianization functor. Classical properties of Leibniz algebras (properties relative to the abelianization functor) have been adapted to the relative setting (with respect to the Liezation functor); in general, absolute properties have corresponding relative ones, but not all absolute properties immediately hold in the relative case, so new requirements are needed. Following this line of research, an analysis of central derivations of Leibniz algebras relative to the Liezation functor, called Lie-derivations, was conducted, and a characterization of Lie-stem Leibniz algebras by their Lie-central derivations was obtained. In this paper, we present an overview of these results, and we analyze some new properties concerning Lie-central derivations and almost inner Lie-derivations. Namely, a Leibniz algebra is a vector space equipped with a bilinear bracket operation satisfying the Leibniz identity. We define the Lie-bracket by [x, y]_Lie = [x, y] + [y, x], for all x, y. The Lie-center of a Leibniz algebra is the two-sided ideal of elements that annihilate all the elements of the Leibniz algebra through the Lie-bracket. A Lie-derivation is a linear map which acts as a derivation with respect to the Lie-bracket. Obviously, usual derivations are Lie-derivations, but the converse is not true in general. A Lie-derivation is called a Lie-central derivation if its image is contained in the Lie-center. A Lie-derivation is called an almost inner Lie-derivation if the image of an element x is contained in the Lie-commutator of x and the Leibniz algebra. The main results we present in this talk concern the conditions under which Lie-central derivations and almost inner Lie-derivations coincide.
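For reference, the notions used in the abstract can be written out explicitly as follows; the symbol chosen for the Leibniz algebra, the left form of the Leibniz identity, and the displayed defining identity of a Lie-derivation are one natural reading of the text rather than verbatim statements from the paper.

```latex
% Definitions as read from the abstract; \mathfrak{q} denotes the Leibniz
% algebra. The left Leibniz identity and the Lie-derivation identity below
% are stated as assumptions about the convention used.
\begin{align*}
&\text{Leibniz identity:} && [x,[y,z]] = [[x,y],z] + [y,[x,z]],\\
&\text{Lie-bracket:} && [x,y]_{\mathrm{Lie}} = [x,y] + [y,x],\\
&\text{Lie-center:} && Z_{\mathrm{Lie}}(\mathfrak{q}) = \{\, z \in \mathfrak{q} \mid [z,x]_{\mathrm{Lie}} = 0 \ \text{for all } x \in \mathfrak{q} \,\},\\
&\text{Lie-derivation } d: && d([x,y]_{\mathrm{Lie}}) = [d(x),y]_{\mathrm{Lie}} + [x,d(y)]_{\mathrm{Lie}},\\
&\text{Lie-central derivation:} && d(\mathfrak{q}) \subseteq Z_{\mathrm{Lie}}(\mathfrak{q}),\\
&\text{almost inner Lie-derivation:} && d(x) \in [x,\mathfrak{q}]_{\mathrm{Lie}} \quad \text{for all } x \in \mathfrak{q}.
\end{align*}
```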

Keywords: almost inner Lie-derivation, Lie-center, Lie-central derivation, Lie-derivation

Procedia PDF Downloads 120
1577 Investigation of Time Pressure and Instinctive Reaction in Moral Dilemmas While Driving

Authors: Jacqueline Miller, Dongyuan Y. Wang, F. Dan Richard

Abstract:

Before trying to build an ethical machine that holds a higher ethical standard than humans, a better understanding of the human moral standards that could be used as a guide is crucial. How humans make decisions in dangerous driving situations such as moral dilemmas can contribute to developing acceptable ethical principles for autonomous vehicles (AVs). This study uses a driving simulator to investigate whether drivers make utilitarian choices (choices that maximize lives saved and minimize harm) in unavoidable automobile accidents (moral dilemmas) with time pressure manipulated. The study also investigates how impulsiveness influences drivers' behavior in moral dilemmas. Manipulating time pressure results in collisions that occur at varying time intervals (4 s, 5 s, 7 s) and helps investigate how time pressure may influence drivers' response behavior. Thirty-one undergraduates participated in this study, using a STISM driving simulator to respond to driving moral dilemmas. The results indicated that the percentage of utilitarian choices generally increased when participants were given more time to respond (from 4 s to 7 s). Additionally, participants in the vehicle scenarios preferred responding to the right over responding to the left. Impulsiveness did not influence utilitarian choices. However, as time pressure decreased, response time increased. The findings have potential implications and applications for the regulation of driver assistance technologies and AVs.

Keywords: time pressure, automobile moral dilemmas, impulsiveness, reaction time

Procedia PDF Downloads 38
1576 A Study of Some Selected Anthropometric and Physical Fitness Variables of Junior Free Style Wrestlers

Authors: Parwinder Singh, Ashok Kumar

Abstract:

Aim: The aim of the study was to investigate the relationship between selected anthropometric and physical fitness variables of junior freestyle wrestlers. Method: One hundred fifty (N = 150) male junior freestyle wrestlers were selected as subjects and categorized into five groups according to their weight categories, each group comprising 30 wrestlers. Body Mass Index was classified according to World Health Organization criteria. Body fat percentage was assessed using the Durnin and Womersley equation, and body weight was measured with a weighing machine. Cardiovascular endurance of the junior freestyle wrestlers was assessed by the Harvard Step Test. Results: A statistically significant positive correlation was found between body weight and Body Mass Index, skinfold thickness, and percentage body fat. The fitness index showed a significant negative relationship with body weight, percent body fat, and Body Mass Index. Conclusion: Freestyle wrestling is a weight-classified sport and physical fitness is the most important factor in freestyle wrestling; therefore, the correlation of the wrestlers' fitness index with body composition is important. The results of the present study also demonstrate the effect of age, body height, body weight, Body Mass Index, and percentage body fat on the aerobic fitness of junior freestyle wrestlers.

Keywords: aerobic fitness, anthropometry, fat percentage, free style wrestling, skinfold, strength

Procedia PDF Downloads 181
1575 Effect of Built in Polarization on Thermal Properties of InGaN/GaN Heterostructures

Authors: Bijay Kumar Sahoo

Abstract:

An important feature of InₓGa₁-ₓN/GaN heterostructures is the strong built-in polarization (BIP) electric field at the hetero-interface due to spontaneous (sp) and piezoelectric (pz) polarizations. The intensity of this electric field reaches several MV/cm. This field has a profound impact on optical, electrical, and thermal properties. In this work, the effect of the BIP field on the thermal conductivity of the InₓGa₁-ₓN/GaN heterostructure has been investigated theoretically. The interaction between the elastic strain and the built-in electric field induces an additional electric polarization. This additional polarization contributes to the elastic constant of the InₓGa₁-ₓN alloy, which in turn modifies the material parameters of InₓGa₁-ₓN. The BIP mechanism enhances the elastic constant, phonon velocity, and Debye temperature and their bowing constants in the InₓGa₁-ₓN alloy. These enhanced thermal parameters increase the phonon mean free path, which boosts the thermal conduction process. The thermal conductivity (k) of the InₓGa₁-ₓN alloy has been estimated for x = 0, 0.1, 0.3, and 0.9. The computation finds that, irrespective of In content, the room-temperature k of the InₓGa₁-ₓN/GaN heterostructure is enhanced by the BIP mechanism. Our analysis shows that at a certain temperature k with and without BIP cross over. Below this temperature, k with the BIP field is lower than k without BIP; above it, k is significantly enhanced by the BIP mechanism, so that k with the BIP field becomes higher than k without it. The crossover temperature is the primary pyroelectric transition temperature. The pyroelectric transition temperature of the InₓGa₁-ₓN alloy has been predicted for different x. This signature of pyroelectric behaviour suggests that thermal conductivity can reveal pyroelectricity in the InₓGa₁-ₓN alloy. The composition-dependent room-temperature k values for x = 0.1 and 0.3 are in line with prior experimental studies. The results can be used to minimize the self-heating effect in InₓGa₁-ₓN/GaN heterostructures.

Keywords: built-in polarization, phonon relaxation time, thermal properties of InₓGa₁-ₓN /GaN heterostructure, self-heating

Procedia PDF Downloads 386
1574 Performance Enhancement of Hybrid Racing Car by Design Optimization

Authors: Tarang Varmora, Krupa Shah, Karan Patel

Abstract:

Environmental pollution and the shortage of conventional fuel are the main concerns in the transportation sector. Most vehicles use an internal combustion engine (ICE) powered by gasoline fuels, which results in the emission of toxic gases. A hybrid electric vehicle (HEV), powered by an electric machine and an ICE, is capable of reducing both the emission of toxic gases and fuel consumption. However, to build an HEV, a motor and batteries must be accommodated in the vehicle along with the engine and fuel tank, so the overall weight of the vehicle increases. To improve fuel economy and acceleration, the weight of the HEV can be minimized. In this paper, a design methodology to reduce the weight of a hybrid racing car is proposed. To this end, the chassis design is optimized, and an attempt is made to obtain maximum strength with minimum material weight. The best of the three main configurations, series, parallel, and dual-mode (series-parallel), is chosen. Moreover, the most suitable type of motor, battery, braking system, steering system, and suspension system is identified. The racing car is designed and analyzed in simulation software. The safety of the vehicle is ensured by performing static and dynamic analyses on the chassis frame. From the results, it is observed that the weight of the racing car is reduced by 11% without compromising safety or cost. It is believed that the proposed design and specifications can be implemented practically for manufacturing a hybrid racing car.

Keywords: design optimization, hybrid racing car, simulation, vehicle, weight reduction

Procedia PDF Downloads 277
1573 Human Resource Practices and Organization Knowledge Capability: An Exploratory Study Applied to Private Organization

Authors: Mamoona Rasheed, Salman Iqbal, Muhammad Abdullah

Abstract:

Organizational capability, in terms of employees' knowledge, is valuable and difficult to reproduce, and helps to build sustainable competitive advantage. Knowledge capability is linked with the human resource (HR) practices of an organization. This paper investigates the relationship between HR practices, knowledge management, and organization capability. In an organization, employees play a key role in effective organizational performance by sharing their knowledge with management and co-workers, which contributes to organization capability. Pakistan, being a developing country, has distinct HR practices and culture. The business opportunities there give rise to a discussion of the effect of HR practices on knowledge management and organization capability as innovation performance. An empirical study was conducted through questionnaires administered to employees of private banks in Lahore, Pakistan. The data were collected via a structured questionnaire with a sample of 120 cases and analyzed using Structural Equation Modeling (SEM), with results depicted using the AMOS software. The results of this study are tabulated, interpreted, and cross-checked against other studies. The findings suggest that training and development, along with incentives, have a positive relationship with knowledge management, whereas employee participation has an insignificant association with knowledge management. In addition, knowledge management has a positive association with organization capability. In line with previous research, it is suggested that knowledge management is important for improving organizational capability, such as innovation performance and the knowledge capacity of the firm. Organization capability may improve significantly once specific HR practices are properly established and implemented by HR managers. This study has key implications for the knowledge management and innovation fields, both theoretically and practically.

Keywords: employee participation, incentives, knowledge management, organization capability, training and development

Procedia PDF Downloads 144
1572 Investigating Teachers’ Approaches in Teaching English and Students’ Communicative Ability in a Tertiary College

Authors: Adel Ben Mohamed

Abstract:

The widespread use of the English language around the world has pushed many countries to treat it as a top priority in their educational systems. One of these countries is the Sultanate of Oman. In this frame, the Omani government has allocated huge budgets and resources to implementing the English language in its education system. The importance of English is evident throughout Oman and is clearly noticeable through remarkable signs. For instance, most official documents in Oman are issued in both Arabic (the mother tongue) and English. In addition, English language institutes have mushroomed all over the country: in 2020, there were over fourteen English language institutes and centers in Oman (esl base, 2020). Moreover, these days most Omani parents send their children for tuition to learn English. Hence, it is apparent that the Sultanate of Oman places great value on English for attaining various goals. In the world of work, however, what matters more today is fluency rather than accuracy, so many people opt for communicative English rather than technical English. For example, a job advertisement for a sales assistant published in the Oman Daily Observer on 23 November 2020 stated that speaking English very well is a must to be hired for the position (Oman Observer, 2020). In line with this, and because of the great importance of the English language in Oman, the Ministry of Higher Education has placed much emphasis on this official foreign language. Therefore, in the Omani educational system, all post-secondary students must spend one year in a General Foundation Programme (GFP) at one of the higher education institutions prior to moving to their respective majors at diploma level. Accordingly, the implementation of any teaching approach is determined by different factors: some are directly linked to teachers, while others are related to organizational variables.

Keywords: teaching approaches, communicative, ability, investigating

Procedia PDF Downloads 73
1571 Comparative Outcomes of Percutaneous Coronary Intervention in Smokers versus Non Nonsmokers Patients: Observational Studies

Authors: Pratima Tatke, Archana Avhad, Bhanu Duggal, Meeta Rajivlochan, Sujata Saunik, Pradip Vyas, Nidhi Pandey, Aditee Dalvi, Jyothi Subramanian

Abstract:

Background: Smoking is a well-established risk factor for the development and progression of coronary artery disease and is strongly related to morbidity and mortality from cardiovascular causes. The aim of this study is to observe the effect of smoking status on outcomes of percutaneous coronary intervention (PCI) after 1 year. Methods: 2,527 patients underwent PCI at different hospitals of Maharashtra (India) from 2012 to 2015 under the health insurance scheme launched by the Health Department, Government of Maharashtra, for below-poverty-line (BPL) families, which covers cardiology. Informed consent was obtained from the patients, and they were followed up by telephonic survey 6 months to 1 year after PCI. Outcomes of interest included myocardial infarction, restenosis, cardiac rehospitalization, death, and a composite of events after PCI. Patients were divided into two groups: nonsmokers (n = 1,861) and smokers, including patients who quit at the time of PCI (n = 659). Results: Statistical analysis using Pearson's chi-square test revealed a trend toward a higher incidence of death, myocardial infarction, and restenosis in smokers than in nonsmokers. Smokers had a greater risk of death compared to nonsmokers (5.7% vs. 5.1%, p = 0.518). Repeat procedures (2.1% vs. 1.5%, p = 0.222), breathlessness (17.8% vs. 18.2%, p = 0.1), and myocardial infarction (7.3% vs. 10%) were also reported to be higher in smokers than in nonsmokers. Conclusion: Major adverse cardiovascular events (MACE) were observed in smokers even after successful PCI. Patients undergoing percutaneous coronary intervention should be encouraged to stop smoking.
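The group comparison described above amounts to Pearson's chi-square test on a contingency table; a small sketch is shown below, with cell counts back-calculated (and rounded) from the reported percentages of 5.7% of 659 smokers and 5.1% of 1,861 nonsmokers, so the counts are approximate and illustrative only.

```python
# Pearson's chi-square test on a 2x2 table of group vs. mortality outcome.
from scipy.stats import chi2_contingency

#                 died   survived
table = [[38,  659 - 38],    # smokers (~5.7% of 659)
         [95, 1861 - 95]]    # nonsmokers (~5.1% of 1,861)

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p_value:.3f}")
```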

Keywords: coronary artery diseases, major adverse cardiovascular events, percutaneous coronary intervention, smoking

Procedia PDF Downloads 187
1570 Near Optimal Closed-Loop Guidance Gains Determination for Vector Guidance Law, from Impact Angle Errors and Miss Distance Considerations

Authors: Karthikeyan Kalirajan, Ashok Joshi

Abstract:

An optimization problem is set up to maximize the terminal kinetic energy of a maneuverable reentry vehicle (MaRV). The target location and the impact angle are given as constraints. The MaRV uses an explicit guidance law called Vector guidance, which has two gains that are taken as decision variables. The problem is to find the optimal values of these gains that result in minimum miss distance and impact angle error. Using a simple 3-DOF non-rotating flat-earth model and the Lockheed Martin HP-MARV as the reentry vehicle, the nature of the solutions of the optimization problem is studied. This is achieved by carrying out a parametric study over a range of closed-loop gain values and generating the corresponding impact angle error and miss distance values. The results show that there are well-defined lower and upper bounds on the gains that result in a near-optimal terminal guidance solution. It is found from this study that there exist common permissible regions (values of gains) where all constraints are met. Moreover, the permissible region lies between flat regions, and hence the optimization algorithm has to be chosen carefully. It is also found that only one of the gain values is independent and that the other, dependent gain value is related to it through a simple straight-line expression. Moreover, to reduce the computational burden of finding the optimal values of two gains, a guidance law called Diveline guidance, which uses a single gain, is discussed. The derivation of the Diveline guidance law from the Vector guidance law is presented in this paper.
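The parametric study can be pictured as a sweep of the two guidance gains over a grid, recording miss distance and impact angle error for each pair; in the sketch below, simulate_terminal_phase is a toy stand-in for the 3-DOF trajectory simulation, and the grid ranges and tolerance bounds are illustrative assumptions.

```python
# Gain sweep sketch: enumerate (k1, k2) pairs and record which satisfy the
# miss-distance and impact-angle constraints.
import numpy as np

def simulate_terminal_phase(k1, k2):
    """Placeholder: return (miss_distance_m, impact_angle_error_deg)."""
    # Toy surrogate with a shallow valley, only to make the sweep runnable.
    return abs(k1 - 2.0 * k2) * 50.0, abs(k1 + k2 - 3.0) * 4.0

k1_grid = np.linspace(0.5, 5.0, 46)
k2_grid = np.linspace(0.5, 5.0, 46)
feasible = []
for k1 in k1_grid:
    for k2 in k2_grid:
        miss, angle_err = simulate_terminal_phase(k1, k2)
        if miss < 10.0 and angle_err < 1.0:      # example tolerance bounds
            feasible.append((k1, k2, miss, angle_err))

print(f"{len(feasible)} gain pairs satisfy both constraints")
```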

Keywords: MaRV guidance, reentry trajectory, trajectory optimization, guidance gain selection

Procedia PDF Downloads 409
1569 Cost Benefit Analysis: Evaluation among the Millimetre Wavebands and SHF Bands of Small Cell 5G Networks

Authors: Emanuel Teixeira, Anderson Ramos, Marisa Lourenço, Fernando J. Velez, Jon M. Peha

Abstract:

This article discusses cost-benefit analysis aspects of the millimetre wavebands (mmWaves) and the Super High Frequency (SHF) band. The decay of the carrier-to-noise-plus-interference ratio with coverage distance is assessed by considering two different path loss models: the two-slope urban micro Line-of-Sight (UMi LoS) model for the SHF band and the modified Friis propagation model for frequencies above 24 GHz. The equivalent supported throughput is estimated at the 5.62, 28, 38, 60, and 73 GHz frequency bands, and the influence of the carrier-to-noise-plus-interference ratio on the radio and network optimization process is explored. Mostly owing to the attenuation behaviour of the two-slope propagation model for the SHF band, the supported throughput at this band is higher than at the millimetre wavebands only for the longest cell lengths. The cost-benefit trade-off of these pico-cellular networks was analysed for regular cellular topologies, considering the unlicensed spectrum. For the shortest distances, an optimum of the revenue in percentage terms can be distinguished at cell lengths of R ≈ 10 m for the millimetre wavebands, while for the longest distances an optimum of the revenue is observed at R ≈ 550 m for 5.62 GHz. For the 5.62 GHz band, the profit is slightly lower than for the millimetre wavebands at the shortest values of R and starts to increase for cell lengths approximately equal to the ratio between the break-point distance and the co-channel reuse factor, reaching a maximum for values of R approximately equal to 550 m.
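The basic Friis-type free-space path loss underlying the comparison can be evaluated directly at the carrier frequencies considered, as in the sketch below; the authors' modified Friis and two-slope UMi LoS models add further terms, so only the elementary free-space form is shown, with illustrative cell lengths.

```python
# Free-space path loss FSPL(dB) = 20*log10(4*pi*d*f/c) at the studied bands.
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def friis_path_loss_db(distance_m, frequency_hz):
    """Free-space path loss in dB."""
    return 20.0 * np.log10(4.0 * np.pi * distance_m * frequency_hz / C)

distances = np.array([10.0, 100.0, 550.0])            # cell lengths in metres
for f_ghz in (5.62, 28, 38, 60, 73):
    losses = friis_path_loss_db(distances, f_ghz * 1e9)
    print(f"{f_ghz:5.2f} GHz:", np.round(losses, 1), "dB at", distances, "m")
```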

Keywords: millimetre wavebands, SHF band, SINR, cost benefit analysis, 5G

Procedia PDF Downloads 125
1568 Using Autoencoder as Feature Extractor for Malware Detection

Authors: Umm-E-Hani, Faiza Babar, Hanif Durad

Abstract:

Malware-detection approaches suffer from many limitations, due to which no anti-malware solution has proven reliable enough for detecting zero-day malware. Signature-based solutions depend upon signatures that can be generated only after malware has surfaced at least once in the cyber world. Approaches that work by detecting the anomalies caused in the environment can be defeated by diligently and intelligently written malware. Solutions trained to observe behavior for detecting malicious files fail against malware capable of detecting a sandboxed or protected environment. Machine learning and deep learning-based approaches suffer greatly when their models are trained with either an imbalanced dataset or an inadequate number of samples. AI-based anti-malware solutions that have been trained with enough samples often target a selected feature vector, ignoring the contribution of the remaining features to maliciousness, simply to cope with limited hardware processing power. Our research focuses on producing an anti-malware solution for detecting malicious PE files that circumvents the aforementioned shortcomings. Our proposed framework, which is based on automated feature engineering through autoencoders, trains the model on a fairly large dataset. It focuses on the visual patterns of malware samples to automatically extract the meaningful part of the visual pattern. Our experiment successfully produced a state-of-the-art accuracy of 99.54% on test data.
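The core idea can be sketched as training a convolutional autoencoder on greyscale byte-plot images of PE files and then reusing its encoder as a feature extractor for a downstream classifier; the 64x64 image size, layer sizes, and random stand-in data below are assumptions for illustration, not the architecture reported in the paper.

```python
# Autoencoder-as-feature-extractor sketch for PE byte-plot images.
import numpy as np
from tensorflow.keras import layers, models

inputs = layers.Input(shape=(64, 64, 1))
x = layers.Conv2D(16, 3, activation="relu", padding="same")(inputs)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(8, 3, activation="relu", padding="same")(x)
encoded = layers.MaxPooling2D(name="bottleneck")(x)          # 16x16x8 code
x = layers.Conv2DTranspose(8, 3, strides=2, activation="relu", padding="same")(encoded)
x = layers.Conv2DTranspose(16, 3, strides=2, activation="relu", padding="same")(x)
decoded = layers.Conv2D(1, 3, activation="sigmoid", padding="same")(x)

autoencoder = models.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")

# Stand-in for byte-plot images of PE files, scaled to [0, 1].
pe_images = np.random.rand(256, 64, 64, 1).astype("float32")
autoencoder.fit(pe_images, pe_images, epochs=1, batch_size=32, verbose=0)

# Reuse the trained encoder to produce compact features for a classifier.
encoder = models.Model(inputs, encoded)
features = encoder.predict(pe_images, verbose=0).reshape(len(pe_images), -1)
print("feature matrix shape:", features.shape)   # (256, 16*16*8)
```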

Keywords: malware, autoencoders, automated feature engineering, classification

Procedia PDF Downloads 58
1567 Comparative Analysis of Control Techniques Based Sliding Mode for Transient Stability Assessment for Synchronous Multicellular Converter

Authors: Rihab Hamdi, Amel Hadri Hamida, Fatiha Khelili, Sakina Zerouali, Ouafae Bennis

Abstract:

This paper presents a comparative performance study of sliding mode control (SMC) for closed-loop voltage control of a direct current to direct current (DC-DC) three-cell buck converter connected in parallel and operating in continuous conduction mode (CCM), comparing SMC based on pulse-width modulation (PWM) with SMC based on hysteresis modulation (HM), where an adaptive feedforward technique is adopted. On one hand, for the PWM-based SM, the approach is to incorporate a fixed-frequency PWM scheme which is effectively a variant of SM control. On the other hand, for the HM-based SM, an adaptive feedforward control is introduced that makes the hysteresis band variable in the hysteresis modulator of the SM controller, with the aim of restricting the switching frequency variation in the case of any change of the line input voltage or output load. The results obtained under load change, input change, and reference change clearly demonstrate a similar dynamic response for both proposed techniques: their effectiveness lies in fast and smooth tracking of the desired output voltage. The PWM-based SM technique greatly improves the dynamic behavior and is slightly advantageous compared to the HM-based SM technique, as well as providing stability in all operating conditions. Simulation studies in the MATLAB/Simulink environment have been performed to verify the concept.
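As a much-simplified illustration of hysteresis-modulated sliding mode control, the sketch below regulates a single ideal buck cell (the paper controls three parallel cells with both PWM- and HM-based SMC); the component values, hysteresis band, and the voltage-error-plus-derivative sliding surface are illustrative assumptions.

```python
# Hysteresis-modulated SMC of one ideal buck cell, Euler-integrated.
Vin, Vref = 24.0, 12.0            # input and reference output voltage (V)
L, C, R = 100e-6, 220e-6, 10.0    # inductance (H), capacitance (F), load (ohm)
tau = 2e-4                        # sliding-surface time constant (s)
h, dt, T = 0.05, 1e-7, 5e-3       # hysteresis band (V), time step, sim time (s)

iL, vo, u = 0.0, 0.0, 0           # inductor current, output voltage, switch
for _ in range(int(T / dt)):
    dvo = (iL - vo / R) / C                   # output (capacitor) voltage slope
    s = (Vref - vo) - tau * dvo               # sliding surface: error + derivative
    if s > h:
        u = 1                                 # surface too positive -> switch on
    elif s < -h:
        u = 0                                 # surface too negative -> switch off
    diL = (u * Vin - vo) / L                  # ideal buck inductor equation
    iL += diL * dt
    vo += dvo * dt

print(f"steady-state output voltage: {vo:.2f} V (reference {Vref} V)")
```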

Keywords: DC-DC converter, hysteresis modulation, parallel multi-cells converter, pulse-width modulation, robustness, sliding mode control

Procedia PDF Downloads 152
1566 Finite Element Analysis of Raft Foundation on Various Soil Types under Earthquake Loading

Authors: Qassun S. Mohammed Shafiqu, Murtadha A. Abdulrasool

Abstract:

The design of shallow foundations to withstand different dynamic loads has received considerable attention in recent years. Dynamic loads may be due to earthquakes, pile driving, blasting, water waves, and machine vibrations. However, predicting the behavior of shallow foundations during earthquakes remains a difficult task for geotechnical engineers. A database of dynamic and static parameters for different soils in seismically active zones in Iraq, collected from geophysical and geotechnical investigation works, is prepared. Using this database, an analysis of a typical 3-D soil-raft foundation system under earthquake loading is carried out, and a parametric study is performed taking into consideration the influence of several parameters on the dynamic behavior of the raft foundation, such as raft stiffness and damping ratio, as well as the influence of the earthquake acceleration-time records. The results of the parametric study show that the settlement caused by the earthquake can be decreased by about 72% by increasing the raft thickness from 0.5 m to 1.5 m. It is also noticed that a reduction in the maximum bending moment of about 82% is predicted by decreasing the raft thickness from 1.5 m to 0.5 m in all site models. Furthermore, it is observed that the maximum lateral displacement, the maximum vertical settlement, and the maximum bending moment for a damping ratio of 0% are about 14%, 20%, and 18% higher, respectively, than those for a damping ratio of 7.5% in all site models.

Keywords: shallow foundation, seismic behavior, raft thickness, damping ratio

Procedia PDF Downloads 137
1565 Noise Reduction in Web Data: A Learning Approach Based on Dynamic User Interests

Authors: Julius Onyancha, Valentina Plekhanova

Abstract:

One of the significant issues facing web users is the amount of noise in web data, which hinders the process of finding useful information related to their dynamic interests. Current research works consider noise to be any data that does not form part of the main web page and propose noise web data reduction tools which mainly focus on eliminating noise related to the content and layout of web data. This paper argues that not all data that form part of the main web page are of user interest, and not all noise data are actually noise to a given user. Therefore, learning the noise web data allocated to user requests ensures not only a reduction of the noisiness level in a web user profile, but also a decrease in the loss of useful information, hence improving the quality of the web user profile. A Noise Web Data Learning (NWDL) tool/algorithm capable of learning noise web data in a web user profile is proposed. The proposed work considers the elimination of noise data in relation to dynamic user interest. In order to validate the performance of the proposed work, an experimental design setup is presented, and the results obtained are compared with current algorithms applied in the noise web data reduction process. The experimental results show that the proposed work takes the dynamic change of user interest into account prior to the elimination of noise data. The proposed work contributes towards improving the quality of a web user profile by reducing the amount of useful information eliminated as noise.

Keywords: web log data, web user profile, user interest, noise web data learning, machine learning

Procedia PDF Downloads 246
1564 Patterns of Malignant and Benign Breast Lesions in Hail Region: A Retrospective Study at King Khalid Hospital

Authors: Laila Seada, Ashraf Ibrahim, Amjad Al Shammari

Abstract:

Background and Objectives: Breast carcinoma is the most common cancer of females in the Hail region, accounting for 31% of all diagnosed cancer cases, followed by thyroid carcinoma (25%) and colorectal carcinoma (13%). Methods: In the present retrospective study, all cases of breast lesions received at the histopathology department of King Khalid Hospital, Hail, during the period from May 2011 to April 2016 were retrieved from department files. For all cases, a trucut biopsy, lumpectomy, or modified radical mastectomy was available for histopathologic diagnosis, while 105/140 (75%) also had preoperative fine needle aspirates (FNA). Results: 49 of 140 (35%) breast lesions were carcinomas: 44/49 (89.75%) were invasive ductal carcinoma, 2/49 (4.1%) invasive lobular carcinoma, 1/49 (2.05%) intracystic low-grade papillary carcinoma, and 2/49 (4.1%) ductal carcinoma in situ (DCIS). The mean age for malignant cases was 45.06 (+/-10.58) years: 32.6% were below the age of 40, 30.6% below 50, 18.3% below 60, and 16.3% below 70 years. For the benign group, the mean age was 32.52 (+/-10.5) years. Benign lesions were, in order of frequency: 34 fibroadenomas, 14 fibrocystic disease, 12 chronic mastitis, five granulomatous mastitis, three intraductal papillomas, and three benign phyllodes tumors. Tubular adenoma, lipoma, skin nevus, pilomatrixoma, and breast reduction specimens constituted the remaining cases. Conclusion: Breast lesions are common in our series, and invasive carcinoma accounts for more than one-third of the lumps, with a 63.2% incidence in premenopausal women below the age of 50 years. FNA, as a minimally invasive procedure, proved to be an effective tool in diagnosing both benign and malignant/suspicious breast lumps and should continue to be used as a first-line assessment of palpable breast masses.

Keywords: age incidence, breast carcinoma, fine needle aspiration, hail region

Procedia PDF Downloads 254
1563 Integrated Modeling of Transformation of Electricity and Transportation Sectors: A Case Study of Australia

Authors: T. Aboumahboub, R. Brecha, H. B. Shrestha, U. F. Hutfilter, A. Geiges, W. Hare, M. Schaeffer, L. Welder, M. Gidden

Abstract:

The proposed stringent mitigation targets require an immediate start to a drastic transformation of the whole energy system. The current Australian energy system is mainly centralized and fossil fuel-based in most states, with coal- and gas-fired plants dominating total electricity production over the recent past. On the other hand, the country is characterized by a huge, untapped renewable potential, and wind and solar energy could play a key role in the decarbonization of Australia's future energy system. However, integrating high shares of such variable renewable energy sources (VRES) challenges the power system considerably due to their temporal fluctuations and geographical dispersion. This raises concerns about a flexibility gap in the system to ensure the security of supply with increasing shares of such intermittent sources. One main flexibility dimension for facilitating the system integration of high shares of VRES is to increase cross-sectoral integration through coupling of electricity to other energy sectors, alongside the decarbonization of the power sector and reinforcement of the transmission grid. This paper applies a multi-sectoral energy system optimization model to Australia. We investigate the cost-optimal configuration of a renewable-based Australian energy system and its transformation pathway in line with the ambitious range of proposed climate change mitigation targets. We particularly analyse the implications of linking the electricity and transport sectors in a prospective, highly renewable Australian energy system.

Keywords: decarbonization, energy system modelling, renewable energy, sector coupling

Procedia PDF Downloads 121
1562 Localization of Pyrolysis and Burning of Ground Forest Fires

Authors: Pavel A. Strizhak, Geniy V. Kuznetsov, Ivan S. Voytkov, Dmitri V. Antonov

Abstract:

This paper presents the results of experiments carried out at a specialized test site to establish macroscopic patterns of the heat and mass transfer processes involved in localizing model combustion sources of ground forest fires with the use of barrier lines, in the form of a wetted layer of material in front of the zone of flame burning and thermal decomposition. The experiments were performed using needles, leaves, twigs, and mixtures thereof. The dimensions of the model combustion source and the ranges of heat release correspond well to the real conditions of ground forest fires. The main attention is paid to a comprehensive analysis of the effect of the dispersion of the water aerosol (concentration and size of droplets) used to form the barrier line. It is shown that effective conditions for the localization and subsequent suppression of flame combustion and thermal decomposition of forest fuel can be achieved by creating a group of barrier lines with different wetting widths and depths of the material. Relative indicators of the effectiveness of single and combined barrier lines were established, taking into account all the main characteristics of the processes of suppressing the burning and thermal decomposition of forest combustible materials. We predicted the necessary and sufficient parameters of barrier lines (water volume, width and depth of the wetted layer of material, specific irrigation density) for combustion sources with different dimensions, corresponding to real fire-extinguishing practice.

Keywords: forest fire, barrier water lines, pyrolysis front, flame front

Procedia PDF Downloads 114