Search results for: computation independent model (CIM)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18484

18184 Loss Function Optimization for CNN-Based Fingerprint Anti-Spoofing

Authors: Yehjune Heo

Abstract:

As biometric systems become widely deployed, identification systems can easily be attacked with various spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method using Convolutional Neural Networks (CNNs), based on the choice of loss function and optimizer. The CNNs used in this paper are AlexNet, VGGNet, and ResNet. By combining various loss functions (Cross-Entropy, Center Loss, Cosine Proximity, and Hinge Loss) with various optimizers (Adam, SGD, RMSProp, Adadelta, Adagrad, and Nadam), we obtained significant performance changes. Choosing the correct loss function for each model is crucial, since different loss functions lead to different errors on the same evaluation. Using a subset of the LivDet 2017 database, we validate our approach by comparing generalization power. Importantly, the same subset of LivDet 2017 is used for training and testing across all models, so generalization performance on unseen data can be compared across models. The best CNN (AlexNet), with the appropriate loss function and optimizer, yields a performance gain of more than 3% over the other CNN models with the default loss function and optimizer. Beyond generalization performance, this paper also reports each model's parameter count and mean average error rate, to identify the model that consumes the least memory and computation time for training and testing. Although AlexNet is less complex than the other CNN models, it proves very efficient. A practical anti-spoofing system should use a small amount of memory and run very fast with high anti-spoofing performance; for our version deployed on smartphones, additional processing steps such as quantization and pruning have been applied to the final model.
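To illustrate the abstract's point that different loss functions assign different errors to the same prediction, here is a self-contained sketch of two of the losses named above (toy values and helper names are illustrative, not the paper's code):

```python
import math

def cross_entropy(y_true, y_pred, eps=1e-12):
    # negative log-likelihood of the true class probabilities
    return -sum(t * math.log(max(p, eps)) for t, p in zip(y_true, y_pred))

def hinge(y_true, y_pred):
    # y_true in {-1, +1}; penalizes prediction margins smaller than 1
    return sum(max(0.0, 1.0 - t * p) for t, p in zip(y_true, y_pred)) / len(y_true)

# the same confident prediction scored by two different losses
ce = cross_entropy([1, 0], [0.9, 0.1])   # one-hot target, softmax-style output
hg = hinge([1, -1], [0.9, -0.9])         # signed targets, raw scores
```

The two numbers differ, which is the abstract's point: the loss surface the optimizer descends depends on the loss chosen, not only on the data.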

Keywords: anti-spoofing, CNN, fingerprint recognition, loss function, optimizer

Procedia PDF Downloads 125
18183 A Unification and Relativistic Correction for Boltzmann’s Law

Authors: Lloyd G. Allred

Abstract:

The distribution of velocities of particles in a plasma is a well-understood discipline of plasma physics. Boltzmann's law and the Maxwell-Boltzmann distribution describe the distribution of the velocity of a particle in a plasma as a function of mass and temperature. Particles with the same mass tend to have the same velocity. By expressing the same law in terms of energy alone, the author obtains a distribution independent of mass. In summary, for particles in a plasma, the energies tend to equalize, independent of the masses of the individual particles. For high-energy plasma, the original law predicts velocities greater than the speed of light. If one uses Einstein's formula for energy (E = mc²), then a relativistic correction is not required.
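For reference, the standard Maxwell-Boltzmann speed distribution and its energy form (obtained by substituting E = mv²/2) are:

```latex
f(v)\,dv = 4\pi \left(\frac{m}{2\pi k T}\right)^{3/2} v^{2}\, e^{-m v^{2}/(2kT)}\,dv,
\qquad
f(E)\,dE = \frac{2}{\sqrt{\pi}}\,(kT)^{-3/2}\,\sqrt{E}\;e^{-E/(kT)}\,dE .
```

The energy form contains no mass, which is the mass-independence the abstract describes.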

Keywords: cosmology, EMP, plasma physics, relativity

Procedia PDF Downloads 207
18182 Multicollinearity and MRA in Sustainability: Application of the Raise Regression

Authors: Claudia García-García, Catalina B. García-García, Román Salmerón-Gómez

Abstract:

Much economic-environmental research includes the analysis of possible interactions using Moderated Regression Analysis (MRA), a specific application of multiple linear regression analysis. This methodology analyses how the effect of one independent variable is moderated by a second independent variable by adding their cross-product as an additional explanatory term. Owing to this very specification, the moderating term is often highly correlated with its constitutive terms, so severe multicollinearity problems arise. Strong multicollinearity in a model has important consequences: the variances of the estimators may be inflated; regressors may appear non-significant when they probably are significant, even alongside a very high coefficient of determination; coefficients may take incorrect signs; and the results become highly sensitive to small changes in the dataset. Finally, the strong relationship among explanatory variables makes it difficult to isolate the individual effect of each one on the model under study. Carried over to moderated analysis, these consequences may mean that it is not worth including an interaction term that may be distorting the model. It is therefore important to manage the problem with a methodology that yields reliable results. A review of the works applying MRA in the ten top journals of the field makes clear that multicollinearity is mostly disregarded: fewer than 15% of the reviewed works take potential multicollinearity problems into account. To overcome the issue, this work studies the possible application of recent methodologies to MRA; in particular, raise regression is analysed.
This methodology mitigates collinearity from a geometrical point of view: the collinearity problem arises because the variables under study are very close geometrically, so by separating the variables the problem can be mitigated. Raise regression retains the available information and modifies the problematic variables rather than, for example, deleting variables. Furthermore, the global characteristics of the initial model are maintained (sum of squared residuals, estimated variance, coefficient of determination, global significance test, and prediction). The proposal is applied to data from countries of the European Union for the most recent year available, covering greenhouse gas emissions, per capita GDP, and a dummy variable representing the topography of the country. The use of a dummy variable as the moderator is a special variant of MRA, sometimes called "subgroup regression analysis." The main conclusion of this work is that applying new techniques to the field can substantially improve the results of the analysis. In particular, raise regression mitigates severe multicollinearity problems, so the researcher can rely on the interaction term when interpreting the results of a particular study.
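The geometric idea can be sketched numerically: regress the interaction term on its constitutive variable, then add λ times the residual back to "raise" the term away from the collinear direction. Everything below (the synthetic data, λ, and the simple-regression form of the raise) is an illustrative assumption, not the paper's dataset or exact estimator:

```python
import random
import math

random.seed(42)
n = 400
x1 = [random.gauss(5.0, 1.0) for _ in range(n)]
x2 = [random.gauss(3.0, 1.0) for _ in range(n)]
xm = [a * b for a, b in zip(x1, x2)]          # interaction (moderator) term

def mean(v):
    return sum(v) / len(v)

def corr(u, v):
    mu, mv = mean(u), mean(v)
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    return cov / math.sqrt(sum((a - mu) ** 2 for a in u)
                           * sum((b - mv) ** 2 for b in v))

# residual of regressing xm on x1 (with intercept): the part of xm
# geometrically orthogonal to x1
mx, mxm = mean(x1), mean(xm)
slope = (sum((a - mx) * (b - mxm) for a, b in zip(x1, xm))
         / sum((a - mx) ** 2 for a in x1))
e = [b - (mxm + slope * (a - mx)) for a, b in zip(x1, xm)]

lam = 2.0                                      # raise parameter, lambda >= 0
xm_raised = [b + lam * r for b, r in zip(xm, e)]  # separated from x1

r_before = abs(corr(x1, xm))
r_after = abs(corr(x1, xm_raised))             # strictly smaller than r_before
```

Because the residual is orthogonal to x1, raising leaves the covariance with x1 unchanged while inflating the raised variable's variance, so the correlation (and hence the collinearity) necessarily drops.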

Keywords: multicollinearity, MRA, interaction, raise

Procedia PDF Downloads 94
18181 Delay-Independent Closed-Loop Stabilization of Neutral System with Infinite Delays

Authors: Iyai Davies, Olivier L. C. Haas

Abstract:

In this paper, the problems of stability and stabilization for neutral delay-differential systems with infinite delay are investigated. Using the Lyapunov method, a new delay-independent sufficient condition for the stability of neutral systems with infinite delay is obtained in terms of a linear matrix inequality (LMI). Memoryless state-feedback controllers are then designed for the stabilization of the system using the feasible solution of the resulting LMI, which is easily solved using standard optimization algorithms. Numerical examples are given to illustrate the results of the proposed methods.
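For a fixed linear system, the LMI-based Lyapunov test described above reduces to solving the Lyapunov equation AᵀP + PA = -Q and checking that P is positive definite. A small numerical sketch with a made-up stable 2×2 system (not the neutral infinite-delay system of the paper, which requires a richer Lyapunov functional):

```python
import numpy as np

# Made-up stable system matrix and weight for x' = A x
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])
Q = np.eye(2)

# Solve A^T P + P A = -Q by vectorization:
# (I kron A^T + A^T kron I) vec(P) = -vec(Q)
n = A.shape[0]
K = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
P = np.linalg.solve(K, -Q.flatten()).reshape(n, n)

# a symmetric positive definite P certifies asymptotic stability
eigs = np.linalg.eigvalsh((P + P.T) / 2)
```

In the paper's setting the same feasibility question is posed directly as an LMI and handed to a convex solver, which also returns the state-feedback gain.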

Keywords: infinite delays, Lyapunov method, linear matrix inequality, neutral systems, stability

Procedia PDF Downloads 418
18180 Seismic Response of Reinforced Concrete Buildings: Field Challenges and Simplified Code Formulas

Authors: Michel Soto Chalhoub

Abstract:

Building-code literature provides recommendations for normalizing the calculation of the dynamic properties of structures. Most building codes distinguish among structural systems, construction materials, and configurations through a numerical coefficient in the expression for the fundamental period. The period is then used with normalized response spectra to compute base shear. The typical parameter in simplified code formulas for the fundamental period is the overall building height raised to a power determined from analytical and experimental results. However, reinforced concrete buildings, which constitute the majority of built space in less developed countries, pose additional challenges compared with buildings made of a homogeneous material such as steel, or of concrete placed under stricter quality control. In the present paper, the particularities of reinforced concrete buildings are explored and related to current methods of equivalent static analysis. A comparative study is presented between the Uniform Building Code, commonly used for buildings within and outside the USA, and data from the Middle East used to model 151 reinforced concrete buildings of varying number of bays, number of floors, overall building height, and individual story height. The fundamental period was calculated using eigenvalue matrix computation. The results were also used in a separate regression analysis in which the computed period serves as the dependent variable and five building properties serve as independent variables. The statistical analysis sheds light on important parameters that simplified code formulas need to account for, including individual story height, overall building height, floor plan, number of bays, and concrete properties. Such inclusions are important for reinforced concrete buildings in special conditions arising from concrete damage, aging, or the level of materials quality control during construction.
Overall, the present analysis shows that simplified code formulas for fundamental period and base shear may be applied, but they require revisions to account for multiple parameters. This conclusion is confirmed by the analytical model, in which fundamental periods were computed using numerical techniques and eigenvalue solutions. The recommendation is particularly relevant to code upgrades in less developed countries, where it is customary to adopt, and mildly adapt, international codes. We also note the necessity of further research using empirical data from buildings in Lebanon that were subjected to severe damage due to impulse loading or accelerated aging; however, we have left this for future research, as it has its own peculiarities and requires a different type of analysis.
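The simplified code formulas discussed above typically take the form T = Ct · hˣ. A minimal sketch, using UBC-97 Method A-style values as assumptions (Ct = 0.0731 for reinforced concrete moment frames with height in metres, x = 3/4; these are conventional code values, not the coefficients fitted in this study):

```python
def fundamental_period(height_m, ct=0.0731, exponent=0.75):
    # Simplified code formula T = Ct * h^x (UBC-97 Method A convention;
    # treat Ct and x here as assumed illustrative values)
    return ct * height_m ** exponent

t_30m = fundamental_period(30.0)   # period of a 30 m RC building, ~0.94 s
```

The study's regression suggests such one-parameter formulas should also account for story height, floor plan, number of bays, and concrete condition.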

Keywords: seismic behaviour, reinforced concrete, simplified code formulas, equivalent static analysis, base shear, response spectra

Procedia PDF Downloads 213
18179 Innovative Approaches to Formal Education: Effect of Online Cooperative Learning Embedded Blended Learning on Student's Academic Achievement and Attitude

Authors: Mohsin Javed

Abstract:

The school education sector is often criticized for utilizing relatively few academic days, for reasons such as extreme weather conditions, sudden holidays, summer vacations, pandemics, and terrorism. The purpose of this experimental study was to determine the efficacy of online cooperative learning (OCL) integrated into the rotation model of blended learning. The effects on students' academic achievement and students' attitudes toward OCL-embedded learning were assessed. Using a posttest-only control group design, sixty-two first-year students were randomly allocated to either the experimental (30) or control (32) group. The control group received face-to-face classes in six sessions per week, while the experimental group had three OCL and three formal sessions per week under the rotation model. Students' perceptions of OCL were evaluated using a survey questionnaire. Data were analyzed with the independent-samples t-test and one-sample t-test. According to the findings, the intervention substantially improved the dependent variables. The results demonstrate that OCL can be successfully implemented in formal education using the rotation approach to blended learning. Higher secondary institutions are advised to use this model in situations such as COVID-19, smog, unexpected holidays, instructor absence due to increased responsibilities, and summer vacations.
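The independent-samples t-test used in the analysis above can be sketched as follows (pooled-variance Student's t; the sample values are toy data, not the study's):

```python
import math

def independent_t(sample_a, sample_b):
    # Student's t statistic for two independent samples, pooled variance
    na, nb = len(sample_a), len(sample_b)
    ma = sum(sample_a) / na
    mb = sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

t_equal = independent_t([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])  # identical groups
t_shift = independent_t([2.0, 4.0, 6.0], [1.0, 2.0, 3.0])  # shifted group
```

In practice the statistic is compared against the t distribution with na + nb - 2 degrees of freedom to obtain the p-value reported in the study.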

Keywords: blended learning, online cooperative learning, rotation model of blended learning, supplementing

Procedia PDF Downloads 52
18178 Cross Matching: An Improved Method to Obtain Comprehensive and Consolidated Evidence

Authors: Tuula Heinonen, Wilhelm Gaus

Abstract:

At present, safety assessment starts with animal tests, although their predictivity is often poor. Even after extended human use, experimental data are often judged to be the core information for risk assessment. However, the best opportunity to generate true evidence is to match all available information. The cross-matching methodology combines different fields of knowledge and types of data (e.g., in-vitro and in-vivo experiments, clinical observations, clinical and epidemiological studies, and daily-life observations) and gives adequate weight to individual findings. To achieve a consolidated outcome, the information from all available sources is analysed and compared. If the individual pieces of information fit together, a clear picture becomes visible; if they are inconsistent or contradictory, careful consideration is necessary. 'Cross' can be understood as 'orthogonal' in geometry or as 'independent' in mathematics. Results coming from different sources are independent and therefore contribute new information. Independent information contributes more to evidence than results coming repeatedly from the same source. A successful example of cross-matching is the assessment of Ginkgo biloba, where we were able to reach a conclusive result: Ginkgo biloba leaf extract is well tolerated and safe for humans.

Keywords: cross-matching, human use, safety assessment, Ginkgo biloba leaf extract

Procedia PDF Downloads 273
18177 Crow Search Algorithm-Based Task Offloading Strategies for Fog Computing Architectures

Authors: Aniket Ganvir, Ritarani Sahu, Suchismita Chinara

Abstract:

The rapid digitization of various aspects of life is leading to the creation of smart IoT ecosystems, in which interconnected devices generate significant amounts of valuable data. However, IoT devices face constraints such as limited computational resources and bandwidth. Cloud computing offers ample resources for offloading tasks efficiently, but it introduces latency issues, especially for time-sensitive applications. Fog computing (FC) addresses these latency concerns by bringing computation and storage closer to the network edge, minimizing the distance data must travel and enhancing efficiency. Offloading tasks to fog nodes or the cloud can conserve energy and extend IoT device lifespan. The offloading process is intricate, with tasks categorized as full or partial, and its optimization is an NP-hard problem. Traditional greedy search methods struggle to handle the complexity of task offloading efficiently. To overcome this, the efficient crow search algorithm (ECSA), a meta-heuristic optimization algorithm, is proposed. ECSA aims to optimize computation offloading effectively, providing solutions to this challenging problem.
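The crow search update at the heart of such a meta-heuristic can be sketched as follows. This follows the generic CSA scheme (per-crow memory, awareness probability, flight length) with a toy quadratic standing in for an offloading cost; it is an assumption-laden sketch, not the authors' ECSA:

```python
import random

def crow_search(cost, dim, n_crows=10, iters=100, ap=0.1, fl=2.0, seed=1):
    # Generic crow search: each crow remembers its best position; a crow
    # follows a random peer's memory unless that peer is "aware" (prob. ap),
    # in which case the follower is thrown to a random position.
    rng = random.Random(seed)
    crows = [[rng.uniform(0, 1) for _ in range(dim)] for _ in range(n_crows)]
    memory = [c[:] for c in crows]
    for _ in range(iters):
        for i in range(n_crows):
            j = rng.randrange(n_crows)
            if rng.random() >= ap:
                new = [crows[i][d] + rng.random() * fl * (memory[j][d] - crows[i][d])
                       for d in range(dim)]
            else:
                new = [rng.uniform(0, 1) for _ in range(dim)]
            new = [min(1.0, max(0.0, v)) for v in new]   # stay in search bounds
            crows[i] = new
            if cost(new) < cost(memory[i]):
                memory[i] = new
    return min(memory, key=cost)

# toy "offloading cost": each coordinate is a task's offloaded fraction,
# with an assumed optimum at 0.7 balancing local energy vs. network latency
best = crow_search(lambda x: sum((v - 0.7) ** 2 for v in x), dim=3)
```

A real offloading objective would mix task deadlines, transmission energy, and fog-node load, but the search skeleton is the same.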

Keywords: IoT, fog computing, task offloading, efficient crow search algorithm

Procedia PDF Downloads 38
18176 Board Composition and Performance of Listed Deposit Money Banks in Nigeria

Authors: Mary David, Denis Basila

Abstract:

This study assessed the impact of board composition on the performance of listed deposit money banks in Nigeria. Ten (10) deposit money banks formed the sample of the study. Board size, gender diversity, and board independence were used as the independent variables, firm size as a control variable, and bank performance, proxied by Tobin's Q (TQ), as the dependent variable. Secondary data were collected from the annual reports and accounts of the banks and analyzed with the support of STATA version 14. Descriptive statistics, a correlation matrix, and an OLS multiple regression model were adopted for the study, and the Breusch-Pagan Lagrangian multiplier test for random effects was conducted. The findings reveal that board size has a positive and significant impact on Tobin's Q and gender diversity has a positive and significant impact on Tobin's Q, while board independence has a negative and non-significant influence on Tobin's Q. Similarly, firm size was found to have a negative and non-significant impact on the Tobin's Q of the studied banks. The study recommends that policymakers, stakeholders, and corporate managers of deposit money banks in Nigeria and related industries adopt board sizes and gender diversity that impact positively on bank performance.

Keywords: board composition, performance, deposit money banks, Nigeria

Procedia PDF Downloads 55
18175 Real-Time Fitness Monitoring with MediaPipe

Authors: Chandra Prayaga, Lakshmi Prayaga, Aaron Wade, Kyle Rank, Gopi Shankar Mallu, Sri Satya, Harsha Pola

Abstract:

In today's tech-driven world, where connectivity shapes our daily lives, maintaining physical and emotional health is crucial. Athletic trainers play a vital role in optimizing athletes' performance and preventing injuries. However, a shortage of trainers impacts the quality of care. This study introduces a vision-based exercise monitoring system leveraging Google's MediaPipe library for precise tracking of bicep curl exercises and simultaneous posture monitoring. We propose a three-stage methodology: landmark detection, side detection, and angle computation. Our system calculates angles at the elbow, wrist, neck, and torso to assess exercise form. Experimental results demonstrate the system's effectiveness in distinguishing between good and partial repetitions and evaluating body posture during exercises, providing real-time feedback for precise fitness monitoring.
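The angle-computation stage described above can be sketched as a plain two-dimensional joint-angle helper; the coordinates below are made-up stand-ins for MediaPipe pose landmarks (e.g., shoulder, elbow, wrist for the elbow angle of a bicep curl):

```python
import math

def joint_angle(a, b, c):
    # Angle at point b (degrees) between segments b->a and b->c,
    # e.g. a=shoulder, b=elbow, c=wrist landmark coordinates
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cosang = dot / (math.hypot(*v1) * math.hypot(*v2))
    cosang = max(-1.0, min(1.0, cosang))        # guard against rounding
    return math.degrees(math.acos(cosang))

bent = joint_angle((0.0, 1.0), (0.0, 0.0), (1.0, 0.0))      # right angle
straight = joint_angle((0.0, 0.0), (1.0, 0.0), (2.0, 0.0))  # fully extended
```

A repetition counter can then classify "good" versus "partial" curls by whether the elbow angle traverses assumed full-range thresholds.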

Keywords: physical health, athletic trainers, fitness monitoring, technology driven solutions, Google’s MediaPipe, landmark detection, angle computation, real-time feedback

Procedia PDF Downloads 52
18174 Internet Purchases in European Union Countries: Multiple Linear Regression Approach

Authors: Ksenija Dumičić, Anita Čeh Časni, Irena Palić

Abstract:

This paper examines the influence of economic and Information and Communication Technology (ICT) development on the recent increase in Internet purchases by individuals in European Union member states. After a growing trend in Internet purchases in the EU27 was observed, all-possible-regressions analysis was applied using nine independent variables for 2011. Finally, two linear regression models were studied in detail. The simple linear regression analysis confirmed the research hypothesis that Internet purchases in the analysed EU countries are positively correlated with the statistically significant variable Gross Domestic Product per capita (GDPpc). The multiple linear regression model with four regressors capturing the level of ICT development likewise indicates that ICT development is crucial for explaining Internet purchases by individuals, confirming the research hypothesis.
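The machinery behind both models above is ordinary least squares with an intercept; a minimal sketch with synthetic numbers standing in for GDP per capita and Internet purchases (illustrative values only):

```python
import numpy as np

def ols(X, y):
    # OLS coefficients [intercept, slopes...] via least squares
    Xd = np.column_stack([np.ones(len(X)), X])   # prepend intercept column
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    return beta

gdp = np.array([10.0, 20.0, 30.0, 40.0])
purchases = np.array([12.0, 22.0, 32.0, 42.0])   # exact relation y = 2 + x
b = ols(gdp.reshape(-1, 1), purchases)           # -> intercept ~2, slope ~1
```

The multiple-regression case is the same call with more columns in `X`, one per ICT-development regressor.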

Keywords: European Union, Internet purchases, multiple linear regression model, outlier

Procedia PDF Downloads 292
18173 Implementation and Validation of a Damage-Friction Constitutive Model for Concrete

Authors: L. Madouni, M. Ould Ouali, N. E. Hannachi

Abstract:

Two constitutive models for concrete are available in ABAQUS/Explicit, the Brittle Cracking Model and the Concrete Damaged Plasticity Model, and their suitability and limitations are well known. The aim of the present paper is to implement a damage-friction concrete constitutive model and to evaluate the performance of this model by comparing the predicted response with experimental data. The constitutive formulation of this material model is reviewed. In order to have consistent results, the parameter identification and calibration for the model have been performed. Several numerical simulations are presented in this paper, whose results allow for validating the capability of the proposed model for reproducing the typical nonlinear performances of concrete structures under different monotonic and cyclic load conditions. The results of the evaluation will be used for recommendations concerning the application and further improvements of the investigated model.

Keywords: Abaqus, concrete, constitutive model, numerical simulation

Procedia PDF Downloads 350
18172 Numerical Model to Study Calcium and Inositol 1,4,5-Trisphosphate Dynamics in a Myocyte Cell

Authors: Nisha Singh, Neeru Adlakha

Abstract:

Calcium signalling is one of the most important intracellular signalling mechanisms. Many approaches have been developed over recent decades to study calcium signalling in various cells and to understand its mechanisms. However, most existing investigations have focused on calcium signalling in various cells without paying attention to its dependence on other chemical species such as inositol 1,4,5-trisphosphate (IP3). Models for the independent study of calcium signalling and IP3 signalling in various cells exist, but very little attention has been paid to the interdependence of these two signalling processes in a cell. In this paper, we propose a coupled mathematical model to understand the interdependence of IP3 dynamics and calcium dynamics in a myocyte cell. Such studies will provide a deeper understanding of the various factors involved in calcium signalling in myocytes, which may be of great use to biomedical scientists for various medical applications.
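Coupled models of this kind are typically discretized with finite differences (note the keyword below). The sketch advances two species one explicit time step on a 1-D grid with an assumed linear coupling; all parameters and the coupling form are illustrative, not the paper's equations:

```python
def step(ca, ip3, d_ca, d_ip3, dx, dt, k):
    # One explicit finite-difference step for two coupled
    # reaction-diffusion species on a 1-D grid (fixed boundaries).
    n = len(ca)
    ca_new, ip3_new = ca[:], ip3[:]
    for i in range(1, n - 1):
        lap_ca = (ca[i - 1] - 2 * ca[i] + ca[i + 1]) / dx ** 2
        lap_ip3 = (ip3[i - 1] - 2 * ip3[i] + ip3[i + 1]) / dx ** 2
        # assumed coupling: IP3 stimulates Ca release and decays
        ca_new[i] = ca[i] + dt * (d_ca * lap_ca + k * ip3[i])
        ip3_new[i] = ip3[i] + dt * (d_ip3 * lap_ip3 - k * ip3[i])
    return ca_new, ip3_new

ca1, ip1 = step([0.0, 0.0, 0.0], [0.0, 1.0, 0.0],
                d_ca=1.0, d_ip3=1.0, dx=1.0, dt=0.1, k=0.5)
```

For stability the explicit scheme requires dt ≤ dx²/(2·max(d_ca, d_ip3)); real myocyte models replace the linear coupling with receptor kinetics.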

Keywords: calcium signalling, coupling, finite difference method, inositol 1,4,5-trisphosphate

Procedia PDF Downloads 279
18171 Factors Influencing the Use of Mobile Phone by Smallholder Farmers in Vegetable Marketing in Fogera District

Authors: Molla Tadesse Lakew

Abstract:

This study identifies the factors influencing the use of mobile phones in vegetable marketing in Fogera district. The use of mobile phones in vegetable marketing and the factors influencing mobile phone use were the specific objectives of the study. Three kebeles in the Fogera district were selected purposively based on their vegetable production potential. A simple random sampling technique (lottery method) was used to select 153 vegetable-producer farmers. An interview schedule and key-informant interviews were used to collect primary data. Descriptive statistics such as frequency and percentage, independent-samples t-tests, and chi-square tests were used to analyse the data. Furthermore, an econometric analysis (binary logistic model) was used to assess the factors influencing mobile phone use for vegetable market information. The contingency coefficient and variance inflation factor were used to check for multicollinearity among the independent variables. Of the 153 respondents, 82 (61.72%) were mobile phone users, while 71 (38.28%) were non-users. The main uses of mobile phones in vegetable marketing include communicating at a distance to save time and minimize transport costs, obtaining vegetable price information, identifying markets and buyers, deciding when to sell, negotiating with buyers for better prices, and quickly finding a market to avoid losing produce to spoilage. The model results indicated that level of education, size of land, income, access to credit, and age were significant variables affecting the use of mobile phones in vegetable marketing. It is recommended to encourage adult education or train farmers in operating mobile phones, and to create awareness among elderly rural farmers so that they can use mobile phones for their vegetable marketing.
Moreover, farmers should be aware that mobile phones are very important for those who own very little land to get maximum returns from their production. Lastly, providing access to credit, and improving and diversifying farmers' income sources so that they can afford mobile phones, are recommended to improve farmers' livelihoods.
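The binary logistic model used above can be sketched with a plain gradient-descent fit on made-up data (one predictor, invented values; not the study's dataset or estimates):

```python
import math

def logit_fit(X, y, lr=0.1, iters=2000):
    # Binary logistic regression by stochastic gradient ascent on the
    # log-likelihood; w[0] is the intercept, w[1:] the slopes.
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(iters):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))       # predicted P(user = 1)
            err = yi - p
            w[0] += lr * err
            for j, xj in enumerate(xi):
                w[j + 1] += lr * err * xj
    return w

# toy predictor (say, years of education) vs. mobile-phone use (0/1)
w = logit_fit([[0.0], [1.0], [2.0], [3.0]], [0, 0, 1, 1])
```

In the study the fitted coefficients (education, land size, income, credit, age) are interpreted through their odds ratios.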

Keywords: mobile phone, farmers, vegetable marketing, Fogera District

Procedia PDF Downloads 56
18170 Analysis of an IncResU-Net Model for R-Peak Detection in ECG Signals

Authors: Beatriz Lafuente Alcázar, Yash Wani, Amit J. Nimunkar

Abstract:

Cardiovascular diseases (CVDs) are the leading cause of death globally, and around 80% of sudden cardiac deaths are due to arrhythmias, or irregular heartbeats. The majority of these pathologies are revealed by either short-term or long-term alterations in electrocardiogram (ECG) morphology. The ECG is the main diagnostic tool in cardiology: a non-invasive, pain-free procedure that measures the heart's electrical activity and allows the detection of abnormal rhythms and underlying conditions. A cardiologist can diagnose a wide range of pathologies based on alterations in the ECG's form, but human interpretation is subjective and prone to error. Moreover, ECG records can be quite prolonged, which can further complicate visual diagnosis and delay disease detection. In this context, deep learning methods have risen as a promising strategy to extract relevant features and eliminate individual subjectivity in ECG analysis. They facilitate the processing of large data sets and can provide early and precise diagnoses. The cardiology field is therefore one of the areas that can benefit most from the implementation of deep learning algorithms. In the present study, a deep learning algorithm is trained following a novel approach, using a combination of different databases as the training set. The goal of the algorithm is to detect R-peaks in ECG signals. Its performance is further evaluated on ECG signals of different origins and features to test the model's ability to generalize. Performance of the model in detecting R-peaks in clean and noisy ECGs is presented. The model is able to detect R-peaks in the presence of various types of noise and on data it has not been trained on. It is expected that this approach will increase the effectiveness and capacity of cardiologists to detect divergences from the normal cardiac activity of their patients.
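As a point of contrast with the learned model described above, a naive baseline R-peak detector fits in a few lines (threshold and spacing values are arbitrary; this is a baseline sketch, not the IncResU-Net approach):

```python
def detect_r_peaks(signal, threshold, min_gap):
    # Naive detector: local maxima above an amplitude threshold,
    # at least min_gap samples apart (refractory period).
    peaks = []
    for i in range(1, len(signal) - 1):
        is_peak = (signal[i] >= threshold
                   and signal[i] > signal[i - 1]
                   and signal[i] >= signal[i + 1])
        if is_peak and (not peaks or i - peaks[-1] >= min_gap):
            peaks.append(i)
    return peaks

# two synthetic QRS-like bumps in an otherwise flat trace
found = detect_r_peaks([0, 0, 1, 5, 1, 0, 0, 1, 6, 1, 0], threshold=3, min_gap=3)
```

Such thresholding fails under baseline wander and noise, which is exactly where the deep-learning detector is claimed to generalize.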

Keywords: arrhythmia, deep learning, electrocardiogram, machine learning, R-peaks

Procedia PDF Downloads 163
18169 Aerodynamic Prediction and Performance Analysis for Mars Science Laboratory Entry Vehicle

Authors: Tang Wei, Yang Xiaofeng, Gui Yewei, Du Yanxia

Abstract:

A complex lifting entry was selected to achieve precise landing during the Mars Science Laboratory entry. This study develops a three-dimensional numerical method for precise computation and a surface panel method for rapid engineering prediction. Detailed flow-field analysis for the Mars exploration mission was performed by carrying out a series of fully three-dimensional Navier-Stokes computations. The static aerodynamic performance is then discussed, including surface pressure, lift and drag coefficients, and lift-to-drag ratio, for both the numerical and engineering methods. The computational results show that the shock layer is thin because of the lower effective specific heat ratio, and that the results from the two methods agree well with each other and are consistent with the reference data. The aerodynamic performance analysis shows that the CG location determines trim characteristics and pitch stability, and that certain radial and axial shifts of the CG location can alter the capsule's lifting-entry performance, which is of vital significance for the aerodynamic configuration design and inner instrument layout of the Mars entry capsule.
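Rapid engineering predictions for hypersonic entry capsules are commonly built on the Newtonian surface-inclination rule Cp = Cp,max·sin²θ applied panel by panel; the sketch below shows that rule (an assumption about the class of "surface panel method" named above, not the authors' exact formulation):

```python
import math

def newtonian_cp(theta_deg, cp_max=2.0):
    # Newtonian rule: Cp = Cp_max * sin^2(theta) for panels inclined
    # into the flow (theta > 0), zero for shadowed panels.
    th = math.radians(theta_deg)
    return cp_max * math.sin(th) ** 2 if th > 0.0 else 0.0

cp_stag = newtonian_cp(90.0)   # flow-normal (stagnation-facing) panel
cp_30 = newtonian_cp(30.0)     # moderately inclined panel
```

Summing panel pressures over the capsule surface then yields the lift, drag, and pitching-moment estimates that a Navier-Stokes solution refines.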

Keywords: Mars entry capsule, static aerodynamics, computational fluid dynamics, hypersonic

Procedia PDF Downloads 289
18168 Reconstructability Analysis for Landslide Prediction

Authors: David Percy

Abstract:

Landslides are a geologic phenomenon that affects a large number of inhabited places; they are constantly monitored and studied so that future occurrences can be predicted. Reconstructability analysis (RA) is a methodology for extracting informative models from large volumes of data; it works exclusively with discrete data. While RA has been used extensively in medical applications and social science, we are introducing it to the spatial sciences through applications such as landslide prediction. Since RA works exclusively with discrete data, such as soil classification or bedrock type, continuous data such as porosity must be binned for inclusion in the model. RA constructs models of the data that pick out the most informative elements, the independent variables (IVs), from each layer that predict the dependent variable (DV), landslide occurrence. Each layer included in the model retains its classification data as the primary encoding of the data. Unlike other machine learning algorithms that force the data into one-hot-encoding schemes, RA works directly with the data as encoded, with the exception of continuous data, which must be binned. The usual physical and derived layers are included in the model, and testing our results against other published methodologies, such as neural networks, yields similar accuracy with the advantage of a completely transparent model. The result of an RA session with a data set is a report on every combination of variables and its probability of a landslide event occurring. In this way, every informative combination of variable states can be examined.

Keywords: reconstructability analysis, machine learning, landslides, raster analysis

Procedia PDF Downloads 48
18167 Preventive Behaviors of Exposure to ‎Secondhand Smoke among Women: A Study Based on the Health Belief Model

Authors: Arezoo Fallahi

Abstract:

Introduction: Exposure to second-hand smoke is an important global health problem and threatens the health of people, especially children and women. The aim of this study was to determine the effect of education based on the Health Belief Model on preventive behaviors against exposure to second-hand smoke in women. Materials and Methods: This experimental study was performed in 2023 in Sanandaj, west of Iran. Seventy-four people were selected by simple random sampling and divided into an intervention group (37 people) and a control group (37 people). Data collection tools included demographic characteristics and a second-hand smoke exposure questionnaire based on the Health Belief Model. The training in the intervention group was conducted in three one-hour sessions in comprehensive health service centers, in the form of lectures, pamphlets, and group discussions. Data were analyzed using SPSS version 21 and statistical tests such as correlation, the paired t-test, and the independent t-test. Results: The intervention and control groups were homogeneous before education and similar in terms of mean Health Belief Model scores. After the educational intervention, however, the scores increased: mean perceived sensitivity (from 17.62±2.86 to 19.75±1.23), perceived severity (from 28.40±4.45 to 31.64±2), perceived benefits (from 27.27±4.89 to 31.94±2.17), practice (from 32.64±4.68 to 36.91±2.32), perceived barriers (from 26.62±5.16 to 31.29±3.34), external cues to action (from 17.70±3.99 to 22.89±1.67), internal cues to action (from 16.59±2.95 to 18.75±1.03), and self-efficacy (from 19.83±3.99 to 23.37±1.43) (P < 0.05). Conclusion: The educational intervention designed on the basis of the Health Belief Model was effective in promoting women's preventive behaviors against exposure to second-hand smoke.

Keywords: women, health behaviour, smoke, belief

Procedia PDF Downloads 39
18166 Provision Electronic Management Requirements in Libyan Oil Companies

Authors: Hitham Yami

Abstract:

This study primarily assesses the availability of the requirements for electronic management in oil companies in Libya. The main objectives of the research are to examine the application of electronic management and to make recommendations on the steps needed to adopt it. There is limited research and statistical analysis to support electronic management in Libyan companies. The groundwork for the proposed approach is to develop the independent and dependent variables, to be refined after the fieldwork and data collection, in order to achieve the desired results and address the problems faced by the Libyan Oil Corporation. All of these strategies are proposed to achieve the study's goal and to resolve the difficulties facing Libyan oil installations.

Keywords: oil companies' revenue, independent variables, electronic management, Libyan Oil Corporation

Procedia PDF Downloads 250
18165 Functional and Efficient Query Interpreters: Principle, Application and Performances’ Comparison

Authors: Laurent Thiry, Michel Hassenforder

Abstract:

This paper presents a general approach to implementing efficient query interpreters in a functional programming language. Most of the standard tools currently available use an imperative and/or object-oriented language for the implementation (e.g., Java for Jena-Fuseki), but other paradigms are possible, perhaps with better performance. To proceed, the paper first explains how to model data structures and queries from a functional point of view. It then proposes a general methodology to measure performance (i.e., the number of computation steps needed to answer a query) and explains how to integrate optimization techniques (short-cut fusion and, more importantly, data transformations). Finally, it compares the proposed functional server to a standard tool (Fuseki), demonstrating that the former can be two to ten times faster at answering queries.
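Short-cut fusion eliminates the intermediate structures that naive query pipelines build at every stage. A rough Python analogue of the idea, over a hypothetical toy triple store (not the paper's actual engine), is to collapse a chain of list-producing stages into a single pass:

```python
# Hypothetical toy data: (subject, predicate, object) triples.
triples = [("s1", "knows", "s2"), ("s1", "age", 30), ("s2", "age", 25)]

def unfused(db):
    # Each stage materializes a full intermediate list.
    step1 = [t for t in db if t[1] == "age"]   # filter by predicate
    step2 = [t[2] for t in step1]              # project the object
    return [v for v in step2 if v > 26]        # filter by value

def fused(db):
    # One pass, no intermediate structures -- the "fused" pipeline.
    return [t[2] for t in db if t[1] == "age" and t[2] > 26]

print(unfused(triples) == fused(triples))  # True: same answer, fewer steps
```

In a lazy functional language the compiler can perform this rewrite automatically from the producer/consumer structure of the code; here it is done by hand to show the saving in computation steps.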

Keywords: data transformation, functional programming, information server, optimization

Procedia PDF Downloads 146
18164 Hydrological Response of the Glacierised Catchment: Himalayan Perspective

Authors: Sonu Khanal, Mandira Shrestha

Abstract:

Snow and glaciers are the largest dependable reserves of water for the river systems originating in the Himalayas, so accurate estimates of the volume of water contained in the snowpack and of the rate of water release from snow and glaciers are needed for efficient management of water resources. This research assesses the energy exchanges between the snowpack, the air above, and the soil below according to mass and energy balance, which makes it more apposite than models using a simple temperature index for snow- and glacier-melt computation. UEBGrid, a distributed energy-balance model, is used to calculate the melt, which is then routed by Geo-SFM. The model's robustness is maintained by incorporating albedo generated from Landsat-7 ETM images on a seasonal basis for the years 2002-2003, together with a substrate map derived from TM. The substrate file predominantly includes four major thematic layers, viz. snow, clean ice, glaciers, and barren land. The approach uses the CPC RFE-2 and MERRA gridded data sets as the source of precipitation and climatic variables. The subsequent model run for the years 2002-2008 shows that a total annual melt of 17.15 m is generated from the Marshyangdi Basin, of which 71% is contributed by glaciers and 18% by rain, with the rest coming from snowmelt. The albedo file is decisive in governing the melt dynamics, as a 30% increase in the generated surface albedo results in a 10% decrease in the simulated discharge. The melt routed with the land-cover and soil variables using Geo-SFM shows a Nash-Sutcliffe efficiency of 0.60 against observed discharge for the study period.
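The distinction the abstract draws, between a temperature-index scheme and an energy-balance scheme, can be sketched at the point scale. The coefficients below are illustrative textbook values, not UEBGrid's internals:

```python
LATENT_HEAT_FUSION = 334_000.0   # J/kg
WATER_DENSITY = 1000.0           # kg/m^3

def degree_day_melt(t_air_c, ddf=6.0):
    """Temperature index: melt (mm w.e./day) from air temperature alone,
    with an assumed degree-day factor ddf in mm per degC per day."""
    return max(0.0, ddf * t_air_c)

def energy_balance_melt(net_flux_w_m2, seconds=86_400):
    """Energy balance: convert a day's net surface energy flux (W/m^2)
    into melt depth (mm water equivalent)."""
    energy = max(0.0, net_flux_w_m2) * seconds             # J/m^2
    return energy / (WATER_DENSITY * LATENT_HEAT_FUSION) * 1000.0

print(round(degree_day_melt(5.0), 1))        # 30.0
print(round(energy_balance_melt(100.0), 1))  # 25.9
```

The energy-balance form is where albedo enters: reflected shortwave radiation reduces the net flux directly, which is why the simulated discharge is so sensitive to the albedo input.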

Keywords: glacier, glacier melt, snowmelt, energy balance

Procedia PDF Downloads 444
18163 Design of Collaborative Web System: Based on Case Study of PBL Support Systems

Authors: Kawai Nobuaki

Abstract:

This paper describes the design and implementation of a web system for sustainable, viable collaboration, and proposes improvements to the system based on the results of a practical trial. As information environments in contemporary higher education transform, this study highlights the significance of university identity and college identity, which are formed continuously through the independent activities of students. Based on these discussions, the present study proposes a practical media-environment design that facilitates the processes of organizational identity formation through a continuous, cyclical model. Even as its users change, the system keeps communication and cooperation running; each activity becomes an archive that produces new activity. Based on these results, this study elaborates a redesign plan from the viewpoint of second-order cybernetics, with systems theory as the theoretical foundation of the study.

Keywords: collaborative work, learning management system, second-order cybernetics, systems theory, user generated contents, viable system model

Procedia PDF Downloads 203
18162 A Neural Network for the Prediction of Contraction after Burn Injuries

Authors: Ginger Egberts, Marianne Schaaphok, Fred Vermolen, Paul van Zuijlen

Abstract:

A few years ago, a promising morphoelastic model was developed for the simulation of contraction formation after burn injuries. Contraction can lead to a serious reduction in physical mobility, such as a reduced range of motion of the joints. If this occurs in a healing burn wound, it is referred to as a contracture that needs medical intervention. The morphoelastic model consists of a set of partial differential equations describing both a chemical part and a mechanical part of dermal wound healing. These equations are solved with the numerical finite element method (FEM), which requires many calculations on each of the chosen elements. In general, the more elements, the more accurate the solution. However, the number of elements increases rapidly if simulations are performed in 2D and 3D. In that case, not only does it take longer before a prediction is available, but the computation also becomes more expensive. It is therefore important to investigate alternative ways to generate the same results based on the input parameters only. In this study, a surrogate neural network has been designed to mimic the results of the one-dimensional morphoelastic model. The neural network generates predictions quickly, is easy to implement, and offers freedom in the choice of inputs and outputs. Because a neural network requires extensive training on a data set, it is ideal that the one-dimensional FEM code generates output quickly. The results of this feed-forward network are very promising: not only does it give faster predictions, it also achieves a performance of over 99%. It reports the relative surface area of the wound/scar, the total strain energy density, and the evolution of the chemical and mechanical densities. It is therefore interesting to investigate the applicability of a neural network to the two- and three-dimensional morphoelastic models of contraction after burn injuries.
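The surrogate idea, training a cheap feed-forward network to reproduce the map from input parameters to expensive-solver outputs, can be sketched in miniature. Everything here is an assumption for illustration: the toy target y = x² stands in for the FEM output, and the tiny tanh network with plain SGD is not the paper's architecture:

```python
import math
import random

random.seed(1)
H = 8                                   # hidden tanh units
w1 = [random.uniform(-1, 1) for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    hidden = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return sum(w2[j] * hidden[j] for j in range(H)) + b2, hidden

# "Training data" from the cheap stand-in for the expensive solver
data = [(i / 20.0, (i / 20.0) ** 2) for i in range(21)]
lr = 0.05
for _ in range(3000):                   # plain SGD on squared error
    for x, y in data:
        pred, hidden = forward(x)
        err = pred - y
        b2 -= lr * err
        for j in range(H):
            grad_h = err * w2[j] * (1.0 - hidden[j] ** 2)
            w2[j] -= lr * err * hidden[j]     # output-layer update
            w1[j] -= lr * grad_h * x          # hidden-layer updates
            b1[j] -= lr * grad_h

max_err = max(abs(forward(x)[0] - y) for x, y in data)
print(f"max surrogate error: {max_err:.4f}")
```

Once trained, each prediction is a handful of arithmetic operations, which is the speed advantage the abstract describes over re-running the FEM.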

Keywords: biomechanics, burns, feasibility, feed-forward NN, morphoelasticity, neural network, relative surface area wound

Procedia PDF Downloads 46
18161 Generalized Uncertainty Principle Modified Hawking Radiation in Bumblebee Gravity

Authors: Sara Kanzi, Izzet Sakalli

Abstract:

The effect of Lorentz symmetry breaking (LSB) on the Hawking radiation of the Schwarzschild-like black hole found in the bumblebee gravity model (SBHBGM) is studied in the framework of quantum gravity. To this end, we consider the Hawking radiation of spin-0 (boson) and spin-1/2 (fermion) particles, which pass in and out through the event horizon of the SBHBGM. We use the modified Klein-Gordon and Dirac equations, which are obtained from the generalized uncertainty principle (GUP), to show how Hawking radiation is affected by the GUP and LSB. In particular, we reveal that, independent of the spin of the emitted particles, the GUP causes a change in the Hawking temperature of the SBHBGM. Furthermore, we compute the semi-analytic greybody factors (for both bosons and fermions) of the SBHBGM. We thus reveal that LSB affects the greybody factor of the SBHBGM, decreasing its value. Our findings are depicted graphically.
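For context, the standard quadratic GUP that underlies such corrections and the unmodified Hawking temperature take the following well-known forms (the paper's bumblebee-modified expressions are not reproduced here):

```latex
\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2}\left[\,1 + \beta\,(\Delta p)^{2}\right],
\qquad
T_{H} \;=\; \frac{\hbar c^{3}}{8\pi G M k_{B}}
```

The GUP parameter $\beta$ is what feeds the temperature correction, while the LSB parameter of the bumblebee model enters through the modified metric.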

Keywords: bumblebee gravity model, Hawking radiation, generalized uncertainty principle, Lorentz symmetry breaking

Procedia PDF Downloads 126
18160 Carbon Nanotube Field Effect Transistor - a Review

Authors: P. Geetha, R. S. D. Wahida Banu

Abstract:

The crowning advances in silicon-based electronic technology have dominated the computation world for the past decades. The captivating performance of Si devices stems from the sustained scaling down of their physical dimensions, which increases device density and improves performance. However, fundamental physical, technological, economic, and manufacturing limitations restrict further miniaturization of Si-based devices. The pitfalls of scaling include process variation, short-channel effects, high leakage currents, and reliability concerns. To fix these problems, one must either follow a new concept that manages the current hitches or support the available concept with different materials. The new concepts are spintronics, quantum computation, and two-terminal molecular devices. Alternatively, the well-known three-terminal devices in use today can be built with different materials suited to addressing the difficulties of scaling down. The first approach lies in the far future, since it needs considerable effort; the second is the brighter path forward. Modelling paves the way to knowing not only the current-voltage characteristics but also the performance of new devices. It is therefore desirable to model a new device with suitable gate control and to project its capability of handling high current, high power, and high frequency with short delay and high velocity, along with excellent electronic and optical properties. Carbon nanotubes have become a thriving material for replacing silicon in nanodevices. A well-planned, optimized use of this carbon material leads to many more advantages. The unique nature of this organic material has enabled recent developments in almost all fields of application, from the automobile industry to medical science, and especially in the electronics field, on which the automation industry depends, where much research is being done.
This paper reviews the carbon nanotube field effect transistor with various gate configurations, numbers of channel elements, CNT wall configurations, and different modelling techniques.

Keywords: array of channels, carbon nanotube field effect transistor, double gate transistor, gate wrap around transistor, modelling, multi-walled CNT, single-walled CNT

Procedia PDF Downloads 306
18159 Coding and Decoding versus Space Diversity for Rayleigh Fading Radio Frequency Channels

Authors: Ahmed Mahmoud Ahmed Abouelmagd

Abstract:

Diversity is the usual remedy for variations in transmitted signal level (fading) in radio frequency channels. Diversity techniques utilize two or more copies of a signal and combine them to combat fading. The basic concept of diversity is to transmit the signal via several independent diversity branches to obtain independent signal replicas in the time, frequency, space, and polarization diversity domains. Coding and decoding processes can be an alternative remedy for fading: they cannot increase the channel capacity, but they can improve the error performance. In this paper, we propose the use of replication decoding with the BCH code class, and the Viterbi decoding algorithm with convolutional coding, as examples of coding and decoding processes. The results are compared to those obtained from two optimized selection space diversity techniques. The performance over a Rayleigh fading channel, the model considered for radio frequency channels, is evaluated for each case. The evaluation results show that the coding and decoding approaches, especially BCH coding with the replication decoding scheme, give better performance than the optimized selection space diversity approaches. An approach combining the coding-and-decoding diversity with space diversity is also considered; its main disadvantage is its complexity, but it yields good performance results.
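The benefit of selection space diversity over Rayleigh fading can be seen in a small Monte-Carlo sketch (illustrative of the diversity principle only, not the paper's two optimized selection schemes): the instantaneous per-branch SNR under Rayleigh fading is exponentially distributed, and selection combining keeps the strongest branch.

```python
import random

random.seed(42)
TRIALS, MEAN_SNR = 100_000, 1.0

def avg_selected_snr(branches):
    """Average output SNR of selection combining over `branches` i.i.d.
    Rayleigh-faded branches (exponential instantaneous SNR)."""
    total = 0.0
    for _ in range(TRIALS):
        total += max(random.expovariate(1.0 / MEAN_SNR)
                     for _ in range(branches))
    return total / TRIALS

one = avg_selected_snr(1)   # no diversity: average SNR ~= 1.0
two = avg_selected_snr(2)   # two branches: theory gives 1 + 1/2 = 1.5
print(two > one)            # selection diversity raises the average SNR
```

For L branches the theoretical average selected SNR is the harmonic number 1 + 1/2 + ... + 1/L times the mean branch SNR, which is the diminishing-returns behavior that motivates comparing diversity against coding gains.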

Keywords: Rayleigh fading, diversity, BCH codes, replication decoding, convolutional coding, Viterbi decoding, space diversity

Procedia PDF Downloads 427
18158 A Study on Game Theory Approaches for Wireless Sensor Networks

Authors: M. Shoukath Ali, Rajendra Prasad Singh

Abstract:

Game theory approaches and their application in improving the performance of Wireless Sensor Networks (WSNs) are discussed in this paper. The mathematical modeling and analysis of WSNs may have a low success rate due to the complexity of topology, modeling, link quality, etc. Game theory, however, is a field that can be used efficiently to analyze WSNs. Game theory is a branch of applied mathematics that describes and analyzes interactive decision situations. It has the ability to model independent, individual decision makers whose actions affect the surrounding decision makers, and the outcome of complex interactions among rational entities can be predicted by a set of analytical tools. However, rationality demands strict adherence to a strategy based on measured or perceived results. Researchers are adopting game theory approaches to model and analyze leading wireless communication networking issues, which include QoS, power control, and resource sharing.
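As a toy instance of the non-cooperative setting, consider two sensor nodes each choosing a transmit power, with hypothetical payoffs (not from any cited WSN model) trading throughput against energy cost. Pure-strategy Nash equilibria can be found by checking unilateral deviations:

```python
actions = ["low", "high"]
# payoffs[(a1, a2)] = (node 1 utility, node 2 utility) -- hypothetical values
payoffs = {
    ("low", "low"): (3, 3),
    ("low", "high"): (1, 4),
    ("high", "low"): (4, 1),
    ("high", "high"): (2, 2),
}

def is_nash(a1, a2):
    """A profile is a Nash equilibrium if neither node can gain by
    deviating unilaterally."""
    u1, u2 = payoffs[(a1, a2)]
    best1 = all(payoffs[(d, a2)][0] <= u1 for d in actions)
    best2 = all(payoffs[(a1, d)][1] <= u2 for d in actions)
    return best1 and best2

equilibria = [(a1, a2) for a1 in actions for a2 in actions
              if is_nash(a1, a2)]
print(equilibria)  # [('high', 'high')] -- a prisoner's-dilemma outcome
```

The socially better (low, low) profile is not an equilibrium, which is exactly the kind of inefficiency that motivates mechanism design and cooperative game formulations in WSN power control.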

Keywords: wireless sensor network, game theory, cooperative game theory, non-cooperative game theory

Procedia PDF Downloads 414
18157 Phase II Monitoring of First-Order Autocorrelated General Linear Profiles

Authors: Yihua Wang, Yunru Lai

Abstract:

Statistical process control has been successfully applied in a variety of industries. In some applications, the quality of a process or product is better characterized and summarized by a functional relationship between a response variable and one or more explanatory variables; a collection of this type of data is called a profile. Profile monitoring is used to understand and check the stability of this relationship, or curve, over time. The independence assumption for the error term is commonly used in existing profile monitoring studies. However, in many applications the profile data show correlations over time. We therefore focus on a general linear regression model with first-order autocorrelation between profiles, and propose an exponentially weighted moving average (EWMA) charting scheme to monitor this type of profile. The simulation study shows that our proposed method outperforms the existing schemes based on the average run length criterion.
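The EWMA charting idea can be illustrated in its simplest univariate form (a generic sketch of the chart mechanics only; the paper's scheme monitors profile regression coefficients, and the parameters below are assumptions):

```python
import math
import random

random.seed(7)
LAM, L_SIG, SIGMA, MU = 0.2, 3.0, 1.0, 0.0

# 30 in-control readings, then a sustained +2-sigma mean shift
data = [random.gauss(MU, SIGMA) for _ in range(30)]
data += [random.gauss(MU + 2.0, SIGMA) for _ in range(30)]

# Steady-state control limit for the EWMA statistic
limit = L_SIG * SIGMA * math.sqrt(LAM / (2.0 - LAM))

z, signals = MU, []
for i, x in enumerate(data):
    z = LAM * x + (1.0 - LAM) * z      # exponentially weighted average
    if abs(z - MU) > limit:
        signals.append(i)              # out-of-control signal

print(f"signal indices (first few): {signals[:3]}")
```

The small smoothing constant makes the chart sensitive to small sustained shifts, which is why EWMA schemes are popular for profile monitoring; autocorrelated profiles then require adjusting the limits for the correlation structure.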

Keywords: autocorrelation, EWMA control chart, general linear regression model, profile monitoring

Procedia PDF Downloads 445
18156 The Influence of the Concentration and Temperature on the Rheological Behavior of Carboxymethylcellulose

Authors: Mohamed Rabhi, Kouider Halim Benrahou

Abstract:

The rheological properties of carboxymethylcellulose (CMC) at different concentrations (25,000, 50,000, 60,000, 80,000, and 100,000 ppm) and different temperatures were studied. We found that all CMC solutions exhibit pseudo-plastic behavior, following the Ostwald-de Waele model. The objective of this work is to model CMC flow with the Cross model, which gives the variation of viscosity with shear rate and allowed us to capture the rheological characteristics of CMC solutions more clearly. A comparison between the Cross model and the Ostwald model was made: the Cross model's fitting parameters were determined by numerical simulation so as to bring the model curves close to the experimental curve. Our study has shown that the Cross model describes the flow of CMC well at low concentrations.
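The two models compared above have simple closed forms; the sketch below evaluates both with illustrative parameter values (not the paper's fitted ones):

```python
def ostwald_de_waele(rate, k=2.0, n=0.6):
    """Power law: eta = K * rate**(n - 1). n < 1 means shear-thinning,
    but the viscosity diverges as the shear rate goes to zero."""
    return k * rate ** (n - 1)

def cross(rate, eta0=5.0, eta_inf=0.01, lam=0.5, m=0.8):
    """Cross model: eta_inf + (eta0 - eta_inf) / (1 + (lam*rate)**m),
    with bounded Newtonian plateaus at low and high shear."""
    return eta_inf + (eta0 - eta_inf) / (1.0 + (lam * rate) ** m)

for rate in (0.01, 1.0, 100.0):
    print(f"rate {rate:7.2f}: power-law {ostwald_de_waele(rate):9.3f}, "
          f"Cross {cross(rate):6.3f}")
```

The bounded zero-shear plateau eta0 is the practical advantage of the Cross model: unlike the power law, it remains finite at low shear rates, where measured polymer-solution viscosities level off.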

Keywords: CMC, rheological modeling, Ostwald model, Cross model, viscosity

Procedia PDF Downloads 381
18155 3D Model of Rain-Wind Induced Vibration of Inclined Cable

Authors: Viet-Hung Truong, Seung-Eock Kim

Abstract:

Rain-wind induced vibration of inclined cables is a special aerodynamic phenomenon because it is easily influenced by many factors, especially the distribution of rivulets and the wind velocity. This paper proposes a new 3D model of an inclined cable based on a single-degree-of-freedom model. The aerodynamic forces are first established and verified against existing results from a 2D model, after which the 3D model of the inclined cable is developed. The 3D model is then applied to assess the effects of the wind velocity distribution and of rivulet continuity on the cable. Finally, an inclined cable model with small sag is investigated.
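The single-degree-of-freedom foundation the 3D model builds on can be sketched as a forced damped oscillator. The parameters and the harmonic forcing below are illustrative assumptions, not the paper's rain-wind aerodynamic forces:

```python
import math

# Semi-implicit Euler integration of m*x'' + c*x' + k*x = F0*cos(omega*t)
M, C, K = 1.0, 0.1, 4.0 * math.pi ** 2      # ~1 Hz natural frequency
F0, OMEGA = 1.0, 2.0 * math.pi              # harmonic forcing at resonance
DT, STEPS = 0.001, 20_000                   # 20 s of simulated time

x, v, peak = 0.0, 0.0, 0.0
for i in range(STEPS):
    t = i * DT
    a = (F0 * math.cos(OMEGA * t) - C * v - K * x) / M
    v += a * DT                             # semi-implicit (symplectic) Euler
    x += v * DT
    peak = max(peak, abs(x))

# Steady-state resonant amplitude would approach F0 / (C * OMEGA) ~= 1.59
print(f"peak displacement after 20 s: {peak:.2f}")
```

In the rain-wind problem the forcing term is replaced by velocity-dependent aerodynamic forces that vary with the rivulet position, which is what couples the cable motion to the wind field in the full 3D model.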

Keywords: 3D model, rain-wind induced vibration, rivulet, analytical model

Procedia PDF Downloads 480