Search results for: analytic functions

2482 Artificial Intelligent Methodology for Liquid Propellant Engine Design Optimization

Authors: Hassan Naseh, Javad Roozgard

Abstract:

This paper presents a methodology based on Artificial Intelligence (AI) applied to Liquid Propellant Engine (LPE) design optimization. The AI methodology uses the Adaptive Neuro-Fuzzy Inference System (ANFIS). In this methodology, the objective is to achieve maximum performance (specific impulse). The independent design variables in the ANFIS modeling are combustion chamber pressure, combustion chamber temperature, and oxidizer-to-fuel ratio; the output of the modeling is the specific impulse, which can be combined with other objective functions in LPE design optimization. To this end, the LPE's parameters have been modeled in the ANFIS methodology by generating the fuzzy inference system structure with grid partitioning, subtractive clustering, and Fuzzy C-Means (FCM) clustering, for both inference types (Mamdani and Sugeno) and various types of membership functions. The final comparison of the optimization results shows the accuracy and processing run time of the Gaussian ANFIS methodology against all other methods.
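
Purely as an illustration of the kind of Sugeno-type fuzzy inference that ANFIS tunes, the sketch below evaluates a two-rule first-order Sugeno system with Gaussian membership functions over the three design variables named in the abstract. The rule parameters and the resulting specific-impulse values are invented placeholders, not the authors' trained model.

```python
import numpy as np

def gaussmf(x, c, sigma):
    """Gaussian membership function."""
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)

# Hypothetical rule base: each rule holds Gaussian MF parameters (centre, sigma)
# for (chamber pressure [bar], chamber temperature [K], O/F ratio), plus
# linear consequent coefficients of a first-order Sugeno model.
rules = [
    {"mf": [(60.0, 15.0), (3200.0, 200.0), (2.0, 0.4)],
     "consequent": [0.5, 0.02, 20.0, 150.0]},
    {"mf": [(100.0, 20.0), (3500.0, 250.0), (2.6, 0.5)],
     "consequent": [0.4, 0.03, 15.0, 180.0]},
]

def sugeno_isp(pressure, temperature, of_ratio):
    """Weighted-average (Sugeno) estimate of specific impulse [s]."""
    x = np.array([pressure, temperature, of_ratio])
    firing, outputs = [], []
    for r in rules:
        # Rule firing strength: product of the three membership degrees.
        w = np.prod([gaussmf(xi, c, s) for xi, (c, s) in zip(x, r["mf"])])
        a = r["consequent"]
        outputs.append(a[0] * pressure + a[1] * temperature + a[2] * of_ratio + a[3])
        firing.append(w)
    firing = np.array(firing)
    return float(np.dot(firing, outputs) / firing.sum())

print(sugeno_isp(80.0, 3300.0, 2.2))
```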

Keywords: ANFIS methodology, artificial intelligence, liquid propellant engine, optimization

Procedia PDF Downloads 587
2481 Dynamic Reroute Modeling for Emergency Evacuation: Case Study of Brunswick City, Germany

Authors: Yun-Pang Flötteröd, Jakob Erdmann

Abstract:

Human behavior during evacuations is quite complex. One of the critical behaviors affecting the efficiency of an evacuation is route choice. Therefore, the respective simulation modeling needs to function properly. In this paper, the current dynamic route modeling during evacuation in Simulation of Urban Mobility (SUMO), i.e. the rerouting functions, is examined with a real case study. The consistency of the simulation results with reality is checked as well. Four influence factors, namely (1) time to get information, (2) probability to cancel a trip, (3) probability to use navigation equipment, and (4) rerouting and information updating period, are considered to analyze possible traffic impacts during the evacuation and to examine the rerouting functions in SUMO. Furthermore, some behavioral characteristics of the case study are analyzed using the corresponding detector data and applied in the simulation. The experiment results show that the dynamic route modeling in SUMO can deal with the proposed scenarios properly. Some issues and function needs related to route choice are discussed and further improvements are suggested.
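
As a rough illustration of how such rerouting factors can be exercised programmatically, the hedged sketch below uses SUMO's TraCI Python client to apply an assumed navigation-equipment probability and rerouting period; the configuration file name and parameter values are placeholders and do not reproduce the study's setup.

```python
import random
import traci  # SUMO's TraCI Python client (requires a SUMO installation)

REROUTE_PROBABILITY = 0.5   # assumed share of drivers with navigation equipment
REROUTE_PERIOD = 300        # assumed rerouting/information updating period [s]

# "evacuation.sumocfg" is a placeholder for the scenario configuration file.
traci.start(["sumo", "-c", "evacuation.sumocfg"])

equipped = set()
step = 0
while traci.simulation.getMinExpectedNumber() > 0:
    traci.simulationStep()
    # Decide once per newly departed vehicle whether it uses navigation.
    for veh in traci.simulation.getDepartedIDList():
        if random.random() < REROUTE_PROBABILITY:
            equipped.add(veh)
    # Periodically let equipped vehicles reroute on current travel times.
    if step % REROUTE_PERIOD == 0:
        for veh in traci.vehicle.getIDList():
            if veh in equipped:
                traci.vehicle.rerouteTraveltime(veh)
    step += 1

traci.close()
```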

Keywords: evacuation, microscopic traffic simulation, rerouting, SUMO

Procedia PDF Downloads 194
2480 Analysis of Particle Reinforced Metal Matrix Composite Crankshaft

Authors: R. S. Vikaash, S. Vinodh, T. S. Sai Prashanth

Abstract:

Six Sigma is a defect reduction strategy that enables modern organizations to achieve business prosperity. Practitioners need to select the best Six Sigma project among the available alternatives to achieve customer satisfaction. In this circumstance, this article presents a study in which Six Sigma project selection is formulated as a Multi-Criteria Decision-Making (MCDM) problem and the best project is found using the Analytic Hierarchy Process (AHP). Five main governing criteria and 14 sub-criteria are formulated. The decision makers' inputs were gathered and the computations were performed. The project with the highest score among the set of projects is selected as the best project. Based on the calculations, project "P1" is found to be the best, and further deployment actions have been undertaken in the organization.
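
For orientation, a minimal sketch of the AHP weight computation (principal eigenvector plus consistency ratio) is given below; the pairwise comparison values are invented for illustration and are not the decision makers' judgments from the study.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three criteria (Saaty's 1-9 scale).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Priority weights = normalized principal eigenvector.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency check (Saaty's random index RI for n = 3 is 0.58).
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)
cr = ci / 0.58

print("weights:", weights.round(3), "CR:", round(cr, 3))
```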

Keywords: Six Sigma, project selection, MCDM, analytic hierarchy process, business prosperity

Procedia PDF Downloads 342
2479 Pragmatic Discoursal Study of Hedging Constructions in English Language

Authors: Mohammed Hussein Ahmed, Bahar Mohammed Kareem

Abstract:

This study is concerned with a pragmatic-discoursal study of hedging constructions in the English language. A hedge is a mitigating device used to lessen the impact of an utterance. Hedges may be adverbs, adjectives, or verbs, and sometimes they consist of whole clauses. The study aims at finding out the extent to which speakers and participants in discourse use hedging constructions during their conversations. It also aims at finding out whether or not there are any significant differences in the types, functions, and frequency of hedging constructions employed by male and female participants. It is hypothesized that hedging constructions are more frequent in English discourse than in other languages due to its formality, and that the frequency of the types and functions is influenced by the gender of the participants. To achieve the aims of the study, two types of procedures have been followed: theoretical and practical. The theoretical procedure consists of presenting a theoretical background of hedging, including its definitions, etymology, and theories. The practical procedure consists of selecting a sample of texts and analyzing them according to an adopted model. A number of conclusions are drawn based on the findings of the study.

Keywords: hedging, pragmatics, politeness, theoretical

Procedia PDF Downloads 587
2478 Investigating a Deterrence Function for Work Trips for Perth Metropolitan Area

Authors: Ali Raouli, Amin Chegenizadeh, Hamid Nikraz

Abstract:

The Perth metropolitan area and its surrounding regions have been expanding rapidly in recent decades, and it is expected that this growth will continue in the years to come. With this rapid growth and the resulting increase in population, consideration should be given to strategic planning and modelling for the future expansion of Perth. The accurate estimation of projected traffic volumes has always been a major concern for transport modelers and planners. Development of a reliable strategic transport model depends significantly on the input data and on the calibrated parameters of the model, which must reflect the existing situation. Trip distribution, the second step in four-step modelling (FSM), is complex due to its behavioral nature. The gravity model is the most common method for trip distribution. The spatial separation between Origin and Destination (OD) zones is reflected in the gravity model by applying deterrence functions, which provide an opportunity to include people's behavior in choosing their destinations based on the distance, time and cost of their journeys. Deterrence functions play an important role in the distribution of trips within a study area and simulate trip distances; they should therefore be calibrated for any particular strategic transport model to correctly reflect trip behavior within the modelling area. This paper aims to review the most common deterrence functions and propose a calibrated deterrence function for work trips within the Perth Metropolitan Area based on the information obtained from the latest available Household data and Perth and Region Travel Survey (PARTS) data. As part of this study, a four-step transport model using EMME software has been developed for the Perth Metropolitan Area to assist with the analysis and findings.
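
The sketch below illustrates one common choice, the combined (gamma) deterrence function f(cost) = a·cost^b·exp(c·cost), inside a singly constrained gravity distribution; the zone data and parameter values are placeholders, not the calibrated Perth values.

```python
import numpy as np

def deterrence(cost, a=1.0, b=-0.5, c=-0.05):
    """Combined (gamma) deterrence function f(cost) = a * cost^b * exp(c * cost)."""
    return a * np.power(cost, b) * np.exp(c * cost)

# Hypothetical zone data: origin productions, destination attractions,
# and an inter-zonal generalized cost matrix in minutes.
O = np.array([500.0, 300.0, 200.0])
D = np.array([400.0, 350.0, 250.0])
cost = np.array([[5.0, 15.0, 25.0],
                 [15.0, 5.0, 10.0],
                 [25.0, 10.0, 5.0]])

f = deterrence(cost)
# Singly (origin) constrained gravity model:
# T_ij = O_i * D_j * f(c_ij) / sum_k D_k * f(c_ik)
T = O[:, None] * D[None, :] * f / (D[None, :] * f).sum(axis=1, keepdims=True)
print(T.round(1))
```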

Keywords: deterrence function, four-step modelling, origin destination, transport model

Procedia PDF Downloads 168
2477 Effects of Oral L-Carnitine on Liver Functions after Transarterial Chemoembolization in Hepatocellular Carcinoma Patients

Authors: Ali Kassem, Aly Taha, Abeer Hassan, Kazuhide Higuchi

Abstract:

Introduction: Transarterial chemoembolization (TACE) for hepatocellular carcinoma (HCC) is usually followed by hepatic dysfunction that limits its efficacy. L-carnitine has recently been studied as a hepatoprotective agent. Our aim is to evaluate the effects of L-carnitine against the deterioration of liver functions after TACE. Method: 53 patients with intermediate-stage HCC were assigned to two groups: an L-carnitine group (26 patients), who received an L-carnitine 300 mg tablet twice daily from 2 weeks before to 12 weeks after TACE, and a control group (27 patients) without L-carnitine therapy. 28 of the studied patients received branched-chain amino acid (BCAA) granules. Results: There were significant differences between the L-carnitine and control groups in mean serum albumin change from baseline to 1 week and 4 weeks after TACE (p < 0.05). L-carnitine maintained the Child-Pugh score at 1 week after TACE and exhibited improvement at 4 weeks after TACE (p < 0.01 vs. 1 week after TACE). The control group showed significant Child-Pugh score deterioration from baseline to 1 week after TACE (p < 0.05) and 12 weeks after TACE (p < 0.05). There were significant differences between the L-carnitine and control groups in mean Child-Pugh score change from baseline to 4 weeks (p < 0.05) and 12 weeks after TACE (p < 0.05). L-carnitine displayed improvement in prothrombin time (PT) from baseline to 1 week, 4 weeks (p < 0.05) and 12 weeks after TACE. PT in the control group remained below baseline at all follow-up intervals. Total bilirubin in the L-carnitine group decreased at 1 week post TACE, while in the control group it significantly increased at 1 week (p = 0.01). ALT and C-reactive protein elevations were suppressed at 1 week after TACE in the L-carnitine group. The hepatoprotective effects of L-carnitine were enhanced by the concomitant use of branched-chain amino acids. Conclusion: L-carnitine and BCAA combination therapy offers a novel supportive strategy after TACE in HCC patients.

Keywords: hepatocellular carcinoma, L-carnitine, liver functions, trans-arterial embolization

Procedia PDF Downloads 155
2476 The Station and Value of Beauty in Islam Based on the Holy Quran

Authors: Hamidreza Qaderi

Abstract:

Beauty is a part of our life, and we as Muslims cannot ignore it; nor does Islam ignore it. In the Quran, God has used words that mean beauty many times. Zain «زین» and its synonyms are among these words, used 46 times with different meanings of beauty. Some of them refer to worldly, unacceptable beauty, and others refer to moral beauty. In this article, the meaning of Zain 'beauty' in Surah Al Aaraf (The Heights) is explained and described. In fact, there are specific signs about beauty in verses 31 and 32 of this Surah, from which the station of beauty can be determined. To clarify this issue, the method of analytic philosophy is used to express the relation between this word and aesthetics and beauty. The results of this research show that beauty is an important issue in Islam, so much so that God orders Muslims to be beautiful when they want to pray.

Keywords: beauty, Quran, al zinah, Zain

Procedia PDF Downloads 255
2475 Modified Weibull Approach for Bridge Deterioration Modelling

Authors: Niroshan K. Walgama Wellalage, Tieling Zhang, Richard Dwight

Abstract:

State-based Markov deterioration models (SMDM) sometimes fail to find accurate transition probability matrix (TPM) values and hence lead to invalid future condition predictions or incorrect average deterioration rates, mainly due to drawbacks of existing nonlinear optimization-based algorithms and/or the subjective function types used for regression analysis. Furthermore, a set of separate functions of age for each condition state cannot be derived directly by using a Markov model for a given bridge element group, which, however, is of interest to industrial partners. This paper presents a new approach for generating homogeneous SMDM model output, namely the Modified Weibull approach, which consists of a set of appropriate functions to describe the percentage condition prediction of bridge elements in each state. These functions are combined with a Bayesian approach and a Metropolis-Hastings Algorithm (MHA) based Markov Chain Monte Carlo (MCMC) simulation technique for quantifying the uncertainty in model parameter estimates. In this study, factors contributing to rail bridge deterioration were identified. The inspection data for 1,000 Australian railway bridges over 15 years were reviewed and filtered accordingly based on real operational experience. A network-level deterioration model for a typical bridge element group was developed using the proposed Modified Weibull approach. The condition state predictions obtained from this method were validated using statistical hypothesis tests with a test data set. Results show that the proposed model is able not only to predict conditions at the network level accurately but also to capture the model uncertainties within a given confidence interval.
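
As a generic illustration of the Metropolis-Hastings MCMC machinery applied to Weibull parameters, the sketch below samples the shape and scale of a Weibull model from synthetic data; the likelihood, priors, and data are placeholders, not the paper's bridge-deterioration formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "time in condition state" data (years), assumed Weibull-distributed.
data = rng.weibull(1.8, size=200) * 12.0

def log_posterior(shape, scale):
    if shape <= 0 or scale <= 0:
        return -np.inf
    # Weibull log-likelihood plus weak exponential priors on both parameters.
    ll = np.sum(np.log(shape / scale) + (shape - 1) * np.log(data / scale)
                - (data / scale) ** shape)
    return ll - 0.01 * shape - 0.01 * scale

samples, current = [], np.array([1.0, 10.0])
lp_current = log_posterior(*current)
for _ in range(20000):
    proposal = current + rng.normal(scale=[0.05, 0.5])   # random-walk proposal
    lp_prop = log_posterior(*proposal)
    if np.log(rng.random()) < lp_prop - lp_current:       # Metropolis acceptance
        current, lp_current = proposal, lp_prop
    samples.append(current.copy())

samples = np.array(samples[5000:])                        # discard burn-in
print("posterior mean shape/scale:", samples.mean(axis=0).round(2))
```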

Keywords: bridge deterioration modelling, Modified Weibull approach, MCMC, Metropolis-Hastings algorithm, Bayesian approach, Markov deterioration models

Procedia PDF Downloads 727
2474 Response Solutions of 2-Dimensional Elliptic Degenerate Quasi-Periodic Systems With Small Parameters

Authors: Song Ni, Junxiang Xu

Abstract:

This paper concerns quasi-periodic perturbations with parameters of 2-dimensional degenerate systems, where the equilibrium point of the unperturbed system is of elliptic degenerate type. Assume that the perturbation is real-analytic and quasi-periodic with Diophantine frequency. Without imposing any further assumption on the perturbation, we can use a path of equilibrium points to handle the Melnikov non-resonance condition; then, by the Leray-Schauder Continuation Theorem and the Kolmogorov-Arnold-Moser (KAM) technique, it is proved that the equation has a small response solution for many sufficiently small parameters.
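
For orientation, a standard form of the Diophantine non-resonance condition assumed on the frequency vector is recalled below; the constants γ and τ are generic and not taken from the paper.

```latex
% Standard Diophantine (non-resonance) condition on the frequency vector
% \omega \in \mathbb{R}^{n}; the constants \gamma > 0 and \tau > n-1 are generic.
\[
  \left| \langle k, \omega \rangle \right| \;\ge\; \frac{\gamma}{|k|^{\tau}},
  \qquad \forall\, k \in \mathbb{Z}^{n} \setminus \{0\}.
\]
```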

Keywords: quasi-periodic systems, KAM-iteration, degenerate equilibrium point, response solution

Procedia PDF Downloads 86
2473 Modelling of Heat Generation in an 18650 Lithium-Ion Battery Cell under Varying Discharge Rates

Authors: Foo Shen Hwang, Thomas Confrey, Stephen Scully, Barry Flannery

Abstract:

Thermal characterization plays an important role in battery pack design. Lithium-ion batteries have to be maintained between 15-35 °C to operate optimally. Heat (Q) is generated internally within the batteries during both the charging and discharging phases. This can be quantified using several standard methods. The most common method of calculating a battery's heat generation is the addition of the Joule heating effects and the entropic changes across the battery. Such values can be derived from the open-circuit voltage (OCV), nominal voltage (V), operating current (I), battery temperature (T) and the rate of change of the open-circuit voltage with respect to temperature (dOCV/dT). This paper focuses on experimental characterization and comparative modelling of the heat generation rate (Q) across several current discharge rates (0.5C, 1C, and 1.5C) of an 18650 cell. The analysis is conducted utilizing several non-linear mathematical functions, including polynomial, exponential, and power models. Parameter fitting is carried out over the respective function orders: polynomial (n = 3-7), exponential (n = 2) and power functions. The fitted functions are then used as heat source functions in a 3-D computational fluid dynamics (CFD) solver under natural convection conditions. The generated temperature profiles are analyzed for errors against experimental discharge tests conducted at standard room temperature (25 °C). Initial experimental results display low deviation between the experimental and CFD temperature plots. As such, the heat generation functions formulated here could be utilized more easily for larger battery applications than other available methods.
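
A minimal sketch of this heat-generation calculation is given below; the sign convention (discharge current taken as positive) and the numerical values are assumptions for illustration, not measured data from the study.

```python
def heat_generation(current_A, ocv_V, v_V, temp_K, docv_dT_V_per_K):
    """Heat generation rate [W] of a cell, as the sum of irreversible
    (overpotential/Joule) heat and reversible entropic heat.

    Assumes the discharge current is taken as positive; sign conventions
    for the entropic term differ across the literature.
    """
    q_irreversible = current_A * (ocv_V - v_V)
    q_reversible = -current_A * temp_K * docv_dT_V_per_K
    return q_irreversible + q_reversible

# Illustrative values for a 2.5 Ah 18650 cell discharged at 1C (not measured data).
print(heat_generation(current_A=2.5, ocv_V=3.70, v_V=3.55,
                      temp_K=298.15, docv_dT_V_per_K=-0.0002))
```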

Keywords: computational fluid dynamics, curve fitting, lithium-ion battery, voltage drop

Procedia PDF Downloads 95
2472 A Deterministic Approach for Solving the Hull and White Interest Rate Model with Jump Process

Authors: Hong-Ming Chen

Abstract:

This work considers the resolution of the Hull and White interest rate model with a jump process. A deterministic process is adopted to model the random behavior of interest rate variation as deterministic perturbations, which depend on the time t. The Brownian motion and jump uncertainties are denoted by a piecewise constant function w(t) and a point function θ(t), respectively, inside the integral. It is shown that the interest rate function and the yield function of the Hull and White interest rate model with jump process can be obtained by solving a nonlinear semi-infinite programming problem. A relaxed cutting plane algorithm is then proposed for solving the resulting optimization problem. The method is calibrated to 3-month U.S. Treasury securities data and is used to analyze several effects on interest rate prices, including interest rate variability and the negative correlation between stock returns and interest rates. The numerical results illustrate that our approach generates yield functions with minimal fitting errors and small oscillation.
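
For contrast with the deterministic treatment proposed here, the hedged sketch below shows a conventional Euler discretization of a Hull-White short rate with compound Poisson jumps; the parameters are invented and are not calibrated to the Treasury data used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parameters (not calibrated to the 3-month Treasury data).
a, sigma = 0.1, 0.01                  # mean-reversion speed, diffusion volatility
jump_rate, jump_scale = 0.5, 0.005    # Poisson intensity [1/yr], jump size std
theta = lambda t: 0.03 * a            # drift term; constant here for simplicity
T, n = 5.0, 1250
dt = T / n

r = np.empty(n + 1)
r[0] = 0.02
for i in range(n):
    t = i * dt
    dW = rng.normal(0.0, np.sqrt(dt))
    n_jumps = rng.poisson(jump_rate * dt)
    jump = rng.normal(0.0, jump_scale, size=n_jumps).sum()
    # Euler step of dr = (theta(t) - a*r) dt + sigma dW + jump
    r[i + 1] = r[i] + (theta(t) - a * r[i]) * dt + sigma * dW + jump

print("mean short rate over the path:", r.mean().round(4))
```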

Keywords: optimization, interest rate model, jump process, deterministic

Procedia PDF Downloads 161
2471 Loss Function Optimization for CNN-Based Fingerprint Anti-Spoofing

Authors: Yehjune Heo

Abstract:

As biometric systems become widely deployed, the security of identification systems can easily be attacked with various spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method using Convolutional Neural Networks (CNNs), based on the types of loss functions and optimizers used. The types of CNNs used in this paper include AlexNet, VGGNet, and ResNet. By using various loss functions, including Cross-Entropy, Center Loss, Cosine Proximity, and Hinge Loss, and various optimizers, which include Adam, SGD, RMSProp, Adadelta, Adagrad, and Nadam, we obtained significant performance changes. We find that choosing the correct loss function for each model is crucial, since different loss functions lead to different errors on the same evaluation. By using a subset of the LivDet 2017 database, we validate our approach and compare the generalization power. It is important to note that we use a subset of LivDet and that the database is the same across all training and testing for each model. This way, we can compare the performance, in terms of generalization, on unseen data across all the different models. The best CNN (AlexNet) with the appropriate loss function and optimizer results in more than a 3% performance gain over the other CNN models with the default loss function and optimizer. In addition to the highest generalization performance, this paper also reports, for each model, the accuracy together with the parameter counts and mean average error rates, in order to find the model that consumes the least memory and computation time for training and testing. Although AlexNet has less complexity than the other CNN models, it proves to be very efficient. For practical anti-spoofing systems, the deployed version should use a small amount of memory and should run very fast with high anti-spoofing performance. For our deployed version on smartphones, additional processing steps, such as quantization and pruning algorithms, have been applied to our final model.
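
A minimal sketch of such a loss/optimizer sweep is shown below using Keras built-ins on a small stand-in CNN; it is not the authors' AlexNet/VGGNet/ResNet setup, center loss is omitted because it is not a Keras built-in, and random arrays stand in for the LivDet 2017 data.

```python
import numpy as np
import tensorflow as tf

# Placeholder data standing in for LivDet 2017 fingerprint patches (2 classes).
x_train = np.random.rand(256, 64, 64, 1).astype("float32")
y_train = tf.keras.utils.to_categorical(np.random.randint(0, 2, 256), 2)

def small_cnn():
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(2, activation="softmax"),
    ])

losses = {
    "cross_entropy": tf.keras.losses.CategoricalCrossentropy(),
    "hinge": tf.keras.losses.CategoricalHinge(),
    "cosine": tf.keras.losses.CosineSimilarity(),
}
optimizers = {
    "adam": tf.keras.optimizers.Adam,
    "sgd": tf.keras.optimizers.SGD,
    "rmsprop": tf.keras.optimizers.RMSprop,
}

# Train one small model per loss/optimizer combination and report its accuracy.
for loss_name, loss in losses.items():
    for opt_name, opt in optimizers.items():
        model = small_cnn()
        model.compile(optimizer=opt(), loss=loss, metrics=["accuracy"])
        hist = model.fit(x_train, y_train, epochs=1, batch_size=32, verbose=0)
        print(loss_name, opt_name, round(hist.history["accuracy"][-1], 3))
```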

Keywords: anti-spoofing, CNN, fingerprint recognition, loss function, optimizer

Procedia PDF Downloads 136
2470 Risks of Investment in the Development of Its Personnel

Authors: Oksana Domkina

Abstract:

According to modern economic theory, human capital has become one of the main production factors and the most promising direction of investment, as such investment provides the opportunity to obtain high and long-term economic and social effects. The informational technology (IT) sector is the representative of this new economy that is most dependent on human capital as the main competitive factor. So the question for this sector is not whether investment in the development of personnel should be made, but what the most effective ways of executing it are and who has to pay for the education: the worker, the company, or the government. In this paper we examine the IT sector, describe the labor market of IT workers and its development, and analyze the risks that IT companies may face if they invest in the development of their workers and the factors that influence these risks. The main problem and difficulty in the quantitative estimation and forecasting of the risk of investment in the human capital of a company is the human factor. Human behavior is often unpredictable and complex, so it requires specific approaches and methods of assessment. To build a comprehensive method of estimating the risk of investment in the human capital of a company that takes the human factor into account, we decided to use the method of the analytic hierarchy process (AHP), initially created and developed by Thomas Saaty. We separated three main groups of factors: risks related to the worker, risks related to the company, and external factors. To obtain data for our research, we conducted a survey among the HR departments of Ukrainian IT companies and used them as experts for the AHP method. The results showed that IT companies mostly invest in the development of their workers, although several hire only already-qualified personnel. According to the results, the most significant risks are the risk of ineffective training and the risk of non-investment, which are both related to the firm. The analysis of risk factors related to the employee showed that the factors of personal reasons, motivation, and work performance have almost the same weights of importance. Regarding the internal factors of the company, the factor of compensation and benefits plays a major role, together with the factors of interesting projects, team, and career opportunities. As for the external environment, one of the most dangerous risk factors is competitor activities, while the political and economic situation factor also has a relatively high weight, which is easily explained by the influence of the severe crisis in Ukraine during 2014-2015. The presented method allows taking into consideration all the main factors that affect the risk of investment in the human capital of a company. This gives a basis for further research in this field and allows for the creation of a practical framework for making decisions regarding the personnel development strategy and specific employees' development plans for HR departments.

Keywords: risks, personnel development, investment in development, factors of risk, risk of investment in development, IT, analytic hierarchy process, AHP

Procedia PDF Downloads 300
2469 Identification of Configuration Space Singularities with Local Real Algebraic Geometry

Authors: Marc Diesse, Hochschule Heilbronn

Abstract:

We address the question of identifying the configuration space singularities of linkages, i.e., points where the configuration space is not locally a submanifold of Euclidean space. Because the configuration space cannot be smoothly parameterized at such points, these singularity types have a significantly negative impact on the kinematics of the linkage. It is known that Jacobian methods do not provide sufficient conditions for the existence of CS-singularities. Herein, we present several additional algebraic criteria that do provide sufficient conditions. Further, we use those criteria to analyze certain classes of planar linkages. These examples also show how the presented criteria can be checked using algorithmic methods.

Keywords: linkages, configuration space-singularities, real algebraic geometry, analytic geometry

Procedia PDF Downloads 147
2468 Key Frame Based Video Summarization via Dependency Optimization

Authors: Janya Sainui

Abstract:

With the rapid growth of digital videos and data communications, video summarization, which provides a shorter version of a video for fast browsing and retrieval, is necessary. Key frame extraction is one of the mechanisms for generating a video summary. In general, the extracted key frames should both represent the entire video content and contain minimum redundancy. However, most of the existing approaches select key frames heuristically; hence, the selected key frames may not be the most distinct frames and/or may not cover the entire content of the video. In this paper, we propose a method of video summarization that provides reasonable objective functions for selecting key frames. In particular, we apply a statistical dependency measure called quadratic mutual information as our objective function for maximizing the coverage of the entire video content as well as minimizing the redundancy among the selected key frames. The proposed key frame extraction algorithm finds key frames by solving an optimization problem. Through experiments, we demonstrate the success of the proposed video summarization approach, which produces video summaries with better coverage of the entire video content and less redundancy among key frames compared to state-of-the-art approaches.
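
The sketch below is not the quadratic-mutual-information objective used in the paper; it only illustrates key-frame selection as an optimization problem by greedily trading off coverage against redundancy with simple histogram similarities.

```python
import numpy as np

def frame_histograms(frames, bins=32):
    """Grayscale intensity histogram per frame, L1-normalized."""
    hists = [np.histogram(f, bins=bins, range=(0, 255))[0] for f in frames]
    hists = np.array(hists, dtype=float)
    return hists / hists.sum(axis=1, keepdims=True)

def greedy_key_frames(frames, k=5, alpha=0.5):
    """Pick k frames maximizing coverage of all frames minus redundancy."""
    h = frame_histograms(frames)
    sim = h @ h.T                      # simple pairwise frame similarity
    selected = []
    for _ in range(k):
        best, best_score = None, -np.inf
        for i in range(len(frames)):
            if i in selected:
                continue
            cand = selected + [i]
            coverage = sim[:, cand].max(axis=1).mean()
            if len(cand) > 1:
                pairs = sim[np.ix_(cand, cand)][np.triu_indices(len(cand), 1)]
                redundancy = pairs.mean()
            else:
                redundancy = 0.0
            score = coverage - alpha * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return sorted(selected)

# Synthetic "video": 100 random 48x64 grayscale frames.
video = np.random.randint(0, 256, size=(100, 48, 64))
print(greedy_key_frames(video, k=5))
```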

Keywords: video summarization, key frame extraction, dependency measure, quadratic mutual information

Procedia PDF Downloads 266
2467 Decision Support System for the Management and Maintenance of Sewer Networks

Authors: A. Bouamrane, M. T. Bouziane, K. Boutebba, Y. Djebbar

Abstract:

This paper aims to develop a decision support tool to provide solutions to the problems of sewer network management and maintenance, in order to assist the manager in ranking sections by priority of intervention while taking into account technical, economic, social and environmental standards as well as the manager's strategy. This solution uses the Analytic Network Process (ANP) developed by Thomas Saaty, coupled with a set of tools for modelling and collecting integrated data from a geographic information system (GIS). It provides the decision maker with a tool adapted to the reality on the ground and effective in use with respect to the means and objectives of the manager.

Keywords: multi-criteria decision support, maintenance, Geographic Information System, modelling

Procedia PDF Downloads 637
2466 Path Integrals and Effective Field Theory of Large Scale Structure

Authors: Revant Nayar

Abstract:

In this work, we recast the equations describing large scale structure (LSS), and by extension all nonlinear fluids, in the path integral formalism. We first calculate the well-known two- and three-point functions using the Schwinger-Keldysh formalism, commonly used to perturbatively solve path integrals in non-equilibrium systems. Then we include EFT corrections due to pressure, viscosity, and noise as effects on the time-dependent propagator. We are able to express results for arbitrary two- and three-point correlation functions in LSS in terms of differential operators acting on a triple-K master integral. We also, for the first time, obtain analytical results for more general initial conditions deviating from the usual power law P ∝ kⁿ by introducing a mass scale in the initial conditions. This robust field-theoretic formalism empowers us with tools from strongly coupled QFT, such as the OPE and holographic duals, to study the strongly non-linear regime of LSS and turbulent fluid dynamics. These could be used to capture the strongly non-linear dynamics of fluids fully and to move towards solving the open problem of classical turbulence.

Keywords: quantum field theory, cosmology, effective field theory, renormalisation

Procedia PDF Downloads 135
2465 Self-denigration in Doctoral Defense Sessions: Scale Development and Validation

Authors: Alireza Jalilifar, Nadia Mayahi

Abstract:

The dissertation defense as a complicated conflict-prone context entails the adoption of elegant interactional strategies, one of which is self-denigration. This study aimed to develop and validate a self-denigration model that fits the context of doctoral defense sessions in applied linguistics. Two focus group discussions provided the basis for developing this conceptual model, which assumed 10 functions for self-denigration, namely good manners, modesty, affability, altruism, assertiveness, diffidence, coercive self-deprecation, evasion, diplomacy, and flamboyance. These functions were used to design a 40-item questionnaire on the attitudes of applied linguists concerning self-denigration in defense sessions. The confirmatory factor analysis of the questionnaire indicated the predictive ability of the measurement model. The findings of this study suggest that self-denigration in doctoral defense sessions is the social representation of the participants’ values, ideas and practices adopted as a negotiation strategy and a conflict management policy for the purpose of establishing harmony and maintaining resilience. This study has implications for doctoral students and academics and illuminates further research on self-denigration in other contexts.

Keywords: academic discourse, politeness, self-denigration, grounded theory, dissertation defense

Procedia PDF Downloads 137
2464 A Practical and Efficient Evaluation Function for 3D Model Based Vehicle Matching

Authors: Yuan Zheng

Abstract:

3D model-based vehicle matching provides a new way for vehicle recognition, localization and tracking. Its key is to construct an evaluation function, also called a fitness function, to measure the degree of vehicle matching. The existing fitness functions often perform poorly when clutter and occlusion exist in traffic scenarios. In this paper, we present a practical and efficient fitness function. Unlike the existing evaluation functions, the proposed fitness function studies the vehicle matching problem from both local and global perspectives, exploiting the pixel gradient information as well as the silhouette information. In view of the discrepancy between the 3D vehicle model and the real vehicle, a weighting strategy is introduced to treat the fitting of the model's wireframes differently. Additionally, a normalization operation for the model's projection is performed to improve the accuracy of the matching. Experimental results on real traffic videos reveal that the proposed fitness function is efficient and robust to cluttered backgrounds and partial occlusion.

Keywords: 3D-2D matching, fitness function, 3D vehicle model, local image gradient, silhouette information

Procedia PDF Downloads 399
2463 An Interpolation Tool for Data Transfer in Two-Dimensional Ice Accretion Problems

Authors: Marta Cordero-Gracia, Mariola Gomez, Olivier Blesbois, Marina Carrion

Abstract:

One of the difficulties in icing simulations arises for extended periods of exposure, when very large ice shapes are created. As well as being large, they can have complex shapes, such as a double horn. For icing simulations, these configurations are currently computed in several steps. The icing step is stopped when the ice shapes become too large, at which point a new mesh has to be created to allow further CFD and ice growth simulations to be performed. This can be very costly and is a limiting factor in the simulations that can be performed. A way to avoid the costly human intervention in the re-meshing step of a multistep icing computation is to use mesh deformation instead of re-meshing. The aim of the present work is to apply an interpolation method based on Radial Basis Functions (RBF) to transfer deformations from the surface mesh to the volume mesh. This deformation tool has been developed specifically for icing problems. It is able to deal with localized, sharp and large deformations, unlike the tools traditionally used for smoother wing deformations. This tool is presented along with validation on typical two-dimensional icing shapes.
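
A minimal sketch of the idea, using SciPy's general-purpose RBFInterpolator to propagate known surface-node displacements to volume-mesh nodes, is given below; the point sets, displacement field, and kernel choice are illustrative assumptions, not the tool described in the paper.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)

# Known displacements on surface (ice-accretion) nodes: 2-D points -> (dx, dy).
surface_pts = rng.uniform(-1.0, 1.0, size=(200, 2))
surface_disp = np.column_stack([
    0.05 * np.exp(-8.0 * (surface_pts**2).sum(axis=1)),   # a localized bump
    np.zeros(len(surface_pts)),
])

# Volume-mesh nodes whose displacement we want to interpolate.
volume_pts = rng.uniform(-2.0, 2.0, size=(5000, 2))

# Fit the RBF interpolant on the surface data and evaluate it on the volume mesh.
rbf = RBFInterpolator(surface_pts, surface_disp, kernel="thin_plate_spline")
volume_disp = rbf(volume_pts)
deformed_volume = volume_pts + volume_disp
print(volume_disp.max(axis=0))
```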

Keywords: ice accretion, interpolation, mesh deformation, radial basis functions

Procedia PDF Downloads 313
2462 Development of a Fuzzy Logic Based Model for Monitoring Child Pornography

Authors: Mariam Ismail, Kazeem Rufai, Jeremiah Balogun

Abstract:

A study was conducted to apply fuzzy logic to the development of a monitoring model for child pornography based on associated risk factors, which can be used by forensic experts or integrated into forensic systems for the early detection of child pornographic activities. A number of methods were adopted in the study: an extensive review of related works was done in order to identify the factors that are associated with child pornography, following which the factors were validated by an expert sex psychologist and guidance counselor, and relevant data were collected. Fuzzy membership functions were used to fuzzify the identified variables alongside the risk of the occurrence of child pornography, based on the inference rules provided by the experts consulted, and the fuzzy logic expert system was simulated using the Fuzzy Logic Toolbox available in MATLAB Release 2016. The results showed that there were 4 categories of risk factors required for assessing the risk of a suspect committing child pornography offenses. Either 2 or 3 triangular membership functions were used to formulate the risk factors, based on the 2 or 3 labels assigned, respectively. Five fuzzy logic models were formulated, such that the first 4 were used to assess the impact of each category on child pornography, while the last one takes the outputs of the first 4 fuzzy logic models as the inputs required for assessing the overall risk of child pornography. The following conclusion was made: factors related to personal traits, social traits, history of child pornography crimes, and self-regulatory deficiency traits of the suspect are required for the assessment of the risk of child pornography crimes committed by a suspect. Using the values of the risk factors identified and selected for this study, the risk of child pornography can be easily assessed in order to determine the likelihood of a suspect perpetrating the crime.

Keywords: fuzzy, membership functions, pornography, risk factors

Procedia PDF Downloads 129
2461 Trainability of Executive Functions during Preschool Age: Analysis of Inhibition of 5-Year-Old Children

Authors: Christian Andrä, Pauline Hähner, Sebastian Ludyga

Abstract:

Introduction: In the recent past, discussions on the importance of physical activity for child development have contributed to a growing interest in executive functions, which refer to cognitive processes. By controlling, modulating and coordinating sub-processes, they make it possible to achieve superior goals. Major components include working memory, inhibition and cognitive flexibility. While executive functions can be trained easily in school children, there are still research deficits regarding their trainability during preschool age. Methodology: This quasi-experimental study with pre- and post-design analyzes 23 children [age: 5.0 (mean value) ± 0.7 (standard deviation)] from four different sports groups. The intervention group was made up of 13 children (IG: 4.9 ± 0.6), while the control group consisted of ten children (CG: 5.1 ± 0.9). Between the pre-test and the post-test, the children from the intervention group participated in special games that train executive functions (i.e., changing the rules of a game, introducing new stimuli into familiar games) for ten units of their weekly sports program. The sports program of the control group was not modified. A computer-based version of the Eriksen Flanker Task was employed in order to analyze the participants' inhibition ability. In two rounds, the participants had to respond 50 times, as fast as possible, to a certain target (the direction of sight of a fish; the target was always placed in a central position between five fish). Congruent (all fish have the same direction of sight) and incongruent (the central fish faces the opposite direction) stimuli were used. The relevant parameters were response time and accuracy. The main objective was to investigate whether children from the intervention group show more improvement in the two parameters than the children from the control group. Major findings: The intervention group revealed significant improvements in congruent response time (pre: 1.34 s, post: 1.12 s, p<.01), while the control group did not show any statistically relevant difference (pre: 1.31 s, post: 1.24 s). Likewise, the comparison of incongruent response times indicates a comparable result (IG: pre: 1.44 s, post: 1.25 s, p<.05 vs. CG: pre: 1.38 s, post: 1.38 s). In terms of accuracy for congruent stimuli, the intervention group showed significant improvements (pre: 90.1 %, post: 95.9 %, p<.01). In contrast, no significant improvement was found for the control group (pre: 88.8 %, post: 92.9 %). Vice versa, the intervention group did not display any significant results for incongruent stimuli (pre: 74.9 %, post: 83.5 %), while the control group revealed a significant difference (pre: 68.9 %, post: 80.3 %, p<.01). The analysis of three out of four criteria demonstrates that children who took part in the special sports program improved more than children who did not. The contrary result for the last criterion could be caused by the control group's low results in the pre-test. Conclusion: The findings illustrate that inhibition can be trained as early as preschool age. The combination of familiar games with increased requirements for attention and control processes appears to be particularly suitable.

Keywords: executive functions, flanker task, inhibition, preschool children

Procedia PDF Downloads 253
2460 Leadership and Whether It Stems from Innate Abilities or from Situation

Authors: Salwa Abdelbaki

Abstract:

This research investigated how leaders develop, asking whether they become leaders due to their innate abilities or whether they gain leadership characteristics through interactions based on the requirements of a situation. If the first is true, then a leader should be successful in any situation. Otherwise, a leader may succeed only in a specific situation. A series of experiments was carried out on three groups consisting of males and females. First, a group of 148 students with different specializations had to select a leader. Another group of 51 students had to recall their previous experiences and their knowledge of each other to identify who had been leaders in different situations. Then a series of analytic tools was applied to the identified leaders and to the whole groups to find out how the leaders had developed. A group of 40 young children was also studied to find young leaders among them and to analyze their characteristics.

Keywords: leadership, innate characteristics, situation, leadership theories

Procedia PDF Downloads 288
2459 Handwriting Velocity Modeling by Artificial Neural Networks

Authors: Mohamed Aymen Slim, Afef Abdelkrim, Mohamed Benrejeb

Abstract:

Handwriting is a physical demonstration of a complex cognitive process learnt by man since his childhood. People with disabilities or suffering from various neurological diseases face many difficulties resulting from problems located at the level of the muscle stimuli (EMG) or the brain signals (EEG), which arise at the stage of writing. The handwriting velocity of the same writer or of different writers varies according to different criteria: age, attitude, mood, writing surface, etc. Therefore, it is interesting to build an experimental database of records taking, as the primary reference, the writing speed of different writers, which would allow studying the global system during the handwriting process. This paper deals with a new approach to modeling the handwriting system based on the velocity criterion, through the concepts of artificial neural networks, precisely Radial Basis Function (RBF) neural networks. The obtained simulation results show a satisfactory agreement between the responses of the developed neural model and the experimental data for various letters and forms, demonstrating the efficiency of the proposed approach.
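
As a generic illustration of the model class, the sketch below fits a small Gaussian RBF network to a synthetic velocity profile by linear least squares; the data, centre placement, and network size are invented and do not represent the authors' network or the EMG/EEG recordings.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "velocity profile" of a pen stroke sampled over time (not real data).
t = np.linspace(0.0, 1.0, 200)
velocity = np.sin(2 * np.pi * t) * np.exp(-3 * t) + 0.02 * rng.normal(size=t.size)

# Gaussian RBF features with fixed centres and width.
centres = np.linspace(0.0, 1.0, 15)
width = 0.08
Phi = np.exp(-((t[:, None] - centres[None, :]) ** 2) / (2 * width**2))
Phi = np.column_stack([Phi, np.ones_like(t)])        # add a bias column

# Output-layer weights obtained by linear least squares.
weights, *_ = np.linalg.lstsq(Phi, velocity, rcond=None)
prediction = Phi @ weights
print("RMS error:", np.sqrt(np.mean((prediction - velocity) ** 2)).round(4))
```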

Keywords: Electro Myo Graphic (EMG) signals, experimental approach, handwriting process, Radial Basis Functions (RBF) neural networks, velocity modeling

Procedia PDF Downloads 440
2458 Elastohydrodynamic Lubrication Study Using Discontinuous Finite Volume Method

Authors: Prawal Sinha, Peeyush Singh, Pravir Dutt

Abstract:

Problems in elastohydrodynamic lubrication have attracted a lot of attention in the last few decades. Solving a two-dimensional problem has always been a big challenge. In this paper, a new discontinuous finite volume method (DVM) for the two-dimensional point contact Elastohydrodynamic Lubrication (EHL) problem has been developed and analyzed. A complete algorithm is presented for solving such a problem. The method is robust and easily parallelized in an MPI architecture. The GMRES technique is implemented to solve the matrix obtained after the formulation. A new approach is followed in which discontinuous piecewise polynomials are used for the trial functions. It is natural to assume that the advantages of using discontinuous functions in finite element methods should also apply to finite volume methods. The nature of the discontinuity of the trial function is such that the elements in the corresponding dual partition have the smallest support compared with classical finite volume methods. The film thickness calculation is done using a singular quadrature approach. The results obtained are presented graphically and discussed. This method is well suited for solving the EHL point contact problem and could probably be used in commercial software.
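
As a small illustration of the linear-solver step, the sketch below applies SciPy's GMRES to a sparse nonsymmetric system standing in for the matrix produced by the DVM formulation; the system itself is a placeholder.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

# A small nonsymmetric sparse system standing in for the DVM discretization.
n = 200
A = diags([-1.0, 2.5, -1.2], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = gmres(A, b, maxiter=500)
print("converged" if info == 0 else f"GMRES flag {info}",
      "residual:", np.linalg.norm(A @ x - b))
```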

Keywords: elastohydrodynamic, lubrication, discontinuous finite volume method, GMRES technique

Procedia PDF Downloads 257
2457 Argument Representation of Non-Spatial Motion in Bahasa Melayu Based on Conceptual Structure Theory

Authors: Nurul Jamilah Binti Rosly

Abstract:

The typology of motion must be understood as a change from one location to another. But from a conceptual point of view, motion can also occur in non-spatial contexts associated with human and social factors. Therefore, from the conceptual point of view, the concept of non-spatial motion involves movement of time, ownership, identity, state, and existence. Accordingly, this study focuses on the lexical items share, accept, be, store, and exist as the study material. The data in this study were extracted from the Languages and Literature Corpus Database, Malaysia, and were analyzed using semantic and syntactic concepts from Conceptual Structure Theory (Ray Jackendoff, 2002). Semantic representations are expressed in the form of conceptual structures in argument functions that include the functions [events], [situations], [objects], [paths] and [places]. The findings show that the mapping of these arguments comprises three main stages, namely mapping the argument structure, mapping the tree, and mapping the roles of thematic items. Accordingly, this study shows the representation of non-spatial motion in the Malay language.

Keywords: arguments, concepts, constituencies, events, situations, thematics

Procedia PDF Downloads 129
2456 Conduction Transfer Functions for the Calculation of Heat Demands in Heavyweight Facade Systems

Authors: Mergim Gasia, Bojan Milovanovica, Sanjin Gumbarevic

Abstract:

Better energy performance of the building envelope is one of the most important aspects of energy savings if the goals set by the European Union are to be achieved in the future. Dynamic heat transfer simulations are used for the calculation of building energy consumption because they give more realistic energy demands compared to stationary calculations, which do not take the building's thermal mass into account. Software used for these dynamic simulations relies on methods based on analytical models, since numerical models are insufficient for longer periods. The analytical models used in this research fall into the category of conduction transfer functions (CTFs). Two methods for calculating the CTFs covered by this research are the Laplace method and the State-Space method. The literature review showed that the main disadvantage of these methods is that they are inadequate for heavyweight façade elements and for the shorter time steps used in the calculation. The algorithms for both the Laplace and State-Space methods are implemented in Mathematica, and the results are compared to the results from EnergyPlus and TRNSYS, since these software packages use similar algorithms for the calculation of the building's energy demand. This research aims to check the efficiency of the Laplace and State-Space methods for calculating the building's energy demand for heavyweight building elements and shorter sampling times, and it also provides the means for improving the algorithms used by these methods. As the reference point for the boundary heat flux density, the finite difference method (FDM) is used. Even though dynamic heat transfer simulations are superior to calculations based on stationary boundary conditions, they have their limitations and will give unsatisfactory results if not properly used.
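
For reference, a minimal explicit finite difference scheme for 1-D transient conduction through a homogeneous heavyweight layer, the kind of FDM reference solution mentioned above, is sketched below; the material properties and boundary conditions are placeholders, not the paper's test cases.

```python
import numpy as np

# Placeholder material: heavyweight concrete-like layer.
L, k, rho, cp = 0.30, 1.8, 2300.0, 1000.0     # m, W/(m K), kg/m3, J/(kg K)
alpha = k / (rho * cp)                         # thermal diffusivity [m2/s]

nx = 31
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha                       # satisfies the explicit stability limit
T = np.full(nx, 20.0)                          # initial temperature [deg C]
T_in, T_out = 20.0, 0.0                        # boundary temperatures

for _ in range(int(24 * 3600 / dt)):           # simulate one day
    T[0], T[-1] = T_in, T_out                  # Dirichlet boundary conditions
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])

# Boundary heat flux density on the inside surface (one-sided difference).
q_in = -k * (T[1] - T[0]) / dx
print("inside surface heat flux [W/m2]:", round(q_in, 2))
```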

Keywords: Laplace method, state-space method, conduction transfer functions, finite difference method

Procedia PDF Downloads 132
2455 Optimal Load Factors for Seismic Design of Buildings

Authors: Juan Bojórquez, Sonia E. Ruiz, Edén Bojórquez, David de León Escobedo

Abstract:

A life-cycle optimization procedure to establish the best load factor combinations for the seismic design of buildings is proposed. The expected cost of damage from future earthquakes within the life of the structure is estimated, and realistic cost functions are assumed. The functions include: repair cost, cost of contents damage, cost associated with loss of life, cost of injuries, and economic loss. The loads considered are dead, live and earthquake loads. The study is performed for reinforced concrete buildings located in Mexico City. The buildings are modeled as multiple-degree-of-freedom frame structures. The parameter selected to measure structural damage is the maximum inter-story drift. The structural models are subjected to 31 soft-soil ground motions recorded in the Lake Zone of Mexico City. In order to obtain the annual structural failure rates, a numerical integration method is applied.
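
As a generic illustration of how an annual failure rate can be obtained by numerical integration, the sketch below convolves an assumed seismic hazard curve with an assumed drift-based fragility function; the hazard and fragility parameters are illustrative, not the Mexico City values used in the study.

```python
import numpy as np
from scipy.stats import lognorm

# Illustrative hazard curve: annual exceedance rate of spectral acceleration Sa [g].
sa = np.linspace(0.01, 2.0, 400)
hazard = 1e-3 * (sa / 0.1) ** -2.2            # lambda(Sa), a simple power law

# Illustrative fragility: P(max inter-story drift exceeds its limit | Sa),
# lognormal with median 0.8 g and dispersion 0.5.
p_fail = lognorm.cdf(sa, s=0.5, scale=0.8)

# Annual failure rate: nu = integral of P(failure | Sa) * |d lambda / d Sa| dSa
dlambda_dsa = np.gradient(hazard, sa)
nu = np.trapz(p_fail * np.abs(dlambda_dsa), sa)
print("annual failure rate:", f"{nu:.2e}")
```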

Keywords: load factors, life-cycle analysis, seismic design, reinforced concrete buildings

Procedia PDF Downloads 617
2454 Identifying Large-Scale Photovoltaic and Concentrated Solar Power Hot Spots: Multi-Criteria Decision-Making Framework

Authors: Ayat-Allah Bouramdane

Abstract:

Solar Photovoltaic (PV) and Concentrated Solar Power (CSP) do not burn fossil fuels and, therefore, could meet the world's needs for low-carbon power generation, as they do not release greenhouse gases into the atmosphere while generating electricity. The power output of a solar PV module or CSP collector is proportional to the temperature and the amount of solar radiation received by its surface. Hence, determining the most convenient locations for PV and CSP systems is crucial to maximizing their output power. This study aims to provide a hands-on and plausible approach to the multi-criteria evaluation of the site suitability of PV and CSP plants using a combination of Geographic Referenced Information (GRI) and the Analytic Hierarchy Process (AHP). The GRI-based AHP approach is applied to specify the criteria and sub-criteria; to identify the unsuitable, low-, moderate-, high- and very highly suitable areas for each GRI layer; to perform the pairwise comparison matrix at each level of the hierarchy structure based on experts' knowledge; and to calculate the weights using AHP in order to create the final suitability map of solar PV and CSP plants in Morocco, with a particular focus on the Dakhla city. The results confirm that solar irradiation is the main decision factor for integrating these technologies into Morocco's energy policy goals, but they explicitly account for other factors that can not only limit the potential of certain locations but even exclude them, with the Dakhla city classified as an unsuitable area. We discuss the sensitivity of PV and CSP site suitability to different aspects, such as the methodology, the climate conditions, and the technology used for each source, and provide final recommendations for the Moroccan energy strategy by analyzing whether Morocco's actual PV and CSP installations are located within areas deemed suitable and by discussing several cases that provide mutual benefits across the Food-Energy-Water nexus. The adopted methodology and the resulting suitability map could be used by researchers or engineers to provide helpful information for decision-makers in terms of site selection, design, and planning of future solar plants, especially in areas suffering from energy shortages, such as the Dakhla city, which is now one of Africa's most promising investment hubs and is especially attractive to investors looking to root their operations in Africa and export to European markets.

Keywords: analytic hierarchy process, concentrated solar power, dakhla, geographic referenced information, Morocco, multi-criteria decision-making, photovoltaic, site suitability

Procedia PDF Downloads 173
2453 Supplier Selection by Considering Cost and Reliability

Authors: K. -H. Yang

Abstract:

The supplier selection problem is one of the important issues in supply chain management. Two categories of methodologies, qualitative and quantitative approaches, can be applied to supplier selection problems. However, due to the complexities of the problem and the lack of reliable quantitative data, qualitative approaches are used more often than quantitative approaches. This study considers operational cost and the supplier's reliability factor and solves the problem using a quantitative approach. A mixed integer programming model is the primary analytic tool. Analyses of different scenarios with variable cost and reliability structures show the effectiveness of this approach to the supplier selection problem.
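
A compact sketch of such a mixed integer program, selecting suppliers to minimize cost subject to demand and a minimum quantity-weighted reliability, is given below using PuLP; the supplier data and constraint structure are illustrative assumptions, not the paper's model.

```python
import pulp

# Hypothetical suppliers: unit cost, capacity, and reliability (prob. of on-time supply).
suppliers = {
    "S1": {"cost": 10.0, "capacity": 600, "reliability": 0.95},
    "S2": {"cost": 8.0,  "capacity": 400, "reliability": 0.90},
    "S3": {"cost": 12.0, "capacity": 800, "reliability": 0.99},
}
demand = 900
min_avg_reliability = 0.93
fixed_cost = 200.0            # fixed ordering cost incurred if a supplier is used

prob = pulp.LpProblem("supplier_selection", pulp.LpMinimize)
x = {s: pulp.LpVariable(f"qty_{s}", lowBound=0) for s in suppliers}      # order quantity
y = {s: pulp.LpVariable(f"use_{s}", cat="Binary") for s in suppliers}    # supplier used?

# Objective: variable purchasing cost plus fixed ordering cost.
prob += pulp.lpSum(suppliers[s]["cost"] * x[s] + fixed_cost * y[s] for s in suppliers)
# Demand must be met exactly.
prob += pulp.lpSum(x[s] for s in suppliers) == demand
# Capacity limits, linked to the binary selection variables.
for s in suppliers:
    prob += x[s] <= suppliers[s]["capacity"] * y[s]
# Quantity-weighted average reliability must meet the target (linearized form).
prob += pulp.lpSum(suppliers[s]["reliability"] * x[s] for s in suppliers) >= min_avg_reliability * demand

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for s in suppliers:
    print(s, x[s].value(), y[s].value())
print("total cost:", pulp.value(prob.objective))
```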

Keywords: mixed integer programming, quantitative approach, supplier’s reliability, supplier selection

Procedia PDF Downloads 384