Search results for: convergence analysis
27827 Considering Partially Developed Artifacts in Change Impact Analysis Implementation
Authors: Nazri Kama, Sufyan Basri, Roslina Ibrahim
Abstract:
It is important to manage changes in the software to meet the evolving needs of the customer. Accepting too many changes delays completion and incurs additional cost. Change impact analysis provides one type of information that helps in making this decision. Current impact analysis approaches assume that all classes in the class artifact are completely developed and use the class artifact as the source of analysis. However, these assumptions are impractical for impact analysis in the software development phase, since some classes in the class artifact are still under development or only partially developed, which leads to inaccuracy. This paper presents a novel impact analysis approach for use in the software development phase. The significant achievements of the approach are demonstrated through an extensive experimental validation using three case studies.
Keywords: software development, impact analysis, traceability, static analysis
Procedia PDF Downloads 606
27826 Minimum Pension Guarantee in Funded Pension Schemes: Theoretical Model and Global Implementation
Authors: Ishay Wolf
Abstract:
In this study, the financial position of pension actors in the market during the pension system transition toward a more funded, capitalized scheme is explored, mainly via an option benefit model. This is enabled by not treating the economy as a single earning cohort. We analytically demonstrate a socio-economic anomaly in the funded pension system, which favors high earning cohorts at the expense of low earning cohorts. This anomaly is realized through a lack of insurance and exposure to financial and systemic risks. Furthermore, the anomaly might lead to pension re-reform back to an unfunded scheme, mostly due to political pressure. We find that a minimum pension guarantee is a rebalancing mechanism for this anomaly, which increases the probability of a sustainable pension scheme. Specifically, we argue that implementing the guarantee with an intra-generational, risk-sharing mechanism is the most efficient way to reduce the effect of this abnormality. Moreover, we exhibit the convergence process toward implementing a minimum pension guarantee in many countries that have capitalized their pension systems during the last three decades, particularly among Latin American and CEE countries.
Keywords: benefits, pension scheme, put option, social security
Procedia PDF Downloads 121
27825 On the Analysis of Pseudorandom Partial Quotient Sequences Generated from Continued Fractions
Authors: T. Padma, Jayashree S. Pillai
Abstract:
Random entities are an essential component of any cryptographic application. This paper analyzes the suitability, in cryptographic applications, of a novel number-theory-based pseudorandom sequence called the Pseudorandom Partial Quotient Sequence (PPQS), generated from the continued fraction expansion of irrational numbers. The approach builds the algorithm around a hard mathematical problem. The sequence is tested for randomness, and its suitability as a cryptographic key is established by performing randomness analysis, key sensitivity and key space analysis, precision analysis, and an evaluation of its correlation properties.
Keywords: pseudorandom sequences, key sensitivity, correlation, security analysis, randomness analysis, sensitivity analysis
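The abstract leaves the generation procedure at a high level. Purely as a hedged illustration of the underlying construction (not the authors' implementation), the partial quotients of the continued fraction of a quadratic irrational such as √d can be generated exactly with integer arithmetic; the function name and interface below are illustrative only:

```python
import math

def sqrt_partial_quotients(d, n):
    """First n partial quotients of the continued fraction of sqrt(d),
    computed with exact integer arithmetic (no floating-point drift)."""
    a0 = math.isqrt(d)
    if a0 * a0 == d:          # d is a perfect square: expansion terminates
        return [a0]
    quotients = [a0]
    m, q, a = 0, 1, a0
    for _ in range(n - 1):
        # standard recurrence for periodic continued fractions of sqrt(d)
        m = q * a - m
        q = (d - m * m) // q
        a = (a0 + m) // q
        quotients.append(a)
    return quotients
```

For example, √2 yields the periodic sequence [1; 2, 2, 2, …]; a sequence like this would then be post-processed and tested (randomness, key sensitivity, correlation) before any cryptographic use.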
Procedia PDF Downloads 589
27824 Efficient Monolithic FEM for Compressible Flow and Conjugate Heat Transfer
Authors: Santhosh A. K.
Abstract:
This work presents an efficient monolithic finite element strategy for solving thermo-fluid-structure interaction problems involving compressible fluids and a linear-elastic structure. The formulation uses displacement variables for the structure and velocity variables for the fluid, with no additional variables required to ensure continuity of traction, velocity, temperature, and heat flux at the fluid-structure interface. The rate of convergence in each time step is quadratic, achieved by deriving an exact tangent stiffness matrix. The robustness and good performance of the method are ascertained by applying the proposed strategy to a wide spectrum of problems from the literature pertaining to steady, transient, two-dimensional, axisymmetric, and three-dimensional fluid flow and conjugate heat transfer. The current formulation gives excellent results on all the case studies conducted, including problems involving compressibility effects as well as problems where the fluid can be treated as incompressible.
Keywords: linear thermoelasticity, compressible flow, conjugate heat transfer, monolithic FEM
Procedia PDF Downloads 198
27823 Enhanced Multi-Scale Feature Extraction Using a DCNN by Proposing Dynamic Soft Margin SoftMax for Face Emotion Detection
Authors: Armin Nabaei, M. Omair Ahmad, M. N. S. Swamy
Abstract:
Many facial expression and emotion recognition methods based on traditional approaches such as LDA, PCA, and EBGM have been proposed. In recent years, deep learning models have provided a unique platform for detecting facial expressions and emotions by extracting features automatically. However, deep networks require large training datasets to extract features effectively. In this work, we propose an efficient emotion detection algorithm using face images when only small datasets are available for training. We design a deep network whose feature extraction capability is enhanced by several parallel modules between the input and output of the network, each focusing on extracting different types of coarse features with fine-grained details to break the symmetry of the produced information. In effect, we leverage long-range dependencies, the lack of which is one of the main drawbacks of CNNs. We further introduce a Dynamic Soft-Margin SoftMax. The conventional SoftMax tends to reach the gold labels too soon, which drives the model toward over-fitting, because it cannot determine adequately discriminative feature vectors for some variant class labels. We reduce the risk of over-fitting by using a dynamic rather than static input tensor shape in the SoftMax layer together with a specified soft margin. The margin acts as a controller of how hard the model must work to push dissimilar embedding vectors apart. The proposed categorical loss aims to compact same-class labels and separate different-class labels in the normalized log domain: predictions with high divergence from the ground-truth labels are penalized, so correct feature vectors are shortened and falsely predicted tensors are enlarged, assigning more weight to classes that lie close to one another (the "hard labels to learn").
By doing so, we constrain the model to generate more discriminative feature vectors for variant class labels. Finally, the proposed optimizer addresses the weak convergence of the Adam optimizer on non-convex problems. It updates gradients by an alternative procedure with an exponentially weighted moving average function for faster convergence, and exploits a weight decay method to reduce the learning rate drastically near optima, helping the model reach the dominant local minimum. We demonstrate the superiority of the proposed work on three widely used facial expression recognition datasets: 93.30% on FER-2013 (surpassing the previous first rank), 90.73% on RAF-DB (a 16% improvement over the first rank of ten years earlier), and 100% k-fold average accuracy on CK+, providing performance comparable to or better than that of networks requiring much larger training datasets.
Keywords: computer vision, facial expression recognition, machine learning, algorithms, deep learning, neural networks
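The abstract does not give the exact formulation of the Dynamic Soft-Margin SoftMax. The following is a minimal sketch of the general margin idea only, assuming an additive margin subtracted from the target-class logit so that correct classes become harder to satisfy; the function name, the margin mechanics, and the static margin value are illustrative assumptions, not the paper's dynamic scheme:

```python
import numpy as np

def soft_margin_softmax_loss(logits, labels, margin=0.3):
    """Cross-entropy where the target-class logit is reduced by a
    margin, forcing the model to push class embeddings further apart."""
    labels = np.asarray(labels)
    z = logits.astype(float).copy()
    z[np.arange(len(labels)), labels] -= margin   # penalize the target logit
    z -= z.max(axis=1, keepdims=True)             # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()
```

With margin=0 this reduces to ordinary SoftMax cross-entropy; a positive margin strictly increases the loss for otherwise identical predictions, which is the sense in which it "controls how hard the model must work."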
Procedia PDF Downloads 74
27822 Impact on the Results of Sub-Group Analysis on Performance of Recommender Systems
Authors: Ho Yeon Park, Kyoung-Jae Kim
Abstract:
The purpose of this study is to investigate whether friendship in social media can be an important factor in recommender systems, through a social-scientific analysis of friendship in popular social media such as Facebook and Twitter. To this end, this study analyzes data on friendship in real social media using component analysis and clique analysis, two sub-group analysis techniques from social network analysis. We propose an algorithm that reflects the results of sub-group analysis in the recommender system. The key to this algorithm is to ensure that recommendations coming from a user's friends are more likely to be reflected in that user's recommendations. Outcomes of various sub-group analyses were derived, and it was confirmed that the results differed from those of the existing recommender system. Therefore, the results of the sub-group analysis appear to affect the recommendation performance of the system. Future research will attempt to generalize these results through further analysis of various social data.
Keywords: sub-group analysis, social media, social network analysis, recommender systems
Procedia PDF Downloads 361
27821 Curriculum-Based Multi-Agent Reinforcement Learning for Robotic Navigation
Authors: Hyeongbok Kim, Lingling Zhao, Xiaohong Su
Abstract:
Deep reinforcement learning has been applied to various problems in robotics, such as autonomous driving and unmanned aerial vehicles. However, because of the sparse reward penalty for a collision with obstacles during the navigation mission, the agent fails to learn the optimal policy or requires a long time to converge. Therefore, using obstacles and enemy agents, this paper presents a curriculum-based boost learning method to effectively train compound skills during multi-agent reinforcement learning. First, to enable the agents to solve challenging tasks, we gradually increased learning difficulty by adjusting reward shaping instead of constructing different learning environments. Then, in a benchmark environment with static obstacles and moving enemy agents, the experimental results showed that the proposed curriculum learning strategy enhanced cooperative navigation and compound collision avoidance skills in uncertain environments while improving learning efficiency.
Keywords: curriculum learning, hard exploration, multi-agent reinforcement learning, robotic navigation, sparse reward
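The abstract's key point, adjusting reward shaping rather than swapping environments, can be sketched as a schedule on the collision penalty. The linear ramp, the warm-up fraction, and all numbers below are assumptions for illustration, not the paper's actual schedule:

```python
def shaped_reward(raw_reward, collided, episode, total_episodes,
                  max_penalty=-10.0, warmup_fraction=0.5):
    """Curriculum via reward shaping: the sparse collision penalty is
    phased in linearly over training, so early episodes can explore
    freely while later episodes enforce strict collision avoidance."""
    ramp = min(1.0, episode / (warmup_fraction * total_episodes))
    penalty = max_penalty * ramp if collided else 0.0
    return raw_reward + penalty
```

The environment and task stay fixed; only the reward signal hardens, which is what lets the same policy network progress from easy to difficult versions of the task.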
Procedia PDF Downloads 90
27820 Learning Algorithms for Fuzzy Inference Systems Composed of Double- and Single-Input Rule Modules
Authors: Hirofumi Miyajima, Kazuya Kishida, Noritaka Shigei, Hiromi Miyajima
Abstract:
Most self-tuning fuzzy systems, which are automatically constructed from learning data, are based on the steepest descent method (SDM). However, this approach often requires a long convergence time and gets stuck in shallow local minima. One solution is to use fuzzy rule modules with a small number of inputs, such as DIRMs (Double-Input Rule Modules) and SIRMs (Single-Input Rule Modules). In this paper, we consider a (generalized) DIRMs model composed of double- and single-input rule modules. Further, in order to reduce redundant modules in the (generalized) DIRMs model, pruning and generative learning algorithms for the model are suggested. To show their effectiveness, numerical simulations on function approximation, the Box-Jenkins problem, and obstacle avoidance problems are performed.
Keywords: Box-Jenkins problem, double-input rule module, fuzzy inference model, obstacle avoidance, single-input rule module
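A SIRMs model can be sketched in a few lines: each input drives its own single-input rule module, and the modules' outputs are combined by importance weights. The Gaussian membership functions and the data layout below are assumptions for illustration, not the paper's exact learning model:

```python
import math

def gauss(x, center, width):
    """Gaussian membership degree of x in a fuzzy set."""
    return math.exp(-((x - center) ** 2) / (2 * width ** 2))

def sirms_infer(inputs, modules, importances):
    """SIRMs inference: each module is a list of (center, width, output)
    rules over one input; the final output is the importance-weighted
    sum of each module's weighted-average rule output."""
    y = 0.0
    for x, rules, w in zip(inputs, modules, importances):
        num = sum(gauss(x, c, s) * out for c, s, out in rules)
        den = sum(gauss(x, c, s) for c, s, _ in rules)
        y += w * num / den
    return y
```

Because every module sees only one input (or two, in the DIRM case), the number of rules grows linearly rather than exponentially with input dimension, which is what makes SDM-based tuning tractable.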
Procedia PDF Downloads 352
27819 Blind Super-Resolution Reconstruction Based on PSF Estimation
Authors: Osama A. Omer, Amal Hamed
Abstract:
Successful blind image super-resolution algorithms require exact estimation of the Point Spread Function (PSF). In the absence of any prior information about the imaging system and the true image, this estimation is normally done by trial-and-error experimentation until an acceptable restored image quality is obtained. Multi-frame blind super-resolution algorithms often suffer from slow convergence and sensitivity to complex noise. This paper presents a super-resolution image reconstruction algorithm based on estimating the PSF that yields the optimum restored image quality. The PSF is estimated by the knife-edge method, implemented by measuring the spreading of edges in the reproduced HR image itself during the reconstruction process. The proposed reconstruction approach uses L1-norm minimization and robust regularization based on a bilateral prior to deal with different data and noise models. A series of experimental results shows that the proposed method outperforms previous work robustly and efficiently.
Keywords: blind, PSF, super-resolution, knife-edge, blurring, bilateral, L1 norm
Procedia PDF Downloads 363
27818 Sentiment Analysis: Comparative Analysis of Multilingual Sentiment and Opinion Classification Techniques
Authors: Sannikumar Patel, Brian Nolan, Markus Hofmann, Philip Owende, Kunjan Patel
Abstract:
Sentiment analysis and opinion mining have become emerging research topics in recent years, but most of the work is focused on data in the English language. Comprehensive research and analysis that consider multiple languages, machine translation techniques, and different classifiers are essential. This paper presents a comparative analysis of different approaches to multilingual sentiment analysis. These approaches fall into two groups: one classifies text without language translation, and the other translates the test data into a target language, such as English, before classification. The presented research and results are useful for understanding whether machine translation should be used for multilingual sentiment analysis or whether building language-specific sentiment classification systems is the better approach. The effects of language translation techniques, features, and the accuracy of various classifiers for multilingual sentiment analysis are also discussed in this study.
Keywords: cross-language analysis, machine learning, machine translation, sentiment analysis
Procedia PDF Downloads 711
27817 Sentiment Analysis in Social Networks Sites Based on a Bibliometrics Analysis: A Comprehensive Analysis and Trends for Future Research Planning
Authors: Jehan Fahim M. Alsulami
Abstract:
Academic research on sentiment analysis in social networks has advanced significantly over recent years and is flourishing on the collection of knowledge provided by various academic disciplines. In the current study, the status and development trend of the field of sentiment analysis in social networks is evaluated through a bibliometric analysis of academic publications. In particular, the distributions of publications and citations, the distribution of subjects, and the predominant journals, authors, and countries are analyzed. The collaboration degree is applied to measure scientific connections from different aspects. Moreover, keyword co-occurrence analysis is used to find the major research topics and their evolution throughout the time span. The area of sentiment analysis in social networks has gained growing attention in academia, with computer science and engineering as the top research subjects. China and the USA contribute the most to the area's development. Authors prefer to collaborate with those within the same nation. Among the research topics, newly risen topics such as COVID-19 and customer satisfaction are discovered.
Keywords: bibliometric analysis, sentiment analysis, social networks, social media
Procedia PDF Downloads 217
27816 Representativity Based Wasserstein Active Regression
Authors: Benjamin Bobbia, Matthias Picard
Abstract:
In recent years, active learning methodologies based on the representativity of the data have seemed more promising for limiting overfitting. We present a query methodology for regression that uses the Wasserstein distance to measure how representative the labelled dataset is of the global distribution. This work makes crucial use of GroupSort neural networks, which carry a double advantage: the Wasserstein distance can be exactly expressed in terms of such networks, and explicit bounds on their size and depth can be provided, together with rates of convergence. Heterogeneity of the dataset is also taken into account by weighting the Wasserstein distance with the approximation error from the previous step of active learning. Such an approach reduces overfitting and yields high prediction performance after only a few query steps. After detailing the methodology and algorithm, an empirical study is presented to investigate the range of our hyperparameters. The performance of this method is compared, in terms of the number of queries needed, with other classical and recent query methods on several UCI datasets.
Keywords: active learning, Lipschitz regularization, neural networks, optimal transport, regression
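The paper estimates the Wasserstein distance via GroupSort networks. Purely as a hedged point of reference (not the paper's estimator), in one dimension the empirical 1-Wasserstein distance between two equally sized samples reduces to a closed form on sorted values:

```python
def wasserstein_1d(a, b):
    """Empirical 1-Wasserstein distance between equally sized 1-D
    samples: the mean absolute difference of the sorted values."""
    if len(a) != len(b):
        raise ValueError("samples must have equal size")
    return sum(abs(x - y) for x, y in zip(sorted(a), sorted(b))) / len(a)
```

A labelled pool whose distance to the global sample is small is, in this sense, representative; the query strategy then favors points that shrink this distance fastest, weighted by the current model's approximation error.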
Procedia PDF Downloads 80
27815 Public Values in Service Innovation Management: Case Study in Elderly Care in Danish Municipality
Authors: Christian T. Lystbaek
Abstract:
Background: The importance of innovation management has traditionally been ascribed to private production companies; however, there is an increasing interest in public services innovation management. One of the major theoretical challenges arising from this situation is to understand the public values justifying public services innovation management. However, there is no single, stable definition of public value in the literature. The research question guiding this paper is: What is the supposed added value operating in the public sphere? Methodology: The study takes an action research strategy. This is a highly contextualized methodology, enacted within a particular set of social relations into which one expects to integrate the results. As such, this research strategy is particularly well suited for its potential to generate results that can be applied by managers. The aim of action research is to produce proposals with a creative dimension capable of compelling actors to act in a new and pertinent way in relation to the situations they encounter. The context of the study is a workshop on public services innovation within elderly care. The workshop brought together different actors, such as managers, personnel, and two groups of users-citizens (elderly clients and their relatives). The process was designed as an extension of the co-construction methods inherent in action research. Scenario methods and focus groups were applied to generate dialogue. The main strength of these techniques is to gather and exploit as much data as possible by exposing the discourse of justification used by the actors to explain or justify their points of view when interacting with others on a given subject. The approach does not directly interrogate the actors on their values, but allows their values to emerge through debate and dialogue. Findings: The public values related to public services innovation management in elderly care were identified in two steps.
In the first step, values were identified in the discussions, and through continuous analysis of the data, a network of interrelated values was developed. In the second step, tracking group consensus, we ascertained the degree to which the meaning attributed to each value was common to the participants, classifying the degree of consensus as high, intermediate, or low. High consensus corresponds to strong convergence in meaning, intermediate to generally shared meanings between participants, and low to divergence in meaning between participants. Only values with a high or intermediate degree of consensus were retained in the analysis. Conclusion: The study shows that the fundamental criterion for justifying public services innovation management is the capacity of actors to enact public values in their work. In the workshop, we identified two categories of public values, intrinsic values and behavioural values, along with a list of more specific values.
Keywords: public services innovation management, public value, co-creation, action research
Procedia PDF Downloads 278
27814 Algorithm for Automatic Real-Time Electrooculographic Artifact Correction
Authors: Norman Sinnigen, Igor Izyurov, Marina Krylova, Hamidreza Jamalabadi, Sarah Alizadeh, Martin Walter
Abstract:
Background: EEG is a non-invasive brain activity recording technique with a high temporal resolution that allows the use of real-time applications, such as neurofeedback. However, EEG data are susceptible to electrooculographic (EOG) and electromyography (EMG) artifacts (i.e., jaw clenching, teeth squeezing and forehead movements). Due to their non-stationary nature, these artifacts greatly obscure the information and power spectrum of EEG signals. Many EEG artifact correction methods are too time-consuming when applied to low-density EEG and have been focusing on offline processing or handling one single type of EEG artifact. A software-only real-time method for correcting multiple types of EEG artifacts of high-density EEG remains a significant challenge. Methods: We demonstrate an improved approach for automatic real-time EEG artifact correction of EOG and EMG artifacts. The method was tested on three healthy subjects using 64 EEG channels (Brain Products GmbH) and a sampling rate of 1,000 Hz. Captured EEG signals were imported in MATLAB with the lab streaming layer interface allowing buffering of EEG data. EMG artifacts were detected by channel variance and adaptive thresholding and corrected by using channel interpolation. Real-time independent component analysis (ICA) was applied for correcting EOG artifacts. Results: Our results demonstrate that the algorithm effectively reduces EMG artifacts, such as jaw clenching, teeth squeezing and forehead movements, and EOG artifacts (horizontal and vertical eye movements) of high-density EEG while preserving brain neuronal activity information. The average computation time of EOG and EMG artifact correction for 80 s (80,000 data points) 64-channel data is 300 – 700 ms depending on the convergence of ICA and the type and intensity of the artifact. 
Conclusion: An automatic EEG artifact correction algorithm based on channel variance, adaptive thresholding, and ICA improves high-density EEG recordings contaminated with EOG and EMG artifacts in real time.
Keywords: EEG, muscle artifacts, ocular artifacts, real-time artifact correction, real-time ICA
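The EMG step described in the Methods, channel variance plus adaptive thresholding plus channel interpolation, can be sketched as follows. The median/MAD threshold and the mean-based interpolation are simplifying assumptions for illustration, not the authors' exact pipeline (which additionally runs real-time ICA for the EOG component):

```python
import numpy as np

def detect_emg_channels(eeg, k=3.0):
    """Flag channels whose variance exceeds median + k * MAD of the
    per-channel variances (a simple adaptive threshold)."""
    var = eeg.var(axis=1)
    med = np.median(var)
    mad = np.median(np.abs(var - med))
    return np.where(var > med + k * mad)[0]

def interpolate_channels(eeg, bad):
    """Replace flagged channels with the mean of the remaining
    channels (a crude stand-in for spatial interpolation)."""
    clean = eeg.copy()
    good = [c for c in range(eeg.shape[0]) if c not in set(bad)]
    clean[bad] = eeg[good].mean(axis=0)
    return clean
```

In a streaming setting, both functions would run on each incoming buffer of samples, which is consistent with the sub-second latencies reported above.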
Procedia PDF Downloads 176
27813 Most Recent Lifespan Estimate for the Itaipu Hydroelectric Power Plant Computed by Using Borland and Miller Method and Mass Balance in Brazil, Paraguay
Authors: Anderson Braga Mendes
Abstract:
The Itaipu Hydroelectric Power Plant is located on the Paraná River, which forms a natural boundary between Brazil and Paraguay; thus, the facility is shared by both countries. Itaipu is the biggest hydroelectric generator in the world and supplies clean, renewable electrical energy covering 17% of Brazil's and 76% of Paraguay's demand. The plant began generating in 1984. It comprises 20 Francis turbines and has an installed capacity of 14,000 MW. Its historic generation record occurred in 2016 (103,098,366 MWh), and from the beginning of its operation until the last day of 2016, the plant produced a total of 2,415,789,823 MWh. The distinct sedimentologic aspects of the drainage area of the plant, from the stretch upstream (Porto Primavera and Rosana dams) to downstream (the Itaipu dam itself), were taken into account in order to best estimate the increase or decrease in sediment yield, using data from 2001 to 2016. Such data are collected through a network of 14 automatic sedimentometric stations managed by the company itself and operating on an hourly basis, covering an area of around 136,000 km² (92% of the incremental drainage area of the undertaking). Since 1972, a series of lifespan studies for the Itaipu Power Plant have been made, the first assessed by Hans Albert Einstein at the time of the feasibility studies for the enterprise. From that date onwards, eight further studies were made over the last 44 years, aiming to confer more precision upon the estimates based on more up-to-date data sets. The analysis of each monitoring station clearly showed strong increasing tendencies in sediment yield over the last 14 years, mainly in the Iguatemi, Ivaí, São Francisco Falso, and Carapá Rivers, the last of which is situated in Paraguay, whereas the others lie entirely in Brazilian territory.
Five lifespan scenarios considering different sediment yield tendencies were simulated with the aid of the software packages SEDIMENT and DPOSIT, both developed by the author of the present work. These packages follow the Borland and Miller methodology (the empirical area-reduction method). The soundest of the five scenarios under analysis indicated a lifespan forecast of 168 years, with the reservoir only 1.8% silted by the end of 2016, after 32 years of operation. Besides, the mass balance in the reservoir (water inflows minus outflows) between 1986 and 2016 shows that 2% of the whole Itaipu lake is silted nowadays. Owing to the convergence of both results, which were acquired using different methodologies and independent input data, it is worth concluding that the mathematical modeling is satisfactory and calibrated, thus lending credibility to this most recent lifespan estimate.
Keywords: Borland and Miller method, hydroelectricity, Itaipu Power Plant, lifespan, mass balance
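The area-reduction computation itself is beyond the scope of an abstract, but the companion mass-balance idea reduces to a one-line estimate: sediment deposited per year, converted to volume, accumulated against reservoir capacity. Every figure below is a hypothetical placeholder, not Itaipu data:

```python
def silted_fraction(annual_sediment_inflow_t, trap_efficiency,
                    bulk_density_t_per_m3, reservoir_volume_m3, years):
    """Fraction of a reservoir's volume filled by deposited sediment
    after a number of years, from a simple mass balance."""
    deposited_m3_per_year = (annual_sediment_inflow_t * trap_efficiency
                             / bulk_density_t_per_m3)
    return deposited_m3_per_year * years / reservoir_volume_m3

def lifespan_years(annual_sediment_inflow_t, trap_efficiency,
                   bulk_density_t_per_m3, reservoir_volume_m3,
                   dead_fraction=1.0):
    """Years until the chosen fraction of the reservoir is silted."""
    per_year = silted_fraction(annual_sediment_inflow_t, trap_efficiency,
                               bulk_density_t_per_m3, reservoir_volume_m3, 1)
    return dead_fraction / per_year
```

Agreement between such a mass-balance figure and the area-reduction simulation is the cross-check the abstract appeals to when it calls the modeling "satisfactory and calibrated."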
Procedia PDF Downloads 274
27812 Metaverse in Future Personal Healthcare Industry: From Telemedicine to Telepresence
Authors: Mohammed Saeed Jawad
Abstract:
The metaverse involves the convergence of three major technology trends: AI, VR, and AR. Together, these three technologies can provide an entirely new channel for delivering healthcare, with great potential to lower costs and improve patient outcomes on a larger scale. Telepresence is the technology that allows people to be together even when they are physically apart. Medical doctors can be represented by interactive avatars developed to hold smart conversations and give medical recommendations to patients at different stages of treatment. Medical digital assets, such as medical IoT devices for real-time remote healthcare monitoring, the doctors' avatars, and the physical constructions and layouts of hospitals and clinics, can be immersed in extended-reality 3D metaverse environments. There, doctors, nurses, and patients can interact and socialize with the related digital assets, which facilitate analytics of the sensed and collected personal medical data through visualized interaction with a digital twin of the patient's body, smart conversation and consultation with the doctors' avatars, or even guided remote-surgery operations.
Keywords: personal healthcare, metaverse, telemedicine, telepresence, avatar, medical consultation, remote-surgery
Procedia PDF Downloads 133
27811 A Weighted K-Medoids Clustering Algorithm for Effective Stability in Vehicular Ad Hoc Networks
Authors: Rejab Hajlaoui, Tarek Moulahi, Hervé Guyennet
Abstract:
In a highway scenario, vehicle speed can exceed 120 km/h. Therefore, any vehicle can enter or leave the network within a very short time. This mobility adversely affects network connectivity and decreases the lifetime of all established links. To ensure effective stability in vehicular ad hoc networks with minimal broadcast storms, we have developed a weighted algorithm based on the k-medoids clustering algorithm (WKCA). The number of clusters and the initial cluster heads are not selected randomly as usual, but by considering the available transmission range and the environment size. Then, to ensure optimal assignment of nodes to clusters in both k-medoids phases, the combined weight of each node is computed from additional metrics including direction, relative speed, and proximity. Empirical results prove that, in addition to the convergence speed that characterizes the k-medoids algorithm, our proposed model outperforms both the AODV-Clustering and OLSR-Clustering protocols under different densities and velocities in terms of end-to-end delay, packet delivery ratio, and throughput.
Keywords: communication, clustering algorithm, k-medoids, sensor, vehicular ad hoc network
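The combined weight described above can be sketched as a convex combination of direction, relative-speed, and proximity scores. The particular score functions, coefficients, and vehicle representation here are illustrative assumptions, not the WKCA formula:

```python
import math

def combined_weight(node, head, w_dir=0.4, w_speed=0.3, w_prox=0.3,
                    tx_range=300.0, v_max=40.0):
    """Hypothetical combined weight of a vehicle relative to a candidate
    cluster head: same direction, small relative speed, and small
    distance all push the weight toward 1."""
    direction = 1.0 if node["heading"] == head["heading"] else 0.0
    rel_speed = 1.0 - min(abs(node["v"] - head["v"]) / v_max, 1.0)
    dist = math.dist(node["pos"], head["pos"])
    proximity = max(0.0, 1.0 - dist / tx_range)
    return w_dir * direction + w_speed * rel_speed + w_prox * proximity
```

A node is then assigned to the cluster head maximizing this weight, so links that are likely to survive (co-moving, nearby vehicles) dominate cluster formation.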
Procedia PDF Downloads 238
27810 Understanding the Roots of Third World Problems: A Historical and Philosophical Sociology
Authors: Yaser Riki
Abstract:
There are plenty of considerations about the Third World and developing countries, but one of the main issues regarding these areas is how we can study them. This article draws attention to a fundamental way of approaching this subject through the convergence of history, philosophy, and sociology, in order to understand the complexity of Third World countries. These three disciplines are naturally connected and integrated, but they have gradually separated. While sociology originated from philosophy, this work is an attempt to generate a sociology that incorporates philosophy as well as history at its heart. This is descriptive-analytical research that searches the history of sociology for works and theories providing ideas for this purpose: the sociology of knowledge and science, The German Ideology (Karl Marx and Friedrich Engels), The Protestant Ethic (Max Weber), Ideology and Utopia (Karl Mannheim), and Dialectic of Enlightenment (Horkheimer and Adorno) supply the ideas needed. The paper offers a methodological and theoretical vision (historical-philosophical sociology) to identify factors, such as the system of thought, that are usually invisible and cause problems in societies, especially Third World countries. This is similar to what some of the founders of sociology did in the First World.
Keywords: the third world, methodology, sociology, philosophy, history, social change, development, social movements
Procedia PDF Downloads 104
27809 A Prospective Evaluation of Thermal Radiation Effects on Magneto-Hydrodynamic Transport of a Nanofluid Traversing a Spongy Medium
Authors: Azad Hussain, Shoaib Ali, M. Y. Malik, Saba Nazir, Sarmad Jamal
Abstract:
This article reports a fundamental numerical investigation of the impact of thermal radiation on MHD flow of a differential-type nanofluid past a porous plate. Here, viscosity is taken as a function of temperature. The energy equation is considered in the presence of viscous dissipation. The governing expressions for nanoparticle concentration, velocity, and temperature are first cast into dimensionless form via suitable transformations and then solved using the shooting technique to obtain numerical solutions. Graphs have been plotted to check the convergence of the constructed solutions. Finally, the influence of the relevant parameters on the nanoparticle concentration, velocity, and temperature fields is discussed in a comprehensive way. Moreover, physical measures of engineering importance, such as the Sherwood number, skin friction, and Nusselt number, are also calculated. It is observed that thermal radiation enhances the temperature for both Vogel's and Reynolds' models, but the normal stress parameter causes a reduction in the temperature profile.
Keywords: MHD flow, differential type nanofluid, porous medium, variable viscosity, thermal radiation
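The shooting technique named above can be illustrated on a toy boundary value problem, y'' = -y with y(0) = 0 and y(1) = 1, whose exact unknown initial slope is 1/sin(1) ≈ 1.1884. The RK4 integrator and bisection search below are a generic sketch of the method, not the authors' solver for the nanofluid equations:

```python
def rk4_integrate(f, y0, yp0, x_end, n=200):
    """Integrate y'' = f(x, y, y') from x = 0 with classical RK4 on the
    first-order system (y, v); return y(x_end)."""
    h = x_end / n
    x, y, v = 0.0, y0, yp0
    for _ in range(n):
        k1y, k1v = v, f(x, y, v)
        k2y, k2v = v + h/2*k1v, f(x + h/2, y + h/2*k1y, v + h/2*k1v)
        k3y, k3v = v + h/2*k2v, f(x + h/2, y + h/2*k2y, v + h/2*k2v)
        k4y, k4v = v + h*k3v, f(x + h, y + h*k3y, v + h*k3v)
        y += h/6 * (k1y + 2*k2y + 2*k3y + k4y)
        v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
        x += h
    return y

def shoot(f, y0, y_target, x_end, lo, hi, tol=1e-10):
    """Bisect on the unknown initial slope until the integrated solution
    hits the far boundary condition (assumes monotone dependence)."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if rk4_integrate(f, y0, mid, x_end) < y_target:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2
```

For the coupled momentum, energy, and concentration equations of the paper, the same idea is applied with several unknown initial slopes adjusted simultaneously until the far-field boundary conditions are satisfied.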
Procedia PDF Downloads 24027808 Documenting the Undocumented: Performing Counter-Narratives on Citizenship
Authors: Luis Pascasio
Abstract:
In a time when murky debates on US immigration policy are polarizing a nation steeped in partisan and nativist politics, certain media texts propose to challenge the dominant ways in which immigrant discourses are shaped in political debates. The paper examines how two media texts perform counter-hegemonic discourses against institutionalized concepts of citizenship. The article looks at Documented (2014), a documentary film written and directed by Jose Antonio Vargas, a Pulitzer-winning journalist-turned-activist and a self-proclaimed undocumented immigrant, and DefineAmerican.com, an online media platform that articulates the convergence of multiple voices and discourses about post-industrial and post-semiotic citizenship. As sites of meaning production, the two media texts perform counter-narratives that inspire new forms of mediated social activism and postcolonial identities. The paper argues that a closer examination of the media texts reveals emotional, thematic, and ideological claims to an interrogation of a diasporic discourse on redefining the rules of inclusion and exclusion within the postmodern dialogic of citizenship. Keywords: counter-narratives, documentary filmmaking, postmodern citizenship, diaspora media
Procedia PDF Downloads 320
27807 Nonlinear Relationship between Globalization and Control of Corruption along with Economic Growth
Authors: Elnaz Entezar, Reza Ezzati
Abstract:
In recent decades, flows of trade, capital, labor, technology, and information across international borders have increased, and globalization has become an undeniable process in international economics. Meanwhile, despite the positive aspects of globalization, its critics argue that the risks and costs of globalization for vulnerable developing economies and the world's impoverished people are high and significant. In this regard, this study uses data from the KOF Economic Institute and the World Bank for 113 countries over the period 2002-2012, applies a panel smooth transition regression, and takes gross domestic product as the transition variable to examine the nonlinear relationship between the research variables. The results reveal that globalization has a negative impact in the low regime (countries with low GDP) and a positive impact in the high regime (countries with high GDP). Although control of corruption has a positive impact on economic growth in the early stages of growth, beyond a threshold it has a negative impact. Keywords: globalization, corruption, panel smooth transition model, economic growth, threshold, economic convergence
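In a panel smooth transition regression, the coefficient on globalization moves continuously between the two regimes through a logistic transition function of GDP. A minimal sketch of that mechanism (the coefficients, slope γ, and location c below are made-up illustrative values, not the paper's estimates):

```python
import math

def transition(q, gamma, c):
    """Logistic transition g(q; gamma, c) in (0, 1): 0 = low regime, 1 = high."""
    return 1.0 / (1.0 + math.exp(-gamma * (q - c)))

def globalization_effect(q, beta_low, beta_high, gamma, c):
    """Regime-dependent marginal effect of globalization at GDP level q."""
    g = transition(q, gamma, c)
    return beta_low + (beta_high - beta_low) * g

# Illustrative coefficients mirroring the sign pattern reported in the abstract:
LOW, HIGH, GAMMA, C = -0.2, 0.3, 2.0, 5.0
effect_poor = globalization_effect(1.0, LOW, HIGH, GAMMA, C)   # low-GDP country
effect_rich = globalization_effect(9.0, LOW, HIGH, GAMMA, C)   # high-GDP country
```

The effect is negative well below the threshold c, positive well above it, and changes smoothly (rather than abruptly) in between, which is exactly what distinguishes PSTR from a hard threshold model.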
Procedia PDF Downloads 289
27806 Shotcrete Performance Optimisation and Audit Using 3D Laser Scanning
Authors: Carlos Gonzalez, Neil Slatcher, Marcus Properzi, Kan Seah
Abstract:
In many underground mining operations, shotcrete is used for permanent rock support, and shotcrete thickness is a critical measure of the success of this process. 3D Laser Mapping, in conjunction with Jetcrete, has developed a 3D laser scanning system specifically for measuring the thickness of shotcrete. The system is mounted on the shotcrete spraying machine and measures the rock faces before and after spraying; the calculated difference between the two 3D surface models is taken as the thickness of the sprayed concrete. Typical work patterns for the shotcrete process require a rapid and automatic system. The scanning takes place immediately before and after the application of the shotcrete, so no convergence takes place in the interval between scans. Automatic alignment of scans without targets was implemented, which allows for the possibility of movement of the spraying machine between scans. Case studies are presented in which accuracy tests are undertaken and automatic audit reports are calculated. The use of 3D imaging data for the calculation of shotcrete thickness is an important tool for geotechnical engineers and contract managers, and it could become the new state-of-the-art methodology for the mining industry. Keywords: 3D imaging, shotcrete, surface model, tunnel stability
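The core computation described here, thickness as the difference between the pre-spray and post-spray surface models, can be sketched on aligned height grids as follows (the grids, the 50 mm specification value, and the audit statistics are illustrative; the real system works on registered 3D point clouds):

```python
def thickness_map(before, after):
    """Per-cell thickness: difference of two aligned height grids (metres)."""
    return [[a - b for b, a in zip(row_b, row_a)]
            for row_b, row_a in zip(before, after)]

def coverage_stats(thickness, spec=0.05):
    """Minimum, mean, and fraction of cells meeting the specified thickness."""
    cells = [t for row in thickness for t in row]
    frac_ok = sum(1 for t in cells if t >= spec) / len(cells)
    return min(cells), sum(cells) / len(cells), frac_ok

# Toy 2x2 grids standing in for the registered pre- and post-spray scans:
before = [[0.00, 0.00], [0.00, 0.00]]
after = [[0.06, 0.04], [0.05, 0.07]]
mn, avg, frac_ok = coverage_stats(thickness_map(before, after), spec=0.05)
```

An audit report of the kind mentioned in the abstract is essentially these statistics computed per sprayed panel, flagging cells below the contracted minimum thickness.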
Procedia PDF Downloads 290
27805 Median-Based Nonparametric Estimation of Returns in Mean-Downside Risk Portfolio Frontier
Authors: H. Ben Salah, A. Gannoun, C. de Peretti, A. Trabelsi
Abstract:
The Downside Risk (DSR) model for portfolio optimisation makes it possible to overcome the drawbacks of the classical mean-variance model concerning the asymmetry of returns and the risk perception of investors. The optimization in this model involves a positive definite matrix that is endogenous with respect to the portfolio weights, which makes the problem far more difficult to handle. For this purpose, Athayde (2001) developed a new recursive minimization procedure that ensures convergence to the solution. However, when only a finite number of observations is available, the portfolio frontier is not very smooth. To overcome this, Athayde (2003) proposed a kernel mean estimation of the returns so as to create a smoother portfolio frontier; this technique provides an effect similar to the case of continuous observations. In this paper, taking advantage of the robustness of the median, we replace the mean estimator in Athayde's model by a nonparametric median estimator of the returns, and we give a new version of the former algorithm of Athayde (2001, 2003). We then analyse the properties of this improved portfolio frontier and apply the new method to real examples. Keywords: downside risk, kernel method, median, nonparametric estimation, semivariance
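The motivation for the median estimator is robustness: a single extreme return barely moves the sample median but drags the sample mean far away. A small illustration with made-up daily returns (not data from the paper):

```python
def median(xs):
    """Sample median: middle order statistic, or the mean of the two middles."""
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else 0.5 * (s[mid - 1] + s[mid])

def mean(xs):
    return sum(xs) / len(xs)

returns = [0.01, 0.02, -0.01, 0.015, 0.005]
contaminated = returns + [-0.9]   # one crash-day outlier

shift_median = abs(median(contaminated) - median(returns))
shift_mean = abs(mean(contaminated) - mean(returns))
```

Inside the frontier algorithm, this per-asset location estimate replaces the kernel mean of Athayde (2003); the nonparametric median of the paper additionally smooths over neighbouring observations rather than using the raw sample median.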
Procedia PDF Downloads 492
27804 Application and Assessment of Artificial Neural Networks for Biodiesel Iodine Value Prediction
Authors: Raquel M. De sousa, Sofiane Labidi, Allan Kardec D. Barros, Alex O. Barradas Filho, Aldalea L. B. Marques
Abstract:
Several parameters are established in order to measure biodiesel quality. One of them is the iodine value, an important parameter that measures the total unsaturation within a mixture of fatty acids. Limiting unsaturated fatty acids is necessary since heating a higher quantity of them ends in either the formation of deposits inside the engine or damage to the lubricant. Because determining the iodine value by the official procedure tends to be very laborious, with high costs and toxic reagents, this study uses an artificial neural network (ANN) to predict the iodine value as an alternative to these problems. The network development methodology used 13 fatty acid esters as inputs, and backpropagation-type training algorithms were optimized in order to obtain an architecture for predicting the iodine value. This study allowed us to demonstrate the ability of neural networks to learn the correlation between biodiesel quality properties, in this case the iodine value, and the molecular structures that make up the mixture. The model developed in the study reached a correlation coefficient (R) of 0.99 for both network validation and network simulation with the Levenberg-Marquardt algorithm. Keywords: artificial neural networks, biodiesel, iodine value, prediction
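As a rough, hedged illustration of the learning task (not the authors' network): the iodine value is, to a good approximation, a composition-weighted sum of per-ester contributions, so even a single-layer model trained by the delta rule can fit synthetic data of that form. All contributions and compositions below are fabricated for the sketch; the paper's actual model is a multilayer ANN on 13 ester inputs trained with Levenberg-Marquardt.

```python
import random

random.seed(0)

# Fabricated per-ester iodine contributions (illustration only, not real data):
# roughly "saturated, mono-, di-, tri-unsaturated" esters.
TRUE_W = [0.0, 86.0, 172.0, 260.0]

def predict(w, x):
    """Linear response: composition-weighted sum of contributions."""
    return sum(wi * xi for wi, xi in zip(w, x))

def train(samples, lr=0.05, epochs=2000):
    """Delta-rule training of a single-layer model; a hedged stand-in
    for the paper's backpropagation-trained multilayer ANN."""
    w = [0.0] * len(samples[0][0])
    for _ in range(epochs):
        for x, y in samples:
            err = predict(w, x) - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

def make_sample():
    raw = [random.random() for _ in TRUE_W]
    total = sum(raw)
    x = [r / total for r in raw]          # mass fractions summing to 1
    return x, predict(TRUE_W, x)          # exact synthetic iodine value

data = [make_sample() for _ in range(50)]
w = train(data)
max_err = max(abs(predict(w, x) - y) for x, y in data)
```

A real dataset would show residual nonlinearity (which is why the paper uses a multilayer network), but the sketch captures the input-output structure: ester fractions in, one quality property out.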
Procedia PDF Downloads 605
27803 An Application of Sinc Function to Approximate Quadrature Integrals in Generalized Linear Mixed Models
Authors: Altaf H. Khan, Frank Stenger, Mohammed A. Hussein, Reaz A. Chaudhuri, Sameera Asif
Abstract:
This paper discusses a novel approach to approximating the quadrature integrals that arise in the estimation of likelihood parameters for generalized linear mixed models (GLMM). Bayesian methodology likewise requires the computation of multidimensional integrals with respect to posterior distributions, computations that are not only tedious and cumbersome but in some situations impossible because of singularities, irregular domains, etc. An attempt has been made in this work to apply Sinc-function-based quadrature rules to approximate such intractable integrals, as Sinc-based methods have several advantages: the order of convergence is exponential, they work very well in the neighborhood of singularities, they are in general quite stable, and they provide highly accurate, double-precision estimates. To our knowledge, the Sinc-function-based approach is being used for the first time in the statistical domain, and its viability and future scope are discussed for the estimation of parameters in GLMM models as well as in some other statistical areas. Keywords: generalized linear mixed model, likelihood parameters, quadrature, Sinc function
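The exponential-convergence claim can be seen already in the simplest member of the Sinc family, the trapezoidal ("Sinc") rule on the real line, here applied to the Gaussian integral ∫ e^(−x²) dx = √π (the step sizes and truncation points are illustrative choices):

```python
import math

def sinc_quadrature(f, h, n):
    """Sinc/trapezoidal rule on the real line: h * sum_{k=-n..n} f(k*h).
    For integrands analytic in a strip and decaying fast enough, the
    error decays exponentially as h shrinks."""
    return h * sum(f(k * h) for k in range(-n, n + 1))

gauss = lambda x: math.exp(-x * x)
coarse = sinc_quadrature(gauss, h=1.0, n=10)    # ~1e-4 error
fine = sinc_quadrature(gauss, h=0.5, n=20)      # essentially machine precision
exact = math.sqrt(math.pi)
```

Halving the step does not halve the error, it roughly squares it (error ~ e^(−π²/h²) for this integrand), which is the exponential convergence the abstract refers to; GLMM likelihood integrands are handled with the same rule after a suitable variable transformation.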
Procedia PDF Downloads 393
27802 A Study on Mesh Size Dependency on Bed Expansion Zone in a Three-Phase Fluidized Bed Reactor
Authors: Liliana Patricia Olivo Arias
Abstract:
The present study focuses on the hydrodynamics of a three-phase fluidized bed reactor and the influence of important aspects such as the volume fractions (hold-ups), the velocity magnitudes of the gas, liquid, and solid phases (hydrogen, gasoil, and gamma alumina), and the interactions of the phases, described through drag models combined with the k-epsilon turbulence model. For this purpose, an Euler-Euler model was employed that considers a system constituted of three phases (gaseous, liquid, and solid), each characterized by its physical and thermal properties, with the transport processes developing within the transient regime. The proposed model of the three-phase fluidized bed reactor was solved numerically using the ANSYS Fluent software with different mesh refinements in the bed expansion zone in order to observe their influence on the hydrodynamic parameters and the convergence criteria. With this model and the numerical simulations obtained for its resolution, it was possible to predict the volume fractions (hold-ups) and the velocity magnitudes for an unsteady system from the established initial and boundary conditions. Keywords: three-phase fluidized bed system, CFD simulation, mesh dependency study, hydrodynamic study
Procedia PDF Downloads 163
27801 Large Eddy Simulation of Particle Clouds Using Open-Source CFD
Authors: Ruo-Qian Wang
Abstract:
Open-source CFD has become increasingly popular and promising. Recent progress in multiphase flow enables new CFD applications, providing an economic and flexible research tool for complex flow problems. We introduce a numerical study that uses four-way-coupled Euler-Lagrangian large-eddy simulations in OpenFOAM and CFDEM to resolve particle cloud dynamics: the filtered Navier-Stokes equations are solved numerically for the fluid-phase motion, the solid-phase motion is addressed by Lagrangian tracking of every single particle, and total momentum is conserved by fluid-solid inter-phase coupling. A grid convergence test was performed, which confirms that the current mesh resolution is appropriate. We then validated the code by comparing numerical results with experiments in terms of particle cloud settlement and growth; good agreement was obtained, showing the reliability of the present numerical schemes. The time and height at phase separation were defined and analyzed for a variety of initial release conditions, and empirical formulas were fitted to the results. Keywords: four-way coupling, dredging, land reclamation, multiphase flows, oil spill
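A grid convergence test of the kind mentioned here can be quantified with the observed order of convergence from three systematically refined meshes, plus a Richardson extrapolation toward the zero-spacing value (the sample values below are synthetic, manufactured from an ideal second-order scheme whose exact answer is 1):

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r=2.0):
    """Observed convergence order p from solutions on three meshes
    refined by a constant ratio r (coarse -> medium -> fine)."""
    return math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)

def richardson(f_medium, f_fine, p, r=2.0):
    """Richardson extrapolation toward the zero-spacing value."""
    return f_fine + (f_fine - f_medium) / (r ** p - 1.0)

p = observed_order(1.16, 1.04, 1.01)     # synthetic second-order data
estimate = richardson(1.04, 1.01, p)
```

An observed p close to the scheme's formal order is the usual acceptance criterion; in an LES the monitored quantity would be a resolved statistic such as the cloud settling height at a fixed time.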
Procedia PDF Downloads 428
27800 Polynomial Chaos Expansion Combined with Exponential Spline for Singularly Perturbed Boundary Value Problems with Random Parameter
Authors: W. K. Zahra, M. A. El-Beltagy, R. R. Elkhadrawy
Abstract:
Many practical problems in science and technology have developed over the past decades, for instance in mathematical boundary layer theory or in the approximation of solutions of differential equations. When such problems involve large or small parameters, they become increasingly complex and therefore require asymptotic methods. In this work, we consider singularly perturbed boundary value problems containing very small parameters; moreover, we treat these perturbation parameters as random variables. We propose a numerical method for this kind of problem based on an exponential spline, a Shishkin mesh discretization, and a polynomial chaos expansion. The polynomial chaos expansion is used to handle the randomness in the perturbation parameter, and Monte Carlo simulations (MCS) are used to validate the solution and the accuracy of the proposed method. Numerical results are provided to show the applicability and efficiency of the proposed method, which maintains remarkably high accuracy and achieves ε-uniform convergence of almost second order. Keywords: singular perturbation problem, polynomial chaos expansion, Shishkin mesh, two small parameters, exponential spline
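The Shishkin discretization named above is a piecewise-uniform mesh that spends half of its points inside the boundary layer, with a transition point τ = min(1/2, σ ε ln N). A minimal sketch for a single layer at x = 1 (the constant σ and the parameter values are illustrative; the paper's two-parameter problems use two transition points):

```python
import math

def shishkin_mesh(n, eps, sigma=2.0):
    """Piecewise-uniform Shishkin mesh on [0, 1] for a boundary layer at x = 1:
    transition point tau = min(1/2, sigma * eps * ln n); n must be even.
    Half of the n intervals resolve the layer region [1 - tau, 1]."""
    tau = min(0.5, sigma * eps * math.log(n))
    half = n // 2
    coarse = [i * (1.0 - tau) / half for i in range(half)]
    fine = [(1.0 - tau) + i * tau / half for i in range(half + 1)]
    return coarse + fine

mesh = shishkin_mesh(16, 1e-3)
```

Because τ shrinks with ε, the fine spacing near x = 1 tracks the layer width, which is what makes the convergence of the combined scheme ε-uniform.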
Procedia PDF Downloads 160
27799 Multiphase Equilibrium Characterization Model For Hydrate-Containing Systems Based On Trust-Region Method Non-Iterative Solving Approach
Authors: Zhuoran Li, Guan Qin
Abstract:
A robust and efficient compositional equilibrium characterization model for hydrate-containing systems is required, especially for time-critical simulations such as subsea pipeline flow assurance analysis and compositional simulation in hydrate reservoirs. A multiphase flash calculation framework, which combines a Gibbs energy minimization function and the cubic-plus-association (CPA) EoS, is developed to describe the highly non-ideal phase behavior of hydrate-containing systems, and a non-iterative eigenvalue-problem-solving approach for the trust-region sub-problem is selected to guarantee efficiency. The developed flash model is based on the state-of-the-art objective function proposed by Michelsen to minimize the Gibbs energy of the multiphase system. A hydrate-containing system always contains polar components (such as water and hydrate inhibitors), which introduce hydrogen bonds that influence phase behavior; thus, the CPA EoS is utilized to compute the thermodynamic parameters. The solid solution theory proposed by van der Waals and Platteeuw is applied to represent the hydrate phase parameters. The trust-region method, combined with the non-iterative eigenvalue approach to its sub-problem, ensures fast convergence. The accuracy of the developed multiphase flash model is validated against three available models (one published and two commercial models), using hundreds of published equilibrium measurements on hydrate-containing systems as the benchmark set. The accuracy comparison shows that our model outperforms two of the models and matches the calculation accuracy of CSMGem. An efficiency test has also been carried out: because the trust-region method determines the direction and the size of the optimization step simultaneously, fast solution progress is obtained.
The comparison results show that fewer iterations are needed to optimize the objective function with trust-region methods than with line search methods. The non-iterative eigenvalue approach also computes faster than the conventional iterative algorithm for the trust-region sub-problem, further improving the calculation efficiency. A new thermodynamic framework for the multiphase flash model of hydrate-containing systems has thus been constructed, and sensitivity analyses and numerical experiments have been carried out to prove its accuracy and efficiency. Furthermore, the model is simple to implement on top of the thermodynamic models currently used in the oil and gas industry. Keywords: equation of state, hydrates, multiphase equilibrium, trust-region method
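For context on the trust-region machinery: each outer iteration minimizes a local quadratic model of the Gibbs energy inside a ball of radius Δ. The simplest sub-problem solver is the Cauchy point sketched below (a 2-D illustrative toy; the paper instead solves the sub-problem exactly via a non-iterative eigenvalue formulation, which is where its speed-up comes from):

```python
import math

def cauchy_point(g, B, delta):
    """Cauchy step for the trust-region sub-problem
        min  g.p + 0.5 p.B.p   subject to  ||p|| <= delta
    (2x2 case): steepest descent to the model minimizer or the boundary,
    whichever comes first."""
    gnorm = math.hypot(g[0], g[1])
    Bg = [sum(B[i][j] * g[j] for j in range(2)) for i in range(2)]
    gBg = g[0] * Bg[0] + g[1] * Bg[1]
    if gBg <= 0.0:
        tau = 1.0                           # negative curvature: step to the boundary
    else:
        tau = min(1.0, gnorm ** 3 / (delta * gBg))
    return [-(tau * delta / gnorm) * gi for gi in g]

# Interior minimizer when the region is large, boundary step when it is small:
step_large = cauchy_point([1.0, 0.0], [[2.0, 0.0], [0.0, 2.0]], delta=10.0)
step_small = cauchy_point([1.0, 0.0], [[2.0, 0.0], [0.0, 2.0]], delta=0.2)
```

Note how the step direction and length are decided together by the radius Δ, which is the property the abstract credits for needing fewer iterations than line search methods.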
Procedia PDF Downloads 172
27798 Vibrations of Springboards: Mode Shape and Time Domain Analysis
Authors: Stefano Frassinelli, Alessandro Niccolai, Riccardo E. Zich
Abstract:
Diving is an important Olympic sport in which the effective performance of the athlete is related to his capability to interact correctly with the springboard: the elevation of the jump and the correctness of the dive are influenced by the vibrations of the board. In this paper, the vibrations of the springboard are analyzed by means of typical tools for vibration analysis. Firstly, a modal analysis is performed on two different models of the springboard; then these two models, and a third one, are analyzed in the time domain by integrating the equations of motion of deformable bodies. All these analyses are compared with experimental data measured on a real springboard by means of a 6-axis accelerometer; these measurements are aimed at assessing the proposed models. The acquired data are analyzed in both the frequency domain and the time domain. Keywords: springboard analysis, modal analysis, time domain analysis, vibrations
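The time-domain side of such an analysis can be sketched on a single modal coordinate: integrate its equation of motion explicitly, then read the natural frequency back from the response, just as one would from accelerometer data. The mass, stiffness, and 4 Hz target below are made-up illustrative values, not measurements from the paper:

```python
import math

def free_vibration(m, k, x0, dt, steps):
    """Central-difference time integration of the undamped single-DOF
    oscillator m*x'' + k*x = 0, released from rest at displacement x0."""
    a0 = -(k / m) * x0
    xs = [x0, x0 + 0.5 * dt * dt * a0]     # Taylor start with zero velocity
    for _ in range(steps):
        xs.append(2.0 * xs[-1] - xs[-2] - dt * dt * (k / m) * xs[-1])
    return xs

def natural_frequency(xs, dt):
    """Estimate frequency (Hz) from successive downward zero crossings."""
    crossings = [i for i in range(1, len(xs)) if xs[i - 1] > 0.0 >= xs[i]]
    periods = [(b - a) * dt for a, b in zip(crossings, crossings[1:])]
    return 1.0 / (sum(periods) / len(periods))

# A 4 Hz "first mode": k chosen so that sqrt(k/m) / (2*pi) = 4.
xs = free_vibration(m=1.0, k=(2.0 * math.pi * 4.0) ** 2, x0=0.01, dt=0.001, steps=3000)
f_hz = natural_frequency(xs, dt=0.001)
```

Recovering the imposed frequency from the simulated signal mirrors the paper's cross-check between the time-domain model and the frequencies found by modal analysis and by the accelerometer measurements.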
Procedia PDF Downloads 458