Search results for: traditional techniques
10046 Improving the Analytical Power of Dynamic DEA Models, by the Consideration of the Shape of the Distribution of Inputs/Outputs Data: A Linear Piecewise Decomposition Approach
Authors: Elias K. Maragos, Petros E. Maravelakis
Abstract:
In Dynamic Data Envelopment Analysis (DDEA), which is a subfield of Data Envelopment Analysis (DEA), the productivity of Decision Making Units (DMUs) is considered in relation to time. In this case, as most researchers accept, some outputs produced by a DMU in one period are used as inputs in a later period. Those outputs are known as intermediates. The common DDEA models do not take into account the shape of the distribution of the input, output or intermediate data, assuming that their virtual values do not deviate from linearity. This weakness limits the accuracy and analytical power of the traditional DDEA models. In this paper, the authors use the concept of piecewise linear inputs and outputs to propose an extended DDEA model. The proposed model increases the flexibility of the traditional DDEA models and improves the measurement of the dynamic performance of DMUs.
Keywords: Dynamic Data Envelopment Analysis, DDEA, piecewise linear inputs, piecewise linear outputs
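The paper's piecewise linear decomposition is not reproduced here, but the kind of linear program it extends can be illustrated. Below is a minimal sketch of a standard input-oriented CCR DEA efficiency computation with SciPy; the input/output data and solver settings are assumptions for illustration only.

```python
# Minimal sketch of a standard input-oriented CCR DEA model (not the paper's
# piecewise linear extension) solved with scipy.optimize.linprog on made-up data.
import numpy as np
from scipy.optimize import linprog

# rows = DMUs, columns = inputs / outputs (illustrative values)
X = np.array([[2.0, 3.0], [4.0, 2.0], [3.0, 5.0], [5.0, 4.0]])   # inputs
Y = np.array([[1.0], [2.0], [2.5], [3.0]])                        # outputs
n, m, s = X.shape[0], X.shape[1], Y.shape[1]

def ccr_efficiency(o):
    """Efficiency of DMU o: min theta s.t. sum_j lam_j x_j <= theta*x_o, sum_j lam_j y_j >= y_o."""
    c = np.zeros(1 + n); c[0] = 1.0                     # minimize theta
    A_in = np.hstack([-X[[o]].T, X.T])                  # m rows: -theta*x_io + sum_j lam_j x_ij <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])         # s rows: -sum_j lam_j y_rj <= -y_ro
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y[o]])
    bounds = [(None, None)] + [(0, None)] * n           # theta free, lambdas >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

for o in range(n):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")
```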
Procedia PDF Downloads 162
10045 Determination of Complexity Level in Merged Irregular Transposition Cipher
Authors: Okike Benjamin, Garba Ejd
Abstract:
Today, it has been observed that the security of information along the information superhighway is often compromised by those who are not authorized to have access to such information. In order to ensure the security of information along the superhighway, such information should be encrypted by some means to conceal its real meaning. There are many encryption techniques on the market. However, some of these encryption techniques are easily decrypted by adversaries. The researcher has decided to develop an encryption technique that may be more difficult to decrypt. This may be achieved by splitting the message to be encrypted into parts, encrypting each part separately, and swapping their positions before transmitting the message along the superhighway. The method is termed the Merged Irregular Transposition Cipher. The research also determines the complexity level with respect to the number of splits of the message.
Keywords: transposition cipher, merged irregular cipher, encryption, complexity level
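A minimal sketch of the split-transpose-swap idea described above, not the authors' exact cipher: each part is enciphered with a simple columnar transposition under its own key, and the encrypted parts are then reordered before transmission. The keys, padding character and part sizes are illustrative assumptions.

```python
# Sketch of a "merged irregular transposition": split, transpose each part, swap parts.
def columnar_transpose(text, key):
    """Read the text column-by-column in the order given by the numeric key."""
    cols = len(key)
    rows = -(-len(text) // cols)                       # ceiling division
    padded = text.ljust(rows * cols, "_")
    grid = [padded[r * cols:(r + 1) * cols] for r in range(rows)]
    order = sorted(range(cols), key=lambda c: key[c])  # column read order
    return "".join(grid[r][c] for c in order for r in range(rows))

def merged_irregular_encrypt(message, part_keys, swap_order):
    n = len(part_keys)
    size = -(-len(message) // n)                       # split into n roughly equal parts
    parts = [message[i * size:(i + 1) * size] for i in range(n)]
    encrypted = [columnar_transpose(p, k) for p, k in zip(parts, part_keys)]
    return "".join(encrypted[i] for i in swap_order)   # swap the positions of the parts

ciphertext = merged_irregular_encrypt(
    "SECURITYOFINFORMATIONALONGTHESUPERHIGHWAY",
    part_keys=[[3, 1, 2], [2, 4, 1, 3], [1, 3, 2]],
    swap_order=[2, 0, 1],
)
print(ciphertext)
```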
Procedia PDF Downloads 345
10044 Secure E-Voting Using Blockchain Technology
Authors: Barkha Ramteke, Sonali Ridhorkar
Abstract:
An election is an important event in all countries. Traditional voting has several drawbacks, including the time and effort required for tallying and counting results and the cost of papers, arrangements, and everything else required to complete a voting process. Many countries are now considering online e-voting systems, but traditional e-voting systems suffer from a lack of trust. It is not known whether a vote has been counted correctly or tampered with. A lack of transparency means that the voter has no assurance that his or her vote will be counted as cast. Electronic voting systems are increasingly using blockchain technology as an underlying storage mechanism to make the voting process more transparent and to assure data immutability as blockchain technology grows in popularity. This transparency, on the other hand, may reveal critical information about applicants because all system users have the same entitlement to the data. Furthermore, because of blockchain's pseudo-anonymity, voters' privacy may be revealed, and third parties involved in the voting process, such as registration institutions, will be able to tamper with data. To overcome these difficulties, we apply Ethereum smart contracts to blockchain-based voting systems.
Keywords: blockchain, AMV chain, electronic voting, decentralized
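The paper applies Ethereum smart contracts, which would be written in Solidity and deployed on-chain; purely to illustrate the tamper-evidence idea behind a blockchain vote ledger, here is a minimal Python sketch. The block fields, hashing scheme and voter identifiers are assumptions, not the authors' design.

```python
# Minimal hash-chained vote ledger: any change to a recorded vote breaks the chain.
import hashlib, json, time

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_vote(chain, voter_id_hash, candidate):
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "timestamp": time.time(),
             "voter": voter_id_hash, "candidate": candidate, "prev_hash": prev}
    block["hash"] = block_hash({k: v for k, v in block.items() if k != "hash"})
    chain.append(block)

def verify(chain):
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
add_vote(chain, hashlib.sha256(b"voter-001").hexdigest(), "candidate A")
add_vote(chain, hashlib.sha256(b"voter-002").hexdigest(), "candidate B")
print("valid before tampering:", verify(chain))
chain[0]["candidate"] = "candidate B"          # attempt to alter a recorded vote
print("valid after tampering: ", verify(chain))
```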
Procedia PDF Downloads 139
10043 The Study of Strength and Weakness Points of Various Techniques for Calculating the Volume of Done Work in Civil Projects
Authors: Ali Fazeli Moslehabadi
Abstract:
One of the topics discussed during the execution of civil projects, in which continuous change of work volumes is a usual characteristic, is how to calculate the volume of done work. The difference between the volumes announced by the execution unit and those estimated by the technical office unit has a direct effect on the announced progress of the project. This issue can make the project appear more or less advanced than it actually is and, as a result, mislead stakeholders and project managers. This article introduces some practical methods for calculating the volume of done work in civil projects. It then reviews the strengths and weaknesses of each of them, in order to resolve these contradictions and conflicts.
Keywords: technical skills, systemic skills, communication skills, done work volume calculation techniques
Procedia PDF Downloads 157
10042 The Influence of Temperature on the Corrosion and Corrosion Inhibition of Steel in Hydrochloric Acid Solution: Thermodynamic Study
Authors: Fatimah Al-Hayazi, Ehteram A. Noor, Aisha H. Moubaraki
Abstract:
The inhibitive effect of Securigera securidaca seed extract (SSE) on mild steel corrosion in 1 M HCl solution has been studied by weight loss and electrochemical techniques at four different temperatures. All techniques showed that the studied extract performs well at all temperatures and that its inhibitory action increases with increasing concentration. SEM images indicate thin-film formation on mild steel corroded in solutions containing 1 g L-1 of inhibitor, either at low or high temperatures. The polarization studies showed that SSE acts as an anodic inhibitor. Both polarization and impedance techniques show an acceleration behaviour for SSE at concentrations ≤ 0.1 g L-1 at all temperatures. At concentrations ≥ 0.1 g L-1, the efficiency of SSE increases dramatically with increasing concentration, and its value does not change appreciably with increasing temperature. It was found that all adsorption data obeyed the Temkin adsorption isotherm. Kinetic activation and thermodynamic adsorption parameters are evaluated and discussed. The results revealed an endothermic corrosion process with an associative activation mechanism, while a comprehensive adsorption mechanism for SSE on the mild steel surface is suggested, in which both physical and chemical adsorption are involved. A good correlation between the inhibitor constituents and their inhibitory action was obtained.
Keywords: corrosion, inhibition of steel, hydrochloric acid, thermodynamic study
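As an illustration of the weight-loss and Arrhenius-type calculations that typically underlie such a study, here is a minimal sketch with assumed weight-loss and corrosion-rate values; the study's own measurements and fitted parameters are not reproduced.

```python
# Inhibition efficiency from weight loss, IE% = (W0 - W)/W0 * 100, and the
# apparent activation energy from an Arrhenius fit, ln(CR) = ln(A) - Ea/(R*T).
import numpy as np

R = 8.314  # J mol^-1 K^-1

def inhibition_efficiency(w_blank, w_inhibited):
    return (w_blank - w_inhibited) / w_blank * 100.0

# corrosion rates (arbitrary units) at four temperatures for the uninhibited blank
T = np.array([303.0, 313.0, 323.0, 333.0])          # K
corrosion_rate = np.array([1.2, 2.6, 5.1, 9.8])     # illustrative values

slope, intercept = np.polyfit(1.0 / T, np.log(corrosion_rate), 1)
Ea = -slope * R / 1000.0                             # kJ mol^-1

print("IE% at 1 g/L:", round(inhibition_efficiency(12.4, 1.9), 1))
print("Apparent activation energy: %.1f kJ/mol" % Ea)
```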
Procedia PDF Downloads 100
10041 Aristotelian Techniques of Communication Used by Current Affairs Talk Shows in Pakistan for Creating Dramatic Effect to Trigger Emotional Relevance
Authors: Shazia Anwer
Abstract:
The current TV talk shows on domestic politics in Pakistan follow the Aristotelian techniques, including deductive reasoning, the three modes of persuasion, and guidelines for communication. The application of "approximate truth" is also seen when talk show presenters create doubts against political personalities or national issues. Mainstream media in Pakistan, a key carrier of narrative construction whose primary function is to build national consensus on regional and extended public diplomacy, is failing this purpose. This paper highlights the Aristotelian communication methodology, its purposes and its limitations for serious discussion, and its connection to the mistrust among the Pakistani population regarding fake or embedded, funded information. Data were collected from three Pakistani TV talk shows and analysed by applying the Aristotelian communication method to highlight the core issues. The paper also elaborates that current media education fails to provide transparent techniques for training future journalists in meaningful, thought-provoking discussion. For this reason, the paper gives an overview of the Higher Education Commission's (HEC) graduate-level mass communication syllabus for Pakistani universities. The ideas of ethos, logos, and pathos are the main components of TV talk shows, and as a result the educated audience is losing trust in the mainstream media, which eventually generates feelings of distrust and betrayal in society, because productions resemble the genre of drama instead of facts and analysis; thus the line between current affairs shows and infotainment has become blurred. In the last section, practical implications to improve meaningfulness and transparency in TV talk shows are suggested by replacing the Aristotelian communication method with the cognitive semiotic communication approach.
Keywords: Aristotelian techniques of communication, current affairs talk shows, drama, Pakistan
Procedia PDF Downloads 206
10040 Recurrent Neural Networks for Complex Survival Models
Authors: Pius Marthin, Nihal Ata Tutkun
Abstract:
Survival analysis has become one of the paramount procedures in the modeling of time-to-event data. When we encounter complex survival problems, the traditional approach remains limited in accounting for the complex correlational structure between the covariates and the outcome, due to strong assumptions that limit the inference and prediction ability of the resulting models. Several studies exist on the deep learning approach to survival modeling; however, its application to complex survival problems still needs improvement. In addition, the existing models do not fully address the complexity of the data structure and are subject to noise and redundant information. In this study, we design a deep learning technique (CmpXRnnSurv_AE) that overcomes the limitations imposed by traditional approaches and addresses the above issues to jointly predict the risk-specific probabilities and the survival function for recurrent events with competing risks. We introduce a component termed Risks Information Weights (RIW) as an attention mechanism to compute the weighted cumulative incidence function (WCIF) and an external auto-encoder (ExternalAE) as a feature selector to extract complex characteristics among the set of covariates responsible for the cause-specific events. We train our model using synthetic and real data sets and employ the appropriate metrics for complex survival models for evaluation. As benchmarks, we selected both traditional and machine learning models; our model demonstrates better performance across all datasets.
Keywords: cumulative incidence function (CIF), risk information weight (RIW), autoencoders (AE), survival analysis, recurrent events with competing risks, recurrent neural networks (RNN), long short-term memory (LSTM), self-attention, multilayer perceptrons (MLPs)
Procedia PDF Downloads 91
10039 Establishing the Microbial Diversity of Traditionally Prepared Rice Beer of Northeast India to Impact in Increasing Its Shelf Life
Authors: Shreya Borthakur, Adhar Sharma
Abstract:
The North-east states of India are well known for their age-old practice of preparing alcoholic beer from rice and millet. They do so in a traditional way by sprinkling a starter cake (inoculum) on cooked rice or millet, after which fermentation starts and eventually forms the beer. This starter cake has a rich composition of different microbes and medicinal herbs along with powdered rice dough or maize dough with rice bran. The microbial composition of the starter cake plays an important role in determining the microbial succession and metabolic secretions as the fermentation proceeds from its early to its late stage, thus giving the beer the unique aroma, taste, and other sensory properties of traditionally prepared beer. Here, we have worked on identifying and characterizing the microbial community in the starter cakes prepared by the Monpa and Galo tribes of Arunachal Pradesh. A total of 18 microbial strains were isolated from the starter cake of the Monpa tribe and 10 from that of the Galo tribe. A metagenomic approach was applied to enumerate the culturable and non-culturable microbes present in the starter cakes prepared by the two tribes. The findings of the mini-project lay the foundation for understanding the role of the starter cake microbes in the beer's fermentation process and will aid future research on reformulating the starter cakes to prevent early spoilage of the ready-to-consume beer, as traditional rice beer has a short shelf life. The paper concludes with controlled CRISPR-Cas9 as the way forward.
Keywords: fermentation, traditional beer, microbial succession, preservation, CRISPR-Cas, food microbiology
Procedia PDF Downloads 126
10038 CAD Tool for Parametric Design Modification of Yacht Hull Surface Models
Authors: Shahroz Khan, Erkan Gunpinar, Kemal Mart
Abstract:
Parametric design techniques have recently become a vital concept in Computer Aided Design (CAD), providing a sophisticated platform that helps designers automate the design process efficiently. In these techniques, the design process starts by parameterizing the important features of the design model (typically the key dimensions) and implementing design constraints. The design constraints help to retain the overall shape of the model while its parameters are modified. However, initializing an appropriate number of design parameters and constraints is the crucial part of parametric design techniques, especially for complex surface models such as yacht hulls. This paper introduces a method to create complex surface models suited to parametric design techniques, a method to define the right number of parameters and respective design constraints, and a system to implement design parameters subject to the design constraint schema. In our proposed approach, the design process starts by dividing the yacht hull into three sections. Each section consists of different shape lines, which form the overall shape of the yacht hull. The shape lines are created using cubic Bezier curves, which allow greater design flexibility. Design parameters and constraints are defined on the shape lines in 3D design space to facilitate better and individual handling of parameters by the designers. Afterwards, shape modifiers are developed, which allow the modification of each parameter while satisfying the respective set of criteria and design constraints. For example, geometric continuity should be maintained between the shape lines of the three sections, fairness of the hull surfaces should be preserved after modification, and during modification the effect of a single parameter on other parameters should be negligible. The constraints are defined individually on the shape lines of each section and mutually between the shape lines of two connecting sections. In order to validate and visualize the design results of our shape modifiers, a real-time graphical interface is created.
Keywords: design parameter, design constraints, shape modifiers, yacht hull
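As an illustration of the cubic Bezier shape lines mentioned above, here is a minimal sketch that evaluates one such curve; the control points are assumed for illustration rather than taken from an actual hull design, and the paper's constraint handling and shape modifiers are not reproduced.

```python
# Evaluate a cubic Bezier shape line; moving a control point mimics a simple shape modifier.
import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    """B(t) = (1-t)^3 P0 + 3(1-t)^2 t P1 + 3(1-t) t^2 P2 + t^3 P3."""
    t = np.asarray(t)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

# An illustrative shape line in the x-z plane.
p0, p1, p2, p3 = map(np.array, ([0.0, 0.0], [2.0, 1.5], [6.0, 2.0], [10.0, 1.0]))
curve = cubic_bezier(p0, p1, p2, p3, np.linspace(0.0, 1.0, 50))
print(curve[:3])   # first few sampled points of the shape line
```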
Procedia PDF Downloads 301
10037 Acoustic Echo Cancellation Using Different Adaptive Algorithms
Authors: Hamid Sharif, Nazish Saleem Abbas, Muhammad Haris Jamil
Abstract:
An adaptive filter is a filter that self-adjusts its transfer function according to an optimization algorithm driven by an error signal. Because of the complexity of the optimization algorithms, most adaptive filters are digital filters. Adaptive filtering constitutes one of the core technologies in digital signal processing and finds numerous application areas in science as well as in industry. Adaptive filtering techniques are used in a wide range of applications, including adaptive noise cancellation and echo cancellation. Acoustic echo is a common occurrence in today's telecommunication systems. The signal interference caused by acoustic echo is distracting to both users and reduces the quality of the communication. In this paper, we review different adaptive filtering techniques to reduce this unwanted echo and examine the behavior of the Least Mean Square (LMS), Normalized Least Mean Square (NLMS), Variable Step-Size Least Mean Square (VSLMS), Variable Step-Size Normalized Least Mean Square (VSNLMS), New Varying Step Size LMS (NVSSLMS) and Recursive Least Square (RLS) algorithms, with the aim of increasing communication quality.
Keywords: adaptive acoustic, echo cancellation, LMS algorithm, adaptive filter, normalized least mean square (NLMS), variable step-size least mean square (VSLMS)
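As an illustration of one of the algorithms listed above, here is a minimal NumPy sketch of NLMS-based acoustic echo cancellation on synthetic signals; the echo path, filter length and step size are assumptions, not values from the paper.

```python
# NLMS adaptive echo cancellation: the filter learns the echo path and subtracts its estimate.
import numpy as np

def nlms_echo_cancel(far_end, mic, taps=64, mu=0.5, eps=1e-6):
    """Adapt an FIR filter so its output tracks the echo in the microphone signal."""
    w = np.zeros(taps)
    error = np.zeros(len(mic))
    for n in range(taps, len(mic)):
        x = far_end[n - taps + 1:n + 1][::-1]   # most recent far-end samples
        y = w @ x                               # estimated echo
        e = mic[n] - y                          # residual (near-end + estimation error)
        w += mu * e * x / (x @ x + eps)         # normalized LMS update
        error[n] = e
    return error, w

rng = np.random.default_rng(0)
far_end = rng.standard_normal(5000)                       # loudspeaker signal
echo_path = rng.standard_normal(64) * np.exp(-np.arange(64) / 10.0)
mic = np.convolve(far_end, echo_path)[:5000] + 0.01 * rng.standard_normal(5000)

residual, _ = nlms_echo_cancel(far_end, mic)
print("echo power before:", np.mean(mic[1000:] ** 2))
print("residual power after adaptation:", np.mean(residual[1000:] ** 2))
```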
Procedia PDF Downloads 80
10036 Modelling the Dynamics of Corporate Bonds Spreads with Asymmetric GARCH Models
Authors: Sélima Baccar, Ephraim Clark
Abstract:
This paper offers a new perspective on analysing credit spreads. A comprehensive empirical analysis of the conditional variance of credit spread indices is performed using various GARCH models. Based on a comparison between traditional and asymmetric GARCH models with alternative functional forms of the conditional density, we identify which macroeconomic and financial factors drove daily changes in US dollar credit spreads in the period from January 2011 through January 2013. The results reveal a strong interdependence between credit spreads and explanatory factors related to interest rate conditions, the state of the stock market, bond market liquidity and exchange risk. The empirical findings support the use of asymmetric GARCH models: the AGARCH and GJR models outperform the traditional GARCH in credit spread modelling. We also show that the leptokurtic Student-t assumption is better than the Gaussian distribution and improves the quality of the estimates, whatever the rating or maturity.
Keywords: corporate bonds, default risk, credit spreads, asymmetric GARCH models, Student-t distribution
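As an illustration of the asymmetric conditional variance dynamics the paper estimates, here is a minimal NumPy sketch of the GJR-GARCH(1,1) recursion on a synthetic heavy-tailed series; the parameter values are assumed for illustration, whereas the study estimates them by maximum likelihood on credit spread changes.

```python
# GJR-GARCH(1,1): sigma_t^2 = omega + (alpha + gamma*I[r_{t-1}<0])*r_{t-1}^2 + beta*sigma_{t-1}^2
import numpy as np

def gjr_garch_variance(returns, omega, alpha, gamma, beta):
    sigma2 = np.empty_like(returns)
    sigma2[0] = np.var(returns)                     # initialise with the sample variance
    for t in range(1, len(returns)):
        leverage = gamma * (returns[t - 1] < 0)     # asymmetry: bad news raises variance more
        sigma2[t] = omega + (alpha + leverage) * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

rng = np.random.default_rng(1)
daily_changes = rng.standard_t(df=5, size=1000) * 0.02   # heavy-tailed, Student-t-like shocks
sigma2 = gjr_garch_variance(daily_changes, omega=1e-5, alpha=0.05, gamma=0.08, beta=0.88)
print("annualised volatility on the last day: %.2f%%" % (np.sqrt(252 * sigma2[-1]) * 100))
```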
Procedia PDF Downloads 475
10035 Deep Reinforcement Learning Model for Autonomous Driving
Authors: Boumaraf Malak
Abstract:
The development of intelligent transportation systems (ITS) and artificial intelligence (AI) is paving the way for the widespread adoption of autonomous vehicles (AVs). This opens up opportunities for smart roads, smart traffic safety, and mobility comfort. A highly intelligent decision-making system is essential for autonomous driving around dense, dynamic objects. It must be able to handle complex road geometry and topology, as well as complex multi-agent interactions, and closely follow higher-level commands such as routing information. Autonomous vehicles have become a very hot research topic in recent years due to their significant potential to reduce traffic accidents and personal injuries. New artificial intelligence-based technologies handle important functions in scene understanding, motion planning, decision making, vehicle control, social behavior, and communication for AVs. This paper focuses only on deep reinforcement learning-based methods and does not include traditional planning techniques, which have been the subject of extensive research in the past, because reinforcement learning (RL) has become a powerful learning framework now capable of learning complex policies in high-dimensional environments. The DRL algorithms used so far have found solutions to the four main problems of autonomous driving; in this paper, we highlight the challenges and point to possible future research directions.
Keywords: deep reinforcement learning, autonomous driving, deep deterministic policy gradient, deep Q-learning
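The value-update idea behind the deep Q-learning family surveyed in the paper can be illustrated with a tabular sketch; the toy lane-keeping environment, rewards and hyperparameters below are assumptions for illustration only, and in DQN or DDPG the table is replaced by a neural network.

```python
# Tabular Q-learning on a toy lane-keeping task: update toward r + gamma * max_a' Q(s', a').
import numpy as np

n_states, n_actions = 5, 3            # lane positions x {steer left, keep, steer right}
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)

def step(state, action):
    """Toy dynamics: the action shifts the lane position; the centre lane is rewarded."""
    next_state = int(np.clip(state + action - 1, 0, n_states - 1))
    reward = 1.0 if next_state == n_states // 2 else -0.1
    return next_state, reward

state = 0
for _ in range(5000):
    action = rng.integers(n_actions) if rng.random() < epsilon else int(np.argmax(Q[state]))
    next_state, reward = step(state, action)
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print(np.argmax(Q, axis=1))   # learned greedy action for each lane position
```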
Procedia PDF Downloads 85
10034 Multidisciplinary Approach to Mio-Plio-Quaternary Aquifer Study in the Zarzis Region (Southeastern Tunisia)
Authors: Ghada Ben Brahim, Aicha El Rabia, Mohamed Hedi Inoubli
Abstract:
Climate change has exacerbated disparities in the distribution of water resources in Tunisia, resulting in significant degradation in quantity and quality over the past five decades. The Mio-Plio-Quaternary aquifer, the primary water source in the Zarzis region, is subject to climatic, geographical, and geological challenges, as well as human stress. The region is experiencing uneven distribution and growing threats from groundwater salinity and saltwater intrusion. Addressing this challenge is critical for the arid region's socioeconomic development, and effective water resource management is required to combat climate change and reduce water deficits. This study uses a multidisciplinary approach, involving geophysical and hydrogeological data analysis, to determine the groundwater potential of this aquifer. We used advanced techniques such as 3D Euler deconvolution and power spectrum analysis to generate detailed anomaly maps and estimate the depths of density sources, identifying significant Bouguer anomalies trending E-W, NW-SE, and NE-SW. Various techniques, such as wavelength filtering, upward continuation, and horizontal and vertical derivatives, were used to enhance the gravity data, resulting in consistent results for anomaly shapes and amplitudes. The Euler deconvolution method revealed two prominent surface faults, trending NE-SW and NW-SE, that have a significant impact on the distribution of sedimentary facies and water quality within the Mio-Plio-Quaternary aquifer. Additionally, depth maxima greater than 1400 m to the north indicate the presence of a Cretaceous paleo-fault. Geoelectrical models and resistivity pseudo-sections were used to interpret the distribution of electrical facies in the Mio-Plio-Quaternary aquifer, highlighting lateral variation and the type of depositional environment. AI optimises the analysis and interpretation of exploration data, which is important for long-term management and water security. Machine learning algorithms and deep learning models analyse large datasets to provide precise interpretations of subsurface conditions, such as aquifer salinisation. However, AI has limitations, such as the requirement for large datasets, the risk of overfitting, and integration issues with traditional geological methods.
Keywords: Mio-Plio-Quaternary aquifer, Southeastern Tunisia, geophysical methods, hydrogeological analysis, artificial intelligence
Procedia PDF Downloads 18
10033 Flow Visualization in Biological Complex Geometries for Personalized Medicine
Authors: Carlos Escobar-del Pozo, César Ahumada-Monroy, Azael García-Rebolledo, Alberto Brambila-Solórzano, Gregorio Martínez-Sánchez, Luis Ortiz-Rincón
Abstract:
Numerical simulations of flow in complex biological structures have gained considerable attention in recent years. However, the major issue is the validation of the results. The present work shows a Particle Image Velocimetry (PIV) flow visualization technique in complex biological structures, particularly in intracranial aneurysms. A methodology to reconstruct and generate a transparent model has been developed, as well as visualization and particle tracking techniques. The generated transparent models allow the flow patterns to be visualized with a regular camera using these techniques. The final goal is to use visualization as a tool to provide more information for treatment and surgery decisions in aneurysms.
Keywords: aneurysms, PIV, flow visualization, particle tracking
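As a sketch of the core PIV step implied above (cross-correlating interrogation windows between two frames to recover local displacement), the following uses synthetic images; real PIV processing adds windowing, sub-pixel peak fitting and outlier validation, none of which are reproduced here.

```python
# FFT-based cross-correlation of one interrogation window pair to estimate particle displacement.
import numpy as np

rng = np.random.default_rng(3)
frame_a = rng.random((64, 64))
true_shift = (3, -2)                                   # rows, cols
frame_b = np.roll(frame_a, true_shift, axis=(0, 1))    # second exposure: shifted pattern

a = frame_a - frame_a.mean()
b = frame_b - frame_b.mean()
corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
peak = np.unravel_index(np.argmax(corr), corr.shape)
# map the circular peak location to a signed displacement
shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
print("estimated displacement (rows, cols):", shift)   # expected: [3, -2]
```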
Procedia PDF Downloads 92
10032 Green Organic Chemistry, a New Paradigm in Pharmaceutical Sciences
Authors: Pesaru Vigneshwar Reddy, Parvathaneni Pavan
Abstract:
Green organic chemistry, one of the most researched topics nowadays, has been in demand since the 1990s. Much research in green organic chemistry concerns chemicals that are important starting materials for a great number of major chemical industries. The production of organic chemicals as raw materials or reagents for other applications is a major sector, covering the manufacture of polymers, pharmaceuticals, pesticides, paints, artificial fibres, food additives, etc. Organic synthesis on a large scale, compared to the laboratory scale, involves the use of energy, basic chemical ingredients from the petrochemical sector, catalysts, and, after the end of the reaction, separation, purification, storage, packing, distribution, etc. During these processes there are many health and safety problems for workers, in addition to the environmental problems caused by their use and their disposal as waste. Green chemistry, with its 12 principles, seeks changes in the conventional ways that have been used for decades to make synthetic organic chemicals and promotes the use of less toxic starting materials. Green chemistry aims to increase the efficiency of synthetic methods, to use less toxic solvents, to reduce the number of stages in synthetic routes and to minimize waste as far as practically possible. In this way, organic synthesis will be part of the effort for sustainable development. Green chemistry is also interested in research and alternative innovations on many practical aspects of organic synthesis in university and institutional research laboratories. By changing the methodologies of organic synthesis, health and safety will be advanced not only at the small-scale laboratory level but also in large-scale industrial production through new techniques. Three key developments in green chemistry include the use of supercritical carbon dioxide as a green solvent, aqueous hydrogen peroxide as an oxidising agent and the use of hydrogen in asymmetric synthesis. It also focuses on replacing traditional heating methods with modern ones such as microwave heating, so that the carbon footprint is reduced as far as possible. Another benefit of green chemistry is that it reduces environmental pollution through the use of less toxic reagents, minimization of waste and more biodegradable by-products. In this paper, some of the basic principles, approaches, and early achievements of green chemistry as a branch of chemistry are considered, together with a summary of the green chemistry principles. A discussion of the E-factor, the old and new syntheses of ibuprofen, microwave techniques, and some recent advancements is also included.
Keywords: energy, E-factor, carbon footprint, microwave, sonochemistry, advancement
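The E-factor mentioned above has a simple definition, E = total mass of waste per mass of product; the sketch below computes it for an assumed batch (the masses are illustrative, not from any cited process).

```python
# E-factor = (total input mass - product mass) / product mass
def e_factor(total_input_mass, product_mass):
    waste = total_input_mass - product_mass
    return waste / product_mass

# e.g. a batch consuming 120 kg of raw materials, solvents and reagents
# to deliver 15 kg of purified product
print("E-factor:", round(e_factor(120.0, 15.0), 2))   # 7.0 kg of waste per kg of product
```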
Procedia PDF Downloads 307
10031 Natural Fibre Composite Structural Sections for Residential Stud Wall Applications
Authors: Mike R. Bambach
Abstract:
Increasing awareness of environmental concerns is leading a drive towards more sustainable structural products for the built environment. Natural fibres such as flax, jute and hemp have recently been considered for fibre-resin composites, with a major motivation for their implementation being their notable sustainability attributes. While recent decades have seen substantial interest in the use of such natural fibres in composite materials, much of this research has focused on the materials aspects, including fibre processing techniques, composite fabrication methodologies, matrix materials and their effects on the mechanical properties. The present study experimentally investigates the compression strength of structural channel sections of flax, jute and hemp, with a particular focus on their suitability for residential stud wall applications. The section geometry is optimised for maximum strength via the introduction of complex stiffeners in the webs and flanges. Experimental results on natural fibre composite channel sections and on typical steel and timber residential wall studs are compared. The geometrically optimised natural fibre composite channels are shown to have compression capacities suitable for residential wall stud applications, identifying them as a potentially viable alternative to traditional building materials in such applications, and potentially in other light structural applications.
Keywords: channel sections, natural fibre composites, residential stud walls, structural composites
Procedia PDF Downloads 314
10030 AI-Driven Forecasting Models for Anticipating Oil Market Trends and Demand
Authors: Gaurav Kumar Sinha
Abstract:
The volatility of the oil market, influenced by geopolitical, economic, and environmental factors, presents significant challenges for stakeholders in predicting trends and demand. This article explores the application of artificial intelligence (AI) in developing robust forecasting models to anticipate changes in the oil market more accurately. We delve into various AI techniques, including machine learning, deep learning, and time series analysis, that have been adapted to analyze historical data and current market conditions to forecast future trends. The study evaluates the effectiveness of these models in capturing complex patterns and dependencies in market data, which traditional forecasting methods often miss. Additionally, the paper discusses the integration of external variables such as political events, economic policies, and technological advancements that influence oil prices and demand. By leveraging AI, stakeholders can achieve a more nuanced understanding of market dynamics, enabling better strategic planning and risk management. The article concludes with a discussion on the potential of AI-driven models in enhancing the predictive accuracy of oil market forecasts and their implications for global economic planning and strategic resource allocation.
Keywords: AI forecasting, oil market trends, machine learning, deep learning, time series analysis, predictive analytics, economic factors, geopolitical influence, technological advancements, strategic planning
Procedia PDF Downloads 36
10029 The Effectiveness of Video Clips to Enhance Students’ Achievement and Motivation on History Learning and Facilitation
Authors: L. Bih Ni, D. Norizah Ag Kiflee, T. Choon Keong, R. Talip, S. Singh Bikar Singh, M. Noor Mad Japuni, R. Talin
Abstract:
The purpose of this study is to determine the effectiveness of video clips in enhancing students' achievement and motivation towards the learning and facilitation of history. We use a narrative literature review to illustrate the current state of the art in the focused areas of inquiry, and we used an experimental method. The experimental method is a systematic scientific research method in which the researchers manipulate one or more variables to control and measure any changes in other variables. For this purpose, two groups were formed, one experimental and one control, consisting of 30 lower secondary students. The sessions were given to the first group using a computer presentation program with video clips (the experimental group), while the second group was taught the same content using traditional methods based on dialogue and discussion techniques (the control group). Both groups were subjected to pre- and post-tests on the material handled in class. The findings show that the pre-test analysis did not reveal statistically significant differences, confirming the equivalence of the two groups. Meanwhile, the post-test analysis shows that there was a statistically significant difference between the experimental group and the control group at a significance level of 0.05, in favour of the experimental group.
Keywords: video clips, learning and facilitation, achievement, motivation
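As an illustration of the significance test implied by the pre/post-test comparison above, here is a minimal sketch of an independent two-sample t-test at the 0.05 level; the scores are simulated, not the study's data.

```python
# Independent two-sample t-test on simulated post-test scores for the two groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
experimental = rng.normal(loc=78, scale=8, size=30)   # post-test scores, video-clip group
control = rng.normal(loc=70, scale=8, size=30)        # post-test scores, traditional group

t_stat, p_value = stats.ttest_ind(experimental, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print("significant at 0.05:", p_value < 0.05)
```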
Procedia PDF Downloads 153
10028 Electrochemical Study of Copper–Tin Alloy Nucleation Mechanisms onto Different Substrates
Authors: Meriem Hamla, Mohamed Benaicha, Sabrine Derbal
Abstract:
In the present work, several materials such as M/glass (M = Pt, Mo) were investigated to test their suitability for studying the early nucleation stages and growth of copper-tin clusters. It was found that most of these materials are good substrates for studying the nucleation and growth of Cu-Sn alloys electrodeposited from an aqueous solution containing CuCl2 and SnCl2 as electroactive species and Na3C6H5O7 as a complexing agent. Among these substrates, Pt shows instantaneous nucleation followed by 3D diffusion-limited growth. On the other hand, the copper-tin thin films electrodeposited onto the Mo substrate followed progressive nucleation. The deposition mechanism of the Cu-Sn films has been studied using stationary electrochemical techniques (cyclic voltammetry (CV) and chronoamperometry (CA)). The structural, morphological and compositional characterization has been carried out using X-ray diffraction (XRD), scanning electron microscopy (SEM) and EDAX techniques, respectively.
Keywords: electrodeposition, CuSn, nucleation, mechanism
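Instantaneous and progressive nucleation are commonly distinguished by comparing chronoamperometric data against the dimensionless Scharifker-Hills transients; the sketch below generates those theoretical curves. The abstract does not state which diagnostic the authors used, so this is offered only as a standard reference point, with the measured (i/im)² versus t/tm data to be overlaid for classification.

```python
# Theoretical Scharifker-Hills dimensionless transients for the two limiting nucleation modes.
import numpy as np

t_over_tm = np.linspace(0.1, 3.0, 30)

def instantaneous(x):
    return (1.9542 / x) * (1.0 - np.exp(-1.2564 * x)) ** 2

def progressive(x):
    return (1.2254 / x) * (1.0 - np.exp(-2.3367 * x ** 2)) ** 2

for x, i_inst, i_prog in zip(t_over_tm[::6], instantaneous(t_over_tm)[::6], progressive(t_over_tm)[::6]):
    print(f"t/tm = {x:.2f}   (i/im)^2 instantaneous = {i_inst:.3f}   progressive = {i_prog:.3f}")
```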
Procedia PDF Downloads 398
10027 A Survey on Traditional MAC Layer Protocols in Cognitive Wireless Mesh Networks
Authors: Anusha M., V. Srikanth
Abstract:
Maximizing spectrum usage and the numerous applications of wireless communication networks have led to high interest in the available spectrum. Cognitive radios control their receiver and transmitter features precisely so that they can utilize vacant licensed spectrum without impacting the functionality of the primary licensed users. The use of multiple channels helps to address interference and thereby improves overall network efficiency. The MAC protocol in a cognitive radio network coordinates spectrum usage by managing multiple channels among the users. In this paper, we study the architecture of cognitive wireless mesh networks and traditional TDMA-based MAC methods for allocating channels dynamically. Most of the MAC protocols proposed in the research operate on a Common Control Channel (CCC) to handle the services between cognitive radio secondary users. An extensive study of multi-channel multi-radio channel allotment and continuously synchronized TDMA scheduling is presented in summarized form.
Keywords: TDMA, MAC, multi-channel, multi-radio, WMNs, cognitive radios
Procedia PDF Downloads 562
10026 Electronic Six-Minute Walk Test (E-6MWT): Less Manpower, Higher Efficiency, and Better Data Management
Authors: C. M. Choi, H. C. Tsang, W. K. Fong, Y. K. Cheng, T. K. Chui, L. Y. Chan, K. W. Lee, C. K. Yuen, P. W. Lau, Y. L. To, K. C. Chow
Abstract:
Six-minute walk test (6MWT) is a sub-maximal exercise test to assess the aerobic capacity and exercise tolerance of patients with chronic respiratory disease and heart failure. It has been proven to be a reliable and valid tool and is commonly used in clinical situations. The traditional 6MWT is labour-intensive and time-consuming, especially for patients who require assistance in ambulation and oxygen use. When performing the test with these patients, one staff member assists the patient in walking (with or without aids) while another manually records the patient's oxygen saturation, heart rate and walking distance at every minute and/or carries the oxygen cylinder at the same time. The physiotherapist then has to document the test results in the bed notes in detail. With the electronic 6MWT (E-6MWT), patients wear a wireless oximeter that transfers data to a tablet PC via Bluetooth. Real-time recordings of oxygen saturation, heart rate, and distance are displayed. No manual recording is needed. The tablet generates a comprehensive report which can be directly attached to the patient's bed notes for documentation. Data can also be saved for later patient follow-up. This study was carried out in North District Hospital. Patients who followed commands and required 6MWT assessment were included. Patients were assigned to the study or control group. In the study group, patients adopted the E-6MWT, while those in the control group adopted the traditional 6MWT. Manpower and time consumed were recorded. Physiotherapists also completed a questionnaire about the use of the E-6MWT. A total of 12 subjects (study = 6; control = 6) were recruited during 11-12/2017. The average number of staff required and time consumed in the traditional 6MWT were 1.67 and 949.33 seconds respectively, while in the E-6MWT the figures were 1.00 and 630.00 seconds. Compared to the traditional 6MWT, the E-6MWT required 67.00% less manpower and 50.10% less time. Physiotherapists (n = 7) found the E-6MWT convenient to use (mean = 5.14; satisfied to very satisfied), requiring less manpower and time to complete the test (mean = 4.71; rather satisfied to satisfied), offering better data management (mean = 5.86; satisfied to very satisfied), and recommended it for clinical use (mean = 5.29; satisfied to very satisfied). It is proven that the E-6MWT requires less manpower input with higher efficiency and better data management. It is welcomed by clinical frontline staff.
Keywords: electronic, physiotherapy, six-minute walk test, 6MWT
Procedia PDF Downloads 154
10025 Empirical Acceleration Functions and Fuzzy Information
Authors: Muhammad Shafiq
Abstract:
In accelerated life testing approaches, lifetime data are obtained under various conditions that are considered more severe than the usual condition. Classical techniques are based on precise measurements and are used to model variation among the observations. In fact, there are two types of uncertainty in data: variation among the observations and fuzziness. Analysis techniques that do not consider fuzziness and are based only on precise lifetime observations lead to pseudo-results. This study aimed to examine the behavior of empirical acceleration functions using fuzzy lifetime data. The results showed increased fuzziness in the transformed lifetimes compared to the input data.
Keywords: acceleration function, accelerated life testing, fuzzy number, non-precise data
Procedia PDF Downloads 301
10024 Influence of Urban Fabric on Child’s Upbringing: A Comparative Analysis between Modern and Traditional City
Authors: Mohamed A. Tantawy, Nourelhoda A. Hussein, Moataz A. Mahrous
Abstract:
New planning and city design theories are continuously debated and optimized in seeking efficiency and adequacy in economic and quality-of-life aspects. Here, we examine the children-city relationship to reflect on how modern and traditional cities affect the social climate. We adopt children as a proper gauge for urbanism since, at their very young age, they are independent yet attached to the family. Their fragility offers a chance to gauge how various urban settings directly affect their feeling of safety and containment and their perception of belonging to the home territory. The importance of street play for the child development process is discussed thoroughly. The authority they have over their play (when and what to play) leads us to our conclusion: a mediocre built environment characterized by spontaneity and human-scale semi-private urban spaces cannot be replaced by perfectly designed faraway playgrounds. Street play has a huge role in empowering children for a gradual engagement with the grown-ups' urban flow.
Keywords: child's psychology, social activity, street play, urban fabric
Procedia PDF Downloads 315
10023 Improving Traditional Methods of Handling Fish from Integrated Pond Culture Systems in Monai Village, New Bussa, Nigeria
Authors: Olokor O. Julius, Ngwu E. Onyebuchi, Ajani K. Emmanuel, Omitoyin O. Bamidele, Olokor O. Linda, Akomas Stella
Abstract:
The study assessed the quality changes of Clarias gariepinus obtained from integrated culture systems (rice, poultry and fish), displayed at an average daily temperature of 31-33°C on the traditional market table used by local fish farmers to sell fish harvested from their ponds and on an improved table designed for this study. Unlike the conventional table, the improved table was screened against flies and indiscriminate touching by customers. The fish were displayed on both tables for 9 hours and quality attributes were monitored hourly by trained panelists. For C. gariepinus, the gills and intestine deteriorated faster, starting from the fourth and fifth hours, while deterioration of those on the improved table was delayed by one hour. Scores for skin brightness and texture did not indicate quality deterioration throughout the display period. However, at the end of the storage time, samples on the improved table recorded 1.5 × 10⁴ cfu/g while samples on the unscreened table recorded 3.7 × 10⁷ cfu/g. The study shows how simple modifications of a traditional practice can help extend the keeping quality of farmed fish and reduce health hazards in local communities where there is no electricity to preserve fish in any form, despite a boom in aquaculture. The Monai community has a fish farm estate of over 200 smallholder farmers with an annual output capacity of over $10 million. The simple improvement made to farmers' practice in this study ensures community hygiene and boosts the income of peasant fish farmers by improving the market quality of their products.
Keywords: fish spoilage, improved handling, income generation, retail table
Procedia PDF Downloads 448
10022 Peat Soil Stabilization Methods: A Review
Authors: Mohammad Saberian, Mohammad Ali Rahgozar, Reza Porhoseini
Abstract:
Peat soil is formed naturally through the accumulation of organic matter under water, and it consists of more than 75% organic substances. Peat is considered a problematic soil, not suitable for construction, due to its high compressibility, high moisture content, low shear strength, and low bearing capacity. Since this kind of soil is found in many countries and regions, finding desirable techniques for the stabilization of peat is absolutely essential. The purpose of this paper is to review the various techniques applied for stabilizing peat soil and to discuss the outcomes for its improved mechanical parameters and strength properties. Understanding the characteristics of stabilized peat is one of the most significant factors for structures; as a consequence, various strategies for stabilizing this susceptible soil have been examined based on the depth of the peat deposit.
Keywords: peat soil, stabilization, depth, strength, unconfined compressive strength (UCS)
Procedia PDF Downloads 575
10021 A Unique Multi-Class Support Vector Machine Algorithm Using MapReduce
Authors: Aditi Viswanathan, Shree Ranjani, Aruna Govada
Abstract:
With data sizes constantly expanding, and with classical machine learning algorithms that analyze such data requiring larger and larger amounts of computation time and storage space, the need to distribute computation and memory requirements among several computers has become apparent. Although substantial work has been done in developing distributed binary SVM algorithms and multi-class SVM algorithms individually, the field of multi-class distributed SVMs remains largely unexplored. This research seeks to develop an algorithm that implements the Support Vector Machine over a multi-class data set and is efficient in a distributed environment. For this, we recursively choose the best binary split of a set of classes using a greedy technique, much like a divide-and-conquer approach. Our algorithm has shown better computation time during the testing phase than the traditional sequential SVM methods (One vs. One, One vs. Rest) and outperforms them as the size of the data set grows. This approach also classifies the data with higher accuracy than the traditional multi-class algorithms.
Keywords: distributed algorithm, MapReduce, multi-class, support vector machine
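A minimal sketch of a tree-structured multi-class SVM of the kind described above, not the authors' implementation: the class set is split recursively and a binary SVM is trained at each node. Two simplifying assumptions are made: the split is a plain halving of the class list rather than the paper's greedy best split, and training runs locally rather than on MapReduce.

```python
# Recursive binary-split multi-class SVM (tree of binary SVC classifiers).
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split


class SVMTreeNode:
    def __init__(self, classes):
        self.classes = list(classes)
        self.clf = None          # binary SVM separating left vs right class group
        self.left = None
        self.right = None

    def fit(self, X, y):
        if len(self.classes) == 1:
            return self
        mid = len(self.classes) // 2          # naive halving; the paper selects the best split greedily
        left_cls, right_cls = self.classes[:mid], self.classes[mid:]
        mask = np.isin(y, self.classes)
        Xn, yn = X[mask], y[mask]
        target = np.isin(yn, right_cls).astype(int)   # 0 = left group, 1 = right group
        self.clf = SVC(kernel="rbf", gamma="scale").fit(Xn, target)
        self.left = SVMTreeNode(left_cls).fit(X, y)
        self.right = SVMTreeNode(right_cls).fit(X, y)
        return self

    def predict_one(self, x):
        node = self
        while len(node.classes) > 1:
            node = node.right if node.clf.predict(x.reshape(1, -1))[0] == 1 else node.left
        return node.classes[0]

    def predict(self, X):
        return np.array([self.predict_one(x) for x in X])


if __name__ == "__main__":
    X, y = load_iris(return_X_y=True)
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
    tree = SVMTreeNode(np.unique(ytr)).fit(Xtr, ytr)
    print("accuracy:", (tree.predict(Xte) == yte).mean())
```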
Procedia PDF Downloads 401
10020 Using Machine Learning as an Alternative for Predicting Exchange Rates
Authors: Pedro Paulo Galindo Francisco, Eli Dhadad Junior
Abstract:
This study addresses the Meese-Rogoff puzzle by introducing the latest machine learning techniques as alternatives for predicting exchange rates. Using RMSE as a comparison metric, Meese and Rogoff discovered that economic models are unable to outperform the random walk model as short-term exchange rate predictors. Decades after this study, no statistical prediction technique has proven effective in overcoming this obstacle; although there were positive results, they did not apply to all currencies and time periods. Recent advancements in artificial intelligence technologies have paved the way for a new approach to exchange rate prediction. Leveraging this technology, we applied five machine learning techniques to attempt to overcome the Meese-Rogoff puzzle. We considered daily data for the real, yen, British pound, euro, and Chinese yuan against the US dollar over a time horizon from 2010 to 2023. Our results showed that none of the presented techniques were able to produce an RMSE lower than the random walk model. However, some models, particularly LSTM and N-BEATS, were able to outperform the ARIMA model. The results also suggest that machine learning models have untapped potential and could represent an effective long-term possibility for overcoming the Meese-Rogoff puzzle.
Keywords: exchange rate, prediction, machine learning, deep learning
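A minimal sketch of the Meese-Rogoff-style comparison described above, using RMSE to benchmark a one-step-ahead random walk forecast against a simple autoregressive alternative; the exchange rate series is synthetic and the models are placeholders for the five techniques the study evaluates.

```python
# RMSE comparison: random walk benchmark vs a simple AR(1)-style alternative.
import numpy as np

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

rng = np.random.default_rng(42)
# synthetic daily exchange rate: a random walk with small drift (placeholder for real data)
rate = 5.0 + np.cumsum(rng.normal(0.0005, 0.01, size=1500))
train, test = rate[:1250], rate[1250:]

# Random walk benchmark: tomorrow's forecast is today's observed rate.
rw_forecast = np.concatenate(([train[-1]], test[:-1]))

# Simple AR(1)-style alternative fitted by least squares on the training window.
x, y = train[:-1], train[1:]
slope, intercept = np.polyfit(x, y, 1)
history = np.concatenate(([train[-1]], test[:-1]))
ar_forecast = intercept + slope * history

print("RMSE random walk :", rmse(test, rw_forecast))
print("RMSE AR(1) model :", rmse(test, ar_forecast))
```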
Procedia PDF Downloads 33
10019 A Hybrid Method for Determination of Effective Poles Using Clustering Dominant Pole Algorithm
Authors: Anuj Abraham, N. Pappa, Daniel Honc, Rahul Sharma
Abstract:
In this paper, an analysis of some model order reduction techniques is presented. A new hybrid algorithm for the model order reduction of linear time-invariant systems is compared with the conventional techniques, namely Balanced Truncation, Hankel Norm Reduction and the Dominant Pole Algorithm (DPA). The proposed hybrid algorithm, known as the Clustering Dominant Pole Algorithm (CDPA), is able to compute the full set of dominant poles and their cluster centres efficiently. The dominant poles of a transfer function are specific eigenvalues of the state-space matrix of the corresponding dynamical system. The effectiveness of this novel technique is shown through the simulation results.
Keywords: balanced truncation, clustering, dominant pole, Hankel norm, model reduction
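A minimal sketch of the dominant pole idea underlying the DPA and CDPA: poles of a small example state-space model are ranked by the standard dominance measure |R_i|/|Re(lambda_i)| computed from their residues. The example system is assumed for illustration, and the clustering step of the CDPA is not reproduced.

```python
# Rank poles of a SISO state-space model (A, b, c) by residue-based dominance.
import numpy as np

def dominant_modes(A, b, c, k=2):
    lam, V = np.linalg.eig(A)            # right eigenvectors as columns of V
    W = np.linalg.inv(V)                 # rows of V^{-1} are scaled left eigenvectors (w_i.v_i = 1)
    residues = (c @ V) * (W @ b)         # R_i = (c.v_i)(w_i.b)
    dominance = np.abs(residues) / np.abs(lam.real)
    order = np.argsort(dominance)[::-1]
    return lam[order[:k]], residues[order[:k]]

# Small stable test system (placeholder values).
A = np.array([[-0.5, 1.0, 0.0],
              [-1.0, -0.5, 0.0],
              [0.0, 0.0, -10.0]])
b = np.array([1.0, 0.0, 1.0])
c = np.array([1.0, 1.0, 0.1])

poles, residues = dominant_modes(A, b, c, k=2)
for p, r in zip(poles, residues):
    print(f"pole {p:.3f}  residue {r:.3f}  dominance {abs(r)/abs(p.real):.3f}")
```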
Procedia PDF Downloads 599
10018 Effect of Local Processing Techniques on the Nutrients and Anti-Nutrients Content of Bitter Cassava (Manihot Esculenta Crantz)
Authors: J. S. Alakali, A. R. Ismaila, T. G. Atume
Abstract:
The effects of local processing techniques on the nutrient and anti-nutrient content of bitter cassava were investigated. Raw bitter cassava tubers were boiled, sun dried, roasted, fried to produce Kuese, partially fermented and sun dried to produce Alubo, fermented by submersion to produce Akpu, and fermented by solid state to produce yellow and white gari. These locally processed cassava products were subjected to proximate, mineral and anti-nutrient analysis using standard methods. The result of the proximate analysis showed that raw bitter cassava is composed of 1.85% ash, 20.38% moisture, 4.11% crude fibre, 1.03% crude protein, 0.66% lipids and 71.88% total carbohydrate. For the mineral analysis, the raw bitter cassava tuber contained 32.00% calcium, 12.55% magnesium, 1.38% iron and 80.17% phosphorus. Even though all processing techniques significantly increased the mineral content, fermentation had the greater effect. The anti-nutrient analysis showed that the raw tuber contained 98.16 mg/100g cyanide, 44.00 mg/100g oxalate, 304.20 mg/100g phytate and 73.00 mg/100g saponin. In general, all the processing techniques showed a significant reduction in the phytate, oxalate and saponin content of the cassava. However, only fermentation, sun drying and garification were able to reduce the cyanide content of bitter cassava below the safe level (10 mg/100g) recommended by the Standard Organization of Nigeria. Yellow gari (with the addition of palm oil) showed a lower cyanide content (1.10 mg/100g) than white gari (3.51 mg/100g). Processing methods involving fermentation reduce cyanide and other anti-nutrients in the cassava to levels that are safe for consumption and should be widely practiced.
Keywords: bitter cassava, local processing, fermentation, anti-nutrient
Procedia PDF Downloads 304
10017 Perspectives of Computational Modeling in Sanskrit Lexicons
Authors: Baldev Ram Khandoliyan, Ram Kishor
Abstract:
India has a classical tradition of Sanskrit lexicons, and research work has been done on the study of Indian lexicography. India has seen amazing strides in Information and Communication Technology (ICT) applications for Indian languages in general and for Sanskrit in particular. Since machine translation from Sanskrit to other Indian languages is often the desired goal, traditional Sanskrit lexicography has attracted a lot of attention from the ICT and computational linguistics community. From the Nighaṇṭu and Nirukta to the Amarakośa and Medinīkośa, Sanskrit has a rich history of lexicography. As these kośas do not follow the same typology or standard in the selection and arrangement of the words and the information related to them, several kośa styles have emerged in this tradition. The model of grammar given by the Aṣṭādhyāyī is well appreciated by Indian and Western linguists and grammarians, but the different models provided by the lexicographic tradition are also important. The general usefulness of traditional Sanskrit kośas has been discussed by some scholars, mostly with respect to the material made available in the texts; some have also discussed the arrangement of the lexica. This paper aims to discuss further uses of the different models of Sanskrit lexicography, focusing especially on their computational modeling and their use in different computational operations.
Keywords: computational lexicography, Sanskrit lexicons, Nighaṇṭu, kośa, Amarakośa
Procedia PDF Downloads 165