Search results for: Deep Neural Network
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6670

4690 Taguchi Method for Analyzing a Flexible Integrated Logistics Network

Authors: E. Behmanesh, J. Pannek

Abstract:

Logistics network design is known as one of the strategic decision problems. As such problems belong to the category of NP-hard problems, traditional methods fail to find an optimal solution in a short time. In this study, we incorporate reverse flows through an integrated design of a forward/reverse supply chain network, formulated as a mixed integer linear program. This integrated, multi-stage model is enriched by three different delivery paths, which makes the problem more complex. To tackle such an NP-hard problem, a memetic algorithm based on a revised random path direct encoding method is considered as the solution methodology. Each algorithm has some parameters that need to be investigated to reveal the best performance. In this regard, the Taguchi method is adapted to identify the optimum operating condition of the proposed memetic algorithm and improve the results. Four factors are considered, namely population size, crossover rate, local search iterations, and the number of iterations. Analyzing the parameters and the resulting improvements are the outlook of this research.
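
As a rough illustration of how the Taguchi design cuts down the tuning effort, the sketch below evaluates the four factors over a standard L9(3^4) orthogonal array - nine runs instead of the 81 needed for a full factorial - and picks, per factor, the level with the best signal-to-noise ratio. The level values and the stand-in objective function are invented for the example; they are not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(42)

# Three levels per factor (values illustrative, not from the paper).
levels = {
    "population_size": [50, 100, 200],
    "crossover_rate": [0.6, 0.8, 0.95],
    "local_search_iters": [5, 10, 20],
    "n_iterations": [100, 200, 400],
}

# Standard L9(3^4) orthogonal array: 9 runs cover 4 three-level factors.
L9 = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])

def run_memetic(params):
    """Placeholder for one run of the memetic algorithm; returns a cost."""
    return (1000.0 / (params["population_size"] * params["crossover_rate"])
            + rng.normal(0, 0.1))

names = list(levels)
costs = np.array([
    run_memetic({n: levels[n][row[i]] for i, n in enumerate(names)})
    for row in L9
])

# Smaller-is-better signal-to-noise ratio for each run.
sn = -10 * np.log10(costs ** 2)

# For every factor, keep the level with the highest mean S/N ratio.
for i, n in enumerate(names):
    mean_sn = [sn[L9[:, i] == lev].mean() for lev in range(3)]
    print(n, "-> best level:", levels[n][int(np.argmax(mean_sn))])
```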

Keywords: integrated logistics network, flexible path, memetic algorithm, Taguchi method

Procedia PDF Downloads 184
4689 Agent Based Location Management Protocol for Mobile Adhoc Networks

Authors: Mallikarjun B. Channappagoudar, Pallapa Venkataram

Abstract:

The dynamic nature of a mobile ad hoc network (MANET), due to the mobility and disconnection of mobile nodes, leads to various problems in predicting the movement of nodes and updating their location information for efficient interaction among application-specific nodes. Location management is one of the main challenges to be considered for efficient service provision to the applications of a MANET. In this paper, we propose a location management protocol for locating the nodes of a MANET and maintaining uninterrupted, high-quality service for distributed applications by intelligently anticipating the change of location of its nodes. The protocol predicts node movement and application resource scarcity, performs replacement with nearby nodes that have less mobility and are rich in resources, with the help of both static and mobile agents, and maintains application continuity by providing the required network resources. The protocol has been simulated using the Java Agent Development Environment (JADE) framework for agent generation, migration, and communication. It consumes much less time (response time), gives better location accuracy, utilizes fewer network resources, and reduces location management overhead.

Keywords: mobile agent, location management, distributed applications, mobile adhoc network

Procedia PDF Downloads 387
4688 Nafion Nanofiber Composite Membrane Fabrication for Fuel Cell Applications

Authors: C. N. Okafor, M. Maaza, T. A. E. Mokrani

Abstract:

A proton exchange membrane has been developed for the direct methanol fuel cell (DMFC). The nanofiber network composite membranes were prepared from an interconnected network of Nafion (perfluorosulfonic acid) nanofibers embedded in an uncharged and inert polymer matrix by electrospinning. The spinning solution contained Nafion with a low concentration (1 wt. % relative to Nafion) of high-molecular-weight poly(ethylene oxide) as a carrier polymer. The interconnected network of Nafion nanofibers, with average fiber diameters in the range of 160-700 nm, was used to make the membranes, with the nanofibers occupying up to 85% of the membrane volume. The matrix polymer was cross-linked with Norland Optical Adhesive 63 under UV light. The resulting membranes showed a proton conductivity of 0.10 S/cm at 25°C and 80% RH, and a methanol permeability of 3.6 x 10⁻⁶ cm²/s.

Keywords: composite membrane, electrospinning, fuel cell, nanofibers

Procedia PDF Downloads 261
4687 Feasibility of Washing/Extraction Treatment for the Remediation of Deep-Sea Mining Tailings

Authors: Kyoungrean Kim

Abstract:

The importance of deep-sea mineral resources is dramatically increasing due to the depletion of land mineral resources corresponding to increasing human economic activities. Korea has acquired exclusive exploration licenses in four areas: the Clarion-Clipperton Fracture Zone in the Pacific Ocean (2002), Tonga (2008), Fiji (2011), and the Indian Ocean (2014). Commercial mining by Nautilus Minerals (Canada) and Lockheed Martin (USA) is expected by 2020. The London Protocol 1996 (LP) under the International Maritime Organization (IMO) and the International Seabed Authority (ISA) will set environmental guidelines for deep-sea mining by 2020 to protect the marine environment. In this research, the applicability of washing/extraction treatment for the remediation of deep-sea mining tailings was evaluated in order to provide preliminary data for developing practical remediation technology in the near future. Polymetallic nodule samples were collected at the Clarion-Clipperton Fracture Zone in the Pacific Ocean and stored at room temperature. Samples were pulverized using a jaw crusher and a ball mill and then classified into three particle sizes (> 63 µm, 63-20 µm, < 20 µm) using vibratory sieve shakers (Analysette 3 Pro, Fritsch, Germany) with 63 µm and 20 µm sieves. Only the 63-20 µm fraction was used for investigation, considering the lower limit of the ore dressing process, which is tens of µm to 100 µm. Rhamnolipid and sodium alginate (biosurfactants) and aluminum sulfate (mainly used as a flocculant) were used as environmentally friendly additives. Samples were adjusted to a 2% suspension with deionized water and then mixed with various concentrations of additives. The mixture was stirred with a magnetic bar for specific reaction times, and the liquid phase was then separated by a centrifugal separator (Thermo Fisher Scientific, USA) at 4,000 rpm for 1 h. The separated liquid was filtered with a syringe and an acrylic-based filter (0.45 µm). The extracted heavy metals in the filtered liquid were then determined using a UV-Vis spectrometer (DR-5000, Hach, USA) and a heat block (DRB 200, Hach, USA) following US EPA methods (8506, 8009, 10217 and 10220). The polymetallic nodules were mainly composed of manganese (27%), iron (8%), nickel (1.4%), copper (1.3%), cobalt (1.3%), and molybdenum (0.04%). Based on the remediation standards of various countries, nickel (Ni), copper (Cu), cadmium (Cd), and zinc (Zn) were selected as primary target materials. Throughout this research, the use of rhamnolipid was shown to be an effective approach for removing heavy metals from samples originating from manganese nodules. Sodium alginate might also be an effective additive for the remediation of deep-sea mining tailings such as polymetallic nodules. Compared to rhamnolipid and sodium alginate, aluminum sulfate was the more effective additive at short reaction times within 4 h. Based on these results, sequencing particle separation, selective extraction/washing, advanced filtration of the liquid phase, water treatment without dewatering, and solidification/stabilization may be considered candidate technologies for the remediation of deep-sea mining tailings.

Keywords: deep-sea mining tailings, heavy metals, remediation, extraction, additives

Procedia PDF Downloads 151
4686 Evaluating the Use of Manned and Unmanned Aerial Vehicles in Strategic Offensive Tasks

Authors: Yildiray Korkmaz, Mehmet Aksoy

Abstract:

In today's operations, countries want to achieve their aims in the shortest way due to economic, political, and humanitarian considerations. The most effective way of achieving this goal is to be able to penetrate strategic targets. Strategic targets are generally located deep inside a country and are defended by modern and efficient surface-to-air missile (SAM) platforms operated in integration with Intelligence, Surveillance and Reconnaissance (ISR) systems. Moreover, these high-value targets are buried deep underground and hardened with strong materials against attacks. Penetrating such targets therefore requires very detailed intelligence. This intelligence process should cover a wide range, from weaponry to threat assessment. Accordingly, the framework of the attack package will be determined. This mission package has to execute missions in a high-threat environment. The way to minimize the risk of loss of life is to use packages formed by UAVs. However, limitations arising from the characteristics of UAVs restrict the performance of a mission package consisting solely of UAVs. Therefore, the mission package should be formed of UAVs under the leadership of a fifth-generation manned aircraft. Thus, we can minimize the limitations, easily penetrate deep inside enemy territory with minimum risk, make decisions according to ever-changing conditions, and finally destroy the strategic targets. In this article, the strengths and weaknesses of UAVs are examined through a SWOT analysis. The article also reveals the features of a mission package and presents an example of the kind of mission package that should be formed to obtain marginal benefit and penetrate strategic targets, given the development of autonomous mission execution capability in the near future.

Keywords: UAV, autonomy, mission package, strategic attack, mission planning

Procedia PDF Downloads 545
4685 Emotional Labor Strategies and Intentions to Quit among Nurses in Pakistan

Authors: Maham Malik, Amjad Ali, Muhammad Asif

Abstract:

The current study aims to examine the relationship of emotional labor strategies - deep acting and surface acting - with employees' job satisfaction, organizational commitment, and intentions to quit. The study also examines the mediating role of job satisfaction and organizational commitment in the relationship between emotional labor strategies and intentions to quit. Data were collected through convenience sampling from 307 nurses using a self-administered questionnaire. Linear regression was applied to find the relationships between the variables. Mediation was checked through the Baron and Kenny model and the Sobel test. Results establish partial mediation by job satisfaction between the emotional labor strategies and quitting intentions. The study recommends that deep acting should be promoted because it is positively associated with quality of work life, work engagement, and organizational citizenship behavior of employees.
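
For readers unfamiliar with the mediation check, the snippet below shows the Sobel z statistic for an indirect effect; the path coefficients and standard errors are hypothetical, not the study's estimates.

```python
import math

# Hypothetical path estimates (not from the study).
a, se_a = 0.42, 0.08   # surface acting -> job satisfaction
b, se_b = -0.35, 0.07  # job satisfaction -> intention to quit

# Sobel z statistic for the indirect effect a*b.
z = (a * b) / math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
# Two-tailed p-value under the standard normal distribution.
p = math.erfc(abs(z) / math.sqrt(2))
print(f"indirect effect = {a*b:.3f}, z = {z:.2f}, p = {p:.4f}")
```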

Keywords: emotional labor strategies, intentions to quit, job satisfaction, organizational commitment, nursing

Procedia PDF Downloads 139
4684 Losing Benefits from Social Network Sites Usage: An Approach to Estimate the Relationship between Social Network Sites Usage and Social Capital

Authors: Maoxin Ye

Abstract:

This study examines the relationship between social network sites (SNS) usage and social capital. Because SNS usage can expand users' networks, and people connected in these networks may become resources to SNS users and give them an advantage in some situations, it is important to estimate the relationship between SNS usage and 'who' is connected, or what resources SNS users can obtain. Additionally, 'who' can be divided into two aspects - people who hold high positions and people who are different from oneself - hence, it is important to estimate the relationship between SNS usage and connections to high-position people and to different people. This study adopts Lin's definition of social capital and the position generator measurement, which tells us who is connected and can be divided into the same two aspects. A national dataset from the United States (N = 2,255) collected by the Pew Research Center is utilized to conduct a general regression analysis of SNS usage and social capital. The results indicate that SNS usage is negatively associated with each factor of social capital, suggesting that, compared with non-users, although SNS users have more connections, the variety and resources of these connections are in fact fewer. For this reason, we could lose benefits through SNS usage.

Keywords: social network sites, social capital, position generator, general regression

Procedia PDF Downloads 259
4683 Probabilistic Graphical Model for the Web

Authors: M. Nekri, A. Khelladi

Abstract:

The World Wide Web is a network with a complex topology, the main properties of which are a power-law degree distribution, a low clustering coefficient, and a short average distance. Modeling the web as a graph allows locating information in little time and consequently offers help in the construction of search engines. Here, we present a model based on existing probabilistic graphs with all the aforesaid characteristics. This work consists in studying the web in order to understand its structure, which will enable us to model it more easily and to propose a possible algorithm for its exploration.
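
The aforesaid properties can be reproduced with a standard preferential-attachment generator; the sketch below (not the authors' model) builds such a graph with networkx and checks the clustering coefficient and average distance directly.

```python
import networkx as nx

# Preferential attachment: each new node links to m existing nodes with
# probability proportional to their degree, yielding a power-law degree tail.
G = nx.barabasi_albert_graph(n=2000, m=3, seed=1)

print("average clustering:", nx.average_clustering(G))          # low
print("average distance:", nx.average_shortest_path_length(G))  # short
```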

Keywords: clustering coefficient, preferential attachment, small world, web community

Procedia PDF Downloads 269
4682 Drought Risk Analysis Using Neural Networks for Agri-Businesses and Projects in Lejweleputswa District Municipality, South Africa

Authors: Bernard Moeketsi Hlalele

Abstract:

Drought is a complicated natural phenomenon that creates significant economic, social, and environmental problems. An analysis of paleoclimatic data indicates that severe and extended droughts are an inevitable part of the natural climatic cycle. This study characterised drought in Lejweleputswa using the Standardised Precipitation Index (SPI) to quantify drought and neural networks (NN) to predict it. A 37-year monthly precipitation time series was obtained from the online NASA database. Prior to the final analysis, this dataset was checked for outliers using SPSS. Outliers were removed and replaced using the Expectation Maximization algorithm in SPSS. This was followed by both homogeneity and stationarity tests to ensure non-spurious results. A non-parametric Mann-Kendall test was used to detect monotonic trends present in the dataset. Two temporal scales, SPI-3 and SPI-12, corresponding to agricultural and hydrological drought events, showed statistically significant decreasing trends with p-values of 0.0006 and 4.9 x 10⁻⁷, respectively. The study area has been plagued with severe drought events on SPI-3, while SPI-12 showed approximately a 20-year cycle. The study concluded the analyses with a seasonal analysis that showed no significant trend patterns, and as such, NN was used to predict possible SPI-3 values for the last season of 2018/2019 and four seasons of 2020. The predicted drought intensities ranged from mild to extreme drought events. It is therefore recommended that farmers, agri-business owners, and other relevant stakeholders resort to drought-resistant crops as a means of adaptation.
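
A minimal sketch of the SPI computation follows, assuming a synthetic precipitation series; operational SPI code fits the gamma distribution month by month and treats zero-rainfall months separately, which is omitted here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Stand-in 37-year monthly precipitation series (mm), not the NASA data.
monthly_precip = rng.gamma(shape=2.0, scale=30.0, size=37 * 12)

# 3-month accumulations give SPI-3 (agricultural drought scale).
acc = np.convolve(monthly_precip, np.ones(3), mode="valid")

# Fit a two-parameter gamma, then map cumulative probabilities to z-scores.
shape, _, scale = stats.gamma.fit(acc, floc=0)
cdf = stats.gamma.cdf(acc, shape, loc=0, scale=scale)
spi3 = stats.norm.ppf(cdf)

print("SPI-3 range:", spi3.min(), spi3.max())  # values <= -2 mark extreme drought
```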

Keywords: drought, risk, neural networks, agri-businesses, project, Lejweleputswa

Procedia PDF Downloads 119
4681 Design and Optimization of Open Loop Supply Chain Distribution Network Using Hybrid K-Means Cluster Based Heuristic Algorithm

Authors: P. Suresh, K. Gunasekaran, R. Thanigaivelan

Abstract:

Radio frequency identification (RFID) technology has been attracting considerable attention with the expectation of improved supply chain visibility for consumer goods, apparel, and pharmaceutical manufacturers, as well as retailers and government procurement agencies. It is also expected to improve the consumer shopping experience by making it more likely that the products they want to purchase are available. Recent announcements from some key retailers have brought interest in RFID to the forefront. A modified K-means cluster-based heuristic approach, a hybrid Genetic Algorithm (GA) - Simulated Annealing (SA) approach, a hybrid K-means cluster-based heuristic-GA, and a hybrid K-means cluster-based heuristic-GA-SA for the open loop supply chain network problem are proposed. The study incorporated a uniform crossover operator and a combined crossover operator in the GAs for solving the open loop supply chain distribution network problem. The algorithms were tested on 50 randomly generated data sets and compared with each other. The results of the numerical experiments show that the hybrid K-means cluster-based heuristic-GA-SA, when tested on the 50 randomly generated data sets, shows superior performance to the other methods for solving the open loop supply chain distribution network problem.
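
The clustering stage of the hybrid methods can be pictured with a plain k-means pass that groups customer sites around candidate distribution centres, after which the GA/SA search would refine assignments and routes; the sketch below uses synthetic coordinates, not the paper's data sets.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
customers = rng.uniform(0, 100, size=(200, 2))  # x, y site coordinates

# One cluster per candidate distribution centre.
km = KMeans(n_clusters=5, n_init=10, random_state=7).fit(customers)
depots = km.cluster_centers_

# Transport-cost proxy: distance from each customer to its assigned depot.
dists = np.linalg.norm(customers - depots[km.labels_], axis=1)
print("total distance proxy:", dists.sum())
```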

Keywords: RFID, supply chain distribution network, open loop supply chain, genetic algorithm, simulated annealing

Procedia PDF Downloads 162
4680 On the Network Packet Loss Tolerance of SVM Based Activity Recognition

Authors: Gamze Uslu, Sebnem Baydere, Alper K. Demir

Abstract:

In this study, the data loss tolerance of a Support Vector Machine (SVM) based activity recognition model and its multi-activity classification performance when data are received over a lossy wireless sensor network are examined. Initially, the classification algorithm we use is evaluated in terms of resilience to random data loss with 3D acceleration sensor data for sitting, lying, walking, and standing actions. The results show that the proposed classification method can recognize these activities successfully despite high data loss. Secondly, the effect of differentiated quality of service performance on activity recognition success is measured with activity data acquired from a multi-hop wireless sensor network, which introduces high data loss. The effect of the number of nodes on the reliability and multi-activity classification success is demonstrated in a simulation environment. To the best of our knowledge, the effect of data loss in a wireless sensor network on the activity detection success rate of an SVM based classification algorithm has not been studied before.
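
A simple way to reproduce the spirit of the first experiment is to train an SVM on complete feature windows and then corrupt a growing fraction of the test values to mimic lost packets; the sketch below does this on synthetic stand-in data, not the 3D-acceleration dataset.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for windowed acceleration features, 4 activity classes.
X, y = make_classification(n_samples=1200, n_features=12, n_informative=8,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf").fit(X_tr, y_tr)

rng = np.random.default_rng(0)
for loss_rate in (0.0, 0.2, 0.4, 0.6):
    X_lossy = X_te.copy()
    mask = rng.random(X_lossy.shape) < loss_rate
    X_lossy[mask] = 0.0  # readings lost in transit replaced by zeros
    print(f"loss {loss_rate:.0%}: accuracy {clf.score(X_lossy, y_te):.3f}")
```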

Keywords: activity recognition, support vector machines, acceleration sensor, wireless sensor networks, packet loss

Procedia PDF Downloads 470
4679 Distribution Network Optimization by Optimal Placement of Photovoltaic-Based Distributed Generation: A Case Study of the Nigerian Power System

Authors: Edafe Lucky Okotie, Emmanuel Osawaru Omosigho

Abstract:

This paper examines the impacts of introducing distributed energy generation (DEG) technology into the Nigerian power system as an alternative means of energy generation at distribution ends, using the Otovwodo 15 MVA, 33/11 kV injection substation as a case study. The overall idea is to increase the generated energy in the system, improve the voltage profile, and reduce system losses. A photovoltaic-based distributed energy generator (PV-DEG) was considered and was optimally placed in the network using a Genetic Algorithm (GA) in the MATLAB/Simulink environment. The simulation results obtained show that the dynamic performance of the network was optimized with DEG-grid integration.

Keywords: distributed energy generation (DEG), genetic algorithm (GA), power quality, total load demand, voltage profile

Procedia PDF Downloads 78
4678 Intelligent Prediction of Breast Cancer Severity

Authors: Wahab Ali, Oyebade K. Oyedotun, Adnan Khashman

Abstract:

Breast cancer remains a threat to women worldwide in view of its survival rates, early diagnosis, and mortality statistics. So far, research has shown that many survivors of breast cancer are those with an early diagnosis. Breast cancer is usually categorized into stages, which indicate its severity and the corresponding survival rates for patients. Investigations show that the further into the stages before diagnosis, the lower the chance of survival; hence the early diagnosis of breast cancer becomes imperative, and consequently the application of novel technologies to achieve this. Over the years, mammograms have been used in the diagnosis of breast cancer, but the inconclusive deductions made from such scans lead either to false negative cases, where cancer patients may be left untreated, or false positives, where unnecessary biopsies are carried out. This paper presents the application of artificial neural networks to the prediction of the severity of breast tumours (whether benign or malignant) using mammography reports and other factors related to breast cancer.
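
As an illustration of the approach (not the authors' dataset or network), the sketch below trains a small feedforward network on scikit-learn's bundled Wisconsin breast cancer data to separate benign from malignant tumours.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)  # stand-in for mammography features
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

net = make_pipeline(
    StandardScaler(),  # neural nets train poorly on unscaled clinical features
    MLPClassifier(hidden_layer_sizes=(30, 15), max_iter=2000, random_state=0),
)
net.fit(X_tr, y_tr)
print("test accuracy:", net.score(X_te, y_te))
```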

Keywords: breast cancer, intelligent classification, neural networks, mammography

Procedia PDF Downloads 483
4677 Discovering User Behaviour Patterns from Web Log Analysis to Enhance the Accessibility and Usability of Website

Authors: Harpreet Singh

Abstract:

Finding relevant information on the World Wide Web is becoming highly challenging day by day. Web usage mining is used for the extraction of relevant and useful knowledge, such as user behaviour patterns, from web access log records. A web access log records all the requests for individual files that users have made to the website. Web usage mining is important for Customer Relationship Management (CRM), as it can ensure customer satisfaction as far as the interaction between the customer and the organization is concerned. Web usage mining is helpful in improving website structure or design as per the users' requirements by analyzing the access log file of a website through a log analyzer tool. The focus of this paper is to enhance the accessibility and usability of a guitar-selling website by analyzing its access log through the Deep Log Analyzer tool. The results show that the maximum number of users is from the United States and that they use the Opera 9.8 web browser and the Windows XP operating system.
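
Under the hood, a log analyzer starts from per-request records like the one below; this sketch parses Apache combined-format lines and tallies user agents. The sample line is fabricated for illustration.

```python
import re
from collections import Counter

# Apache "combined" log format: ip, identd, user, time, request,
# status, bytes, referrer, user agent.
LOG_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] "(?P<request>[^"]*)" '
    r'(?P<status>\d{3}) \S+ "(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

sample = ('203.0.113.5 - - [10/Oct/2015:13:55:36 -0700] '
          '"GET /guitars/fender.html HTTP/1.1" 200 2326 "-" '
          '"Opera/9.80 (Windows NT 5.1)"')

agents = Counter()
for line in [sample]:          # in practice: for line in open("access.log")
    m = LOG_RE.match(line)
    if m:
        agents[m.group("agent")] += 1

print(agents.most_common(5))   # e.g. Opera on Windows XP dominating
```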

Keywords: web usage mining, web mining, log file, data mining, deep log analyzer

Procedia PDF Downloads 246
4676 Simulation Approach for a Comparison of Linked Cluster Algorithm and Clusterhead Size Algorithm in Ad Hoc Networks

Authors: Ameen Jameel Alawneh

Abstract:

A mobile ad hoc network (MANET) is a collection of wireless mobile hosts that dynamically form a temporary network without the aid of a system administrator, having neither fixed infrastructure nor centralized administration for its wireless sessions. It inherently reaches several nodes with a single transmission, and each node functions as both a host and a router. The network may be represented as a set of clusters, each managed by a clusterhead. The cluster size is not fixed, and it depends on the movement of nodes. We propose a clusterhead size algorithm (CHSize). This clustering algorithm can be used by several routing algorithms for ad hoc networks. An elected clusterhead is assigned for communication with all other clusters. Analysis and simulation of the algorithm have been carried out using the GloMoSim network simulator, MATLAB, and Maple 11, and the results show that the proposed algorithm achieves its goals.

Keywords: simulation, MANET, Ad-hoc, cluster head size, linked cluster algorithm, loss and dropped packets

Procedia PDF Downloads 387
4675 An Integer Nonlinear Program Proposal for Intermodal Transportation Service Network Design

Authors: Laaziz El Hassan

Abstract:

The service network design problem (SNDP) is a tactical issue for freight transportation firms. Existing formulations of the problem for intermodal rail-road transportation were not always adapted to intermodality in terms of full asset utilization and modal shift reinforcement. The objective of this article is to propose a model with a formulation more compliant with intermodality, including constraints highlighting the imperatives of asset management, reinforcing the modal shift from road to rail, and thereby reducing road-mode CO2 emissions. The model is a fixed-charge, path-based integer nonlinear program. Its objective is to minimize total service cost while ensuring full asset utilization to satisfy the freight demand forecast. The model's main feature is that it outputs both the train sizes and the service frequencies for a planning period. We solved the program using a commercial solver and discuss the numerical results.
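
To make the fixed-charge trade-off concrete, the toy below brute-forces the two outputs the model produces - train size and service frequency - for a single origin-destination service; every number is invented, and the real model optimizes a whole network of paths simultaneously.

```python
from itertools import product

demand = 520.0          # TEU per week to move (invented)
wagon_capacity = 40.0   # TEU per wagon
fixed_cost = 900.0      # cost per train departure (fixed charge)
wagon_cost = 55.0       # cost per wagon per departure

best = None
for wagons, frequency in product(range(5, 31), range(1, 15)):
    if wagons * wagon_capacity * frequency < demand:
        continue  # this service cannot carry the forecast demand
    cost = frequency * (fixed_cost + wagon_cost * wagons)
    utilisation = demand / (wagons * wagon_capacity * frequency)
    if best is None or cost < best[0]:
        best = (cost, wagons, frequency, utilisation)

cost, wagons, frequency, utilisation = best
print(f"{wagons} wagons x {frequency}/week, cost {cost:.0f}, "
      f"utilisation {utilisation:.0%}")
```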

Keywords: intermodal transport network, service network design, model, nonlinear integer program, path-based, service frequencies, modal shift

Procedia PDF Downloads 113
4674 Computational Identification of Signalling Pathways in Protein Interaction Networks

Authors: Angela U. Makolo, Temitayo A. Olagunju

Abstract:

The knowledge of signaling pathways is central to understanding the biological mechanisms of organisms, since it has been identified that, in eukaryotic organisms, the number of signaling pathways determines the number of ways the organism will react to external stimuli. Signaling pathways are studied using protein interaction networks constructed from protein-protein interaction data obtained through high-throughput experimental procedures. However, these high-throughput methods are known to produce very high rates of false positive and false negative interactions. In order to construct a useful protein interaction network from this noisy data, computational methods are applied to validate the protein-protein interactions. In this study, a computational technique was designed to identify signaling pathways from a protein interaction network constructed using validated protein-protein interaction data. A weighted interaction graph of the organism Saccharomyces cerevisiae (baker's yeast) was constructed, using the proteins as nodes and the interactions between them as edges. The weights were obtained using a Bayesian probabilistic network to estimate the posterior probability of interaction between two proteins given gene expression measurements as biological evidence. Only interactions above a threshold were accepted for the network model. A pathway was formalized as a simple path in the interaction network from a starting protein to an ending protein of interest. We were able to identify some pathway segments, one of which is a segment of the pathway that signals the start of the process of meiosis in S. cerevisiae.
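
The pathway-identification step can be sketched as follows: keep only edges whose posterior probability clears a threshold, then enumerate simple paths between a start and an end protein. The protein names and weights below are placeholders, not the study's estimates.

```python
import networkx as nx

# Confidence-weighted interaction graph (weights = posterior probabilities).
G = nx.Graph()
G.add_weighted_edges_from([
    ("IME1", "UME6", 0.92), ("UME6", "IME2", 0.81),
    ("IME1", "RIM11", 0.88), ("RIM11", "IME2", 0.64),
    ("UME6", "SIN3", 0.35),  # below threshold, will be dropped
])

THRESHOLD = 0.5  # accept only interactions above this posterior probability
H = nx.Graph()
H.add_weighted_edges_from(
    (u, v, d["weight"])
    for u, v, d in G.edges(data=True) if d["weight"] >= THRESHOLD
)

# Candidate pathway segments = simple paths from start to end protein.
for path in nx.all_simple_paths(H, source="IME1", target="IME2", cutoff=4):
    print(" -> ".join(path))
```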

Keywords: Bayesian networks, protein interaction networks, Saccharomyces cerevisiae, signalling pathways

Procedia PDF Downloads 536
4673 Determinants of the Users Intention of Social-Local-Mobile Applications

Authors: Chia-Chen Chen, Mu-Yen Chen

Abstract:

In recent years, with the vigorous growth of the hardware and software technologies of smart mobile devices, coupled with the rapid increase of social network influence, mobile commerce presents the commercial operation mode of the future mainstream. SoLoMo has become one of the most popular commercial models; its name and meaning refer to users obtaining three key service types through smart mobile devices (Mobile) and omnipresent network services: linking to social web site platforms (Social) for information exchange, combined with position and situational awareness technology to obtain services suitable for their location (Local). Through anytime, anywhere, personal use of different mobile devices, it provides a service concept of seamless integration and derives infinite future opportunities. This study explores users' intention to use SoLoMo mobile applications, proposing a research model that integrates TAM, ISSM, IDT, and network externality, and using questionnaires to collect data and analyze the results to verify the hypotheses. The results show that perceived ease-of-use (PEOU), perceived usefulness (PU), and network externality have a significant impact on the intention to use SoLoMo mobile applications, and that information quality, relative advantage, and observability have impacts on perceived usefulness, further affecting use intention.

Keywords: SoLoMo (social, local, and mobile), technology acceptance model, innovation diffusion theory, network externality

Procedia PDF Downloads 524
4672 Assisting Dating of Greek Papyri Images with Deep Learning

Authors: Asimina Paparrigopoulou, John Pavlopoulos, Maria Konstantinidou

Abstract:

Dating papyri accurately is crucial not only to editing their texts but also to our understanding of palaeography and the history of writing, ancient scholarship, material culture, networks in antiquity, etc. Most ancient manuscripts offer little evidence regarding the time of their production, forcing papyrologists to date them on palaeographical grounds, a method often criticized for its subjectivity. By experimenting with data obtained from the Collaborative Database of Dateable Greek Bookhands and the PapPal online collections of objectively dated Greek papyri, this study shows that deep learning dating models, pre-trained on generic images, can achieve accurate chronological estimates for a test subset (67.97% accuracy for book hands and 55.25% for documents). To compare the estimates of these models with those of humans, experts were asked to complete a questionnaire with samples of literary and documentary hands that had to be sorted chronologically by century. The same samples were dated by the models in question. The results are presented and analysed.
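
The transfer-learning setup the abstract implies can be sketched as follows, assuming an invented number of century classes and a random stand-in batch; the real work would load the dated papyrus images and train for many epochs.

```python
import torch
import torch.nn as nn
from torchvision import models

n_centuries = 10  # number of century classes, purely illustrative

# Backbone pre-trained on generic images; new head for century prediction.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, n_centuries)

# Freeze the pre-trained backbone and train only the head (one common strategy).
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("fc.")

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 224, 224)        # stand-in batch of papyrus photos
labels = torch.randint(0, n_centuries, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("batch loss:", loss.item())
```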

Keywords: image classification, papyri images, dating

Procedia PDF Downloads 75
4671 Bounded Rational Heterogeneous Agents in Artificial Stock Markets: Literature Review and Research Direction

Authors: Talal Alsulaiman, Khaldoun Khashanah

Abstract:

In this paper, we provide a literature survey on artificial stock markets (ASM). The paper begins by exploring the complexity of the stock market and the need for ASMs. ASMs aim to investigate the link between individual behaviors (micro level) and financial market dynamics (macro level). The variety of patterns at the macro level is a function of the ASM's complexity. The financial market system is a complex system in which the relationship between the micro and macro levels cannot be captured analytically. Computational approaches, such as simulation, are expected to capture this connection. Agent-based simulation is a simulation technique commonly used to build ASMs. The paper proceeds by discussing the components of an ASM. We consider the role of behavioral finance (BF) alongside the traditional risk-averse assumption in the construction of agents' attributes. The influence of social networks on the development of agents' interactions is also addressed; network topologies such as small-world, distance-based, and scale-free networks may be utilized to outline economic collaborations. In addition, the primary methods for developing agents' learning and adaptive abilities are summarized, incorporating approaches such as genetic algorithms, genetic programming, artificial neural networks, and reinforcement learning. The most common statistical properties of stocks (the stylized facts), which are used for the calibration and validation of ASMs, are also discussed. Besides, we review the major related previous studies and categorize the approaches utilized in them. Finally, research directions and potential research questions are argued. The research directions of ASM may focus on the macro level, by analyzing market dynamics, or on the micro level, by investigating the wealth distributions of the agents.

Keywords: artificial stock markets, market dynamics, bounded rationality, agent based simulation, learning, interaction, social networks

Procedia PDF Downloads 351
4670 Structural Correlates of Reduced Malicious Pleasure in Huntington's Disease

Authors: Sandra Baez, Mariana Pino, Mildred Berrio, Hernando Santamaria-Garcia, Lucas Sedeno, Adolfo Garcia, Sol Fittipaldi, Agustin Ibanez

Abstract:

Schadenfreude refers to the perceiver's experience of pleasure at another's misfortune. This is a multidetermined emotion that can be evoked by hostile feelings and envy. The experience of Schadenfreude engages mechanisms implicated in diverse social cognitive processes. For instance, Schadenfreude involves heightened reward processing, accompanied by increased striatal engagement, and it interacts with mentalizing and perspective-taking abilities. Patients with Huntington's disease (HD) exhibit reductions in the experience of Schadenfreude, suggesting a role of striatal degeneration in this impairment. However, no study has directly assessed the relationship between regional brain atrophy in HD and reduced Schadenfreude. This study investigated whether gray matter (GM) atrophy in HD patients correlates with ratings of Schadenfreude. First, we compared the performance of 20 HD patients and 23 controls on an experimental task designed to trigger Schadenfreude and envy (another social emotion acting as a control condition). Second, we compared GM volume between groups. Third, we examined brain regions where atrophy might be associated with specific impairments in the patients. Results showed that while both groups gave similar ratings of envy, HD patients reported lower Schadenfreude. The latter pattern was related to atrophy in regions of the reward system (ventral striatum) and the mentalizing network (precuneus and superior parietal lobule). Our results shed light on the intertwining of reward and socioemotional processes in Schadenfreude, while offering novel evidence about their neural correlates. In addition, our results open the door to future studies investigating social emotion processing in other clinical populations characterized by striatal or mentalizing network impairments (e.g., Parkinson's disease, schizophrenia, autism spectrum disorders).

Keywords: envy, gray matter atrophy, Huntington's disease, Schadenfreude, social emotions

Procedia PDF Downloads 329
4669 Income and Factor Analysis of Small Scale Broiler Production in Imo State, Nigeria

Authors: Ubon Asuquo Essien, Okwudili Bismark Ibeagwa, Daberechi Peace Ubabuko

Abstract:

The broiler poultry subsector is dominated by small-scale production with low aggregate output. The high cost of inputs currently experienced in Nigeria tends to aggravate the situation; hence many broiler farmers struggle to break even. This study was designed to examine income and input factors in small-scale deep litter broiler production in Imo State, Nigeria. Specifically, the study examined the socio-economic characteristics of small-scale deep litter broiler-producing poultry farmers; estimated the costs and returns of broiler production in the area; analyzed input factors in broiler production in the area; and examined the marketability age and profitability of the enterprise. A multi-stage sampling technique was adopted in selecting 60 small-scale broiler farmers who use the deep litter system from 6 communities, through the use of a structured questionnaire. The socio-economic characteristics of the broiler farmers and the profitability/marketability age of the birds were described using descriptive statistical tools such as frequencies, means, and percentages. Gross margin analysis was used to analyze the costs of and returns to broiler production, while a Cobb-Douglas production function was employed to analyze input factors in broiler production. The results of the study revealed that the costs of feed (P<0.1), deep litter material (P<0.05), and medication (P<0.05) had a significant positive relationship with the gross return of broiler farmers in the study area, while the costs of labour, fuel, and day-old chicks were not significant. Furthermore, the gross profit margin of farmers who market their broilers at the 8th week of rearing was 80.7%, against 78.7% and 60.8% for farmers who market at the 10th and 12th weeks of rearing, respectively. The business is therefore profitable, but at varying degrees. Government and development partners should make deliberate efforts to curb the current rise in the prices of poultry feeds, drugs, and timber materials used as bedding, so as to widen the profit margin and encourage more farmers to go into the business. The farmers equally need more technical assistance from extension agents with regard to timely and profitable marketing.

Keywords: broilers, factor analysis, income, small scale

Procedia PDF Downloads 75
4668 Impacts on Marine Ecosystems Using a Multilayer Network Approach

Authors: Nelson F. F. Ebecken, Gilberto C. Pereira, Lucio P. de Andrade

Abstract:

Bays, estuaries, and coastal ecosystems are some of the most used and threatened natural systems globally. Their deterioration is due to intense and increasing human activities. This paper aims to monitor a socio-ecological system in Brazil and to model and simulate it through a multilayer network representing a DPSIR structure (Drivers, Pressures, State, Impacts, Responses), considering the concept of ecosystem-based management to support decision-making under the national/state/municipal coastal management policy. This approach considers several interferences and can represent a significant advance in several scientific aspects. The main objective of this paper is the coupling of three different types of complex networks - an ecological network, a social network, and a network of economic activities - in order to model the marine ecosystem. Multilayer networks comprise two or more 'layers', which may represent different types of interactions, different communities, different points in time, and so on. The dependency between layers results from processes that affect the various layers. For example, the dispersion of individuals between two patches affects the network structure of both. A multilayer network consists of (i) a set of physical nodes representing entities (e.g., species, people, companies); (ii) a set of layers, which may include multiple layering aspects (e.g., time dependency and multiple types of relationships); (iii) a set of state nodes, each of which corresponds to the manifestation of a given physical node in a specific layer; and (iv) a set of edges (weighted or not) connecting the state nodes. The edge set includes the familiar intralayer edges and the interlayer edges, which connect state nodes across layers. The methodology applied in an existing case uses flow cytometry and the modeling of ecological relationships (trophic and non-trophic) following fuzzy theory concepts and graph visualization. The identification of subnetworks in the fuzzy graphs is carried out using a specific computational method. This methodology allows considering the influence of different factors and their contributions to the decision-making process.
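
The state-node bookkeeping described above can be kept in a single graph by labelling each node with its (physical node, layer) pair, as in the sketch below; the entities and couplings are invented for illustration.

```python
import networkx as nx

M = nx.Graph()
# State nodes: the same physical node may appear in several layers.
M.add_nodes_from([
    ("plankton", "ecological"), ("fish", "ecological"),
    ("fishers", "social"), ("market", "economic"), ("fish", "economic"),
])

# Intralayer edges (interactions within one layer).
M.add_edge(("plankton", "ecological"), ("fish", "ecological"), weight=0.9)  # trophic
M.add_edge(("fish", "economic"), ("market", "economic"), weight=0.7)        # trade

# Interlayer edges (processes coupling the layers).
M.add_edge(("fish", "ecological"), ("fish", "economic"), weight=1.0)   # same entity
M.add_edge(("fishers", "social"), ("fish", "ecological"), weight=0.5)  # harvesting

intra = [(u, v) for u, v in M.edges if u[1] == v[1]]
print(f"{len(intra)} intralayer, {M.number_of_edges() - len(intra)} interlayer edges")
```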

Keywords: marine ecosystems, complex systems, multilayer network, ecosystems management

Procedia PDF Downloads 105
4667 Evolution under Length Constraints for Convolutional Neural Networks Architecture Design

Authors: Ousmane Youme, Jean Marie Dembele, Eugene Ezin, Christophe Cambier

Abstract:

In recent years, convolutional neural network (CNN) architectures designed by evolutionary algorithms have proven to be competitive with handcrafted architectures designed by experts. However, these algorithms need a lot of computational power, which is beyond the capabilities of most researchers and engineers. To overcome this problem, we propose architecture evolution under length constraints. It consists of two algorithms: a length search strategy to find an optimal space and an architecture search strategy, based on a genetic algorithm, to find the best individual in that optimal space. Our algorithms drastically reduce resource costs while keeping good performance. On the CIFAR-10 dataset, our framework presents outstanding performance, with an error rate of 5.12% and only 4.6 GPU-days to converge to the optimal individual - 22 GPU-days less than the lowest-cost automatic evolutionary algorithm in the peer competition.
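
A toy version of the second stage is sketched below: a genetic search over operation sequences whose length is capped by the constraint, with a placeholder fitness standing in for the expensive CNN training that dominates the real GPU cost.

```python
import random

random.seed(0)
MAX_LEN = 8                      # length constraint found by the first stage
OPS = ["conv3", "conv5", "pool"]

def random_arch():
    return [random.choice(OPS) for _ in range(random.randint(3, MAX_LEN))]

def fitness(arch):
    """Placeholder: reward conv-heavy layouts, reject over-long ones."""
    score = sum(op.startswith("conv") for op in arch) / len(arch)
    return score if len(arch) <= MAX_LEN else 0.0

def mutate(arch):
    child = arch.copy()
    child[random.randrange(len(child))] = random.choice(OPS)
    return child[:MAX_LEN]  # mutation never violates the length constraint

population = [random_arch() for _ in range(20)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]  # truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(10)]

print("best architecture:", max(population, key=fitness))
```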

Keywords: CNN architecture, genetic algorithm, evolution algorithm, length constraints

Procedia PDF Downloads 124
4666 Quality of Service of Transportation Networks: A Hybrid Measurement of Travel Time and Reliability

Authors: Chin-Chia Jane

Abstract:

In a transportation network, travel time refers to the transmission time from source node to destination node, whereas reliability refers to the probability of a successful connection from source node to destination node. With an increasing emphasis on quality of service (QoS), both performance indexes are significant in the design and analysis of transportation systems. In this work, we extend the well-known flow network model for transportation networks so that travel time and reliability are integrated into the QoS measurement simultaneously. In the extended model, in addition to the general arc capacities, each intermediate node has a time weight, which is the travel time per unit of commodity going through the node. Meanwhile, arcs and nodes are treated as binary random variables that switch between operation and failure with associated probabilities. For a pre-specified travel time limitation and demand requirement, the QoS of a transportation network is the probability that the source can successfully transport the demand requirement to the destination while the total transmission time is under the travel time limitation. This work is pioneering, since existing literature evaluates travel time reliability via a single optimal path, whereas the proposed QoS measures the performance of the whole network system. To compute the QoS of transportation networks, we first transform the extended network model into an equivalent min-cost max-flow network model. In the transformed network, each original arc is given a new travel time weight of 0. Each intermediate node is replaced by two nodes u and v and an arc directed from u to v; the newly generated nodes u and v are perfect nodes, and the new direct arc carries three weights: travel time, capacity, and operation probability. Then the universal set of state vectors is recursively decomposed into disjoint subsets of reliable, unreliable, and stochastic vectors until no stochastic vector is left. The decomposition is made possible by applying an existing efficient min-cost max-flow algorithm. Because the reliable subsets are disjoint, the QoS can be obtained directly by summing their probabilities. Computational experiments were conducted on a benchmark network with 11 nodes and 21 arcs. Five travel time limitations and five demand requirements were set to compute the QoS value. For comparison, we tested the exhaustive complete enumeration method. The computational results reveal that the proposed algorithm is much more efficient than complete enumeration. In this work, a transportation network is analyzed by an extended flow network model where each arc has a fixed capacity, each intermediate node has a time weight, and both arcs and nodes are independent binary random variables. The quality of service of the transportation network is an integration of customer demands, travel time, and the probability of connection. We present a decomposition algorithm to compute the QoS efficiently. Computational experiments conducted on a prototype network show that the proposed algorithm is superior to existing complete enumeration methods.
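
A rough Monte Carlo proxy for the same QoS definition is sketched below: sample component states, keep the operational subnetwork, and count samples in which the max flow covers the demand and the quickest path meets the time limit. This simplifies the joint flow-time requirement and uses an invented topology, so it illustrates the definition rather than the paper's decomposition algorithm.

```python
import random
import networkx as nx

random.seed(1)
# Arcs: (tail, head, capacity, operation probability); node time weights below.
arcs = [("s", "a", 8, 0.95), ("s", "b", 6, 0.9), ("a", "t", 7, 0.95),
        ("b", "t", 9, 0.9), ("a", "b", 4, 0.85)]
node_time = {"a": 2, "b": 3}  # travel time per unit through each node
d, T = 10, 5                  # demand requirement and travel time limitation

hits, N = 0, 20_000
for _ in range(N):
    G = nx.DiGraph()
    for u, v, cap, p in arcs:
        if random.random() < p:  # arc survives this sample
            G.add_edge(u, v, capacity=cap,
                       time=node_time.get(v, 0))  # node weight moved onto arc
    if not (G.has_node("s") and G.has_node("t")):
        continue
    flow, _ = nx.maximum_flow(G, "s", "t")
    try:
        t_min = nx.shortest_path_length(G, "s", "t", weight="time")
    except nx.NetworkXNoPath:
        continue
    if flow >= d and t_min <= T:
        hits += 1

print("estimated QoS:", hits / N)
```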

Keywords: quality of service, reliability, transportation network, travel time

Procedia PDF Downloads 217
4665 Long-Term Conservation Tillage Impact on Soil Properties and Crop Productivity

Authors: Danute Karcauskiene, Dalia Ambrazaitiene, Regina Skuodiene, Monika Vilkiene, Regina Repsiene, Ieva Jokubauskaite

Abstract:

The main ambition of present-day agriculture is to obtain an economically effective yield and to secure soil ecological sustainability. According to their effect on the main soil quality indexes, tillage systems may be separated into two types: conventional and conservation tillage. The goal of this study was to determine the impact of conservation and conventional primary soil tillage methods and soil fertility improvement measures on soil properties and crop productivity. Methods: The soil of the experimental site is a Dystric Glossic Retisol (WRB 2014) with a sandy loam texture. The trial was established in 2003 in the crop rotation experimental field of the Vėžaičiai Branch of the Lithuanian Research Centre for Agriculture and Forestry. Trial factors and treatments: factor A - primary soil tillage (in autumn): deep ploughing (20-25 cm), shallow ploughing (10-12 cm), shallow ploughless tillage (8-10 cm); factor B - soil fertility improvement measures: plant residues, plant residues + straw, green manure 1st cut + straw, farmyard manure 40 t ha⁻¹ + straw. The four-course crop rotation consisted of red clover, winter wheat, spring rape, and spring barley with undersown crops. Results: Tillage had no statistically significant effect on the topsoil (0-10 cm) pHKCl level, which was 5.5-5.7. Throughout the experimental period, the highest soil pHKCl level (5.65) was under shallow ploughless tillage. The organic fertilizers, particularly the grass biomass and farmyard manure, tended to increase the soil pHKCl. The content of plant-available phosphorus and potassium increased significantly under shallow ploughing compared with the other tillage systems. Farmyard manure increased those elements throughout the arable layer. The dissolved organic carbon concentration was significantly higher in the 0-10 cm soil layer under shallow ploughless tillage compared with deep ploughing. After the incorporation of clover biomass and farmyard manure, the concentration of dissolved organic carbon increased in the top soil layer. Throughout the experimental period, the largest amount of water-stable aggregates was determined in the soil under shallow ploughless tillage; it was 12% higher compared with deep ploughing. Soil moisture was likewise higher under shallow ploughing and shallow ploughless tillage (9-27%) compared to deep ploughing. The lowest CO2 emission was determined in the deep-ploughed soil, and the highest under shallow ploughless tillage. The addition of organic fertilisers tended to increase CO2 emission, but there was no statistically significant difference between the different types of organic fertilisers. The crop yield was larger in the deep-ploughed soil compared to the shallow and shallow ploughless tillage.

Keywords: reduced tillage, soil structure, soil pH, biological activity, crop productivity

Procedia PDF Downloads 262
4664 A Comparative Study of k-NN and MLP-NN Classifiers Using GA-kNN Based Feature Selection Method for Wood Recognition System

Authors: Uswah Khairuddin, Rubiyah Yusof, Nenny Ruthfalydia Rosli

Abstract:

This paper presents a comparative study between a k-Nearest Neighbour (k-NN) and a Multi-Layer Perceptron Neural Network (MLP-NN) classifier, using a Genetic Algorithm (GA) as the feature selector, for a wood recognition system. The features were extracted from the images using the Grey Level Co-occurrence Matrix (GLCM). The use of GA-based feature selection is mainly to ensure that the database used for training the wood species pattern classifier consists of only optimized features. The feature selection process is aimed at selecting only the most discriminating features of the wood species, to reduce confusion for the pattern classifier. This feature selection approach keeps the 'good' features that maximize the inter-class distance and minimize the intra-class distance. A wrapper GA is used with the k-NN classifier as the fitness evaluator (GA-kNN). The results show that k-NN is the best choice of classifier because it uses a very simple distance calculation algorithm, and classification tasks can be done in a short time with good classification accuracy.
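
A compact wrapper-GA loop is sketched below, with k-NN cross-validation accuracy as the fitness over bit-mask chromosomes; for brevity it uses mutation-only reproduction and synthetic stand-ins for the GLCM features.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Synthetic stand-in for GLCM features of several wood species.
X, y = make_classification(n_samples=300, n_features=20, n_informative=6,
                           n_classes=4, random_state=0)

def fitness(mask):
    """Wrapper fitness: k-NN cross-validation accuracy on selected features."""
    if not mask.any():
        return 0.0
    knn = KNeighborsClassifier(n_neighbors=3)
    return cross_val_score(knn, X[:, mask], y, cv=3).mean()

pop = rng.random((16, X.shape[1])) < 0.5  # random bit-mask chromosomes
for gen in range(15):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-8:]]    # keep the best half
    children = parents.copy()
    flip = rng.random(children.shape) < 0.05  # bit-flip mutation
    children ^= flip
    pop = np.vstack([parents, children])

best = max(pop, key=fitness)
print("selected features:", np.flatnonzero(best), "accuracy:", fitness(best))
```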

Keywords: feature selection, genetic algorithm, optimization, wood recognition system

Procedia PDF Downloads 538
4663 Recovery of Fried Soybean Oil Using Bentonite as an Adsorbent: Optimization, Isotherm and Kinetics Studies

Authors: Prakash Kumar Nayak, Avinash Kumar, Uma Dash, Kalpana Rayaguru

Abstract:

Soybean oil is one of the most widely consumed cooking oils worldwide. Deep-fat frying of foods at higher temperatures adds unique flavour, golden brown colour, and crispy texture to foods, but it brings about various changes in the oil, such as hydrolysis, oxidation, hydrogenation, and thermal alteration. The peroxide value (PV) is one of the most important factors affecting the quality of deep-fat fried oil. Using bentonite as an adsorbent, the PV can be reduced, thereby improving the quality of the soybean oil. In this study, operating parameters such as the heating time of the oil (10, 15, 20, 25 & 30 h), contact time (5, 10, 15, 20, 25 h), and concentration of adsorbent (0.25, 0.5, 0.75, 1.0 and 1.25 g/100 ml of oil) were optimized by response surface methodology (RSM), considering the percentage reduction of PV as the response. Adsorption data were analysed by fitting with the Langmuir and Freundlich isotherm models. The results show that the Langmuir model gives the best fit compared to the Freundlich model. The adsorption process was also found to follow a pseudo-second-order kinetic model.
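
The isotherm comparison can be reproduced with a few lines of scipy; the equilibrium data below are invented, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0])  # equilibrium concentration
qe = np.array([1.8, 3.5, 5.2, 6.9, 8.1])     # amount adsorbed

def langmuir(C, qmax, KL):
    return qmax * KL * C / (1 + KL * C)

def freundlich(C, KF, n):
    return KF * C ** (1 / n)

# Fit both isotherms and compare goodness of fit via R^2.
for name, f, p0 in [("Langmuir", langmuir, (10, 0.1)),
                    ("Freundlich", freundlich, (1, 2))]:
    params, _ = curve_fit(f, Ce, qe, p0=p0)
    ss_res = np.sum((qe - f(Ce, *params)) ** 2)
    r2 = 1 - ss_res / np.sum((qe - qe.mean()) ** 2)
    print(f"{name}: params={np.round(params, 3)}, R^2={r2:.4f}")
```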

Keywords: bentonite, Langmuir isotherm, peroxide value, RSM, soybean oil

Procedia PDF Downloads 370
4662 Performance Estimation of Two Port Multiple-Input and Multiple-Output Antenna for Wireless Local Area Network Applications

Authors: Radha Tomar, Satish K. Jain, Manish Panchal, P. S. Rathore

Abstract:

In the presented work, an inset-fed microstrip patch antenna (IFMPA) based two-port MIMO antenna system is proposed, suitable for wireless local area network (WLAN) applications. The IFMPA was designed and optimized for 2.4 GHz and applied for MIMO formation. The optimized parameters of the proposed IFMPA were used for the fabrication of the antenna and the two-port MIMO in a laboratory. The fabricated MIMO antenna was tested experimentally for performance parameters such as envelope correlation coefficient (ECC), mean effective gain (MEG), diversity gain (DG), channel capacity loss (CCL), and multiplexing efficiency (ME), and the results were compared with the parameters extracted from the simulated S-parameters to validate them. The simulated and experimentally measured plots and numerical values of these MIMO performance parameters agree closely with each other. This shows the success of the MIMO antenna design methodology.
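
The ECC can be computed directly from two-port S-parameters with the standard formula for lossless antennas; the sample values below are placeholders, not the fabricated antenna's measurements.

```python
import numpy as np

# Example complex S-parameters at 2.4 GHz (placeholders).
S11 = -0.10 + 0.05j
S12 = 0.02 - 0.01j
S21 = 0.02 - 0.01j
S22 = -0.12 + 0.04j

# ECC from S-parameters (Blanch et al. formula, lossless-antenna assumption).
num = abs(np.conj(S11) * S12 + np.conj(S21) * S22) ** 2
den = (1 - abs(S11) ** 2 - abs(S21) ** 2) * (1 - abs(S22) ** 2 - abs(S12) ** 2)
ecc = num / den

# Apparent diversity gain derived from the ECC.
dg = 10 * np.sqrt(1 - ecc ** 2)
print(f"ECC = {ecc:.5f}, DG = {dg:.3f} dB")
```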

Keywords: multiple-input and multiple-output, wireless local area network, vector network analyzer, envelope correlation coefficient

Procedia PDF Downloads 49
4661 Prediction of Structural Response of Reinforced Concrete Buildings Using Artificial Intelligence

Authors: Juan Bojórquez, Henry E. Reyes, Edén Bojórquez, Alfredo Reyes-Salazar

Abstract:

This paper addresses the use of artificial intelligence to obtain the structural reliability of reinforced concrete buildings. For this purpose, artificial neural networks (ANN) are developed to predict seismic demand hazard curves. In order to have enough input-output data to train the ANN, a set of reinforced concrete buildings (low-, mid-, and high-rise) is designed; then a probabilistic seismic hazard analysis is carried out to obtain the seismic demand hazard curves. The results are used as input-output data to train the ANN in a feedforward backpropagation model. The values of the seismic demand hazard curves predicted by the ANN are then compared with those obtained from conventional methods. Finally, it is concluded that the computation time is significantly lower and that the predictions obtained from the ANN are accurate in comparison to the values obtained from the conventional methods.
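
The surrogate-modeling idea can be sketched as below, where a synthetic generator stands in for the probabilistic seismic hazard analysis outputs; the input features and the functional form are invented for illustration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 2000
# Inputs: storeys, fundamental period (s), drift demand level (all invented).
X = np.column_stack([rng.integers(3, 25, n),
                     rng.uniform(0.3, 2.5, n),
                     rng.uniform(0.005, 0.05, n)])
# Target: log annual exceedance rate from a made-up smooth function plus noise,
# standing in for the hazard-analysis results used to train the real ANN.
y = -2.0 - 1.5 * np.log(X[:, 2] / 0.01) - 0.1 * X[:, 1] + rng.normal(0, 0.05, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
ann = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=3000, random_state=0),
)
ann.fit(X_tr, y_tr)
print("R^2 on held-out data:", ann.score(X_te, y_te))
```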

Keywords: structural reliability, seismic design, machine learning, artificial neural network, probabilistic seismic hazard analysis, seismic demand hazard curves

Procedia PDF Downloads 192