Search results for: MIMO (Multiple Input Multiple Output)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7993

7423 Multiple Etiologies and Incidences of Co-Infections in Childhood Diarrhea in a Hospital Based Screening Study in Odisha, India

Authors: Arpit K. Shrivastava, Nirmal K. Mohakud, Subrat Kumar, Priyadarshi S. Sahu

Abstract:

Acute diarrhea is one of the major causes of morbidity and mortality among children less than five years of age. Multiple etiologies have been implicated in infectious gastroenteritis causing acute diarrhea. In our study, fecal samples (n=165) were collected from children (<5 years) presenting with symptoms of acute diarrhea. Samples were screened for viral, bacterial, and parasitic etiologies, including Rotavirus, Adenovirus, Diarrhoeagenic Escherichia coli (EPEC, EHEC, STEC, O157, O111), Shigella spp., Salmonella spp., Vibrio cholerae, Cryptosporidium spp., and Giardia spp. Overall, 57% of children below 5 years of age with acute diarrhea were positive for at least one infectious etiology. Diarrhoeagenic Escherichia coli was the major etiological agent (29.09%), followed by Rotavirus (24.24%), Shigella (21.21%), Adenovirus (5.45%), Cryptosporidium (2.42%), and Giardia (0.60%). Among the different DEC strains, EPEC was detected at a significantly higher rate in children <2 years than in the >2 years age group (p=0.001). Concurrent infections with two or more pathogens were observed in 47 of 165 (28.48%) cases, with a predominant incidence in children <2 years old (66.66%) compared to children in the 2 to 5 years age group. Co-infection of Rotavirus with Shigella was the most frequent combination, detected in 17.94% of cases, followed by Rotavirus with EPEC (15.38%) and Shigella with STEC (12.82%). Detection of multiple infectious etiologies and diagnosis of the correct causative agent(s) can greatly improve the management of acute childhood diarrhea. In the future, more studies focusing on the detection of cases with concurrent infections should be carried out, as the etiological agents may complement each other's strategies of pathogenesis, resulting in severe diarrhea.

Keywords: children, co-infection, infectious diarrhea, Odisha

Procedia PDF Downloads 329
7422 A Case Comparative Study of Infant Mortality Rate in North-West Nigeria

Authors: G. I. Onwuka, A. Danbaba, S. U. Gulumbe

Abstract:

This study investigated the infant mortality rate observed at a general hospital in Kaduna South, Kaduna State, North West Nigeria, and examined the causes of infant mortality. The data used for this analysis were collected at the statistics unit of the hospital. The analysis was carried out using the multiple linear regression technique, which showed a linear relationship between the dependent variable (deaths) and the independent variables (malaria, measles, anaemia, and coronary heart disease). The resultant model also revealed that a unit increment in each of these diseases would result in a unit increment in deaths recorded; 98.7% of the total variation in mortality is explained by the model. The highest mortality was recorded in July 2005 and the lowest in October 2009. Recommendations were made based on the results of the study.
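As an illustration of the multiple linear regression technique described above, the sketch below fits a model by the normal equations in plain Python. The predictor matrix and response vector are generic placeholders, not the hospital's data, which the abstract does not reproduce.

```python
# Illustrative multiple linear regression via the normal equations.
# The data passed in are synthetic placeholders, not the study's records.

def transpose(m):
    return [list(row) for row in zip(*m)]

def matmul(a, b):
    bt = transpose(b)
    return [[sum(x * y for x, y in zip(row, col)) for col in bt] for row in a]

def solve(a, b):
    # Gaussian elimination with partial pivoting; solves A x = b.
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def ols(X, y):
    # beta = (X'X)^{-1} X'y, with an intercept column prepended.
    Xa = [[1.0] + list(row) for row in X]
    Xt = transpose(Xa)
    XtX = matmul(Xt, Xa)
    Xty = [sum(x * yi for x, yi in zip(col, y)) for col in Xt]
    return solve(XtX, Xty)
```

With real data, the fitted coefficients would play the role of the per-disease "unit increment" effects reported in the abstract.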

Keywords: infant mortality rate, multiple linear regression, diseases, serial correlation

Procedia PDF Downloads 319
7421 The Effect of Six-Weeks of Elastic Exercises with Reactionary Ropes on Nerve Conduction Velocity and Balance in Females with Multiple Sclerosis

Authors: Mostafa Sarabzadeh, Masoumeh Helalizadeh, Seyyed Mahmoud Hejazi

Abstract:

Multiple sclerosis (MS) is a chronic, progressive disease of the central nervous system that impairs sensory and motor function. Because the balance problems of these patients are related to impaired nerve conduction from the central nervous system to the limbs, and because elastic bands can induce changes at neuromuscular junctions through momentary reactive actions, the aim of this research was to evaluate the effect of elastic training with reactionary ropes on nerve conduction velocity (in the upper and lower limbs) and functional balance in female patients with multiple sclerosis. This quasi-experimental study used a pre- and post-test design. The sample consisted of 16 women with MS, aged 25-40 years, at low and intermediate levels of disability (EDSS 1-4, Expanded Disability Status Scale), who were divided randomly into elastic and control groups. The training program of the experimental group lasted six weeks, with 3 sessions per week of elastic exercises with reactionary ropes. Electroneurography parameters (nerve conduction velocity and latency) of upper- and lower-limb nerves (median, tibial, sural, peroneal) and balance were assessed before and after the training period, using an electroneurography (ENG) system and the Timed Up and Go (TUG) functional test, respectively. Dependent and independent t-tests were used to analyze the data (significance level p<0.05). The results showed a significant increase in the nerve conduction velocity of the sural (p=0.001), peroneal (p=0.01), and median (p=0.03) nerves, but not the tibial, and improvements in the latency of the tibial (p=0.00), peroneal (p=0.00), and median (p=0.00) nerves, but not the sural. The TUG test also showed a significant decrease in execution time (p=0.001).
Generally, these data indicate that modern training with elastic bands can enhance nerve conduction velocity and balance in patients with MS, and may thereby reduce symptoms, promote mobility, and ultimately improve life expectancy in these patients.

Keywords: balance, elastic bands, multiple sclerosis, nerve conduction velocity

Procedia PDF Downloads 212
7420 Development of a Decision Model to Optimize Total Cost in Food Supply Chain

Authors: Henry Lau, Dilupa Nakandala, Li Zhao

Abstract:

All along the length of the supply chain, fresh food firms face the challenge of managing both product quality, due to the perishable nature of the products, and product cost. This paper develops a method to assist logistics managers upstream in the fresh food supply chain in making cost-optimized transportation decisions, with the objective of minimizing the total cost while maintaining the quality of food products above acceptable levels. Considering the case of multiple fresh food products collected from multiple farms and transported to a warehouse or a retailer, this study develops a total cost model that includes the various costs incurred during transportation. The practical application of the model is illustrated using several computational intelligence approaches: Genetic Algorithms (GA), Fuzzy Genetic Algorithms (FGA), and an improved Simulated Annealing (SA) procedure applied with a repair mechanism for efficiency benchmarking. We demonstrate the practical viability of these approaches through a simulation study based on pertinent data and evaluate the simulation outcomes. All three approaches are adoptable; however, the performance evaluation showed that the FGA is more likely to produce a better performance than GA or SA. This study provides a pragmatic approach for supporting logistics and supply chain practitioners in the fresh food industry in making important decisions on the arrangements and procedures related to transporting multiple fresh food products from multiple farms to a warehouse in a cost-effective way without compromising product quality.
This study extends the literature on cold supply chain management by investigating cost and quality optimization in a multi-product scenario from farms to a retailer, minimizing cost while keeping quality above expected levels at delivery. The scalability of the proposed generic function enables application to alternative situations in practice, such as different storage environments and transportation conditions.
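The simulated-annealing-with-repair idea benchmarked above can be sketched in miniature: anneal over a binary selection, and whenever a candidate violates a capacity constraint, a repair step deselects items until it is feasible. The cost function and feasibility rule below are toy stand-ins, not the paper's total cost model.

```python
# Minimal simulated annealing with a repair mechanism (toy problem, not the
# paper's transportation model): select at most `capacity` items, paying each
# item's cost plus a penalty for unused capacity.
import math
import random

def repair(x, capacity):
    # Deselect items until the capacity constraint holds (a simple repair rule).
    while sum(x) > capacity:
        x[x.index(1)] = 0
    return x

def anneal(costs, capacity, steps=2000, t0=1.0, seed=0):
    rng = random.Random(seed)
    n = len(costs)

    def cost(v):
        # Pay for each selected item; penalize unused capacity.
        return sum(c for c, s in zip(costs, v) if s) + 5.0 * (capacity - sum(v))

    x = repair([1] * n, capacity)
    best, best_c = x[:], cost(x)
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9  # linear cooling schedule
        cand = x[:]
        i = rng.randrange(n)
        cand[i] = 1 - cand[i]          # flip one selection bit
        cand = repair(cand, capacity)  # restore feasibility before evaluating
        d = cost(cand) - cost(x)
        if d < 0 or rng.random() < math.exp(-d / t):
            x = cand
            if cost(x) < best_c:
                best, best_c = x[:], cost(x)
    return best, best_c
```

In the paper's setting the solution vector would encode transport arrangements and the repair step would restore quality or capacity feasibility; the acceptance rule is the same.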

Keywords: cost optimization, food supply chain, fuzzy sets, genetic algorithms, product quality, transportation

Procedia PDF Downloads 213
7419 SME Credit Financing, Financial Development and Economic Growth: A VAR Approach to the Nigerian Economy

Authors: A. Bolaji Adesoye, Alimi Olorunfemi

Abstract:

This paper examines the impact of small and medium-scale enterprise (SME) credit financing and financial market development, and their shocks, on the output growth of Nigeria. The study estimated a VAR model for Nigeria using annual data series for 1970-2013. Unit root and cointegration tests were carried out. The study also explores impulse response functions (IRFs) and forecast error variance decompositions (FEVDs) in a system that includes output, commercial bank loans to SMEs, domestic credit to the private sector by banks, money supply, the lending rate, and investment. Findings suggest that shocks in commercial bank credit to SMEs have a major impact on output changes in Nigeria. Money supply shocks also have a sizeable impact on output growth variations amidst other financial instruments. Lastly, the neutrality of investment does not hold in Nigeria, as investment also affects output fluctuations.
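An impulse response function of the kind explored above can be sketched for a VAR(1), y_t = A y_{t-1} + e_t, by propagating a one-time unit shock through the coefficient matrix. The matrix A below is a hypothetical stand-in, not one estimated from the Nigerian data.

```python
# Sketch: impulse responses of a VAR(1) system, y_t = A y_{t-1} + e_t.
# A one-time unit shock is traced forward by repeated multiplication with A.

def irf(A, shock, horizons):
    """Return the response path [shock, A·shock, A²·shock, ...]."""
    responses = [shock]
    for _ in range(horizons):
        prev = responses[-1]
        responses.append([sum(a * p for a, p in zip(row, prev)) for row in A])
    return responses
```

For a stable VAR the responses decay toward zero, which is how the persistence of, say, an SME-credit shock on output would be read off the IRF.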

Keywords: SMEs financing, financial development, investment, output, Nigeria

Procedia PDF Downloads 402
7418 Geared Turbofan with Water Alcohol Technology

Authors: Abhinav Purohit, Shruthi S. Pradeep

Abstract:

Today's aviation industry uses turbofan engines (a combination of turboprop and turbojet), which meet the obligatory requirements of being fuel efficient and producing enough thrust to propel an aircraft. But one can imagine increasing the work output of this machine by reducing the input power. In striving to improve technologies, especially to augment the efficiency of the engine, some adaptations can lead to new concepts by introducing a step change in turbofan engine development. One promising concept is to decouple the fan, with the help of a reduction gearbox in a two-spool shaft engine, from the rest of the machinery, to get more work output with maximum efficiency by reducing the load on the turbine shaft. By adopting this configuration, we gain an additional degree of freedom to better optimize each component at different speeds. Since the components run at different speeds, a preferable efficiency can be attained. Introducing a water-alcohol mixture to this concept would further help achieve better results.

Keywords: emissions, fuel consumption, more power, turbofan

Procedia PDF Downloads 431
7417 Analyzing Business Model Choices and Sustainable Value Capturing: A Multiple Case Study of Sharing Economy Business Models

Authors: Minttu Laukkanen, Janne Huiskonen

Abstract:

This study investigates sharing economy business models as examples of sustainable business models. The aim is to contribute to the limited literature on the sharing economy in connection with sustainable business models by explaining value capturing in sharing economy business models. Specifically, this research answers the following question: how do business model choices affect captured sustainable value? A multiple case study approach is applied. Twenty different successful sharing economy business models focusing on consumer business and covering four main areas (accommodation, mobility, food, and consumer goods) are selected for analysis. Secondary data available on company websites, previous research, reports, and other public documents are used. All twenty cases are analyzed through a sharing economy business model framework and a sustainable value analysis framework using qualitative data analysis. This study presents general sharing economy business model value attributes and their specifications, i.e., sustainable value propositions for different stakeholders, and further explains the sustainability impacts of different sharing economy business models through captured and uncaptured value. In conclusion, this study shows how business model choices affect sustainable value capturing through eight business model attributes identified in the study. The paper contributes to research on sustainable business models and the sharing economy by examining how business model choices affect captured sustainable value. It highlights the importance of careful analysis of business models and sustainability impacts, including the triple bottom line, multiple stakeholders, captured and uncaptured value perspectives, and sustainability trade-offs. It is not self-evident that sharing economy business models advance sustainability, and business model choices do matter.

Keywords: sharing economy, sustainable business model innovation, sustainable value, value capturing

Procedia PDF Downloads 167
7416 Multithreading/Multiprocessing Simulation of The International Space Station Multibody System Using A Divide and Conquer Dynamics Formulation with Flexible Bodies

Authors: Luong A. Nguyen, Elihu Deneke, Thomas L. Harman

Abstract:

This paper describes a multibody dynamics algorithm formulated for parallel implementation on multiprocessor computing platforms using the divide-and-conquer approach. The system of interest is a general topology of rigid and elastic articulated bodies with or without loops. The algorithm is an extension of Featherstone’s divide and conquer approach to include the flexible-body dynamics formulation. The equations of motion, configured for the International Space Station (ISS) with its robotic manipulator arm as a system of articulated flexible bodies, are implemented in separate computer processors. The performance of this divide-and-conquer algorithm implementation in multiple processors is compared with an existing method implemented on a single processor.

Keywords: multibody dynamics, multiple processors, multithreading, divide-and-conquer algorithm, computational efficiency, flexible body dynamics

Procedia PDF Downloads 325
7415 Geopolitical Architecture: The Strategic Complex in Indo Pacific Region

Authors: Muzammil Dar

Abstract:

The confluence of transnational interests and the divergent approaches followed by multiple actors has surrounded the Indo-Pacific region with a myriad of strategic complexes: geopolitical, geo-economic, and security-related. This paper thus makes a humble attempt to understand the Indo-Pacific strategic predicament from an Asia-Pacific perspective. The portmanteau of the Indo-Pacific strategic gamble involves multiple actors, from global powers to regional ones. On Indo-Pacific waters flow not only trade relations; tides of conflict and controversy are also striking these actors against each other. Alliance formation and infrastructure building carry built-in threat perceptions between rivals. The assertiveness of China as a reality, India's ideological doctrine of peace and friendship, and American rebalancing against China all stand out clearly on the Indo-Pacific strategic canvas. ASEAN and Japan, too, show oscillating posturing in this strategic dilemma. The aim and objective of the paper is to sketch out the prospects and prejudices of the Indo-Pacific strategic complex.

Keywords: Indo Pacific, Asia Pacific, security and growth for all in the region, SAGAR, ASEAN China

Procedia PDF Downloads 142
7414 Development of an Appropriate Method for the Determination of Multiple Mycotoxins in Pork Processing Products by UHPLC-TCFLD

Authors: Jason Gica, Yi-Hsieng Samuel Wu, Deng-Jye Yang, Yi-Chen Chen

Abstract:

Mycotoxins, harmful secondary metabolites produced by certain fungal species, pose significant risks to animals and humans worldwide. Their stability leads to contamination during grain harvesting, transportation, and storage, as well as in processed food products. The prevalence of mycotoxin contamination has attracted significant attention due to its adverse impact on food safety and global trade. The secondary contamination pathway through animal products has been identified as an important route of exposure, posing health risks to livestock and to humans consuming contaminated products. Pork, one of the most highly consumed meat products in Taiwan according to the National Food Consumption Database, plays a critical role in the nation's diet and economy. Given its substantial consumption, pork processing products are a significant component of the food supply chain and a potential source of mycotoxin contamination. This study is important for formulating effective regulations and strategies to mitigate mycotoxin-related risks in the food supply chain. By establishing a reliable analytical method, this research contributes to safeguarding public health and enhancing the quality of pork processing products. The findings will serve as valuable guidance for policymakers, food industries, and consumers in ensuring a safer food supply chain in the face of emerging mycotoxin challenges. An innovative and efficient analytical approach is proposed using ultra-high performance liquid chromatography coupled with a temperature-controlled fluorescence light detector (UHPLC-TCFLD) to determine multiple mycotoxins in pork meat samples, owing to its exceptional capacity to detect multiple mycotoxins at very low concentrations, making it highly sensitive and reliable for comprehensive mycotoxin analysis.
Additionally, its ability to detect multiple mycotoxins simultaneously in a single run significantly reduces the time and resources required for analysis, making it a cost-effective solution for monitoring mycotoxin contamination in pork processing products. The research aims to optimize an efficient QuEChERS mycotoxin extraction method and rigorously validate its accuracy and precision. The results will provide crucial insights into mycotoxin levels in pork processing products.

Keywords: multiple-mycotoxin analysis, pork processing products, QuEChERs, UHPLC-TCFLD, validation

Procedia PDF Downloads 58
7413 Time and Cost Prediction Models for Language Classification Over a Large Corpus on Spark

Authors: Jairson Barbosa Rodrigues, Paulo Romero Martins Maciel, Germano Crispim Vasconcelos

Abstract:

This paper presents an investigation of the performance impacts of varying five factors (input data size, node number, cores, memory, and disks) when applying a distributed implementation of Naïve Bayes for text classification of a large corpus on the Spark big data processing framework. Problem: The algorithm's performance depends on multiple factors, and knowing the effects of each factor beforehand becomes especially critical as hardware is priced by time slice in cloud environments. Objectives: To explain the functional relationship between factors and performance, and to develop linear predictor models for time and cost. Methods: The study applies the statistical principles of Design of Experiments (DoE), particularly a randomized two-level fractional factorial design with replications. The research involved 48 real clusters with different hardware arrangements. The metrics were analyzed using linear models for screening, ranking, and measuring each factor's impact. Results: Our findings include prediction models and show some non-intuitive results: the small influence of cores, the neutrality of memory and disks with respect to total execution time, and the non-significant impact of input data scale on costs, although it notably impacts execution time.
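The factor-screening step in a two-level design reduces to a simple contrast: the main effect of a factor is the mean response at its high (+1) level minus the mean at its low (-1) level. The sketch below illustrates this on a tiny full factorial with synthetic responses, not the paper's 48-cluster runs.

```python
# Main-effect estimation in a two-level factorial design (DoE screening).
# `design` rows use the usual -1/+1 coding; responses are synthetic.

def main_effect(design, response, factor):
    """Average response at +1 minus average response at -1 for one factor."""
    hi = [y for row, y in zip(design, response) if row[factor] == 1]
    lo = [y for row, y in zip(design, response) if row[factor] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)
```

Ranking factors by the magnitude of these effects is how a screening design separates influential factors (like data size on time) from neutral ones (like memory and disks).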

Keywords: big data, design of experiments, distributed machine learning, natural language processing, spark

Procedia PDF Downloads 110
7412 Grating Scale Thermal Expansion Error Compensation for Large Machine Tools Based on Multiple Temperature Detection

Authors: Wenlong Feng, Zhenchun Du, Jianguo Yang

Abstract:

To decrease the grating scale thermal expansion error, a novel method based on multiple temperature detection is proposed. Several temperature sensors are installed on the grating scale, and their temperatures are recorded. The temperature at every point on the grating scale is calculated by interpolating between adjacent sensors. According to the thermal expansion principle, the grating scale thermal expansion error model can be established by integrating over the variations of position and temperature. A novel compensation method is proposed in this paper. By applying the established error model, the grating scale thermal expansion error is decreased by 90% compared with no compensation. The residual positioning error of the grating scale is less than 15 μm per 10 m, and the accuracy of the machine tool is significantly improved.
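The interpolate-then-integrate model described above can be sketched directly: linearly interpolate the sensor temperatures along the scale, then accumulate the expansion alpha·(T(s) - T_ref) ds up to the reading position. The sensor layout and expansion coefficient below are illustrative values, not the paper's.

```python
# Sketch of the integral thermal expansion model for a grating scale.
# Sensor positions/temperatures and alpha (per °C, steel-like) are illustrative.

def interp(positions, temps, s):
    # Piecewise-linear temperature profile between adjacent sensors.
    for (p0, t0), (p1, t1) in zip(zip(positions, temps),
                                  zip(positions[1:], temps[1:])):
        if p0 <= s <= p1:
            return t0 + (t1 - t0) * (s - p0) / (p1 - p0)
    raise ValueError("position outside sensor span")

def expansion_error(positions, temps, x, alpha=11.5e-6, t_ref=20.0, n=1000):
    # Trapezoidal integration of alpha * (T(s) - t_ref) ds over [0, x].
    total = 0.0
    for i in range(n + 1):
        s = x * i / n
        w = 0.5 if i in (0, n) else 1.0
        total += w * (interp(positions, temps, s) - t_ref)
    return alpha * total * (x / n)
```

Compensation then amounts to subtracting `expansion_error` at each commanded position.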

Keywords: thermal expansion error of grating scale, error compensation, machine tools, integral method

Procedia PDF Downloads 360
7411 Life in Bequia in the Era of Climate Change: Societal Perception of Adaptation and Vulnerability

Authors: Sherry Ann Ganase, Sandra Sookram

Abstract:

This study examines adaptation measures and the factors that influence adaptation decisions in Bequia by using multiple linear regression and a structural equation model (SEM). Using survey data, the results suggest that households are knowledgeable and concerned about climate change but lack knowledge about the measures needed to adapt. The findings from the SEM suggest a positive relationship between vulnerability and adaptation and between vulnerability and perception, along with a negative relationship between perception and adaptation. This suggests that awareness of the terms associated with climate change, and knowledge about climate change, are insufficient for implementing adaptation measures; instead, the risk and importance placed on climate change, the vulnerability experienced through household flooding and drainage, and the expected threat of future sea-level rise are the main factors that influence the adaptation decision. The results obtained in this study are beneficial to all, as adaptation requires a collective effort by stakeholders.

Keywords: adaptation, Bequia, multiple linear regression, structural equation model

Procedia PDF Downloads 454
7410 Order Picking Problem: An Exact and Heuristic Algorithms for the Generalized Travelling Salesman Problem With Geographical Overlap Between Clusters

Authors: Farzaneh Rajabighamchi, Stan van Hoesel, Christof Defryn

Abstract:

The generalized traveling salesman problem (GTSP) is an extension of the traveling salesman problem (TSP) in which the set of nodes is partitioned into clusters, and the salesman must visit exactly one node per cluster. In this research, we apply the definition of the GTSP to an order picker routing problem with multiple locations per product. As such, each product represents a cluster, and its corresponding nodes are the locations at which the product can be retrieved. To pick a certain product item from the warehouse, the picker needs to visit one of these locations during the pick tour. As all products are scattered throughout the warehouse, the product clusters are not separated geographically. We propose an exact LP model as well as heuristic and meta-heuristic solution algorithms for the order picking problem with multiple product locations.
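A simple constructive heuristic for this clustered setting, in the spirit of the heuristics proposed above (though not necessarily the authors' algorithm), is nearest-neighbour over clusters: from the current position, move to the closest location of any still-unpicked product. The Manhattan distances and coordinates below are made up for illustration.

```python
# Nearest-neighbour sketch for a GTSP-style picking tour: visit exactly one
# location per product cluster, always moving to the closest remaining option.

def dist(a, b):
    # Manhattan distance, a common proxy for aisle travel in a warehouse.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def pick_tour(depot, clusters):
    """clusters: {product: [candidate locations]}; returns (visit order, tour length)."""
    here, remaining, tour, length = depot, dict(clusters), [], 0
    while remaining:
        prod, loc = min(
            ((p, l) for p, locs in remaining.items() for l in locs),
            key=lambda pl: dist(here, pl[1]),
        )
        tour.append((prod, loc))
        length += dist(here, loc)
        here = loc
        del remaining[prod]          # cluster satisfied; skip its other locations
    return tour, length + dist(here, depot)  # close the tour at the depot
```

An exact approach would instead enumerate one-location-per-cluster choices inside an LP/IP model, as the abstract proposes.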

Keywords: warehouse optimization, order picking problem, generalised travelling salesman problem, heuristic algorithm

Procedia PDF Downloads 105
7409 Innovation Trends in Latin America Countries

Authors: José Carlos Rodríguez, Mario Gómez

Abstract:

This paper analyses innovation trends in Latin American countries by means of the number of patent applications filed by residents and non-residents during the period 1965 to 2012. Making use of patent data released by the World Intellectual Property Organization (WIPO), we search for the presence of multiple structural changes in the patent application series of Argentina, Brazil, Chile, and Mexico. Such changes may suggest that firms' innovative activity has been modified as a result of implementing a particular science, technology and innovation (STI) policy. Accordingly, the new regulations implemented in these countries during the 1980s and 1990s have influenced their intellectual property regimes. The question driving this research is thus: how have STI policies in these countries affected their innovation activity? The results achieved in this research confirm the existence of multiple structural changes in the series of patent applications resulting from the STI policies implemented in these countries.
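The core of a structural-change test can be illustrated with a stripped-down single-break scan: choose the split point of a series that most reduces the within-segment sum of squares. Real multiple-break procedures iterate and test this idea formally; the series below is synthetic, not WIPO data.

```python
# Minimal single-break detection by least squares: the best break index is the
# split that minimizes the combined within-segment sum of squared deviations.

def sse(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs)

def best_break(series, min_seg=2):
    """Index k splitting series[:k] / series[k:] with minimal total SSE."""
    return min(
        range(min_seg, len(series) - min_seg + 1),
        key=lambda k: sse(series[:k]) + sse(series[k:]),
    )
```

Applied to a patent-application series, a detected break date would then be compared against the timing of the STI policy reforms.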

Keywords: econometric methods, innovation activity, Latin America countries, patents, science, technology and innovation policy

Procedia PDF Downloads 275
7408 Exploring the Underlying Factors of Student Dropout in Makawanpur Multiple Campus: A Comprehensive Analysis

Authors: Uttam Aryal, Shekhar Thapaliya

Abstract:

This research paper presents a comprehensive analysis of the factors contributing to student dropout at Makawanpur Multiple Campus, utilizing primary data collected directly from dropped-out students as well as regular students and academic staff. Employing a mixed-method approach combining qualitative and quantitative methods, this study delves into the complicated issue of student dropout. Data collection methods included surveys, interviews, and a thorough examination of academic records covering multiple academic years. The study focused on students who left their programs prematurely, as well as current students and academic staff, providing a well-rounded perspective on the issue. The analysis reveals a nuanced understanding of the factors influencing student dropout, encompassing both academic and non-academic dimensions. These factors include academic challenges, personal choices, socioeconomic barriers, peer influences, and institution-related issues. Importantly, the study highlights the most influential factors for dropout, such as the pursuit of education abroad, financial constraints, and employment opportunities, shedding light on the complex web of circumstances that leads students to discontinue their education. The insights derived from this study offer actionable recommendations for campus administrators, policymakers, and educators to develop targeted interventions aimed at reducing dropout rates and improving student retention. The study underscores the importance of addressing the diverse needs and challenges faced by students, with the ultimate goal of fostering a supportive academic environment that encourages student success and program completion.

Keywords: drop out, students, factors, opportunities, challenges

Procedia PDF Downloads 59
7407 Cellular Architecture of Future Wireless Communication Networks

Authors: Mohammad Yahaghifar

Abstract:

Wireless system designers face a continuously increasing demand for the high data rates and mobility required by new wireless applications. Future-generation cellular wireless networks are envisioned to overcome the fundamental challenges of existing cellular networks: higher data rates, excellent end-to-end performance, and user coverage in hot spots and crowded areas, with lower latency, energy consumption, and cost per information transfer. In this paper, we propose a potential cellular architecture that separates indoor and outdoor scenarios, and we discuss various promising technologies for future wireless communication systems, such as massive MIMO, energy-efficient communications, cognitive radio networks, and visible light communications. We also discuss 5G, the next generation of wireless networks.

Keywords: future challenges in networks, cellular architecture, visible light communication, 5G wireless technologies, spatial modulation, massive MIMO, cognitive radio network, green communications

Procedia PDF Downloads 479
7406 Automatic Censoring in K-Distribution for Multiple Targets Situations

Authors: Naime Boudemagh, Zoheir Hammoudi

Abstract:

The parameter estimation of the K-distribution is an essential part of radar detection. In fact, the presence of interfering targets in reference cells causes a decrease in detection performance: in such situations, the estimates of the shape and scale parameters are far from the actual values. In order to avoid interfering targets, we propose an Automatic Censoring (AC) algorithm for radar interfering targets in the K-distribution. The censoring technique used in this work offers good discrimination between homogeneous and non-homogeneous environments. The homogeneous population is then used to estimate the unknown parameters by the classical Method of Moments (MOM). The AC algorithm needs no prior information about the clutter parameters, nor does it require the number or positions of the interfering targets. The accuracy of the parameter estimates obtained by this algorithm is validated and compared against various actual values of the shape parameter using Monte Carlo simulations; the latter show that the probabilities of censoring in multiple-target situations are in good agreement.
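A method-of-moments shape estimate for the K-distribution can be sketched from intensity moments: under one common parameterization, K-distributed intensity z with shape v satisfies E[z²]/E[z]² = 2(1 + 1/v), which inverts to the estimator below. Parameterizations vary across the radar literature, so treat this as an illustrative convention rather than the paper's exact formulation.

```python
# Method-of-moments sketch for the K-distribution shape parameter, using the
# normalized second intensity moment r = E[z^2]/E[z]^2 = 2(1 + 1/v)  =>
# v_hat = 1 / (r/2 - 1). Assumes the common gamma-texture parameterization.

def mom_shape(z):
    m1 = sum(z) / len(z)
    m2 = sum(x * x for x in z) / len(z)
    r = m2 / (m1 * m1)          # normalized second intensity moment
    return 1.0 / (r / 2.0 - 1.0)
```

In the AC scheme above, this estimator would be applied only to the censored (homogeneous) population, so that interfering targets do not bias r.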

Keywords: parameters estimation, method of moments, automatic censoring, K distribution

Procedia PDF Downloads 369
7405 Extending Image Captioning to Video Captioning Using Encoder-Decoder

Authors: Sikiru Ademola Adewale, Joe Thomas, Bolanle Hafiz Matti, Tosin Ige

Abstract:

This project demonstrates the implementation and use of an encoder-decoder model to perform a many-to-many mapping of video data to text captions. The many-to-many mapping occurs via an input temporal sequence of video frames to an output sequence of words to form a caption sentence. Data preprocessing, model construction, and model training are discussed. Caption correctness is evaluated using 2-gram BLEU scores across the different splits of the dataset. Specific examples of output captions were shown to demonstrate model generality over the video temporal dimension. Predicted captions were shown to generalize over video action, even in instances where the video scene changed dramatically. Model architecture changes are discussed to improve sentence grammar and correctness.
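The 2-gram scoring used above boils down to modified bigram precision: count candidate bigrams, clip each count by its count in the reference, and divide by the total candidate bigrams. The sketch below scores one candidate against one reference; full BLEU additionally combines n-gram orders and applies a brevity penalty.

```python
# Modified 2-gram precision, the core of 2-gram BLEU caption scoring.
from collections import Counter

def bigrams(tokens):
    return list(zip(tokens, tokens[1:]))

def bigram_precision(candidate, reference):
    cand, ref = Counter(bigrams(candidate)), Counter(bigrams(reference))
    if not cand:
        return 0.0
    # Clip each candidate bigram count by its reference count (no credit for
    # repeating a bigram more often than the reference does).
    clipped = sum(min(n, ref[g]) for g, n in cand.items())
    return clipped / sum(cand.values())
```

For video captioning, `candidate` would be the decoder's predicted caption tokens and `reference` the ground-truth caption for the clip.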

Keywords: decoder, encoder, many-to-many mapping, video captioning, 2-gram BLEU

Procedia PDF Downloads 93
7404 Uncertainty Assessment in Building Energy Performance

Authors: Fally Titikpina, Abderafi Charki, Antoine Caucheteux, David Bigaud

Abstract:

The building sector is one of the largest energy consumers, accounting for about 40% of final energy consumption in the European Union. Ensuring building energy performance is a matter of scientific, technological, and sociological importance. To assess a building's energy performance, the consumption predicted or estimated during the design stage is compared with the consumption measured when the building is operational. When evaluating this performance, many buildings show significant differences between calculated and measured consumption. In order to assess the performance accurately and ensure the thermal efficiency of the building, it is necessary to evaluate the uncertainties involved not only in measurement but also those induced by the propagation of dynamic and static input data in the model being used. The evaluation of measurement uncertainty is based on knowledge about both the measurement process and the input quantities that influence the result of the measurement. Measurement uncertainty can be evaluated within the framework of conventional statistics, as presented in the Guide to the Expression of Uncertainty in Measurement (GUM), as well as by Bayesian Statistical Theory (BST). Another choice is the use of numerical methods like Monte Carlo Simulation (MCS). In this paper, we propose to evaluate the uncertainty associated with the use of a simplified model for estimating the energy consumption of a given building. A detailed review and discussion of these three approaches (GUM, MCS, and BST) is given. An office building has been monitored, and multiple sensors have been mounted at candidate locations to get the required data. The monitored zone is composed of six offices and has an overall surface of 102 m². Temperature data, electrical and heating consumption, window openings, and occupancy rate are the features for our research work.
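The MCS route mentioned above is mechanically simple: sample each uncertain model input from its assumed distribution, run the model per sample, and read the mean and standard deviation of the outputs. The one-line model in the test is a toy stand-in for the simplified consumption model, with made-up input uncertainties.

```python
# Monte Carlo uncertainty propagation sketch: Gaussian input uncertainties are
# assumed for illustration; the GUM's analytic propagation would instead
# linearize the model around the input means.
import random
import statistics

def propagate(model, inputs, n=20000, seed=42):
    """inputs: {name: (mean, std)}; returns (mean, std) of the model output."""
    rng = random.Random(seed)
    outs = [
        model(**{k: rng.gauss(mu, sd) for k, (mu, sd) in inputs.items()})
        for _ in range(n)
    ]
    return statistics.mean(outs), statistics.stdev(outs)
```

Comparing this sampled spread with the GUM's first-order propagation is exactly the kind of cross-check the three-approach review above performs.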

Keywords: building energy performance, uncertainty evaluation, GUM, Bayesian approach, Monte Carlo method

Procedia PDF Downloads 454
7403 Erosion Modeling of Surface Water Systems for Long Term Simulations

Authors: Devika Nair, Sean Bellairs, Ken Evans

Abstract:

Flow and erosion modelling provides an avenue for simulating fine suspended sediment in surface water systems such as streams and creeks. Fine suspended sediment is highly mobile, and many contaminants released by catchment disturbance attach themselves to these sediments; a knowledge of fine suspended sediment transport is therefore important in assessing contaminant transport. The CAESAR-Lisflood landform evolution model, which couples a hydrologic model (TOPMODEL) with a hydraulic model (Lisflood), is being used to assess sediment movement in tropical streams resulting from a disturbance in the catchment and to determine the dynamics of sediment quantity in the creek over the years by simulating future scenarios. The accuracy of future simulations depends on calibrating and validating the model against past and present events. Calibration and validation involve finding a combination of model parameters which, when applied, gives model outputs similar to those observed at the real site for the corresponding input data. Calibrating the sediment output of the CAESAR-Lisflood model at the catchment level and using it to study the equilibrium conditions of the landform is an area yet to be explored. Therefore, the aim of the study was to calibrate the CAESAR-Lisflood model and then validate it so that it could be run for future simulations of how the landform evolves over time. To achieve this, the model was run for a rainfall event, with discharge and sediment data supplied at the input point of the catchment, and the model output was compared with the discharge and sediment data observed at the output point of the catchment. The model parameters were then adjusted until the model closely approximated the observed values. It was then validated by running the model for a different set of events and checking that it gave results similar to the observed values. The outcomes demonstrated that while the model can be calibrated well for hydrology (discharge output) throughout the year, the calibration of the sediment output could be improved by allowing parameters to vary with the seasonal vegetation growth at the start and end of the wet season. This study is important for assessing hydrology and sediment movement in seasonal biomes. The understanding of sediment-associated metal dispersion processes in rivers can be applied to help river basin managers more effectively control and remediate catchments affected by present and historical metal mining.
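The calibration loop described above (run the model, compare with observed data, adjust parameters, repeat) can be sketched in miniature. Here a one-parameter toy surrogate stands in for a full CAESAR-Lisflood run, and the "observed" hydrograph is synthetic; this illustrates only the calibrate-against-observations workflow, not the authors' actual procedure or parameters.

```python
import numpy as np

# Synthetic "observed" discharge for one rainfall event (an illustrative
# stand-in for gauged data at the catchment outlet).
t = np.arange(48)  # hours
observed = 5.0 * np.exp(-((t - 12) / 6.0) ** 2) + 0.5

# Toy surrogate for the model: one free parameter scales the hydrograph peak.
# In practice, each candidate parameter set means a full CAESAR-Lisflood run.
def simulate(peak_scale):
    return peak_scale * np.exp(-((t - 12) / 6.0) ** 2) + 0.5

def rmse(sim, obs):
    return np.sqrt(np.mean((sim - obs) ** 2))

# Simple grid search: keep the parameter value that best matches observations.
candidates = np.linspace(1.0, 10.0, 91)
errors = [rmse(simulate(m), observed) for m in candidates]
best = candidates[int(np.argmin(errors))]
print(f"best peak_scale = {best:.1f}, RMSE = {min(errors):.4f}")
```

Validation then repeats the comparison, with the calibrated parameters frozen, on events not used during calibration.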

Keywords: erosion modelling, fine suspended sediments, hydrology, surface water systems

Procedia PDF Downloads 81
7402 Using Self Organizing Feature Maps for Classification in RGB Images

Authors: Hassan Masoumi, Ahad Salimi, Nazanin Barhemmat, Babak Gholami

Abstract:

Artificial neural networks have gained considerable interest as empirical models for their powerful representational capacity and their multiple-input, multiple-output mapping characteristics. In fact, most feed-forward networks with nonlinear nodal functions have been proved to be universal approximators. In this paper, we propose a new supervised method for color image classification based on self-organizing feature maps (SOFM). The algorithm is based on competitive learning and partitions the input space using self-organizing feature maps to introduce the concept of local neighborhoods. Our image classification system takes RGB images as input. Experiments with simulated data showed that the separability of classes increased with increasing training time. In addition, the results show that the proposed algorithm is effective for color image classification.
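A minimal sketch of this kind of approach, under assumptions not stated in the abstract (a 1-D map of ten nodes, a Gaussian neighborhood, and synthetic two-class RGB pixel data), trains a SOFM by competitive learning and then makes it supervised by labeling each node with a majority vote of the samples it wins:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy RGB data: two color classes (reddish vs. bluish pixels), values in [0, 1].
reds = rng.normal([0.8, 0.1, 0.1], 0.05, (100, 3))
blues = rng.normal([0.1, 0.1, 0.8], 0.05, (100, 3))
X = np.clip(np.vstack([reds, blues]), 0, 1)
y = np.array([0] * 100 + [1] * 100)

# Minimal 1-D SOFM: a line of nodes with a shrinking Gaussian neighborhood.
n_nodes = 10
W = rng.uniform(0, 1, (n_nodes, 3))
for epoch in range(30):
    lr = 0.5 * (1 - epoch / 30)                # decaying learning rate
    sigma = max(3.0 * (1 - epoch / 30), 0.5)   # shrinking neighborhood width
    for x in X[rng.permutation(len(X))]:
        bmu = np.argmin(np.linalg.norm(W - x, axis=1))  # best matching unit
        h = np.exp(-((np.arange(n_nodes) - bmu) ** 2) / (2 * sigma**2))
        W += lr * h[:, None] * (x - W)         # pull nodes toward the sample

# Supervised step: label each node by majority vote of the samples it wins.
wins = np.argmin(np.linalg.norm(X[:, None] - W[None], axis=2), axis=1)
node_label = np.array([np.bincount(y[wins == k], minlength=2).argmax()
                       if np.any(wins == k) else -1 for k in range(n_nodes)])

# Classify: a pixel takes the label of its best matching unit.
pred = node_label[wins]
print("training accuracy:", (pred == y).mean())
```

Longer training (more epochs with a slower neighborhood decay) gives the map more time to spread over the input space, which is consistent with the abstract's observation that class separability increases with training time.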

Keywords: classification, SOFM algorithm, neural network, neighborhood, RGB image

Procedia PDF Downloads 470
7401 Research on the Public Governance of Urban Public Green Spaces from the Perspective of Institutional Economics

Authors: Zhang Xue

Abstract:

Urban public green spaces have evolved from classical private gardens and have expanded into multi-dimensional spatial value attributes across scale and property rights. Among these, ecological and environmental value, social interaction value, and commercial and economic value have become consensual value characteristics. From the perspective of institutional economics, urban public green spaces, as a type of non-exclusive and non-competitive public good, express the social connotation of spatial "publicness", and multiple values are among their important attributes. However, due to the positive externality characteristics of public green spaces, the cost-benefit functions of the various actors are inconsistent, leading to issues such as the "anti-commons tragedy" of excessive management, a weak public sense of responsibility for the space, and a weakened public character. It is necessary to enhance the "publicness" of urban public green spaces through effective institutional arrangements, inclusive planning participation, and humane management measures, promoting urban openness and the enhancement of multiple values.

Keywords: public green spaces, publicness, governance, institutional economics

Procedia PDF Downloads 48
7400 Analytic Hierarchy Process for the Container Terminal Choice from Multiple Terminals within the Port of Colombo

Authors: G. M. B. P. Abeysekara, W. A. D. C. Wijerathna

Abstract:

Choosing a terminal among multiple terminals in a region is not a simple decision; it is complex because shipping lines must weigh all the influential factors for terminal choice simultaneously according to their requirements. Terminal choice is therefore a multiple-criteria decision-making (MCDM) situation under a specially designed decision hierarchy. Identifying the perspective of shipping lines regarding terminal choice is vitally important for decision makers at container terminals. This study therefore evaluated the perceptions of main and feeder shipping lines regarding the Port of Colombo container terminals and ranked the terminals according to shipping lines' preference. The Analytic Hierarchy Process (AHP) model was adopted for this study since its features suit the MCDM setting: every influential factor is weighted using pairwise comparisons, and the consistency of the decision makers' judgments is checked to evaluate the trustworthiness of the gathered data. A rating method is then used to rank the terminals within the Port of Colombo by assigning preference values with respect to the criteria and sub-criteria. According to the findings of this study, main lines were mainly concerned with the water depth of the approach channel, depth of berth, handling charges and handling equipment facilities, while feeder lines' main concerns were handling equipment facilities, loading and discharging efficiency, depth of berth and handling charges. The findings suggest concentrating on these areas in order to enhance the competitiveness of the terminals and to increase the number of vessel callings at the Port of Colombo. Applying these findings to the terminals within the Port of Colombo would lead to far better competition among terminals and would uplift the overall level of service.
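The AHP machinery the study relies on (pairwise comparisons, priority weights from the principal eigenvector, and a consistency check) can be sketched as follows. The comparison matrix below is hypothetical, loosely themed on four criteria mentioned in the abstract, and is not the study's survey data.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for four criteria a main line might
# weigh: channel depth, berth depth, handling charges, equipment facilities.
# Entry A[i, j] is the Saaty-scale importance of criterion i over criterion j.
A = np.array([
    [1,     2,     3,   4],
    [1 / 2, 1,     2,   3],
    [1 / 3, 1 / 2, 1,   2],
    [1 / 4, 1 / 3, 1 / 2, 1],
], dtype=float)

# Priority vector: principal eigenvector of A, normalised to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency check: CR = CI / RI, with CI = (lambda_max - n) / (n - 1).
n = A.shape[0]
lambda_max = eigvals[k].real
ci = (lambda_max - n) / (n - 1)
ri = 0.90  # Saaty's random index for n = 4
cr = ci / ri
print("weights:", np.round(w, 3), "CR =", round(cr, 3))
```

A CR below 0.1 is the conventional threshold for accepting the judgments as consistent; matrices failing it would be sent back to the respondents for revision.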

Keywords: AHP, main and feeder shipping lines, criteria, sub-criteria

Procedia PDF Downloads 417
7399 Large Amplitude Vibration of Sandwich Beam

Authors: Youssef Abdelli, Rachid Nasri

Abstract:

The large amplitude free vibration analysis of three-layered symmetric sandwich beams is carried out using two different approaches. The governing nonlinear partial differential equations of motion in free natural vibration are derived using Hamilton's principle. The formulation leads to two nonlinear partial differential equations that are coupled in both the axial and bending deformations. In the first approach, the method of multiple scales is applied directly to the governing equation, which is a nonlinear partial differential equation. In the second approach, we discretize the governing equation using Galerkin's procedure and then apply the shooting method to the resulting ordinary differential equations. In order to check the validity of the solutions obtained by the two approaches, they are compared with solutions obtained numerically by the finite difference method.
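As a rough illustration of how perturbation and numerical results can be cross-checked, the sketch below takes a Duffing-type oscillator (an assumed stand-in for a one-mode Galerkin truncation, not the paper's exact discretized system) and compares a numerically integrated period against the first-order multiple-scales estimate:

```python
import numpy as np

# Assumed illustrative reduced equation:  u'' + u + eps*u**3 = 0.
eps, a = 0.1, 1.0  # cubic stiffness and initial amplitude (assumed values)

def f(state):
    u, v = state
    return np.array([v, -u - eps * u**3])

# Integrate with classical RK4 and detect the period as the first return of
# the velocity to zero from above (u back at its maximum).
def period(dt=1e-3):
    state = np.array([a, 0.0])
    t, prev_v = 0.0, 0.0
    while True:
        k1 = f(state); k2 = f(state + dt / 2 * k1)
        k3 = f(state + dt / 2 * k2); k4 = f(state + dt * k3)
        state = state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
        if prev_v > 0 and state[1] <= 0:  # v crosses zero from + to -
            return t
        prev_v = state[1]

T_num = period()
# Method of multiple scales, first order: omega ≈ 1 + 3*eps*a**2/8.
T_mms = 2 * np.pi / (1 + 3 * eps * a**2 / 8)
print(f"numerical period {T_num:.4f} vs multiple-scales estimate {T_mms:.4f}")
```

For this hardening nonlinearity the period is shorter than the linear value 2π, and the first-order multiple-scales estimate tracks the numerical result closely at small amplitude; the gap grows with amplitude, which is where the numerical comparison becomes essential.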

Keywords: finite difference method, large amplitude vibration, multiple scales, nonlinear vibration

Procedia PDF Downloads 454
7398 A Local Tensor Clustering Algorithm to Annotate Uncharacterized Genes with Many Biological Networks

Authors: Paul Shize Li, Frank Alber

Abstract:

A fundamental task of clinical genomics is to unravel the functions of genes and their associations with disorders. Although experimental biology has made efforts to discover and elucidate the molecular mechanisms of individual genes over the past decades, about 40% of human genes still have unknown functions, not to mention the diseases they may be related to. For those biologists who are interested in a particular gene with unknown functions, a powerful computational method tailored for inferring the functions and disease relevance of uncharacterized genes is strongly needed. Studies have shown that genes strongly linked to each other in multiple biological networks are more likely to have similar functions. This indicates that the densely connected subgraphs in multiple biological networks are useful in the functional and phenotypic annotation of uncharacterized genes. Therefore, in this work, we have developed an integrative network approach to identify frequent local clusters, defined as densely connected subgraphs that occur frequently in multiple biological networks and contain the query gene, which has few or no disease or function annotations. This local clustering algorithm models multiple biological networks sharing the same gene set as a three-dimensional matrix, the so-called tensor, and employs a tensor-based optimization method to efficiently find the frequent local clusters. Specifically, massive public gene expression data sets that comprehensively cover dynamic, physiological, and environmental conditions are used to generate hundreds of gene co-expression networks. By integrating these gene co-expression networks, for a given uncharacterized gene of interest to a biologist, the proposed method can be applied to identify the frequent local clusters containing this gene. Finally, those frequent local clusters are used for the function and disease annotation of this uncharacterized gene. This local tensor clustering algorithm outperformed a competing tensor-based algorithm in both module discovery and running time. We also demonstrated the use of the proposed method on real data comprising hundreds of gene co-expression networks and showed that it can comprehensively characterize the query gene. This study therefore provides a new tool for annotating uncharacterized genes and has great potential to assist clinical genomic diagnostics.
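The core idea, checking how often a candidate cluster around the query gene is dense across a stack of networks, can be sketched on toy data. Everything below (network sizes, the planted module, the density threshold) is hypothetical and far simpler than the paper's tensor optimization; it only illustrates the tensor representation and the "frequent dense subgraph" criterion.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy setup: 5 co-expression networks over the same 20 genes, stacked into a
# genes x genes x networks tensor (synthetic data, not the paper's method).
n_genes, n_nets = 20, 5
T = (rng.random((n_genes, n_genes, n_nets)) < 0.05).astype(float)
T = np.maximum(T, T.transpose(1, 0, 2))        # symmetrise each network
T[np.arange(n_genes), np.arange(n_genes), :] = 0.0  # no self-edges

# Plant a cluster around a "query gene" (gene 0) in 4 of the 5 networks.
module = [0, 1, 2, 3]
for net in range(4):
    for i in module:
        for j in module:
            if i != j:
                T[i, j, net] = 1.0

def density(genes, net):
    sub = T[np.ix_(genes, genes)][:, :, net]
    m = len(genes)
    return sub.sum() / (m * (m - 1))  # fraction of possible edges present

# A cluster is "frequent" if dense in at least a threshold number of networks.
dense_in = sum(density(module, net) >= 0.8 for net in range(n_nets))
print(f"module dense in {dense_in} of {n_nets} networks")
```

The real algorithm searches for such modules directly via tensor optimization rather than checking a known candidate, but the frequency-of-density criterion it optimizes is the one evaluated here.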

Keywords: local tensor clustering, query gene, gene co-expression network, gene annotation

Procedia PDF Downloads 153
7397 Using AI Based Software as an Assessment Aid for University Engineering Assignments

Authors: Waleed Al-Nuaimy, Luke Anastassiou, Manjinder Kainth

Abstract:

As the process of teaching has evolved with the advent of new technologies over the ages, so has the process of learning. Educators have perpetually been on the lookout for new technology-enhanced methods of teaching in order to increase learning efficiency and rein in ever-expanding workloads. Shortly after the invention of the internet, web-based learning started to pick up in the late 1990s, and educators quickly found that the processes of providing learning material and marking assignments could change thanks to the connectivity offered by the internet. With the creation of early web-based virtual learning environments (VLEs) such as SPIDER and Blackboard, it soon became apparent that VLEs resulted in higher reported computer self-efficacy among students, but at the cost of students being less satisfied with the learning process. It may be argued that the impersonal nature of VLEs and their limited functionality were the leading factors contributing to this reported dissatisfaction. To this day, often faced with the prospect of assigning homework and assessments to colossal engineering cohorts, educators may choose easily automated assessment formats, such as multiple-choice quizzes and numerical answer input boxes, so that automated grading software embedded in the VLEs can save time and mark student submissions instantaneously. A crucial skill that is meant to be learnt during most science and engineering undergraduate degrees is gaining confidence in using, solving and deriving mathematical equations. Equations underpin a significant portion of the topics taught in many STEM subjects, and it is in homework assignments and assessments that this understanding is tested. It is not hard to see that this can become challenging if the majority of assignment formats students engage with are multiple-choice questions, and educators end up with a reduced perspective of their students' ability to manipulate equations.
Artificial intelligence (AI) has in recent times been shown to be an important consideration for many technologies. In this paper, we explore the use of new AI-based software designed to work in conjunction with current VLEs. Drawing on our experience with the software, we discuss its potential to solve a selection of problems ranging from impersonality to the reduction of educator workloads by speeding up the marking process. We examine the software's potential to increase learning efficiency through features which claim to allow more customized, higher-quality feedback. We investigate the usability of features allowing students to input equation derivations in a range of different forms, and discuss relevant observations associated with these input methods. Furthermore, we make ethical considerations and discuss potential drawbacks of the software, including the extent to which optical character recognition (OCR) could perpetuate errors and create disagreements between student intent and submitted assignment answers. It is the intention of the authors that this study will be useful as an example of the implementation of AI in a practical assessment scenario and as a springboard for further considerations and studies that utilise AI in the setting and marking of science and engineering assignments.

Keywords: engineering education, assessment, artificial intelligence, optical character recognition (OCR)

Procedia PDF Downloads 118
7396 Multiple Images Stitching Based on Gradually Changing Matrix

Authors: Shangdong Zhu, Yunzhou Zhang, Jie Zhang, Hang Hu, Yazhou Zhang

Abstract:

Image stitching is a very important branch in the field of computer vision, especially for panoramic maps. In order to eliminate shape distortion, a novel stitching method based on a gradually changing matrix is proposed for images captured horizontally. For such images, this paper assumes that only a translational operation is involved in image stitching. By analyzing each parameter of the homography matrix, the global homography matrix is gradually transformed into a translation matrix so as to eliminate the effects of scaling, rotation, etc., in the image transformation. This paper adopts matrix approximation to obtain the minimum value of the energy function so that the shape distortion in the regions corresponding to the homography can be minimized. The proposed method avoids the failure of multiple horizontal image stitching caused by accumulated shape distortion. At the same time, it can be combined with the As-Projective-As-Possible algorithm to ensure precise alignment of the overlapping area.
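A toy sketch of the "gradually changing matrix" idea: blend a full homography into a translation-only matrix as a function of horizontal position, so that projective effects fade out away from the overlap region. The matrices and the linear blending schedule below are assumptions for illustration, not the paper's energy-minimizing approximation.

```python
import numpy as np

# An assumed global homography H for the overlap region.
H = np.array([[1.05, 0.02, 30.0],
              [0.01, 0.98,  5.0],
              [1e-4, 0.0,   1.0]])

# Target translation-only matrix: keep just the offsets of H.
H_t = np.array([[1.0, 0.0, H[0, 2]],
                [0.0, 1.0, H[1, 2]],
                [0.0, 0.0, 1.0]])

def blended(x_frac):
    """Per-region matrix: x_frac = 0 at the overlap (full H), 1 far away."""
    return (1 - x_frac) * H + x_frac * H_t

def warp(M, pt):
    p = M @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]  # homogeneous normalisation

print(warp(blended(0.0), (100, 100)))  # full homography warp
print(warp(blended(1.0), (100, 100)))  # pure translation of (30, 5)
```

Away from the overlap the warp degenerates to a pure shift, which is what prevents scaling and rotation from accumulating across many horizontally stitched images.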

Keywords: image stitching, gradually changing matrix, horizontal direction, matrix approximation, homography matrix

Procedia PDF Downloads 307
7395 The Effect of Macroeconomic Policies on Cambodia's Economy: ARDL and VECM Model

Authors: Siphat Lim

Abstract:

This study used the Autoregressive Distributed Lag (ARDL) approach to cointegration to examine the effect of macroeconomic policies on Cambodia's economy. In the long run, the general price level and the exchange rate have a significantly positive effect on domestic output. The estimated results further revealed that fiscal stimulus helps stimulate domestic output in the long run, but not in the short run, while monetary expansion helps stimulate output in both the short run and the long run. This result complies with theory: macroeconomic policies, both fiscal and monetary, help stimulate domestic output in the long run. The estimates from the Vector Error Correction Model (VECM) indicate even more clearly that the consumer price index has a highly statistically significant positive effect on output; an increase in the general price level would intensify competition among producers and thereby raise output. The exchange rate also has a positive and highly significant effect on gross domestic product, as exchange rate depreciation might increase exports, since the purchasing power of foreigners increases. More importantly, fiscal stimulus would help stimulate domestic output in the long run, since the coefficient of government expenditure is positive. In addition, monetary expansion would also help stimulate output, and this result is highly significant. Thus, fiscal stimulus and monetary expansion would help stimulate domestic output in Cambodia in the long run.
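The cointegration logic underlying ARDL/VECM estimation can be sketched on synthetic data. The two-step, Engle-Granger-style sketch below (long-run levels regression, then a short-run error-correction equation) uses made-up series and a single regressor, not the Cambodian data or the study's full specification:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic cointegrated pair: "output" y tracks an I(1) "price" series p with
# a stationary gap (illustrative data only).
n = 400
p = np.cumsum(rng.normal(0, 1, n))       # I(1) series (random walk)
y = 0.8 * p + rng.normal(0, 0.5, n)      # cointegrated with beta = 0.8

# Step 1: long-run regression y_t = beta * p_t + u_t.
beta = np.linalg.lstsq(p[:, None], y, rcond=None)[0][0]
ect = y - beta * p                       # error-correction term (stationary)

# Step 2: short-run ECM, dy_t = a*dp_t + gamma*ect_{t-1} + e_t.
# A negative gamma means deviations from the long-run relation are corrected.
dy, dp = np.diff(y), np.diff(p)
X = np.column_stack([dp, ect[:-1]])
a, gamma = np.linalg.lstsq(X, dy, rcond=None)[0]
print(f"long-run beta = {beta:.3f}, adjustment speed gamma = {gamma:.3f}")
```

In the study's setting, the long-run equation would include the price level, exchange rate, government expenditure and money supply, and the sign and significance of the error-correction coefficient indicate how quickly output returns to its long-run path.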

Keywords: fiscal policy, monetary policy, ARDL, VECM

Procedia PDF Downloads 426
7394 Examination of Public Hospital Unions Technical Efficiencies Using Data Envelopment Analysis and Machine Learning Techniques

Authors: Songul Cinaroglu

Abstract:

Regional planning in health has gained speed in developing countries in recent years. In Turkey, 89 different Public Hospital Unions (PHUs) were established at the provincial level. In this study, the technical efficiencies of the 89 PHUs were examined using Data Envelopment Analysis (DEA) and machine learning techniques, dividing them into two clusters according to the similarities of their input and output indicators. The numbers of beds, physicians and nurses were taken as input variables, and the numbers of outpatients, inpatients and surgical operations as output indicators. Before performing DEA, the PHUs were grouped into two clusters. The first cluster represents PHUs with higher population, demand and service density than the others, and the difference between the clusters was statistically significant for all study variables (p ˂ 0.001). After clustering, DEA was performed overall and for the two clusters separately. It was found that 11% of PHUs were efficient overall, while 21% and 17% were efficient in the first and second clusters, respectively. PHUs representing urban parts of the country, with higher population and service density, are thus more efficient than the others. A random forest decision tree graph shows that the number of inpatients, a measure of service density, is a determinative factor of PHU efficiency. It is advisable for public health policy makers to use statistical learning methods in resource planning decisions to improve efficiency in health care.
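The DEA step can be sketched with an input-oriented CCR envelopment linear program, solved here with scipy's `linprog`. The PHU-like data below are made up and much smaller than the study's 89 units; the sketch illustrates the technique, not the study's dataset or results.

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: inputs = [beds, physicians], output = [outpatients] (made up).
X = np.array([[100, 20], [120, 25], [80, 15], [200, 45]], dtype=float)
Y = np.array([[5000], [5200], [4200], [8000]], dtype=float)

def ccr_efficiency(o):
    """Input-oriented CCR envelopment LP for DMU o:
       min theta  s.t.  sum_j lam_j * x_j <= theta * x_o,
                        sum_j lam_j * y_j >= y_o,  lam >= 0."""
    n, m, s = X.shape[0], X.shape[1], Y.shape[1]
    c = np.zeros(1 + n)
    c[0] = 1.0                                       # minimise theta
    A_ub, b_ub = [], []
    for i in range(m):                               # input constraints
        A_ub.append(np.concatenate([[-X[o, i]], X[:, i]]))
        b_ub.append(0.0)
    for r in range(s):                               # output constraints
        A_ub.append(np.concatenate([[0.0], -Y[:, r]]))
        b_ub.append(-Y[o, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

scores = [ccr_efficiency(o) for o in range(len(X))]
print(np.round(scores, 3))  # a score of 1.0 marks an efficient unit
```

Units scoring 1.0 lie on the efficient frontier; the study reports the share of such units overall and within each cluster, then relates the scores to service-density variables with random forests.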

Keywords: public hospital unions, efficiency, data envelopment analysis, random forest

Procedia PDF Downloads 121