Search results for: Lee metric
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 290

110 Systematic Examination of Methods Supporting the Social Innovation Process

Authors: Mariann Veresne Somosi, Zoltan Nagy, Krisztina Varga

Abstract:

Innovation is the key element of economic development and a key factor in social processes. Technical innovations can be identified as prerequisites and causes of social change, and they cannot be created without the renewal of society. The study of social innovation is one of the significant research areas of our day. The study’s aim is to identify the process of social innovation, which can be defined by input, transformation, and output factors. This approach divides the social innovation process into three parts: situation analysis, implementation, and follow-up. The methods associated with each stage of the process are illustrated along the chronological line of social innovation. In this study, we have sought to present methodologies that support long- and short-term decision-making, are easy to apply, offer complementary content, and are well visualised for different user groups. When applying the methods, the reference objects differ: county, district, settlement, or specific organisation. The solution proposed by the study supports the development of a methodological combination adapted to different situations. Having reviewed metric and conceptualisation issues, we wanted to develop a methodological combination, along with a change management logic, suitable for structured support of the generation of social innovation in the case of a locality or a specific organisation. In addition to a theoretical summary, the second part of the study gives a non-exhaustive picture of two counties located in the north-eastern part of Hungary through specific analyses and case descriptions.

Keywords: factors of social innovation, methodological combination, social innovation process, supporting decision-making

Procedia PDF Downloads 126
109 Synthesis of Amine Functionalized MOF-74 for Carbon Dioxide Capture

Authors: Ghulam Murshid, Samil Ullah

Abstract:

Scientific studies suggest that the increased greenhouse gas concentration in the atmosphere, particularly of carbon dioxide (CO2), is one of the major factors in global warming. The concentration of CO2 in the atmosphere has crossed the milestone level of 400 parts per million (ppm), breaking the record of human history. A report by 49 researchers from 10 countries said, 'Global CO2 emissions from burning fossil fuels will rise to a record 36 billion metric tons (39.683 billion tons) this year.' The main contributors of CO2 to the atmosphere are fossil fuel usage, the transportation sector, and power generation plants. The capture technologies in practice around the globe include absorption via chemicals, membrane separation, cryogenic separation, and adsorption. Adsorption of CO2 using metal organic frameworks (MOFs) is attracting the interest of researchers worldwide. In the current work, MOF-74 as well as MOF-74 modified with a sterically hindered amine (AMP) was synthesized and characterized. The modification was carried out in order to study the effect of the amine on the adsorption capacity. The resulting samples were characterized using Fourier Transform Infrared Spectroscopy (FTIR), Field Emission Scanning Electron Microscopy (FESEM), Thermogravimetric Analysis (TGA), and Brunauer-Emmett-Teller (BET) analysis. The FTIR results clearly confirmed the formation of the MOF-74 structure and the presence of AMP. FESEM and TEM revealed the topography and morphology of both MOF-74 and the amine-modified MOF. The BET isotherm results show that the addition of AMP to the structure significantly enhanced CO2 adsorption.

Keywords: adsorbents, amine, CO2, global warming

Procedia PDF Downloads 389
108 Spino-Pelvic Alignment with SpineCor Brace Use in Adolescent Idiopathic Scoliosis

Authors: Reham H. Diab, Amira A. A. Abdallah, Eman A. Embaby

Abstract:

Background: The effectiveness of bracing in preventing spino-pelvic alignment deterioration in idiopathic scoliosis has been extensively studied, especially in the frontal plane. Yet, there is a lack of knowledge regarding the effect of soft braces on spino-pelvic alignment in the sagittal plane. Achieving harmonious sagittal-plane spino-pelvic balance is critical for the preservation of physiologic posture and spinal health. Purpose: This study examined the kyphotic angle, lordotic angle, and pelvic inclination in the sagittal plane, and trunk imbalance in the frontal plane, before and after a six-month rehabilitation period. Methods: Nineteen patients with idiopathic scoliosis participated in the study. They were divided into two groups: experimental and control. The experimental group (group I) used the SpineCor brace in addition to a rehabilitation exercise program, while the control group (group II) had the exercise program only. The mean ± SD age, weight, and height were 16.89±2.15 vs. 15.3±2.5 years, 59.78±6.85 vs. 62.5±8.33 kg, and 162.78±5.76 vs. 159±5.72 cm for group I vs. group II. Data were collected using the Formetric system. Results: Mixed-design MANOVA showed significant (p < 0.05) decreases in all the tested variables after the six-month period compared with before in both groups. Moreover, there was a significant decrease in the kyphotic angle in group I compared with group II after the six-month period. Interpretation and conclusion: The SpineCor brace is beneficial in reducing spino-pelvic alignment deterioration in both the sagittal and frontal planes.

Keywords: adolescent idiopathic scoliosis, SpineCor, spino-pelvic alignment, biomechanics

Procedia PDF Downloads 301
107 Tomato-Weed Classification by RetinaNet One-Step Neural Network

Authors: Dionisio Andujar, Juan López-Correa, Hugo Moreno, Angela Ri

Abstract:

The increased number of weeds in tomato crops greatly lowers yields. Weed identification by means of machine learning is important for carrying out site-specific control. The latest advances in computer vision are a powerful tool to face the problem. The analysis of RGB (Red, Green, Blue) images through Artificial Neural Networks has developed rapidly in the past few years, providing new methods for weed classification. The development of algorithms for crop and weed species classification aims at a real-time classification system using object detection algorithms based on Convolutional Neural Networks. The study site was located in commercial tomato fields. The classification system has been tested, and the procedure can detect and classify weed seedlings in tomato fields. The input to the neural network was a set of 10,000 RGB images with a natural infestation of Cyperus rotundus L., Echinochloa crus-galli L., Setaria italica L., Portulaca oleracea L., and Solanum nigrum L. The validation process was done with a random selection of RGB images containing the aforementioned species. The mean average precision (mAP) was established as the metric for object detection. The results showed agreements higher than 95%. The system will provide the input for an online spraying system. Thus, this work plays an important role in site-specific weed management by reducing herbicide use in a single step.
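The mAP figure reported above can be grounded with a minimal sketch of how average precision is computed for one class at a fixed IoU threshold. The abstract does not publish its code; the box format, threshold, and integration scheme below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def average_precision(detections, ground_truth, iou_thresh=0.5):
    """AP for one class: detections = [(confidence, box)], ground_truth = [box]."""
    detections = sorted(detections, key=lambda d: -d[0])  # highest confidence first
    matched, tp, fp = set(), [], []
    for _, box in detections:
        # Greedily match each detection to an unmatched ground-truth box.
        hit = next((j for j, gt in enumerate(ground_truth)
                    if j not in matched and iou(box, gt) >= iou_thresh), None)
        if hit is not None:
            matched.add(hit)
        tp.append(1 if hit is not None else 0)
        fp.append(0 if hit is not None else 1)
    tp, fp = np.cumsum(tp), np.cumsum(fp)
    recall = np.concatenate([[0.0], tp / max(len(ground_truth), 1)])
    precision = np.concatenate([[1.0], tp / (tp + fp)])
    return float(np.trapz(precision, recall))  # area under the P-R curve
```

Averaging this quantity over all weed classes would give the mAP used to score the detector.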

Keywords: deep learning, object detection, CNN, tomato, weeds

Procedia PDF Downloads 75
106 Regression Approach for Optimal Purchase of Hosts Cluster in Fixed Fund for Hadoop Big Data Platform

Authors: Haitao Yang, Jianming Lv, Fei Xu, Xintong Wang, Yilin Huang, Lanting Xia, Xuewu Zhu

Abstract:

Given a fixed fund, purchasing fewer hosts of higher capability or, inversely, more hosts of lower capability is a trade-off that must be made in practice when building a Hadoop big data platform. An exploratory study is presented for a Housing Big Data Platform project (HBDP), where typical big data computing involves SQL queries with aggregates, joins, and space-time condition selections executed on massive data from more than 10 million housing units. In HBDP, an empirical formula was introduced to predict the performance of candidate host clusters for the intended typical big data computing, and it was shaped via a regression approach. With this empirical formula, it is easy to suggest an optimal cluster configuration. The investigation was based on a typical Hadoop computing ecosystem: HDFS + Hive + Spark. A proper metric was defined to measure the performance of Hadoop clusters in HBDP, which was tested and compared with its predicted counterpart on three kinds of typical SQL query tasks. Tests were conducted with respect to the factors of CPU benchmark, memory size, virtual host division, and the number of physical hosts in the cluster. The research has been applied to practical cluster procurement for housing big data computing.
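As a sketch of how such a performance-predicting formula can be shaped by regression, the fragment below fits an assumed runtime model to hypothetical benchmark records and then picks the fastest configuration within a budget. The model form, prices, and figures are illustrative assumptions, not the paper's actual formula or data.

```python
import numpy as np

# Hypothetical benchmark records: (hosts, cpu_benchmark, memory_GB, runtime_s).
records = np.array([
    [4,  10.0, 32, 620.0],
    [6,  10.0, 32, 430.0],
    [8,  10.0, 64, 310.0],
    [8,  14.0, 64, 240.0],
    [12, 14.0, 96, 175.0],
])
hosts, cpu, mem, runtime = records.T

# Assumed model: runtime ≈ a / (hosts * cpu) + b / mem + c, fit by least squares.
X = np.column_stack([1.0 / (hosts * cpu), 1.0 / mem, np.ones_like(hosts)])
coef, *_ = np.linalg.lstsq(X, runtime, rcond=None)

def predict_runtime(n_hosts, cpu_bench, mem_gb):
    return float(coef @ np.array([1.0 / (n_hosts * cpu_bench), 1.0 / mem_gb, 1.0]))

# Given a fixed fund, enumerate affordable configurations and pick the fastest.
candidates = [(n, 10.0, 32, n * 2000) for n in range(4, 16)]  # (hosts, cpu, mem, price)
budget = 20000
best = min((c for c in candidates if c[3] <= budget),
           key=lambda c: predict_runtime(c[0], c[1], c[2]))
```

The same enumerate-and-rank step works for any fitted formula; only the feature columns in `X` change.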

Keywords: Hadoop platform planning, optimal cluster scheme at fixed-fund, performance predicting formula, typical SQL query tasks

Procedia PDF Downloads 202
105 Improving Similarity Search Using Clustered Data

Authors: Deokho Kim, Wonwoo Lee, Jaewoong Lee, Teresa Ng, Gun-Ill Lee, Jiwon Jeong

Abstract:

This paper presents a method for improving object search accuracy using a deep learning model. A major limitation to providing accurate similarity with deep learning is the requirement of a huge amount of data for training pairwise similarity scores (metrics), which is impractical to collect. Thus, similarity scores are usually trained with a relatively small dataset from a different domain, causing limited accuracy in measuring similarity. For this reason, this paper proposes a deep learning model that can be trained with a significantly smaller amount of data: clustered data, in which each cluster contains a set of visually similar images. In order to measure similarity distance with the proposed method, visual features of two images are extracted from intermediate layers of a convolutional neural network with various pooling methods, and the network is trained with pairwise similarity scores, defined as zero for images in the same cluster. The proposed method outperforms the state-of-the-art object similarity scoring techniques in evaluations on finding exact items. The proposed method achieves 86.5% accuracy, compared to 59.9% for the state-of-the-art technique. That is, an exact item can be found among four retrieved images with an accuracy of 86.5%, and the remaining retrieved images are likely to be similar products. Therefore, the proposed method can reduce the amount of training data by an order of magnitude while providing a reliable similarity metric.
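The pairwise training signal described above can be sketched as a contrastive-style loss that pushes the feature distance toward zero for images drawn from the same cluster and apart otherwise. The exact objective and margin are not given in the abstract; the form below is an illustrative assumption.

```python
import numpy as np

def contrastive_loss(feat_a, feat_b, same_cluster, margin=1.0):
    """Pairwise similarity target: the distance between CNN features is driven
    to zero for same-cluster pairs and beyond `margin` for different clusters."""
    d = np.linalg.norm(feat_a - feat_b)
    if same_cluster:
        return 0.5 * d ** 2                  # target distance: zero
    return 0.5 * max(0.0, margin - d) ** 2   # repel, but only up to the margin
```

Here `feat_a` and `feat_b` stand in for pooled activations from intermediate CNN layers; during training the loss gradient would be backpropagated through the network that produced them.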

Keywords: visual search, deep learning, convolutional neural network, machine learning

Procedia PDF Downloads 183
104 Back to Basics: Redefining Quality Measurement for Hybrid Software Development Organizations

Authors: Satya Pradhan, Venky Nanniyur

Abstract:

As the software industry transitions from a license-based model to a subscription-based Software-as-a-Service (SaaS) model, many software development groups are using a hybrid development model that incorporates Agile and Waterfall methodologies in different parts of the organization. The traditional metrics used for measuring software quality in Waterfall or Agile paradigms do not apply to this new hybrid methodology. In addition, to respond to higher quality demands from customers and to gain a competitive advantage in the market, many companies are starting to prioritize quality as a strategic differentiator. As a result, quality metrics are included in decision-making activities all the way up to the executive level, including board of directors reviews. This paper presents the key challenges associated with measuring software quality in organizations using the hybrid development model. We introduce a framework called Prevention-Inspection-Evaluation-Removal (PIER) to provide a comprehensive metric definition for hybrid organizations. The framework includes quality measurements, quality enforcement, and quality decision points at different organizational levels and project milestones. The metrics framework defined in this paper is being used for all Cisco Systems products deployed on customer premises. We present several field metrics for one product portfolio (enterprise networking) to show the effectiveness of the proposed measurement system. As the results show, this metrics framework has significantly improved in-process defect management as well as field quality.

Keywords: quality management system, quality metrics framework, quality metrics, agile, waterfall, hybrid development system

Procedia PDF Downloads 140
103 Relay-Augmented Bottleneck Throughput Maximization for Correlated Data Routing: A Game Theoretic Perspective

Authors: Isra Elfatih Salih Edrees, Mehmet Serdar Ufuk Türeli

Abstract:

In this paper, an energy-aware method is presented, integrating energy-efficient relay-augmented techniques for correlated data routing with the goal of optimizing bottleneck throughput in wireless sensor networks. The system tackles the dual challenge of throughput optimization while considering sensor network energy consumption. A unique routing metric has been developed to enable throughput maximization while minimizing energy consumption by exploiting data correlation patterns. The paper introduces a game theoretic framework to address the NP-complete optimization problem inherent in throughput-maximizing, correlation-aware routing under energy limitations. By creating an algorithm that blends energy-aware route selection strategies with best-response dynamics, this framework provides a locally optimal solution. The suggested technique considerably raises the bottleneck throughput for each source in the network while reducing energy consumption by choosing routes that strike a compromise between throughput enhancement and energy efficiency. Extensive numerical analyses verify the efficiency of the method. The outcomes demonstrate the significant decrease in energy consumption attained by the energy-efficient relay-augmented bottleneck throughput maximization technique, in addition to confirming the anticipated throughput benefits.

Keywords: correlated data aggregation, energy efficiency, game theory, relay-augmented routing, throughput maximization, wireless sensor networks

Procedia PDF Downloads 28
102 Measuring Corporate Brand Loyalties in Business Markets: A Case for Caution

Authors: Niklas Bondesson

Abstract:

Purpose: This paper attempts to examine how different facets of attitudinal brand loyalty are determined by different brand image elements in business markets. Design/Methodology/Approach: Statistical analysis is employed on data from a web survey covering 226 professional packaging buyers in eight countries. Findings: The results reveal that different brand loyalty facets have different antecedents. Affective brand loyalties (or loyalty 'feelings') are mainly driven by customer associations with service relationships, whereas customers’ loyalty intentions (to purchase and recommend a brand) are triggered by associations with the general reputation of the company. The findings also indicate that willingness to pay a price premium is a distinct form of loyalty, with unique determinants. Research implications: Theoretically, the paper suggests that corporate B2B brand loyalty needs to be conceptualised with more refinement than has been done in extant B2B branding work. Methodologically, the paper highlights that single-item approaches can be fruitful when measuring B2B brand loyalty, and that multi-item scales can conceal important nuances in terms of understanding why customers are loyal. Practical implications: The idea of a loyalty 'silver metric' is attractive, but this study indicates that firms that rely too much on one single type of brand loyalty risk missing important building blocks. Originality/Value/Contribution: The major contribution is a more multi-faceted conceptualisation, and measurement, of corporate B2B brand loyalty and its brand image determinants than extant work has provided.

Keywords: brand equity, business-to-business branding, industrial marketing, buying behaviour

Procedia PDF Downloads 380
101 Deep Learning Approach for Chronic Kidney Disease Complications

Authors: Mario Isaza-Ruget, Claudia C. Colmenares-Mejia, Nancy Yomayusa, Camilo A. González, Andres Cely, Jossie Murcia

Abstract:

Quantification of the risks associated with the development of complications from chronic kidney disease (CKD) through accurate survival models can help with patient management. A retrospective cohort study was carried out that included patients diagnosed with CKD in a primary care program and followed up between 2013 and 2018. Time-dependent and static covariates associated with demographic, clinical, and laboratory factors were included. Deep Learning (DL) survival analyses were developed for three CKD outcomes: CKD stage progression, >25% decrease in Estimated Glomerular Filtration Rate (eGFR), and Renal Replacement Therapy (RRT). The models were evaluated and compared with Random Survival Forest (RSF) based on the concordance index (C-index). A total of 2,143 patients were included. Two models were developed for each outcome. The Deep Neural Network (DNN) model reported C-index = 0.9867 for CKD stage progression, C-index = 0.9905 for reduction in eGFR, and C-index = 0.9867 for RRT. For the RSF model, a C-index of 0.6650 was reached for CKD stage progression, 0.6759 for decreased eGFR, and 0.8926 for RRT. DNN models applied in a survival analysis context, with consideration of longitudinal covariates at the start of follow-up, can predict renal stage progression, a significant decrease in eGFR, and RRT. The success of these survival models lies in the appropriate definition of survival times and the analysis of covariates, especially those that vary over time.
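The C-index used to compare the models can be illustrated with a small sketch: a naive O(n²) implementation under right-censoring, written for clarity rather than speed. Tie-handling conventions vary between libraries; the half-credit rule below is one common choice, not necessarily the one used in the study.

```python
def concordance_index(times, events, risk_scores):
    """C-index: among comparable pairs (the earlier time is an observed event,
    event flag 1), count pairs where the earlier-failing patient also has the
    higher predicted risk; ties in risk receive half credit."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i] == 1:  # pair is comparable
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable
```

A C-index of 1.0 means the model ranks every comparable pair correctly, 0.5 is chance level, which is why the DNN values near 0.99 dominate the RSF values near 0.67.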

Keywords: artificial intelligence, chronic kidney disease, deep neural networks, survival analysis

Procedia PDF Downloads 96
100 Spatial Interpolation of Aerosol Optical Depth Pollution: Comparison of Methods for the Development of Aerosol Distribution

Authors: Sahabeh Safarpour, Khiruddin Abdullah, Hwee San Lim, Mohsen Dadras

Abstract:

Air pollution is a growing problem arising from domestic heating, high density of vehicle traffic, electricity production, and expanding commercial and industrial activities, all increasing in parallel with urban population. Monitoring and forecasting of air quality parameters are important due to their health impact. One widely available metric of aerosol abundance is the aerosol optical depth (AOD). The AOD is the integrated light extinction coefficient over a vertical atmospheric column of unit cross section; it represents the extent to which the aerosols in that vertical profile prevent the transmission of light by absorption or scattering. Seasonal AOD values at 550 nm derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor onboard NASA’s Terra satellite, for the 10-year period 2000-2010, were used to test seven different spatial interpolation methods in the present study. The accuracy of the estimations was assessed through visual analysis as well as independent validation based on basic statistics, such as the root mean square error (RMSE) and the correlation coefficient. Based on the RMSE and R values of predictions made using measured values from 2000 to 2010, Radial Basis Functions (RBFs) yielded the best results for spring, summer, and winter, and ordinary kriging yielded the best results for fall.
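The comparison above, interpolating scattered values and ranking methods by validation error, can be sketched with a minimal Gaussian RBF interpolator in plain NumPy. The kernel, shape parameter, and data are illustrative assumptions; the study's actual MODIS grids and software are not reproduced here.

```python
import numpy as np

def rbf_interpolate(points, values, queries, eps=1.0):
    """Minimal Gaussian radial basis function (RBF) interpolator:
    exact at the data points, smooth in between."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    weights = np.linalg.solve(np.exp(-(eps * d) ** 2), values)
    dq = np.linalg.norm(queries[:, None, :] - points[None, :, :], axis=-1)
    return np.exp(-(eps * dq) ** 2) @ weights

def rmse(observed, predicted):
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    return float(np.sqrt(np.mean((observed - predicted) ** 2)))

def loo_rmse(points, values, eps=1.0):
    """Leave-one-out cross-validation RMSE, a standard way to rank
    competing interpolation methods on the same station data."""
    preds = []
    for i in range(len(points)):
        mask = np.arange(len(points)) != i
        preds.append(rbf_interpolate(points[mask], values[mask],
                                     points[i:i + 1], eps)[0])
    return rmse(values, preds)
```

Running `loo_rmse` for each candidate method (RBF, kriging, inverse distance weighting, and so on) and each season gives exactly the kind of RMSE table on which the study's per-season ranking rests.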

Keywords: aerosol optical depth, MODIS, spatial interpolation techniques, Radial Basis Functions

Procedia PDF Downloads 373
99 Description of Anthracotheriidae Remains from the Middle and Upper Siwaliks of Punjab, Pakistan

Authors: Abdul M. Khan, Ayesha Iqbal

Abstract:

In this paper, new dental remains of Merycopotamus (Anthracotheriidae) are described. The specimens were collected by the authors during field work from the well-dated fossiliferous locality 'Hasnot', belonging to the Dhok Pathan Formation, and from 'Tatrot' village, belonging to the Tatrot Formation of the Potwar Plateau, Pakistan. The stratigraphic age of the Neogene deposits around Hasnot is 7-5 Ma, whereas the age of the Tatrot Formation is 3.4-2.6 Ma. Comparison of the newly discovered material with previous records of the genus Merycopotamus from the Siwaliks led us to identify all three reported species of this genus from the Siwaliks of Pakistan. As the sample comprises only dental remains, the identification of the specimens is based solely on morphometric analysis. The occlusal pattern of the upper molar in Merycopotamus dissimilis differs from that of Merycopotamus medioximus and Merycopotamus nanus in having a fully divided mesostyle forming two prominent cusps, while the mesostyle in M. medioximus is partly divided and small lateral crests are present on the mesostyle. A continuous loop-like mesostyle is present in Merycopotamus nanus. The entoconid fold is present on the lower molars in Merycopotamus dissimilis, whereas it is absent in Merycopotamus medioximus and Merycopotamus nanus. The hypoconulid in M. dissimilis is relatively simple, but a loop-like hypoconulid is present in M. medioximus and M. nanus. The present findings are in line with previous records of the genus Merycopotamus, with M. nanus, M. medioximus, and M. dissimilis in the Late Miocene - Early Pliocene Dhok Pathan Formation, and M. dissimilis in the Late Pliocene Tatrot sediments of Pakistan.

Keywords: Dhok Pathan, late miocene, merycopotamus, pliocene, Tatrot

Procedia PDF Downloads 207
98 Quantifying Meaning in Biological Systems

Authors: Richard L. Summers

Abstract:

The advanced computational analysis of biological systems is becoming increasingly dependent upon an understanding of the information-theoretic structure of the materials, energy, and interactive processes that comprise those systems. The stability and survival of these living systems are fundamentally contingent upon their ability to acquire and process the meaning of information concerning the physical state of their biological continuum (biocontinuum). The drive for adaptive system reconciliation of a divergence from steady state within this biocontinuum can be described by an information metric-based formulation of the process of actionable knowledge acquisition, incorporating the axiomatic inference of Kullback-Leibler information minimization driven by survival replicator dynamics. If the mathematical expression of this process is the Lagrangian integrand for any change within the biocontinuum, then it can also be considered an action functional for the living system. In the direct method of Lyapunov, such a summarizing mathematical formulation of global system behavior, based on the driving forces of energy currents and constraints within the system, can serve as a platform for the analysis of stability. As the system evolves in time in response to biocontinuum perturbations, the summarizing function conveys information about its overall stability. This stability information portends survival and therefore has absolute existential meaning for the living system. The first derivative of the Lyapunov energy information function will have a negative trajectory toward the system's steady state if the driving force is dissipating. By contrast, system instability leading to system dissolution will have a positive trajectory. The direction and magnitude of the trajectory vector then serve as a quantifiable signature of the meaning associated with the living system’s stability information, homeostasis, and survival potential.
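The Lyapunov argument above can be made concrete in a small numerical sketch: for replicator dynamics with fixed fitness, the Kullback-Leibler divergence from the surviving steady state is a Lyapunov function whose trajectory decreases monotonically toward zero. The fitness values, step size, and three-type population are illustrative assumptions, not a model from the paper.

```python
import numpy as np

def replicator_step(x, fitness, dt=0.01):
    """One Euler step of replicator dynamics: dx_i = x_i (f_i - mean fitness)."""
    avg = x @ fitness
    return x + dt * x * (fitness - avg)

def kl(p, q):
    """Kullback-Leibler divergence D(p || q), skipping zero-mass terms of p."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

fitness = np.array([1.0, 2.0, 3.0])
target = np.array([0.0, 0.0, 1.0])   # steady state: all weight on the fittest type
x = np.array([1 / 3, 1 / 3, 1 / 3])  # initial uniform population

divergences = []
for _ in range(500):
    divergences.append(kl(target, x))  # Lyapunov candidate, evaluated along the flow
    x = replicator_step(x, fitness)
# divergences is monotonically decreasing: a negative-trajectory Lyapunov signature.
```

The sign of the divergence's discrete derivative plays the role of the "quantifiable signature" in the abstract: negative trajectories indicate approach to the steady state, positive ones would indicate instability.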

Keywords: meaning, information, Lyapunov, living systems

Procedia PDF Downloads 105
97 A Comparative Study of the Mechanical Properties of Polytetrafluoroethylene Materials Synthesized by Non-Conventional and Conventional Techniques

Authors: H. Lahlali, F. El Haouzi, A. M. Al-Baradi, I. El Aboudi, M. El Azhari, A. Mdarhri

Abstract:

Polytetrafluoroethylene (PTFE) is a high-performance thermoplastic polymer with exceptional physical and chemical properties, such as a high melting temperature, high thermal stability, and very good chemical resistance. Nevertheless, manufacturing PTFE is problematic due to its high melt viscosity (10¹² Pa·s). In practice, it is by now well established that this property presents a serious problem when classical methods, in particular hot pressing and high-temperature extrusion, are used to synthesize dense PTFE materials. In this framework, we use here a new process, namely spark plasma sintering (SPS), to elaborate PTFE samples from micrometric powder particles. It consists in applying an electric current and pressure simultaneously and directly to the sample powder. By controlling the processing parameters of this technique, a series of PTFE samples is easily obtained in remarkably short times, as reported in an earlier work. Our central goal in the present study is to understand how the non-conventional SPS process affects the mechanical properties at room temperature. To this end, a second, commercial series of PTFE samples synthesized by extrusion is investigated. The tensile mechanical properties are found to be superior for the first (SPS) series of samples. However, this trend is not observed in the results obtained from compression testing. The observed macro-behaviors are correlated with physical properties of the two series of samples, such as their crystallinity and density. Upon close examination of these properties, we believe the SPS technique can be seen as a promising way to elaborate polymers of high molecular mass without compromising their mechanical properties.

Keywords: PTFE, extrusion, Spark Plasma Sintering, physical properties, mechanical behavior

Procedia PDF Downloads 272
96 Implementing a Strategy of Reliability Centred Maintenance (RCM) in the Libyan Cement Industry

Authors: Khalid M. Albarkoly, Kenneth S. Park

Abstract:

The substantial development of the construction industry has forced the cement industry, its major supplier, to focus on achieving maximum productivity to meet the growing demand for this material. Statistics indicate that the demand for cement rose from 1.6 billion metric tons (bmt) in 2000 to 4 bmt in 2013. This means that the reliability of a production system needs to be at the highest level achievable through good maintenance. This paper studies the extent to which the implementation of RCM is needed as a strategy for increasing the reliability of production system components, thus ensuring continuous productivity. In a case study of four Libyan cement factories, 80 employees were surveyed and 12 top and middle managers interviewed. It is evident that these factories usually break down more often than once per month, which has led to a decline in productivity; they cannot produce more than 50% of their designed capacity. This results from the poor reliability of their production systems, caused by poor or insufficient maintenance. It has been found that most of the factories’ employees misunderstand maintenance and its importance. The main cause of this problem is the lack of qualified and trained staff; in addition, most employees were found to lack motivation as a result of a lack of management support and interest. In response to these findings, it is suggested that the RCM strategy be implemented in the four factories. The paper shows the importance of developing maintenance strategies through the implementation of RCM in these factories, the purpose being to overcome the problems that reduce the reliability of the production systems. This study could be a useful source of information for academic researchers and for industrial organisations that are still experiencing problems in maintenance practices.

Keywords: Libyan cement industry, reliability centred maintenance, maintenance, production, reliability

Procedia PDF Downloads 353
95 Barriers to Public Innovation in Colombia: Case Study in Central Administrative Region

Authors: Yessenia Parrado, Ana Barbosa, Daniela Mahe, Sebastian Toro, Jhon Garcia

Abstract:

Public innovation has gained strength in recent years in response to the need to find new strategies or mechanisms for interaction between government entities and citizens. Accordingly, the Colombian government has been promoting policies aimed at strengthening innovation as a fundamental aspect of the work of public entities. However, in order to develop the capacities of public servants, and therefore of the institutions and organizations to which they belong, it is necessary to understand the context in which they operate in their daily work. This article compiles the work developed by LAB101, the laboratory of innovation, creativity, and new technologies of the National University of Colombia, for the National Department of Planning. A case study was developed in the central region of Colombia, made up of five departments, through the construction of instruments based on quantitative techniques, combined with qualitative analysis through semi-structured interviews, to understand the perception of possible barriers to innovation and the obstacles that have prevented the acceleration of transformation within public organizations. From the information collected, different analyses were carried out to give a more robust explanation of the results obtained, and a set of categories was established to group the characteristics associated with the difficulties that officials perceive in innovating, later conceived as barriers. Finally, a proposal for an indicator was built to measure the degree of innovation within public entities, so that this metric can be tracked in the future. The main findings of this study show three key components to be strengthened in public entities and organizations: governance, knowledge management, and the promotion of collaborative workspaces.

Keywords: barriers, enablers, management, public innovation

Procedia PDF Downloads 79
94 Performance Comparison of Microcontroller-Based Optimum Controller for Fruit Drying System

Authors: Umar Salisu

Abstract:

This research presents the development of a hot-air tomato drying system. To provide more efficient and continuous temperature control, a microcontroller-based optimum controller was developed. The system is based on a power control principle to achieve smooth power variations depending on a feedback temperature signal from the process. An LM35 temperature sensor and an LM399 differential comparator were used to measure the temperature. The mathematical model of the system was developed, and the optimum controller was designed, simulated, and compared with the transient response of a PID controller. A controlled environment suitable for fruit drying is created within a closed chamber in a three-step process. First, infrared light is used internally to preheat the fruit and speed the removal of its water content. Second, hot air of a specified temperature is blown into the chamber to keep the humidity below a specified level and exhaust the humid air from the chamber. Third, the microcontroller disconnects power to the chamber after the moisture content of the fruit has been reduced to a minimum. Experiments were conducted with 1 kg of fresh tomatoes at three different temperatures (40, 50, and 60 °C) at a constant relative humidity of 30% RH. The results obtained indicate that the system significantly reduces the drying time without affecting the quality of the fruit. In the context of temperature control, the results showed that the response of the optimum controller has zero overshoot, whereas the PID controller response overshoots to about 30% above the set-point. Another performance metric used is the rise time: the optimum controller rose without any delay, while the PID controller was delayed by more than 50 s. It can be argued that the optimum controller's performance is preferable to that of the PID controller, since it does not overshoot and it responds in good time.
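The overshoot and rise-time metrics used in the comparison can be illustrated with a small closed-loop simulation: a first-order thermal plant under an idealized PI controller (integral action is what produces the overshoot). The plant constants and gains below are illustrative assumptions, not the paper's identified model or tuning.

```python
import numpy as np

# Assumed first-order thermal plant: TAU * dT/dt = -T + u, temperatures in °C.
TAU, DT, SETPOINT = 50.0, 0.1, 50.0

def simulate_pi(kp, ki, steps=6000):
    """Closed-loop step response under an idealized PI controller (no actuator limits)."""
    temp, integral, history = 0.0, 0.0, []
    for _ in range(steps):
        error = SETPOINT - temp
        integral += error * DT
        u = kp * error + ki * integral     # control effort (heater drive)
        temp += DT * (-temp + u) / TAU     # Euler step of the plant
        history.append(temp)
    return np.array(history)

def overshoot_pct(resp):
    """Peak excursion above the set-point, as a percentage of the set-point."""
    return max(0.0, (resp.max() - SETPOINT) / SETPOINT * 100.0)

def rise_time_s(resp):
    """Time to first reach 90% of the set-point."""
    return float(np.argmax(resp >= 0.9 * SETPOINT)) * DT

resp = simulate_pi(kp=2.0, ki=0.1)  # underdamped gains: integral action overshoots
```

Evaluating `overshoot_pct` and `rise_time_s` on the simulated responses of the two candidate controllers is exactly the comparison the abstract reports (zero overshoot and no rise delay for the optimum controller versus overshoot and a delayed rise for PID).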

Keywords: drying, microcontroller, optimum controller, PID controller

Procedia PDF Downloads 265
93 X-Ray Diffraction, Microstructure, and Mössbauer Studies of Nanostructured Materials Obtained by High-Energy Ball Milling

Authors: N. Boudinar, A. Djekoun, A. Otmani, B. Bouzabata, J. M. Greneche

Abstract:

High-energy ball milling is a solid-state powder processing technique that allows synthesizing a variety of equilibrium and non-equilibrium alloy phases starting from elemental powders. The advantage of this process technology is that the powder can be produced in large quantities and the processing parameters can be easily controlled; thus, it is a suitable method for commercial applications. It can also be used to produce amorphous and nanocrystalline materials in commercially relevant amounts and is amenable to the production of a variety of alloy compositions. Mechanical alloying (high-energy ball milling) provides an inter-dispersion of elements through repeated cold welding and fracture of free powder particles; the grain size decreases to the nanometric scale and the elements mix together. Progressively, the concentration gradients disappear and eventually the elements are mixed at the atomic scale. The end products depend on many parameters, such as the milling conditions and the thermodynamic properties of the milled system. Here, the mechanical alloying technique has been used to prepare nanocrystalline Fe-50 and Fe-64 wt.% Ni alloys from powder mixtures. Scanning electron microscopy (SEM) with energy-dispersive X-ray analysis and Mössbauer spectroscopy were used to study the mixing at the nanometric scale. Mössbauer spectroscopy confirmed the ferromagnetic ordering and was used to calculate the hyperfine field distribution. The Mössbauer spectrum for both alloys shows the existence of a ferromagnetic phase attributed to the γ-Fe-Ni solid solution.

Keywords: nanocrystalline, mechanical alloying, X-ray diffraction, Mössbauer spectroscopy, phase transformations

Procedia PDF Downloads 404
92 Environmental Protection by Optimum Utilization of Car Air Conditioners

Authors: Sanchita Abrol, Kunal Rana, Ankit Dhir, S. K. Gupta

Abstract:

According to N.R.E.L.'s findings, 7 billion gallons of petrol are used annually to run the air conditioners of passenger vehicles (nearly 6% of total fuel consumption in the USA). Beyond fuel use, the Environmental Protection Agency has reported that refrigerant leaks from automotive air conditioning units add a further 50 million metric tons of carbon emissions to the atmosphere each year. The objective of our project is to address this vital issue by carefully modifying the interior of a car, thereby increasing its mileage and the efficiency of its engine. This would consequently decrease tail emissions and the pollution generated, along with improving car performance. An automatic mechanism, deployed between the front and the rear seats and consisting of a transparent thermally insulating sheet/curtain, would roll down at the driver's request in order to optimize the volume for effective air conditioning when travelling alone or with one passenger. The reduction in effective volume will yield favourable results. Even on a mildly sunny day, the temperature inside a parked car can quickly spike to life-threatening levels. For a stationary parked car, insulation would be provided beneath its metal body so as to reduce the rate of heat transfer. As a result, the car would not require as much air conditioning to maintain a lower temperature, which would provide similar benefits. The authors carried out feasibility studies and system engineering, and obtained preliminary theoretical and experimental results confirming the idea and motivating the fabrication and testing of the actual product.

Keywords: automation, car, cooling insulating curtains, heat optimization, insulation, reduction in tail emission, mileage

Procedia PDF Downloads 247
91 Evidence Theory Enabled Quickest Change Detection Using Big Time-Series Data from Internet of Things

Authors: Hossein Jafari, Xiangfang Li, Lijun Qian, Alexander Aved, Timothy Kroecker

Abstract:

Traditionally in sensor networks, and more recently in the Internet of Things, numerous heterogeneous sensors are deployed in a distributed manner to monitor a phenomenon that can often be modelled by an underlying stochastic process. The big time-series data collected by the sensors must be analyzed to detect a change in the stochastic process as quickly as possible with a tolerable false alarm rate. However, sensors may have different accuracy and sensitivity ranges, and they decay over time. As a result, the big time-series data collected by the sensors will contain uncertainties, and sometimes the readings conflict. In this study, we present a framework that exploits the capabilities of Evidence Theory (a.k.a. the Dempster-Shafer and Dezert-Smarandache Theories) for representing and managing uncertainty and conflict, in order to speed up change detection and to deal effectively with complementary hypotheses. Specifically, the Kullback-Leibler divergence is used as the similarity metric to calculate the distances between the estimated current distribution and the pre- and post-change distributions. Mass functions are then calculated, and the related combination rules are applied to combine the mass values across all sensors. Furthermore, we applied the method to estimate the minimum number of sensors that need to be combined, so that computational efficiency can be improved. A cumulative sum (CUSUM) test is then applied to the ratio of pignistic probabilities to detect and declare the change for decision-making purposes. Simulation results using both synthetic data and real data from an experimental setup demonstrate the effectiveness of the presented schemes.
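The two building blocks named above, a KL-divergence distance to the pre- and post-change models and a CUSUM test on the likelihood ratio, can be sketched for a single sensor. The stream below is synthetic and all parameters (means, variance, threshold) are assumptions for illustration, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic single-sensor stream: the mean shifts from 0 to 1 at sample 200
# while the variance stays at 1 (assumed change model).
pre_mu, post_mu, sigma = 0.0, 1.0, 1.0
x = np.concatenate([rng.normal(pre_mu, sigma, 200),
                    rng.normal(post_mu, sigma, 100)])

def gaussian_kl(mu_a, mu_b, sd):
    """KL divergence between equal-variance Gaussians (the similarity metric)."""
    return (mu_a - mu_b) ** 2 / (2 * sd ** 2)

# Distance of a sliding-window estimate to the pre- and post-change models.
window_mu = np.convolve(x, np.ones(30) / 30, mode="valid")
d_pre = gaussian_kl(window_mu, pre_mu, sigma)
d_post = gaussian_kl(window_mu, post_mu, sigma)

# CUSUM on the log-likelihood ratio (post-change vs. pre-change model).
llr = (x * (post_mu - pre_mu) - 0.5 * (post_mu ** 2 - pre_mu ** 2)) / sigma ** 2
stat, alarm, h = 0.0, None, 8.0        # h: detection threshold (assumed)
for t, step in enumerate(llr):
    stat = max(0.0, stat + step)
    if stat > h:
        alarm = t
        break
print("change declared at sample", alarm)
print("last window closer to post-change model:", d_post[-1] < d_pre[-1])
```

The alarm fires shortly after the true change point at sample 200; the paper's framework additionally fuses such evidence across many sensors via mass functions before the CUSUM decision.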

Keywords: CUSUM, evidence theory, KL divergence, quickest change detection, time series data

Procedia PDF Downloads 300
90 An In-Situ Integrated Micromachining System for Intricate Micro-Parts Machining

Authors: Shun-Tong Chen, Wei-Ping Huang, Hong-Ye Yang, Ming-Chieh Yeh, Chih-Wei Du

Abstract:

This study presents a novel, versatile, high-precision integrated micromachining system that combines contact and non-contact micromachining techniques to machine intricate micro-parts precisely. Two broad methods of microfabrication, (1) volume additive (micro co-deposition) and (2) volume subtractive (nanometric flycutting, ultrafine w-EDM (wire electrical discharge machining), and micro honing), are integrated in the developed micromachining system, and their effectiveness is verified. A multidirectional headstock that supports various machining orientations is designed to evaluate the feasibility of multifunctional micromachining. An exchangeable working tank that allows for various machining mechanisms is also incorporated into the system. Hence, the micro tool and workpiece need not be unloaded or repositioned until all the planned tasks have been completed. Using the designed servo rotary mechanism, a nanometric flycutting approach with a concentric rotary accuracy of 5 nm is constructed and used with the system to machine a diffraction-grating element with a nanometric-scale V-groove array. To improve the wear resistance of the micro tool, the micro co-deposition function is used to apply a micro-abrasive coating by an electrochemical method. The construction of the ultrafine w-EDM facilitates the fabrication of micro slots with a width of less than 20 µm on a hardened tool. The hardened tool can thus be employed as a micro honing tool to hone a micro hole with an internal diameter of 200 µm in SKD-11 mold steel. Experimental results prove that intricate micro-parts can be manufactured in situ with high precision by the developed integrated micromachining system.

Keywords: integrated micromachining system, in-situ micromachining, nanometric flycutting, ultrafine w-EDM, micro honing

Procedia PDF Downloads 378
89 Potassium-Phosphorus-Nitrogen Detection and Spectral Segmentation Analysis Using Polarized Hyperspectral Imagery and Machine Learning

Authors: Nicholas V. Scott, Jack McCarthy

Abstract:

Military, law enforcement, and counterterrorism organizations are often tasked with target detection and image characterization of scenes containing explosive materials in environments where light scattering intensity is high. Mitigating this photonic noise using classical digital filtering and signal processing can be difficult, partially due to the lack of robust image processing methods for photonic noise removal, which strongly influences high-resolution target detection and machine learning-based pattern recognition. Such analysis is crucial to the delivery of reliable intelligence. Polarization filters are a possible method for reducing ambient glare: by allowing only certain modes of the electromagnetic field to be captured, they provide strong scene contrast. An experiment was carried out utilizing a polarization lens attached to a hyperspectral imaging camera for the purpose of exploring the degree to which an imaged polarized scene of a potassium, phosphorus, and nitrogen mixture allows for improved target detection and image segmentation. Preliminary imagery results based on the application of machine learning algorithms, including competitive leaky learning and distance metric analysis, to polarized hyperspectral imagery suggest that polarization filters provide a slight advantage in image segmentation. The results of this work have implications for understanding the presence of explosive material in dry, desert areas where reflective glare is a significant impediment to scene characterization.
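The distance-metric side of the segmentation step can be sketched with plain k-means on a synthetic hyperspectral cube. Band count, material signatures, and noise level below are all assumptions for illustration, not the experiment's imagery:

```python
import numpy as np

rng = np.random.default_rng(2)
bands, k, n_pix = 20, 3, 1600
sigs = rng.uniform(0.2, 0.9, size=(k, bands))        # three synthetic material spectra
labels_true = rng.integers(0, k, size=n_pix)
cube = sigs[labels_true] + 0.02 * rng.normal(size=(n_pix, bands))

# Seed each centroid with one pixel from each true segment for a stable demo.
centroids = np.array([cube[labels_true == j][0] for j in range(k)])
for _ in range(10):                                  # plain k-means iterations
    d = np.linalg.norm(cube[:, None, :] - centroids[None], axis=2)
    assign = d.argmin(axis=1)                        # nearest-centroid assignment
    centroids = np.array([cube[assign == j].mean(axis=0) for j in range(k)])
print("segments found:", len(np.unique(assign)))
```

With well-separated signatures the Euclidean nearest-centroid rule recovers the three segments; the paper's point is that polarization filtering makes real scenes behave more like this clean case.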

Keywords: explosive material, hyperspectral imagery, image segmentation, machine learning, polarization

Procedia PDF Downloads 110
88 The Potential of Sown Pastures as Feedstock for Biofuels in Brazil

Authors: Danilo G. De Quadros

Abstract:

Biofuels are a priority in the renewable energy agenda. The utilization of tropical grasses for ethanol production is a real opportunity for Brazil to attain world leadership in biofuels production, because the country has 100 million hectares of sown pastures, which represent 20% of all land and 80% of agricultural areas. At present, tropical grasses are used essentially to raise livestock. The results obtained in this research could bring tremendous advances not only to national technology and the economy but also to social and environmental conditions. Thus, the objective of this work was to estimate, through well-established international models, the potential of biofuels production using sown tropical pastures as feedstocks and to compare the results with sugarcane ethanol, considering the state of the art of conversion technology and its advantages and limiting factors. Data from the national and international literature on forage yield and biochemical conversion yield were used. Several scenarios were studied to evaluate potential advantages and limitations of cellulosic ethanol production, from the appeal of non-food feedstocks through conversion strategies, harvest, densification, logistics, environmental impacts (carbon and water cycles, nutrient recycling, and biodiversity), and social aspects. If Brazil used only 1% of its sown pastures for ethanol production via the biochemical pathway, with an average dry matter yield of 15 metric tons per hectare per year (yields of 40 tons have been reported), the annual output would be 721 billion liters, which represents 10 times more than the sugarcane ethanol projected by the Government for 2030. However, more research is necessary to take these results to commercial scale at competitive costs, considering the many strategies and methods applied in ethanol production from cellulosic feedstocks.

Keywords: biofuels, biochemical pathway, cellulosic ethanol, sustainability

Procedia PDF Downloads 230
87 The Aesthetics of Time in Thus Spoke Zarathustra: A Reappraisal of the Eternal Recurrence of the Same

Authors: Melanie Tang

Abstract:

According to Nietzsche, the eternal recurrence is his most important idea; it is also perhaps his most cryptic and difficult to interpret. Early readings considered it a cosmological hypothesis about the cyclic nature of time. However, following Nehamas’s ‘Life as Literature’ (1985), it has become a widespread interpretation that the eternal recurrence never really had any theoretical dimension and is not actually a philosophy of time, but a practical thought experiment intended to measure the extent to which we have mastered and perfected our lives. This paper endeavours to challenge this line of thought, which is becoming scholarly consensus, and to carry out a more complex analysis of the eternal recurrence as it is presented in Thus Spoke Zarathustra. In its wider scope, this research proposes that Thus Spoke Zarathustra — as opposed to The Birth of Tragedy — be taken as the primary source for a study of Nietzsche’s aesthetics, due to its more intrinsic aesthetic qualities and expressive devices. The eternal recurrence is the central philosophy of a work that communicates its ideas in unprecedentedly experimental and aesthetic terms, and a more in-depth understanding of why Nietzsche chooses to present his conception of time in aesthetic terms is warranted. Through hermeneutical analysis of Thus Spoke Zarathustra and engagement with secondary sources such as those by Nehamas, Karl Löwith, and Jill Marsden, the present analysis challenges the ethics of self-perfection upon which current interpretations of the recurrence are based, as well as their reliance upon a linear conception of time. Instead, it finds the recurrence to be a cyclic interplay between the self and the world, rather than a metric pertaining solely to the self. 
In this interpretation, time is found to be composed of an intertemporal rather than linear multitude of will to power, which structures itself through tensional cycles into an experience of circular time that can be seen to have aesthetic dimensions. In putting forth this understanding of the eternal recurrence, this research hopes to reopen debate on this key concept in the field of Nietzsche studies.

Keywords: Nietzsche, eternal recurrence, Zarathustra, aesthetics, time

Procedia PDF Downloads 113
86 Electrochemistry and Performance of Bryophyllum pinnatum Leaf (BPL) Electrochemical Cell

Authors: M. A. Mamun, M. I. Khan, M. H. Sarker, K. A. Khan, M. Shajahan

Abstract:

The study was carried out to investigate an innovative invention, the Pathor Kuchi Leaf (PKL) cell, which is fueled with the sap of the widely available plant Bryophyllum pinnatum as an energy source for use in a PKL battery to generate electricity. This battery, a primary source of electricity with a shelf life several orders of magnitude longer than that of the traditional galvanic cell, is still under investigation. In this regard, we conducted experiments using various instruments, including an Atomic Absorption Spectrophotometer (AAS), an Ultra-Violet Visible spectrophotometer (UV-Vis), a pH meter, and an Ampere-Volt-Ohm Meter (AVO Meter). The AAS, UV-Vis, and pH-metric analysis data showed that the potential and current are produced as the Zn electrode itself acts as the reductant while Cu2+ and H+ ions behave as the oxidants. The significant influence of a secondary salt on current and potential is traced to the dissociation of weak organic acids in PKL juice and the subsequent enrichment of the reactant ions through secondary salt effects. Moreover, the liquid junction potential was minimized by the opposite transference of organic acid anions and H+ ions, owing to their dissimilar ionic mobilities. Furthermore, the large value of the equilibrium constant (K) implies a large change in Gibbs free energy (∆G) and hence a greater electromotive force driving electron transfer in the forward electrochemical reaction; this coincides with the rapid loss of weight of the zinc plate and reveals the additional electrical work done in the presence of PKL sap. This easily fabricated, high-performance PKL battery shows excellent promise for off-peak use across the countryside.
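The step from a large equilibrium constant to a large free-energy change and a greater electromotive force follows the standard thermodynamic identities (textbook relations, not results specific to the PKL cell):

```latex
\Delta G^{\circ} = -RT\ln K = -nF E^{\circ}_{\mathrm{cell}},
\qquad
E = E^{\circ}_{\mathrm{cell}} - \frac{RT}{nF}\ln Q
```

A larger K thus means a more negative \(\Delta G^{\circ}\) and a larger standard cell potential, while the Nernst equation shows how salt-enhanced reactant ion concentrations (a smaller reaction quotient Q) raise the operating voltage.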

Keywords: Atomic Absorption Spectrophotometer (AAS), Bryophyllum pinnatum leaf (BPL), electricity, electrochemistry, organic acids

Procedia PDF Downloads 295
85 The Nexus between Downstream Supply Chain Losses and Food Security in Nigeria: Empirical Evidence from the Yam Industry

Authors: Alban Igwe, Ijeoma Kalu, Alloy Ezirim

Abstract:

Food insecurity is a global problem, and the search for food security has assumed centre stage in the global development agenda; the United Nations currently places zero hunger as goal number two in its Sustainable Development Goals. Nigeria currently ranks 107th out of 113 countries in the Global Food Security Index (GFSI), a metric that measures a country's ability to furnish its citizens with food and nutrients for healthy living. Paradoxically, Nigeria is a global leader in food production, ranking 1st in yam (over 70% of global output), beans (over 41% of global output), cassava (20% of global output) and shea nuts, where it commands 53% of global output. Furthermore, it ranks 2nd in millet, sweet potatoes, and cashew nuts, and it is Africa's largest producer of rice. It is therefore apparent that Nigeria's food insecurity woes must relate to factors other than food production. We investigated the nexus between food security and downstream supply chain losses in the yam industry with secondary data from the Food and Agriculture Organization (FAOSTAT) and the National Bureau of Statistics for the decade 2012-2021. Multiple regression techniques were used in analyzing the data, and findings reveal that downstream losses have a strong positive correlation with food insecurity (r = .763*), with 58.3% of the variation in food security explained by downstream supply chain food losses. The study found that yam supply chain losses within the period under review averaged 50.6%, suggesting that downstream supply chain losses are the drainpipe and major source of food insecurity in Nigeria. The study therefore concluded that there is a significant relationship between downstream supply chain losses and food insecurity, and it recommended the establishment of food supply chain structures and policies to enhance food security in Nigeria.
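The two headline statistics are mutually consistent, since r² = .763² ≈ .583, i.e. 58.3% of variation explained. A minimal sketch of that calculation, using hypothetical loss and insecurity figures (the study's FAOSTAT/NBS series is not reproduced in the abstract):

```python
import numpy as np

# Hypothetical paired annual observations for 2012-2021 (illustrative only).
losses = np.array([42.0, 45.5, 44.1, 49.0, 50.2, 48.3, 52.8, 54.0, 55.6, 57.9])
insecurity = np.array([30.1, 31.0, 33.4, 32.2, 35.9, 36.5, 35.0, 39.2, 40.8, 42.5])

r = np.corrcoef(losses, insecurity)[0, 1]             # Pearson correlation
r_squared = r ** 2                                    # share of variation explained
slope, intercept = np.polyfit(losses, insecurity, 1)  # simple OLS fit
print("r:", round(r, 3), "r^2:", round(r_squared, 3), "slope:", round(slope, 2))
```

For the study's reported r = .763, the same squaring step yields the 58.3% figure quoted above.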

Keywords: food security, downstream supply chain losses, yam, Nigeria, supply chain

Procedia PDF Downloads 55
84 A Comprehensive Study and Evaluation on Image Fashion Features Extraction

Authors: Yuanchao Sang, Zhihao Gong, Longsheng Chen, Long Chen

Abstract:

Clothing fashion represents a human's aesthetic appreciation of everyday outfits and appetite for fashion, and it reflects developments in social status, humanity, and economics. However, modelling fashion by machine is extremely challenging because fashion is too abstract to be efficiently described by machines; even human beings can hardly reach a consensus about fashion. In this paper, we are dedicated to answering a fundamental fashion-related question: which image feature best describes clothing fashion? To address this issue, we designed and evaluated various image features, ranging from traditional low-level hand-crafted features, through mid-level style-awareness features, to various currently popular deep neural network-based features, which have shown state-of-the-art performance in various vision tasks. In summary, we tested the following nine feature representations: color, texture, shape, style, convolutional neural networks (CNNs), CNNs with distance metric learning (CNNs&DML), AutoEncoder, CNNs with multiple-layer combination (CNNs&MLC), and CNNs with dynamic feature clustering (CNNs&DFC). Finally, we validated the performance of these features on two publicly available datasets. Quantitative and qualitative experimental results on both intra-domain and inter-domain fashion clothing image retrieval showed that deep learning-based feature representations far outperform traditional hand-crafted ones. Additionally, among all the deep learning-based methods, CNNs with explicit feature clustering perform best, which shows that feature clustering is essential for discriminative fashion feature representation.
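At its core, the retrieval evaluation described above ranks gallery features by distance to a query feature. A minimal sketch with random stand-in vectors (the dimensionality and values are assumptions, not the paper's CNN features):

```python
import numpy as np

rng = np.random.default_rng(1)
gallery = rng.normal(size=(100, 128))              # 100 images, 128-D features (assumed)
query = gallery[7] + 0.05 * rng.normal(size=128)   # near-duplicate of gallery item 7

def cosine_dist(a, b):
    """Cosine distance of one query vector to every row of a gallery matrix."""
    return 1.0 - (a @ b.T) / (np.linalg.norm(a) * np.linalg.norm(b, axis=-1))

ranking = np.argsort(cosine_dist(query, gallery))  # ascending distance = best first
print("top match:", ranking[0])
```

The feature representations compared in the paper differ only in how the vectors are produced; the ranking step, and hence the retrieval metrics, stay the same.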

Keywords: convolutional neural network, feature representation, image processing, machine modelling

Procedia PDF Downloads 109
83 Dynamic Determination of Spare Engine Requirements for Air Fighters Integrating Feedback of Operational Information

Authors: Tae Bo Jeon

Abstract:

The Korean Air Force is undertaking a major project to replace hundreds of aging fighters such as the F-4, F-5, and KF-16. The task is to develop and produce domestic fighters, each equipped with two complete engines. A large number of engines, however, will be purchased from a foreign engine maker. In addition to the fighters themselves, securing the proper number of spare engines plays a significant role in maintaining combat readiness and in effectively managing the national defense budget, owing to their high cost. In this paper, we present a model that dynamically updates spare engine requirements. Currently, the military administration purchases all the fighters, engines, and spare engines at the acquisition stage and has no additional procurement process during the 30-40 year life cycle. Under the assumption that a procurement procedure for the operational stage is established, our model starts from an initial estimate of spare engine requirements based on limited information. The model then performs military missions and repair/maintenance work as necessary. During operation, detailed field information (aircraft repair and test, engine repair, planned maintenance, administration time, the transportation pipeline between base, field, and depot, etc.) should be considered for actual engine requirements. At the end of each year, the performance measure is recorded, and the model proceeds to the next year if the measure exceeds the set threshold; otherwise, one or more additional engines are bought and added to the current system. We repeat the process over the life cycle period and compare the results. The proposed model is seen to generate far better results, adding spare engines appropriately and thus avoiding possible undesirable situations. Our model may well be applied to future air force operations.
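The yearly update loop (simulate, record a performance measure, add an engine when it falls below threshold) can be caricatured in a few lines. Everything below, fleet size, failure range, repair delay, and threshold, is an assumed toy parameterization, not the paper's model:

```python
import random

random.seed(42)
FLEET, LIFE_YEARS, THRESHOLD = 40, 30, 0.97   # fighters, life cycle, availability target

def yearly_availability(spares, repair_days=60):
    """Crude availability proxy: engine-down days not covered by spares."""
    failures = random.randint(8, 16)                      # engine removals this year
    down_days = max(0, failures - spares) * repair_days   # uncovered downtime
    return 1.0 - down_days / (FLEET * 365)

spares, history = 2, []
for year in range(LIFE_YEARS):
    a = yearly_availability(spares)
    history.append(a)
    if a < THRESHOLD:
        spares += 1            # procure one additional spare for next year
print("final spare count:", spares)
print("final availability:", round(history[-1], 3))
```

The spare count ratchets up only while the availability target is missed, which is the feedback behaviour the proposed model exploits in place of a one-shot acquisition-stage estimate.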

Keywords: DMSMS, operational availability, METRIC, PRS

Procedia PDF Downloads 145
82 An Estimation of Rice Output Supply Response in Sierra Leone: A Nerlovian Model Approach

Authors: Alhaji M. H. Conteh, Xiangbin Yan, Issa Fofana, Brima Gegbe, Tamba I. Isaac

Abstract:

Rice is Sierra Leone’s staple food, and the nation imports over 120,000 metric tons annually due to a shortfall in domestic cultivation. The insufficient level of cultivation is caused by many problems and has led to an ever-widening gap between supply of and demand for the crop within the country. Consequently, this has forced the government to spend huge sums on importing a grain that could otherwise have been cultivated domestically at lower cost. Hence, this research attempts to explore the response of rice supply with respect to its demand in Sierra Leone within the period 1980-2010. The Nerlovian adjustment model was applied to the Sierra Leone rice data set for the period 1980-2010. The estimated trend equations revealed that time had a significant effect on the output, productivity (yield), and area (acreage) of rice within the period, generally at the 1% level of significance. The results showed that almost all of the growth in output was attributable to increases in the area cultivated to the crop. The time trend variable included to capture government policy intervention showed an insignificant effect on all the variables considered in this research. Both the short-run and long-run price responses were inelastic, since all elasticity values were less than one. Given these findings, immediate actions that will lead to productivity growth in rice cultivation are required. To achieve this, the responsible agencies should provide extension service schemes to farmers and motivate them to adopt modern rice varieties and technologies in their rice cultivation ventures.
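The Nerlovian framework behind these elasticities regresses current acreage on lagged price and lagged acreage, with the long-run response obtained by scaling the short-run coefficient. A sketch on synthetic data (all coefficients below are assumptions for illustration, not the study's estimates):

```python
import numpy as np

# Partial-adjustment model:  A_t = a + b*P_{t-1} + g*A_{t-1} + e_t
# short-run price response = b; long-run response = b / (1 - g).
rng = np.random.default_rng(3)
n, a_true, b_true, g_true = 200, 5.0, 0.3, 0.6
P = rng.uniform(1.0, 3.0, n)                 # synthetic price series
A = np.empty(n)
A[0] = 20.0
for t in range(1, n):
    A[t] = a_true + b_true * P[t - 1] + g_true * A[t - 1] + rng.normal(0, 0.1)

# OLS estimation of the adjustment equation.
X = np.column_stack([np.ones(n - 1), P[:-1], A[:-1]])
a_hat, b_hat, g_hat = np.linalg.lstsq(X, A[1:], rcond=None)[0]
print("short-run response:", round(b_hat, 2))
print("long-run response:", round(b_hat / (1 - g_hat), 2))
```

With these assumed coefficients, both the short-run response (about 0.3) and the long-run response (about 0.3/0.4 = 0.75) are below one, the inelastic pattern the abstract reports for Sierra Leone rice.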

Keywords: Nerlovian adjustment model, price elasticities, Sierra Leone, trend equations

Procedia PDF Downloads 204
81 Digitalized Cargo Coordination to Eliminate Emissions in the Shipping Ecosystem: A System Dynamical Approach

Authors: Henry Schwartz, Bogdan Iancu, Magnus Gustafsson, Johan Lilius

Abstract:

The shipping sector generates significant amounts of carbon emissions annually. The excess carbon dioxide is harmful to both the environment and society, and partly for that reason there is acute interest in decreasing the volume of anthropogenic carbon dioxide emissions in shipping. The existing cargo-carrying capacity could be used to the fullest, and the share of time spent in actual transportation operations could be increased, if the whole transportation and logistics chain were optimized with the aid of information sharing through a centralized marketplace and an information-sharing platform. The outcome of this change would be a decrease in the carbon dioxide emissions produced per metric ton of cargo transported by a vessel. Cargo coordination is a platform under development that matches the need for waterborne transportation services with the ships in operation at a given moment in time. In this research, the transition towards adopting cargo coordination is modelled with system dynamics. The model encompasses the complex supply-demand relationships of ship operators and cargo owners. The scenarios built predict the pace at which different stakeholders start using the digitalized platform and, by doing so, reduce the amount of annual CO2 emissions generated. To improve the reliability of the results, various sensitivity analyses are conducted, considering both the pace of transition and the overall impact on the environment (carbon dioxide emissions per amount of cargo transported). The results of the study can be used to support investors and politicians in making decisions towards more environmentally sustainable solutions. In addition, the model provides concepts and ideas for a wider discussion of paths towards carbon-neutral transportation.
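The stock-and-flow logic of such a transition model can be sketched in a few lines: an adoption stock fed by a Bass-style diffusion flow, coupled to a per-ton emissions factor. All parameters here are assumptions for illustration, not calibrated values from the study:

```python
import numpy as np

N, years = 1000.0, 20              # ship operators in the ecosystem (assumed)
p, q = 0.02, 0.35                  # innovation / imitation coefficients (assumed)
base_emis, cut = 10.0, 0.25        # emissions per ton of cargo; cut for adopters

adopters, annual_emissions = 0.0, []
for _ in range(years):
    flow = (p + q * adopters / N) * (N - adopters)   # adoption flow into the stock
    adopters = min(N, adopters + flow)
    mix = adopters / N                               # share of traffic coordinated
    annual_emissions.append(base_emis * (1 - cut * mix))
print("final adoption share:", round(adopters / N, 2))
print("per-ton emissions, year 1 vs year 20:",
      round(annual_emissions[0], 2), round(annual_emissions[-1], 2))
```

The S-shaped adoption curve this produces is what the sensitivity analyses then stress-test: changing p and q shifts how quickly per-ton emissions fall toward their coordinated-fleet floor.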

Keywords: carbon dioxide emissions, energy efficiency, sustainable transportation, system dynamics

Procedia PDF Downloads 114