Search results for: Waste Management systems
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7093

703 The Index of Sustainable Functionality: An Application for Measuring Sustainability

Authors: G.T. Cirella, L. Tao

Abstract:

The index of sustainable functionality (ISF) is an adaptive, multi-criteria technique for measuring sustainability that can be transposed to many regions throughout the world. An ISF application to the Southern Regional Organisation of Councils (SouthROC) in South East Queensland (SEQ), the fastest growing region in Australia, indicated an increase in the level of functionality of over 10%, from 58.0% to 68.3%, over a 25-year period. The ISF of SouthROC was derived from an expert panel based approach. The overall results correspond to an intermediate level of functionality, reflecting concerns about economic progress and a lack of social awareness. Within the region, the index establishes a solid basis for future testing by way of measured changes and developing trends. As a management tool, the ISF record therefore supports both regional sustainability practice and decision making. This research analyses sustainability adaptively, an approach that is largely absent from the academic literature and its associated experimentation; this gap is where future sustainability research can grow and prove useful in rapidly growing regions. The intention of this research is to help further develop the notion of index-based quantitative sustainability.

Keywords: Environmental engineering, index of sustainable functionality, sustainability indicators, sustainable development.

702 Information Retrieval: A Comparative Study of Textual Indexing Using an Oriented Object Database (db4o) and the Inverted File

Authors: Mohammed Erritali

Abstract:

The growth in the volume of text data such as books and articles held in libraries over the centuries has made it necessary to establish effective mechanisms for locating them. Early techniques such as abstracting, indexing and the use of classification categories marked the birth of a new field of research called "Information Retrieval". Information Retrieval (IR) can be defined as the task of designing models and systems whose purpose is to facilitate access to a set of documents in electronic form (a corpus), allowing a user to find the documents relevant to him, that is, the content matching his information needs. Most information retrieval models use a specific data structure to index a corpus, called the "inverted file" or "inverted index". The inverted file collects information on every term across the corpus documents, specifying the identifiers of the documents that contain the term in question, the frequency of the term in each document of the corpus, the positions of its occurrences, and so on. In this paper we use an object-oriented database (db4o) instead of the inverted file; that is, instead of looking a term up in the inverted file, we look it up in the db4o database. The purpose of this work is a comparative study to determine whether object-oriented databases can compete with the inverted index in terms of access speed and resource consumption on a large volume of data.
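
As a point of reference for the comparison the abstract describes, the sketch below shows what a minimal inverted file looks like in Python: a term maps to the documents containing it, with per-document frequencies and positions. It is an illustrative toy, not the authors' implementation, and the sample corpus is invented.

```python
from collections import defaultdict

def build_inverted_index(corpus):
    """corpus: dict mapping document id -> text. Returns term -> {doc_id: [positions]}."""
    index = defaultdict(dict)
    for doc_id, text in corpus.items():
        for pos, term in enumerate(text.lower().split()):
            index[term].setdefault(doc_id, []).append(pos)
    return index

def search(index, term):
    """Return (doc_id, frequency, positions) for every document containing the term."""
    postings = index.get(term.lower(), {})
    return [(doc_id, len(positions), positions) for doc_id, positions in postings.items()]

docs = {1: "information retrieval with an inverted file",
        2: "an object database stores objects not files"}
idx = build_inverted_index(docs)
print(search(idx, "inverted"))   # [(1, 1, [4])]
```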

Keywords: Information retrieval, indexing, object-oriented database (db4o), inverted file.

701 Using Genetic Algorithms to Outline Crop Rotations and a Cropping-System Model

Authors: Nicolae Bold, Daniel Nijloveanu

Abstract:

Cropping systems are a method used by farmers. The method is environmentally friendly, protecting natural resources (soil, water, air, nutrients) while increasing production, by taking crop particularities into account. Combining this powerful method with genetic algorithms makes it possible to generate sequences of crops that form a rotation. This type of algorithm has proved efficient in solving optimization problems, and its polynomial complexity allows it to be applied to more difficult and varied problems. In our case, the optimization consists in finding the most profitable rotation of crops. One of the expected results is to optimize resource usage in order to minimize costs and maximize profit. To achieve these goals, a genetic algorithm was designed. The algorithm finds several optimized cropping-system possibilities that yield the highest profit and thus minimize costs. It uses genetic operators (mutation, crossover) and structures (genes, chromosomes): a cropping-system possibility is treated as a chromosome, and a crop within the rotation is a gene within that chromosome. Results on the efficiency of this method are presented in a dedicated section. Implementing this method would benefit farmers by giving them hints and helping them use their resources efficiently.
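
To make the chromosome/gene encoding concrete, here is a minimal Python sketch of such a genetic algorithm: a rotation is a chromosome, each yearly crop is a gene, and fitness is an illustrative profit function with a penalty for repeating a crop in consecutive years. The crop list, profits and penalty are hypothetical, not values from the paper.

```python
import random

CROPS = ["wheat", "maize", "soybean", "sunflower"]                       # hypothetical crop set
PROFIT = {"wheat": 300, "maize": 450, "soybean": 400, "sunflower": 350}  # per-hectare, illustrative

def fitness(rotation):
    # Illustrative objective: total profit, penalising the same crop in consecutive years.
    profit = sum(PROFIT[c] for c in rotation)
    penalty = sum(200 for a, b in zip(rotation, rotation[1:]) if a == b)
    return profit - penalty

def crossover(a, b):
    cut = random.randint(1, len(a) - 1)
    return a[:cut] + b[cut:]

def mutate(rotation, rate=0.1):
    return [random.choice(CROPS) if random.random() < rate else c for c in rotation]

def evolve(pop_size=30, years=5, generations=50):
    population = [[random.choice(CROPS) for _ in range(years)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

print(evolve())   # e.g. a 5-year rotation with no consecutive repeats
```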

Keywords: Genetic algorithm, chromosomes, genes, cropping, agriculture.

700 Development of a Software about Calculating the Production Parameters in Knitted Garment Plants

Authors: Ender Bulgun, Arzu Vuruskan

Abstract:

Apparel product development is an important stage in the life cycle of a product, and shortening it helps to reduce the cost of a garment. The aim of this study is to examine the production parameters in knitwear apparel companies by defining unit costs and to develop software that calculates the unit costs of garments and produces cost estimates. With the help of a questionnaire, the unit-cost estimating and cost-calculation systems of different companies were analyzed. The questionnaire investigated the importance of the cost estimating process for apparel companies and their expectations of a new cost estimating program. According to the results, the majority of the participating companies use manual cost calculation methods or simple Microsoft Excel spreadsheets to make cost estimates. Furthermore, many companies have difficulties archiving cost data for future use; as a solution, the sub-units of garment cost, namely fabric, accessory and labor costs, are analyzed and added to the program's database before a cost estimate is made. Another feature of the cost estimating tool prepared in this study is that the program consists of two main units, one handling the product specification and the other the cost calculation. The program is implemented as a web-based application so that the supplier, the manufacturer and the customer can communicate through the same platform.

Keywords: Apparel, cost estimating, design archive.

699 Modelling Phytoremediation Rates of Aquatic Macrophytes in Aquaculture Effluent

Authors: E. A. Kiridi, A. O. Ogunlela

Abstract:

Pollutants from aquacultural practices constitute environmental problems, and phytoremediation could offer a cheaper, environmentally sustainable alternative, since equipment for advanced treatment of fish tank effluent is expensive to import, install, operate and maintain, especially in developing countries. The main objective of this research was therefore to develop a mathematical model of phytoremediation by aquatic plants in aquaculture wastewater. Other objectives were to evaluate the effect of retention time on phytoremediation rates using the model and to measure the nutrient level of the aquaculture effluent and the phytoremediation rates of three aquatic macrophytes, namely water hyacinth (Eichhornia crassipes), water lettuce (Pistia stratiotes) and morning glory (Ipomoea asarifolia). A completely randomized experimental design was used in the study. Approximately 100 g of each macrophyte was introduced into the hydroponic units and phytoremediation indices were monitored at 8 intervals from the first to the 28th day. The water quality parameters measured were pH and electrical conductivity (EC), together with the concentrations of ammonium-nitrogen (NH4+-N), nitrite-nitrogen (NO2--N), nitrate-nitrogen (NO3--N) and phosphate-phosphorus (PO43--P), and biomass. The biomass produced by water hyacinth was 438.2 g, 600.7 g, 688.2 g and 725.7 g at four 7-day intervals. The corresponding values for water lettuce were 361.2 g, 498.7 g, 561.2 g and 623.7 g, and for morning glory 417.0 g, 567.0 g, 642.0 g and 679.5 g. The coefficient of determination was greater than 80% for EC, TDS, NO2--N and NO3--N, and 70% for NH4+-N, using any of the macrophytes, and the predicted values were within the 95% confidence interval of the measured values. The model is therefore valuable in the design and operation of phytoremediation systems for aquaculture effluent.

Keywords: Phytoremediation, macrophytes, hydroponic unit, aquaculture effluent, mathematical model.

698 A Novel VLSI Architecture for Image Compression Model Using Low power Discrete Cosine Transform

Authors: Vijaya Prakash A. M., K. S. Gurumurthy

Abstract:

In image processing, image compression can improve the performance of digital systems by reducing the cost and time of image storage and transmission without significant reduction in image quality. This paper describes a hardware architecture for a low-complexity Discrete Cosine Transform (DCT) for image compression [6]. In this DCT architecture, common computations are identified and shared to remove redundant computations in the DCT matrix operation. Vector processing is used for the implementation of the DCT. The reduction in the computational complexity of the 2D DCT reduces power consumption. The 2D DCT is performed on an 8x8 matrix using two 1-dimensional DCT blocks and a transposition memory [7]. An inverse discrete cosine transform (IDCT) is performed to recover the image matrix and reconstruct the original image. The proposed image compression algorithm is verified using MATLAB code, and the VLSI design of the architecture is implemented in Verilog HDL. The proposed hardware architecture for image compression employing the DCT was synthesized using RTL Compiler and mapped to 180 nm standard cells. Simulation is done using ModelSim, and the simulation results from MATLAB and Verilog HDL are compared. Detailed power and area analysis was done using RTL Compiler from Cadence. The power consumption of the DCT core is reduced to 1.027 mW with minimum area [1].
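
The row-column decomposition mentioned above (two 1-D DCTs with a transposition in between) can be sketched in a few lines of Python/NumPy; this reference model only mirrors the arithmetic of the 8x8 transform pair and says nothing about the shared-computation hardware optimizations of the paper.

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix C, so the 1-D transform of a column x is C @ x.
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def dct2(block):
    # Row-column decomposition: two 1-D DCTs with a transposition in between.
    c = dct_matrix(block.shape[0])
    return c @ block @ c.T

def idct2(coeffs):
    c = dct_matrix(coeffs.shape[0])
    return c.T @ coeffs @ c

block = np.random.randint(0, 256, (8, 8)).astype(float)
recon = idct2(dct2(block))
print(np.allclose(block, recon))   # True: the transform pair reconstructs the block
```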

Keywords: Discrete Cosine Transform (DCT), Inverse Discrete Cosine Transform (IDCT), Joint Photographic Experts Group (JPEG), Low Power Design, Very Large Scale Integration (VLSI).

697 Cooperative Energy Efficient Routing for Wireless Sensor Networks in Smart Grid Communications

Authors: Ghazi AL-Sukkar, Iyad Jafar, Khalid Darabkh, Raed Al-Zubi, Mohammed Hawa

Abstract:

Smart grids employ wireless sensor networks for control and monitoring. Sensors are characterized by limited processing power, energy supply and memory, which requires particular attention in the design of routing and data management algorithms. Since most routing algorithms for sensor networks focus on finding energy-efficient paths to prolong the lifetime of the network, the power of sensors on those paths depletes quickly, and the network consequently becomes unable to monitor events in some parts of its target area. The design of routing protocols should therefore consider not only energy-efficient paths but also energy-efficient algorithms in general. In this paper we propose an energy-efficient routing protocol for wireless sensor networks that does not rely on any location information system. The reliability and efficiency of this protocol have been demonstrated in simulation studies in which we compare it to legacy protocols. Our simulation results show that the algorithm scales well with network size and density.

Keywords: Data-centric storage, Dynamic Address Allocation, Sensor networks, Smart Grid Communications.

696 Degree of Milling Effects on the Sorghum (Sorghum bicolor) Flours, Physicochemical Properties and Kinetics of Starch Digestion

Authors: Brou K., Guéhi T., Konan A. G., Gbakayoro J. B., Gnakri D.

Abstract:

Two types of crushing were applied to red sorghum grains: manual crushing using a kitchen mortar and pestle, and mechanical crushing using a hammer mill. The flours obtained from these crushings were sieved and subdivided into fractions according to the mesh diameters of the sieves (0.16 mm, 0.25 mm, 0.315 mm, 0.4 mm and 0.63 mm). Some physical, chemical and nutritional traits of these flours were evaluated using Association of Official Analytical Chemists (AOAC) methods. In vitro digestibility of the flours was also studied, using a 1% flour suspension as substrate and α-amylase from B. licheniformis (E.C.3.2.1.1; Megazyme, Wicklow, Ireland). The results revealed that the flour batches with the finest particle sizes (0.16 mm and 0.25 mm) are the richest in nutrients and also the most digestible, and that mechanical crushing is the best means of obtaining a significant amount of flour. In conclusion, the type of crushing and the particle size affect the final concentration of some nutrients in the resulting flours. The finest particles (0.16 mm, 0.25 mm, 0.315 mm) obtained after sieving are more nutritive and more digestible than the coarser fractions, so they could be recommended in the processing of cereals, notably sorghum, for the production of infant foods.

Keywords: Nutrients, digestibility, crush, flour, milling, granulometry.

695 Comparison of Stochastic Point Process Models of Rainfall in Singapore

Authors: Y. Lu, X. S. Qin

Abstract:

Extensive rainfall disaggregation approaches have been developed and applied in climate change impact studies such as flood risk assessment and urban storm water management. In this study, five rainfall models capable of disaggregating daily rainfall data into hourly data were investigated for the rainfall record of Changi Airport, Singapore. The objectives of this study were (i) to study the temporal characteristics of hourly rainfall in Singapore, and (ii) to evaluate the performance of various disaggregation models. The models used were: (i) the Rectangular Pulse Poisson Model (RPPM), (ii) the Bartlett-Lewis Rectangular Pulse Model (BLRPM), (iii) the Bartlett-Lewis model with 2 cell types (BL2C), (iv) the Bartlett-Lewis Rectangular model with cell depth distribution dependent on duration (BLRD), and (v) the Neyman-Scott Rectangular Pulse Model (NSRPM). All of these models were fitted using hourly rainfall data from 1980 to 2005 obtained from the Changi meteorological station. The results indicated that a weighting scheme inversely proportional to the variance delivers more accurate outputs when fitting rainfall patterns in tropical areas, and that BLRPM performed relatively better than the other disaggregation models.
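
The inverse-variance weighting scheme mentioned above is typically applied inside a method-of-moments fitting objective; the short Python sketch below illustrates that idea in generic form. The function returning the model statistics is a placeholder assumption; the actual analytical moments of the Bartlett-Lewis or Neyman-Scott models are not reproduced here.

```python
import numpy as np

def weighted_objective(params, observed_stats, stat_variances, model_stats_fn):
    """
    Generic method-of-moments objective for fitting point-process rainfall models:
    squared differences between model and observed statistics (e.g. mean, variance,
    lag-1 autocorrelation, dry probability), each term weighted by the inverse of
    the observed statistic's variance (the weighting scheme found most accurate).
    model_stats_fn is a placeholder for the model's analytical moments.
    """
    model_stats = np.asarray(model_stats_fn(params))
    weights = 1.0 / np.asarray(stat_variances)
    return float(np.sum(weights * (model_stats - np.asarray(observed_stats)) ** 2))
```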

Keywords: Rainfall disaggregation, statistical properties, Poisson process, Bartlett-Lewis model, Neyman-Scott model.

694 Tracing Quality Cost in a Luggage Manufacturing Industry

Authors: S. B. Jaju, R. R. Lakhe

Abstract:

Quality costs are the costs associated with preventing, finding and correcting defective work. Since the main language of corporate management is money, quality-related costs act as a means of communication between the staff of quality engineering departments and company managers. The objective of quality engineering is to minimize the total quality cost across the life of a product. Quality costs provide a benchmark against which improvement can be measured over time and a rupee-based report on quality improvement efforts, and they are an effective tool for identifying, prioritizing and selecting quality improvement projects. A review of the literature showed that a simplified methodology for collecting quality cost data in a manufacturing industry was required. A quantified standard methodology is therefore proposed for collecting data on the various elements of the quality cost categories in a manufacturing industry. In the light of the research carried out so far, it is also felt necessary to standardise the cost elements in each of the prevention, appraisal, internal failure and external failure categories. An attempt is made here to standardise the various cost elements applicable to a manufacturing industry, and data are collected using the proposed quantified methodology. The paper discusses a case study carried out in a luggage manufacturing industry.
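
For illustration of the PAF roll-up that such a methodology feeds, the following Python snippet aggregates hypothetical prevention, appraisal, internal-failure and external-failure elements into a rupee-based cost-of-quality figure expressed as a share of sales; all names and numbers are invented, not the case-study data.

```python
# Illustrative PAF (prevention-appraisal-failure) roll-up with hypothetical figures.
quality_costs = {
    "prevention":       {"training": 120000, "process_planning": 80000},
    "appraisal":        {"incoming_inspection": 150000, "final_audit": 90000},
    "internal_failure": {"rework": 210000, "scrap": 60000},
    "external_failure": {"warranty_claims": 180000, "returns": 40000},
}
net_sales = 25_000_000  # rupees, hypothetical

category_totals = {cat: sum(elements.values()) for cat, elements in quality_costs.items()}
total_coq = sum(category_totals.values())
print(category_totals)
print(f"Cost of quality = {total_coq} ({100 * total_coq / net_sales:.2f}% of sales)")
```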

Keywords: Quality Costs, PAF model, quantified methodology, Case study.

693 A Reliable Secure Multicast Key Distribution Scheme for Mobile Adhoc Networks

Authors: D. SuganyaDevi, G. Padmavathi

Abstract:

Reliable secure multicast communication in mobile ad hoc networks is challenging due to their inherent characteristics: an infrastructure-less architecture with no central authority, high packet loss rates, and limited resources such as bandwidth, time and power. Many emerging commercial and military applications require secure multicast communication in ad hoc environments, so key management is the fundamental challenge in achieving reliable secure communication using multicast key distribution in mobile ad hoc networks. In designing a reliable multicast key distribution scheme, reliability and congestion control over throughput are therefore essential components. This paper proposes and evaluates the performance of an enhanced optimized multicast cluster tree algorithm with the destination sequenced distance vector routing protocol to provide reliable multicast key distribution. Simulation results in NS2 predict the performance of the proposed scheme in terms of key delivery ratio and packet loss rate under varying network conditions. The proposed scheme achieves reliability while exhibiting a low packet loss rate and a high key delivery ratio compared with the existing scheme.

Keywords: Key distribution, mobile ad hoc network, multicast, reliability.

692 Assessment of the Vulnerability and Risk of Climate Change on Water Supply and Demand in Taijiang Area

Authors: Yu-Chen Lin, Tzong-Yeang Lee, Hung-Chih Shih

Abstract:

Developing the sustainable utilization of water resources is crucial. The ecological environment and water resources systems form the foundation of the existence and development of the social economy, and the urban ecological support system depends on these resources as well. This research studies the vulnerability, criticality and risk of climate change with respect to water supply and demand in the main administrative districts of the Taijiang Area (Tainan City). Based on the two scenarios set out in this paper and various factors (indexes), two kinds of weights (equal and AHP) are used to perform the calculations and establish the water supply and demand risk map for the target year 2039. According to the risk analysis based on equal weights, only one district falls into a high-risk grade (Grade 4). Based on the AHP weights, 16 districts fall into a high grade or higher (Grades 4 and 5), and among them two districts belong to the highest grade (Grade 5). These results show that the risk level of water supply and demand in cities is higher than that in towns. The government generally gives more attention to adjustment strategies in the cities; however, it should also provide proper adjustment strategies for the towns so that they can cope with the risks of water supply and demand.
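
For readers unfamiliar with how AHP weights (as opposed to equal weights) are obtained, the sketch below derives them from a pairwise-comparison matrix via the principal eigenvector and checks the consistency ratio. The 3x3 matrix is purely illustrative and is not the judgement matrix used in the study.

```python
import numpy as np

# Illustrative pairwise-comparison matrix for three hypothetical factors.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)          # Perron (largest) eigenvalue
weights = np.abs(eigvecs[:, principal].real)
weights /= weights.sum()                     # normalised AHP weights

# Consistency check (random index RI = 0.58 for a 3x3 matrix).
ci = (eigvals.real[principal] - len(A)) / (len(A) - 1)
print("weights:", np.round(weights, 3), "CR:", round(ci / 0.58, 3))
```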

Keywords: Climate change, risk, vulnerability, water supply and demand.

691 Simulating Economic Order Quantity and Reorder Point Policy for a Repairable Items Inventory System

Authors: Mojahid F. Saeed Osman

Abstract:

A repairable items inventory system is a management tool that incorporates all information concerning inventory levels and movements of repaired and new items. This paper presents the development of an effective simulation model for managing the inventory of repairable items in a production system where production lines send their faulty items to a repair shop, considering stochastic failure behavior and repair times. The developed model imitates the handling of the on-hand inventory of repaired items and the replenishment of the inventory of new items under an Economic Order Quantity and Reorder Point ordering policy in a flexible and risk-free environment. We demonstrate the appropriateness and effectiveness of the proposed simulation model using an illustrative case problem. The developed simulation model can be used as a reliable tool for estimating a healthy on-hand inventory of new and repaired items, backordered items, and downtime due to unavailability of repaired items, and for validating and examining the Economic Order Quantity and Reorder Point ordering policy, which will be compared with other ordering strategies in future work.
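
A minimal sketch of the (EOQ, reorder point) logic that the simulation model exercises is given below; it uses the textbook formula Q* = sqrt(2DS/H) and a toy daily loop with crude uniform demand. The repair-shop flow, stochastic failure behaviour and all parameter values are simplifications or assumptions, not the model of the paper.

```python
import math
import random

def eoq(annual_demand, order_cost, holding_cost):
    """Classical economic order quantity: Q* = sqrt(2*D*S / H)."""
    return math.sqrt(2 * annual_demand * order_cost / holding_cost)

def reorder_point(daily_demand, lead_time_days, safety_stock=0):
    return daily_demand * lead_time_days + safety_stock

def simulate(days=365, daily_demand_mean=10, lead_time=5, Q=200, ROP=60):
    """Tiny (Q, ROP) simulation; demand is crude uniform noise and repair flows are omitted."""
    on_hand, pipeline, backorders = Q, [], 0
    for day in range(days):
        pipeline = [(d - 1, q) for d, q in pipeline]                 # age outstanding orders
        on_hand += sum(q for d, q in pipeline if d <= 0)             # receive arrivals
        pipeline = [(d, q) for d, q in pipeline if d > 0]
        demand = random.Random(day).randint(0, 2 * daily_demand_mean)
        backorders += max(0, demand - on_hand)
        on_hand = max(0, on_hand - demand)
        if on_hand + sum(q for _, q in pipeline) <= ROP:             # inventory position check
            pipeline.append((lead_time, Q))
    return on_hand, backorders

Q = round(eoq(annual_demand=3650, order_cost=50, holding_cost=2))
print(Q, reorder_point(10, 5, 10), simulate(Q=Q))
```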

Keywords: Inventory system, repairable items, simulation, maintenance, economic order quantity, reorder point.

690 Treatment of the Modern Management Mechanism of the Debris Flow Processes Expected in the Mletiskhevi

Authors: G. Chakhaia, S. Gogilava, L. Tsulukidze, Z. Laoshvili, I. Khubulava, S. Bosikashvili, T. Gugushvili

Abstract:

This work reviews and evaluates the debris flow phenomena of various geneses recently formed in the Mletiskhevi and, accordingly, reveals the necessity of modern debris-flow control measures. On this basis, a debris-flow control structure of truncated semi-cone shape is proposed, whose elements are made from second-hand car tires. Owing to the shock-absorbing capacity and geometric shape of its constituent elements (sections), the structure is effective and resistant to the impact force of debris flows. The structure is economical because, after a debris flow passes through the river bed, the riverbed does not need to be cleaned, and the building elements are resource-saving. To assess the impact of a cohesive debris flow on the structure and to evaluate its effectiveness, calculations were carried out under specific assumptions using an approved methodology. According to the calculations, after a debris flow passes through the structure (in the three-row case) its impact force is reduced threefold, which reduces the speed and kinetic energy of the debris flow and causes sedimentation on a certain section of the watercourse downstream of the structure. Based on the analysis and the calculations for the debris-flow control structure, it can be said that the structure is an effective, inexpensive and technically relatively accessible measure, which makes its implementation promising.

Keywords: Construction, debris flow, sections, theoretical calculation.

689 Approximation of PE-MOCVD to ALD for TiN Concerning Resistivity and Chemical Composition

Authors: D. Geringswald, B. Hintze

Abstract:

The miniaturization of circuits is advancing. During chip manufacturing, structures are filled, for example, by metal organic chemical vapor deposition (MOCVD). Since this process reaches its limits at very high aspect ratios, alternatives such as atomic layer deposition (ALD) can be used, which requires extending existing coating systems. However, it is an open question to what extent MOCVD can achieve results similar to those of an ALD process. In this context, this work addresses the characterization of metal organic vapor deposition of titanium nitride. Based on the current state of the art, the film properties considered are coating thickness, sheet resistance, resistivity, stress and chemical composition. The process parameters examined are temperature, plasma gas ratio, plasma power, plasma treatment time, deposition time, deposition pressure, number of cycles and TDMAT flow. The derived process instructions for unstructured wafers and for a structure with a high aspect ratio include lowering the process temperature and increasing the number of cycles, the deposition and plasma treatment times, and the plasma gas ratio of hydrogen to nitrogen (H2:N2). In contrast to the current process configuration, the deposited titanium nitride (TiN) layer is more uniform throughout the entire test structure. Consequently, this paper provides approaches for employing MOCVD for structures with increasing aspect ratios.

Keywords: ALD, high aspect ratio, PE-MOCVD, TiN.

688 Automated Vehicle Traffic Control Tower: A Solution to Support the Next Level Automation

Authors: Xiaoyun Zhao, Rami Darwish, Anna Pernestål

Abstract:

Automated vehicles (AVs) have the potential to enhance road capacity, improve road safety and increase traffic efficiency. Research and development on AVs have been going on for many years. However, when complicated traffic rules interact with real situations, AVs fail to make decisions in contradictory situations and are not able to maintain control in all conditions due to highly dynamic driving scenarios. This limits the usage of AVs and restricts the full potential benefits that they can bring. Furthermore, regulations, infrastructure development and public acceptance cannot keep pace with technology breakthroughs. Facing these challenges, this paper proposes an automated vehicle traffic control tower (AVTCT) as a safe, efficient and integrated solution for AV control. It introduces the AVTCT concept for control, management, decision-making, communication and interaction with various aspects of transportation. Through prototype demonstrations and simulations, AVTCT shows the potential to overcome the control challenges of AVs and to help AVs reach their full potential. Possible functionalities, benefits and challenges of AVTCT are discussed, setting the foundation for the conceptual model, simulation and real application of AVTCT.

Keywords: Automated vehicle, connectivity and automation, intelligent transport system, traffic control, traffic safety.

687 Visualization and Indexing of Spectral Databases

Authors: Tibor Kulcsar, Gabor Sarossy, Gabor Bereznai, Robert Auer, Janos Abonyi

Abstract:

On-line (near infrared) spectroscopy is widely used to support the operation of complex process systems. Information extracted from a spectral database can be used to estimate unmeasured product properties and monitor the operation of the process. These techniques are based on looking for similar spectra with nearest neighbor algorithms and distance-based search methods. Searching for nearest neighbors in the spectral space is computationally demanding; the cost increases with the number of points in the discrete spectrum and the number of samples in the database. To reduce the calculation time, some kind of indexing can be used. The main idea presented in this paper is to combine indexing and visualization techniques to reduce the computational requirements of estimation algorithms by providing a two-dimensional index that can also be used to visualize the structure of the spectral database. This 2D visualization of the spectral database not only supports the application of distance- and similarity-based techniques but also enables advanced clustering and prediction algorithms based on the Delaunay tessellation of the mapped spectral space. This means that prediction does not have to operate in the high-dimensional space but can be based on the mapped space instead. The results illustrate that the proposed method is able to segment (cluster) spectral databases and detect outliers that are not suitable for instance-based learning algorithms.
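
As a simplified stand-in for the proposed two-dimensional indexing, the sketch below maps synthetic spectra to 2-D with PCA and runs the neighbour search in the mapped space; the paper's own mapping, indexing and Delaunay-based prediction are not reproduced, so treat this purely as an illustration of searching in a low-dimensional surrogate space.

```python
import numpy as np

def map_to_2d(spectra):
    """Project high-dimensional spectra onto their first two principal components
    (one possible 2-D mapping; the paper's own scheme may differ)."""
    X = spectra - spectra.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:2].T

def knn_in_mapped_space(mapped, query_2d, k=5):
    """Nearest neighbours searched in the cheap 2-D index instead of the full spectrum."""
    d = np.linalg.norm(mapped - query_2d, axis=1)
    return np.argsort(d)[:k]

spectra = np.random.rand(1000, 500)          # 1000 synthetic spectra, 500 wavelengths
mapped = map_to_2d(spectra)
print(knn_in_mapped_space(mapped, mapped[0], k=3))   # the query itself is among the hits
```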

Keywords: Indexing of high-dimensional databases, dimensionality reduction, clustering, similarity, k-NN algorithm.

686 Complex-Valued Neural Network in Image Recognition: A Study on the Effectiveness of Radial Basis Function

Authors: Anupama Pande, Vishik Goel

Abstract:

A complex-valued neural network is a neural network with complex-valued inputs and/or weights and/or thresholds and/or activation functions. Complex-valued neural networks have been widening their scope of application not only in electronics and informatics but also in social systems. One of the most important applications of complex-valued neural networks is image and vision processing. In neural networks, radial basis functions are often used for interpolation in multidimensional space. A radial basis function is a function with a built-in distance criterion with respect to a centre. Radial basis functions have often been applied in neural networks as a replacement for the sigmoid transfer characteristic of the hidden layer in multi-layer perceptrons. This paper presents extensive results on using RBF units in a complex-valued neural network model that learns with the back-propagation algorithm (called 'Complex-BP'). Our experimental results demonstrate the effectiveness of radial basis functions in a complex-valued neural network for image recognition compared with a real-valued neural network. We have studied and reported observations on the effect of learning rates, the ranges from which the initial weights are randomly selected, the error functions used, and the number of iterations required for the error to converge in a neural network model with RBF units. Some inherent properties of this complex back-propagation algorithm are also studied and discussed.
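
A forward pass through a complex-valued network with RBF hidden units can be sketched as below: the RBF response is computed from the complex Euclidean distance to complex-valued centres and feeds a complex output layer. The Complex-BP training rule itself is not reproduced, and all sizes and random values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 4, 6, 2

# Complex-valued centres and output weights (randomly initialised for illustration).
centres = rng.standard_normal((n_hidden, n_in)) + 1j * rng.standard_normal((n_hidden, n_in))
sigma = 1.0
W_out = rng.standard_normal((n_out, n_hidden)) + 1j * rng.standard_normal((n_out, n_hidden))

def forward(z):
    """z: complex input vector; RBF activation uses the complex Euclidean distance to each centre."""
    dist2 = np.sum(np.abs(z - centres) ** 2, axis=1)   # real-valued squared distances
    hidden = np.exp(-dist2 / (2 * sigma ** 2))         # real RBF responses
    return W_out @ hidden.astype(complex)              # complex output layer

z = rng.standard_normal(n_in) + 1j * rng.standard_normal(n_in)
print(forward(z))
```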

Keywords: Complex-valued neural network, radial basis function, image recognition.

685 The Relationship between Competency-Based Learning and Learning Efficiency of Media Communication Students at Suan Sunandha Rajabhat University

Authors: Somtop Keawchuer

Abstract:

This research aims to study (1) the relationship between competency-based learning and the learning efficiency of new media communication students at Suan Sunandha Rajabhat University and (2) the effect of demographic factors on the learning efficiency of students at the university. The research uses a quantitative method; data were collected by questionnaires distributed to a sample of 1,340 new media communication students in the Faculty of Management Science of Suan Sunandha Rajabhat University, selected by purposive sampling. Data were analyzed with descriptive statistics, including percentage, mean and standard deviation, and inferential statistics, including t-test, ANOVA and Pearson correlation for hypothesis testing. The results showed that competency-based learning, in terms of the ability to communicate, the ability to think and solve problems, life skills and the ability to use technology, has a significant relationship with learning efficiency in the cognitive, psychomotor and affective domains at the 0.05 level, which is consistent with the research hypotheses.

Keywords: Competency-based learning, learning efficiency, new media communication students, Suan Sunandha Rajabhat University.

684 Biosensor Design through Molecular Dynamics Simulation

Authors: Wenjun Zhang, Yunqing Du, Steven W. Cranford, Ming L. Wang

Abstract:

The beginning of the 21st century has witnessed new advancements in the design and use of new materials for biosensing applications, from nano to macro and from protein to tissue. Traditional analytical methods lack a complete toolset to describe the complexities introduced by living systems, pathological relations, discrete hierarchical materials, cross-phase interactions, and structure-property dependencies. Materiomics, via systematic molecular dynamics (MD) simulation, can provide structure-process-property relations by using a materials science approach that links mechanisms across scales and enables oriented biosensor design. With this approach, DNA biosensors can be utilized to detect disease biomarkers present in an individual's breath, such as acetone for diabetes. Our wireless sensor array based on single-stranded DNA (ssDNA)-decorated single-walled carbon nanotubes (SWNT) has successfully detected trace amounts of various chemicals in vapor, differentiated by pattern recognition. Here, we present how MD simulation can revolutionize the design and screening of DNA aptamers for targeting biomarkers related to oral diseases and oral health monitoring. The approach shows great potential for building a library of DNA sequences for the reliable detection of several biomarkers of a specific disease, and it also provides a new methodology for creating, designing and applying biosensors.

Keywords: Biosensor, design, DNA, molecular dynamics simulation.

683 The Bright Side of Organizational Politics as a Driver of Firm Competitiveness: The Mediating Role of Corporate Entrepreneurship

Authors: Monika Kulikowska-Pawlak, Katarzyna Bratnicka-Myśliwiec, Tomasz Ingram

Abstract:

This study seeks to contribute to the literature on firm competitiveness by advancing the perspective of organizational politics, which views this process as a driver that creates identifiable differences in firm performance. The hypothesized relationships were tested using data from 355 Polish medium- and large-sized enterprises. Data were analyzed using correlation analysis, EFA and robustness tests. The main result of the analyses confirmed the coexistence, previously examined in the literature, of corporate entrepreneurship and firm performance. The research findings make it possible to add organizational politics to the wide range of elements determining corporate entrepreneurship, and in turn competitive advantage, alongside antecedents such as strategic leadership, corporate culture, opportunity-oriented resource-based management, etc. The empirical results also suggest that four dimensions of organizational politics (dominant coalition, influence exertion, making organizational changes, and information openness) are positively related to firm competitiveness. In addition, the findings underline the supposition that corporate entrepreneurship is an important mediator that strengthens the competitive effects of organizational politics.

Keywords: Corporate entrepreneurship, firm competitiveness, organizational politics, sensemaking.

682 Quantifying the Second-Level Digital Divide on Sub-National Level

Authors: Vladimir Korovkin, Albert Park, Evgeny Kaganer

Abstract:

The digital divide, the gap in access to the world of digital technologies and the socio-economic opportunities that they create, is an important phenomenon of the 21st century. This gap may exist between countries, between regions within a country, or between socio-demographic groups, creating classes of "digital haves and have-nots". While the first-level divide (the difference in opportunities to access digital networks) has been shown to diminish with time, the second-level divide (the difference in skills and usage of digital systems) and the third-level divide (the difference in the effects obtained from digital technology) may grow. The paper offers a systematic review of the literature on measuring the digital divide, noting a certain conceptual stagnation due to the lack of effective instruments that capture the complex nature of the phenomenon. As a result, many important concepts do not receive the empirical exploration they deserve. As a solution, the paper suggests a composite Digital Life Index that studies digital supply and demand separately across seven independent dimensions, providing 14 subindices. The index is based on Internet-borne data, a distinction from traditional research approaches that rely on official statistics or surveys. The application of the model to the study of the digital divide between Russian regions and between cities in China has brought promising results. The paper advances the existing methodological literature on the second-level digital divide and can also inform practical decision-making regarding strategies for national and regional digital development.
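
The aggregation step of a composite index of this kind can be illustrated with a short Python snippet that rolls hypothetical supply and demand sub-indices up into an overall value; the dimension names, weights and numbers are placeholders, not the structure or data of the Digital Life Index.

```python
# Illustrative roll-up of a composite index from supply/demand sub-indices.
dimensions = {
    "information": {"supply": 0.72, "demand": 0.65},
    "education":   {"supply": 0.58, "demand": 0.61},
    "health":      {"supply": 0.49, "demand": 0.54},
}

def composite(dimensions, weights=None):
    names = list(dimensions)
    weights = weights or {n: 1 / len(names) for n in names}   # equal weights by default
    supply = sum(weights[n] * dimensions[n]["supply"] for n in names)
    demand = sum(weights[n] * dimensions[n]["demand"] for n in names)
    return supply, demand, (supply + demand) / 2

print(composite(dimensions))   # (aggregate supply, aggregate demand, overall index)
```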

Keywords: Digital transformation, second-level digital divide, composite index, digital policy.

681 Probabilistic Approach of Dealing with Uncertainties in Distributed Constraint Optimization Problems and Situation Awareness for Multi-agent Systems

Authors: Sagir M. Yusuf, Chris Baber

Abstract:

In this paper, we describe how Bayesian inferential reasoning contributes to obtaining well-satisfied predictions for Distributed Constraint Optimization Problems (DCOPs) with uncertainties. We also demonstrate how DCOPs can be merged with multi-agent knowledge understanding and prediction (i.e. situation awareness). The DCOP functions were merged with a Bayesian Belief Network (BBN) in the form of situation, awareness and utility nodes. We describe how the uncertainties can be represented in the BBN and how effective predictions can be made using the expectation-maximization algorithm or the conjugate gradient descent algorithm. Variable prediction using Bayesian inference may reduce the number of variables in the agents' sampling domain and also allows missing variables to be estimated. Experimental results show that the BBN performs more compelling predictions with samples containing uncertainties than with perfect samples; that is, Bayesian inference can help in handling the uncertainty and dynamism of DCOPs, which is a current issue in the DCOP community. We show how Bayesian inference can be formalized with Distributed Situation Awareness (DSA) using uncertain and missing agent data. The whole framework was tested on a multi-UAV mission for forest fire search. Future work focuses on augmenting the existing architecture to deal with dynamic DCOP algorithms and multi-agent information merging.

Keywords: DCOP, multi-agent reasoning, Bayesian reasoning, swarm intelligence.

680 Comparing the Behaviour of the FRP and Steel Reinforced Shear Walls under Cyclic Seismic Loading in Aspect of the Energy Dissipation

Authors: H. Rahman, T. Donchev, D. Petkova

Abstract:

Earthquakes claim thousands of lives around the world annually due to the inadequate design of lateral load resisting systems, particularly shear walls. Additionally, corrosion of steel reinforcement in concrete structures is one of the main challenges in the construction industry. Fibre Reinforced Polymer (FRP) reinforcement can be used as an alternative to traditional steel reinforcement. FRP has several mechanical properties superior to steel, such as high resistance to corrosion, high tensile strength and low self-weight; additionally, it is electromagnetically neutral, which is advantageous in structures where this is important, such as hospitals, some laboratories and telecommunication facilities. This paper reports the results of experimental research comprising the testing of two medium-scale concrete shear wall samples: one reinforced with basalt FRP (BFRP) bars and one reinforced with steel bars as a control sample. The samples are tested under quasi-static cyclic loading following a modified ATC-24 standard seismic loading protocol. The results for both samples are compared to allow a judgement about the performance of BFRP-reinforced against steel-reinforced concrete shear walls. The results show promising momentum toward the utilisation of BFRP as an alternative to traditional steel reinforcement, with the aim of improving durability while providing suitable energy dissipation in reinforced concrete shear walls.
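
Energy dissipation under cyclic loading is usually quantified as the area enclosed by the force-displacement hysteresis loop; the sketch below computes that area with the shoelace formula on a synthetic elliptical loop (illustrative data only, not the test results of this study).

```python
import numpy as np

def dissipated_energy(displacement, force):
    """Energy dissipated in one cycle = area enclosed by the force-displacement
    hysteresis loop, evaluated with the shoelace formula on the closed loop."""
    x = np.asarray(displacement)
    f = np.asarray(force)
    return 0.5 * abs(np.sum(x * np.roll(f, -1) - f * np.roll(x, -1)))

# Synthetic elliptical loop: its area should equal pi * a * b.
t = np.linspace(0, 2 * np.pi, 500, endpoint=False)
disp, load = 20 * np.cos(t), 50 * np.sin(t)            # mm, kN
print(dissipated_energy(disp, load), np.pi * 20 * 50)  # both approximately 3141.6 kN*mm
```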

Keywords: Shear walls, internal FRP reinforcement, cyclic loading, energy dissipation and seismic behaviour.

679 Energy Conscious Builder Design Pattern with C# and Intermediate Language

Authors: Kayun Chantarasathaporn, Chonawat Srisa-an

Abstract:

Design patterns have gained more and more acceptance since their emergence in the software development world a decade ago and have become another de facto standard of essential knowledge for object-oriented programming developers. Their intended use was, from the beginning, for regular computers, so minimizing power consumption was never a concern. In this decade, however, demand for more complicated software running on mobile devices has grown rapidly as much higher-performance portable gadgets have been continuously supplied to the market. To keep up with time to market, which is a business imperative, software development for power-conscious, battery-powered devices has shifted from specific low-level languages to higher-level ones. Currently, complicated software running on mobile devices is often developed in high-level languages that support OOP concepts. This has led to the trend of embracing design patterns in the mobile world. However, using design patterns directly in software development for power-conscious systems is not recommended because they were not originally designed for such environments. This paper demonstrates an adapted design pattern for power-limited systems. Because there are numerous original design patterns, it is not possible to cover them all at once, so this paper focuses on creating an energy-conscious version of the existing "Builder" pattern appropriate for developing low-power-consumption software.

Keywords: Design Patterns, Builder Pattern, Low Power Consumption, Object Oriented Programming, Power Conscious System, Software.

678 A Case Study on the Value of Corporate Social Responsibility Systems

Authors: José M. Brotons, Manuel E. Sansalvador

Abstract:

The relationship between Corporate Social Responsibility (CSR) and financial performance (FP) is a subject of great interest that has not yet been resolved. In this work, we have developed a new and original tool to measure this relation. The tool quantifies the value contributed to companies that are committed to CSR. The theoretical model used is the fuzzy discounted cash flow method. Two scenarios are considered: in the first, the company has implemented the IQNet SR10 certification; in the second, it has not. For the first scenario, the growth rate used over the time horizon is the rate maintained by the company after obtaining the IQNet SR10 certificate. For the second, both the company's growth rate prior to the implementation of the certification and the evolution of the sector are taken into account. By using triangular fuzzy numbers, it is possible to deal adequately with each company's forecasts as well as with the information corresponding to the sector. Once the annual growth rate of sales is obtained, the profit and loss accounts are generated from the estimated annual sales; for the remaining elements of the account, their regression with net sales is considered. The difference between these two valuations, made in a fuzzy environment, gives the value of the IQNet SR10 certification. Although this study presents an innovative methodology to quantify the relation between CSR and FP, the authors are aware that only one company has been analyzed. This is the main limitation of the study, which in turn opens up an interesting line for future research: broadening the sample of companies.
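
The core arithmetic of a discounted cash flow with triangular fuzzy numbers can be sketched as below: each cash flow is a (low, mid, high) triple, discounting by a crisp rate preserves the triangular form, and the certification value is the fuzzy difference of the two valuations. All cash flows and the discount rate are invented for illustration, not the case-study figures.

```python
# Triangular fuzzy numbers (TFNs) are (low, mid, high) tuples.
def tfn_add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def tfn_discount(cf, rate, year):
    # Dividing by a crisp discount factor keeps the triangular shape.
    factor = (1 + rate) ** year
    return tuple(x / factor for x in cf)

def fuzzy_npv(cash_flows, rate):
    npv = (0.0, 0.0, 0.0)
    for year, cf in enumerate(cash_flows, start=1):
        npv = tfn_add(npv, tfn_discount(cf, rate, year))
    return npv

with_cert = [(90, 100, 115), (95, 110, 130), (100, 120, 145)]   # forecasts under post-SR10 growth
without   = [(85, 95, 105), (88, 100, 112), (90, 104, 118)]     # forecasts under sector growth
# Fuzzy subtraction: (a1, a2, a3) - (b1, b2, b3) = (a1 - b3, a2 - b2, a3 - b1).
value_of_csr = tuple(a - b for a, b in zip(fuzzy_npv(with_cert, 0.08),
                                           reversed(fuzzy_npv(without, 0.08))))
print(value_of_csr)   # TFN estimate of the certification's contribution
```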

Keywords: Corporate social responsibility, case study, financial performance, company valuation.

677 A Fuzzy TOPSIS Based Model for Safety Risk Assessment of Operational Flight Data

Authors: N. Borjalilu, P. Rabiei, A. Enjoo

Abstract:

A Flight Data Monitoring (FDM) program assists an operator in the aviation industry to identify, quantify, assess and address operational safety risks in order to improve the safety of flight operations. FDM is a powerful tool for an aircraft operator when integrated into the operator's Safety Management System (SMS), allowing safety issues associated with human errors to be detected, confirmed and assessed, and the effectiveness of corrective actions to be checked. This article proposes a model for assessing the safety risk level of flight data for different event categories based on fuzzy set values. It permits the operational safety level to be evaluated from the point of view of flight activities. The main advantage of this method is the proposed qualitative safety analysis of flight data. This research gathers the opinions of aviation experts through a number of questionnaires related to flight data in four categories of occurrence that can take place during an accident or an incident: Runway Excursion (RE), Controlled Flight Into Terrain (CFIT), Mid-Air Collision (MAC), and Loss of Control in Flight (LOC-I). By weighting each category (by F-TOPSIS) and applying the weight to the number of risks of the event, the safety risk of each related event can be obtained.
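
A minimal fuzzy-TOPSIS sketch over triangular fuzzy numbers, in the spirit of the weighting step described above, is shown below; the ratings, criteria and weights are invented, and the treatment (benefit-type criteria, vertex distance to the fuzzy positive and negative ideals) follows the common Chen-style formulation rather than necessarily the exact variant used in this article.

```python
import math

def d(a, b):
    """Vertex distance between two triangular fuzzy numbers."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / 3)

def fuzzy_topsis(ratings, weights):
    """ratings[alt][crit] and weights[crit] are TFNs; all criteria treated as benefit-type."""
    n_crit = len(weights)
    c_max = [max(ratings[a][j][2] for a in ratings) for j in range(n_crit)]
    cc = {}
    for alt, row in ratings.items():
        # Normalise by the column maximum, then weight element-wise.
        v = [tuple(r_k / c_max[j] * w_k for r_k, w_k in zip(row[j], weights[j]))
             for j in range(n_crit)]
        d_pos = sum(d(vj, (1, 1, 1)) for vj in v)   # distance to fuzzy positive ideal
        d_neg = sum(d(vj, (0, 0, 0)) for vj in v)   # distance to fuzzy negative ideal
        cc[alt] = d_neg / (d_pos + d_neg)           # closeness coefficient
    return dict(sorted(cc.items(), key=lambda kv: -kv[1]))

ratings = {"RE":    [(5, 7, 9), (3, 5, 7)],
           "CFIT":  [(7, 9, 9), (5, 7, 9)],
           "MAC":   [(3, 5, 7), (7, 9, 9)],
           "LOC-I": [(7, 9, 9), (7, 9, 9)]}          # two illustrative criteria, e.g. severity, likelihood
weights = [(0.5, 0.7, 0.9), (0.3, 0.5, 0.7)]
print(fuzzy_topsis(ratings, weights))                # categories ranked by closeness coefficient
```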

Keywords: F-TOPSIS, fuzzy set, FDM, flight safety.

676 A Distributed Cognition Framework to Compare E-Commerce Websites Using Data Envelopment Analysis

Authors: C. lo Storto

Abstract:

This paper presents an approach based on a distributed cognition framework and a non-parametric multi-criteria evaluation methodology (DEA) designed specifically to compare e-commerce websites from the consumer/user viewpoint. In particular, the framework considers a website's relative efficiency as a measure of its quality and usability. A website is modelled as a black box capable of providing the consumer/user with a set of functionalities. When the consumer/user interacts with the website to perform a task, he/she is involved in a cognitive activity, sustaining a cognitive cost to search, interpret and process information, and experiencing a sense of satisfaction. The degree of ambiguity and uncertainty he/she perceives and the required search time determine the size of the effort, and hence the amount of cognitive cost, he/she has to sustain to perform the task. Conversely, performing the task and achieving the result induce a sense of gratification, satisfaction and usefulness. In total, 9 variables are measured, classified into 3 website macro-dimensions (user experience, site navigability and structure). The framework is applied to compare 40 websites of businesses performing electronic commerce in the information technology market. A questionnaire for collecting subjective judgements on the websites in the sample was purposely designed and administered to 85 university students enrolled in computer science and information systems engineering undergraduate courses.
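
The DEA efficiency score used as the quality/usability measure can be illustrated with the input-oriented CCR envelopment model solved as a linear program; the sketch below uses SciPy's linprog, and the three "websites" with their cognitive-cost inputs and satisfaction output are hypothetical, not part of the 40-site sample.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR efficiency (envelopment form) for every DMU.
    X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs). Returns scores in (0, 1]."""
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]                     # minimise theta; variables = [theta, lambdas]
        A_in = np.hstack([-X[o].reshape(-1, 1), X.T])   # sum_j lambda_j * x_ij <= theta * x_io
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])     # sum_j lambda_j * y_rj >= y_ro
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.r_[np.zeros(m), -Y[o]]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None)] * (n + 1), method="highs")
        scores.append(res.x[0])
    return np.array(scores)

# Three hypothetical websites: two "cognitive cost" inputs, one "satisfaction" output.
X = np.array([[4.0, 2.0], [6.0, 3.0], [5.0, 6.0]])
Y = np.array([[8.0], [8.0], [6.0]])
print(np.round(dea_ccr_input(X, Y), 3))   # efficient site scores 1.0, dominated sites less
```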

Keywords: Website, e-commerce, DEA, distributed cognition, evaluation, comparison.

675 Choice Experiment Approach on Evaluation of Non-Market Farming System Outputs: First Results from Lithuanian Case Study

Authors: A. Novikova, L. Rocchi, G. Startiene

Abstract:

Market and non-market outputs are produced jointly in agriculture, and their supply depends on the intensity and type of production. The role of agriculture as an economic activity and its effects are important for the Lithuanian case study, as agricultural land covers more than half of the country. Positive and negative externalities created in agriculture are not accounted for by the market. Therefore, specific techniques such as stated preference methods, in particular choice experiments (CE), are used for the evaluation of non-market outputs in agriculture. The main aim of this paper is to present the construction of the research path for evaluating non-market farming system outputs in Lithuania. Conventional and organic farming, covering crop production (including both cereal and industrial crops) and livestock production (including dairy and cattle), were selected. The CE method and the nested logit (NL) model were selected as appropriate for the evaluation of non-market outputs of different farming systems in Lithuania. A pilot survey was implemented between October and November 2018 in order to test and improve the CE questionnaire. The results of the survey showed that the questionnaire is accepted and well understood by the respondents, and the econometric modelling showed that the selected NL model can be used for the main survey. The residents' understanding of the differences between organic and conventional farming was identified; it was revealed that they are more willing to choose organic farming over conventional farming.
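
For reference, the choice probabilities of a nested logit model can be computed from alternative utilities, nest scale parameters and inclusive values as in the sketch below; the nesting structure (organic vs conventional) follows the study's setting, but the utilities and scale parameters are invented for illustration.

```python
import math

def nested_logit(nests, lambdas):
    """nests: {nest: {alternative: utility}}, lambdas: {nest: scale parameter in (0, 1]}.
    Returns the unconditional choice probability of each alternative."""
    inclusive = {m: math.log(sum(math.exp(v / lambdas[m]) for v in alts.values()))
                 for m, alts in nests.items()}
    denom = sum(math.exp(lambdas[m] * inclusive[m]) for m in nests)
    probs = {}
    for m, alts in nests.items():
        p_nest = math.exp(lambdas[m] * inclusive[m]) / denom        # nest-level probability
        within = sum(math.exp(v / lambdas[m]) for v in alts.values())
        for alt, v in alts.items():
            probs[alt] = p_nest * math.exp(v / lambdas[m]) / within  # P(nest) * P(alt | nest)
    return probs

nests = {"organic":      {"organic_crop": 1.2, "organic_livestock": 0.8},
         "conventional": {"conv_crop": 0.9, "conv_livestock": 0.7}}
print(nested_logit(nests, {"organic": 0.6, "conventional": 0.6}))
```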

Keywords: Choice experiments, farming system, Lithuania, market outputs, non-market outputs.

674 Evolution of Web Development Techniques in Modern Technology

Authors: Abdul Basit Kiani, Maryam Kiani

Abstract:

The art of web development in new technologies is a dynamic journey, shaped by the constant evolution of tools and platforms. With the emergence of JavaScript frameworks and APIs, web developers are empowered to craft web applications that are not only robust but also highly interactive. The aim is to provide an overview of the developments in the field. The integration of artificial intelligence (AI) and machine learning (ML) has opened new horizons in web development. Chatbots, intelligent recommendation systems, and personalization algorithms have become integral components of modern websites. These AI-powered features enhance user engagement, provide personalized experiences, and streamline customer support processes, revolutionizing the way businesses interact with their audiences. Lastly, the emphasis on web security and privacy has been a pivotal area of progress. With the increasing incidents of cyber threats, web developers have implemented robust security measures to safeguard user data and ensure secure transactions. Innovations such as HTTPS protocol, two-factor authentication, and advanced encryption techniques have bolstered the overall security of web applications, fostering trust and confidence among users. Hence, recent progress in web development has propelled the industry forward, enabling developers to craft innovative and immersive digital experiences. From responsive design to AI integration and enhanced security, the landscape of web development continues to evolve, promising a future filled with endless possibilities.

Keywords: Web development, software testing, progressive web apps, web and mobile native application.
