Search results for: efficient inventory

344 Post Pandemic Mobility Analysis through Indexing and Sharding in MongoDB: Performance Optimization and Insights

Authors: Karan Vishavjit, Aakash Lakra, Shafaq Khan

Abstract:

The COVID-19 pandemic has pushed healthcare professionals to use big data analytics as a vital tool for tracking and evaluating the effects of contagious viruses. Efficient NoSQL databases are needed to analyse such huge datasets effectively. This research integrates several datasets, which cuts down on query processing time and produces predictive visual artifacts, making it possible to analyse post-COVID-19 health and well-being outcomes and to evaluate the effectiveness of government efforts during the pandemic. We recommend applying sharding and indexing technologies to improve query effectiveness and scalability as the dataset expands. Spreading the datasets across a sharded database and indexing the individual shards enables effective data retrieval and analysis. The key goal is to analyse the connections between governmental activities, poverty levels, and post-pandemic wellbeing. Using advanced data analysis and visualisations, we evaluate the effectiveness of governmental initiatives to improve health and lower poverty levels. The findings provide relevant data that support the advancement of the UN sustainability objectives, future pandemic preparation, and evidence-based decision-making. This study shows how big data and NoSQL databases may be used to address problems in global health.
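The abstract names sharding and indexing but gives no implementation details. As a minimal sketch of that general technique, the following PyMongo snippet shards a hypothetical covid.mobility collection on a hashed key and builds secondary indexes on assumed fields; the cluster address, collection, and field names are illustrative, not the authors' schema, and a sharded cluster reached through a mongos router is assumed.

```python
from pymongo import MongoClient, ASCENDING

# Connect to the mongos router of a sharded cluster (address is hypothetical).
client = MongoClient("mongodb://localhost:27017")

# Enable sharding for the database and shard the collection on a hashed key,
# so documents are distributed evenly across shards.
client.admin.command("enableSharding", "covid")
client.admin.command("shardCollection", "covid.mobility",
                     key={"country": "hashed"})

# Secondary indexes on frequently queried fields speed up per-shard lookups.
coll = client["covid"]["mobility"]
coll.create_index([("date", ASCENDING)])
coll.create_index([("country", ASCENDING), ("poverty_rate", ASCENDING)])

# Example query that can use both the shard key and the compound index.
cursor = coll.find({"country": "CA", "poverty_rate": {"$gt": 0.15}})
print(cursor.explain())
```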

Keywords: COVID-19, big data, data analysis, indexing, NoSQL, sharding, scalability, poverty.

343 An Evaluation of TIG Welding Parametric Influence on Tensile Strength of 5083 Aluminium Alloy

Authors: Lakshman Singh, Rajeshwar Singh, Naveen Kumar Singh, Davinder Singh, Pargat Singh

Abstract:

Tungsten Inert Gas (TIG) welding is a high-quality welding process used to weld thin metals and their alloys. 5083 aluminium alloys play an important role in the engineering and metallurgy fields because of their excellent corrosion properties, ease of fabrication and high specific strength, coupled with the best combination of toughness and formability.

TIG welding is one of the most precise and fastest processes used in the aerospace, ship and marine industries. In this study, the TIG welding process is analyzed to evaluate the influence of input parameters on the tensile strength of 5083 Al-alloy specimens with dimensions of 100 mm long x 15 mm wide x 5 mm thick. Welding current (I), gas flow rate (G) and welding speed (S) are the input parameters which affect the tensile strength of 5083 Al-alloy welded joints. As welding speed increases, tensile strength first increases up to an optimum value and then decreases with a further increase in welding speed. Results of the study show that a maximum tensile strength of 129 MPa is obtained at a welding current of 240 A, a gas flow rate of 7 L/min and a welding speed of 98 mm/min. These are the optimum values of the input parameters, which help to produce an efficient weld joint with good mechanical properties such as tensile strength.

Keywords: 5083 Aluminium alloy, Gas flow rate, TIG welding, Welding current, Welding speed and Tensile strength.

342 Comparative Analysis of Ranunculus muricatus and Typha latifolia as Wetland Plants Applied for Domestic Wastewater Treatment in a Mesocosm Scale Study

Authors: S. Aziz, M. Ali, S. Asghar, S. Ahmed

Abstract:

Compared with other methods of wastewater treatment, constructed wetlands are one of the most attractive practices because, being a natural process, they are eco-friendly, have low construction and maintenance costs, and have considerable wastewater treatment capability. The current research focused mainly on comparing Ranunculus muricatus and Typha latifolia as wetland plants for domestic wastewater treatment by designing and constructing efficient pilot-scale horizontal subsurface flow mesocosms. Parameters such as chemical oxygen demand, biological oxygen demand, phosphates, sulphates, nitrites, nitrates, and pathogenic indicator microbes were studied continuously over successive treatments. Treatment efficiency of the system increased with the passage of time and with increase in temperature. The efficiency of the T. latifolia planted setups in the open environment was fairly good for parameters like COD and BOD5, showing reductions of up to 82.5% for COD and 82.6% for BOD5, while DO increased by up to 125%. The efficiency of the R. muricatus vegetated setup was also good, but lower than that of the T. latifolia planted setup, showing 80.95% removal of COD and BOD5. Ranunculus muricatus was found effective in reducing the bacterial count in wastewater. Both macrophytes were found promising for wastewater treatment.

Keywords: Biological oxygen demand, chemical oxygen demand, horizontal subsurface flow, Total suspended solids, Wetland.

341 Tide Contribution in the Flood Event of Jeddah City: Mathematical Modelling and Different Field Measurements of the Groundwater Rise

Authors: Aïssa Rezzoug

Abstract:

This paper aims to bring new elements that demonstrate that the tide causes the groundwater to rise in the shoreline band on which the urban areas occur, especially in western coastal cities of the Kingdom of Saudi Arabia such as Jeddah. The reason for the last inundation events of Jeddah was the groundwater rise in the city coupled with a strong precipitation event. This paper illustrates the tide's contribution to increasing the groundwater level significantly. It shows that internal groundwater recharge within the urban area is caused not only by the excess water supply coming from surrounding areas, due to human activity and the lack of a sufficient and efficient sewage system, but also by the tide effect. The study follows a quantitative method to assess groundwater level rise risks through many in-situ measurements and mathematical modelling. The proposed approach highlights that the groundwater level in the urban areas of the city on the shoreline band reaches the high tide level without considering any input from precipitation. Despite the small tide in the Red Sea compared to other oceanic coasts, the groundwater level is considerably enhanced by the tide from the seaside and by the freshwater table from the landside of the city. Under these conditions, the groundwater level becomes high in the city and prevents the soil from evacuating quickly enough the surface flow caused by the storm event, as observed in the last historical flood catastrophe of Jeddah in 2009.

Keywords: Flood, groundwater rise, Jeddah, tide.

340 Neural Network Implementation Using FPGA: Issues and Application

Authors: A. Muthuramalingam, S. Himavathi, E. Srinivasan

Abstract:

Hardware realization of a Neural Network (NN) depends to a large extent on the efficient implementation of a single neuron. FPGA-based reconfigurable computing architectures are suitable for hardware implementation of neural networks, yet FPGA realization of ANNs with a large number of neurons is still a challenging task. This paper discusses the issues involved in the implementation of a multi-input neuron with linear/nonlinear excitation functions using an FPGA. An implementation method with a resource/speed tradeoff is proposed to handle signed decimal numbers. The VHDL coding developed is tested using a Xilinx XCV50hq240 chip. To improve the speed of operation, a lookup table (LUT) method is used, and the problems involved in using an LUT for a nonlinear function are discussed. The percentage saving in resources and the improvement in speed with an LUT for a neuron are reported. An attempt is also made to derive a generalized formula for a multi-input neuron that facilitates an approximate estimate of the total resource requirement and the speed achievable for a given multilayer neural network, which helps the designer choose the FPGA capacity for a given application. Using the proposed implementation method, a neural network based application, namely a space vector modulator for a vector-controlled drive, is presented.
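The paper's VHDL is not reproduced here; the following Python sketch only illustrates the lookup-table idea for the nonlinear excitation function, with an assumed input range, table size and fixed-point scale rather than the authors' design values.

```python
import numpy as np

# Build a lookup table for the sigmoid over a bounded input range, as is
# commonly done for FPGA neurons; range, table size and fixed-point scaling
# here are illustrative assumptions.
LUT_SIZE = 256
X_MIN, X_MAX = -8.0, 8.0
Q_SCALE = 2 ** 12            # fixed-point scale for stored outputs

x_grid = np.linspace(X_MIN, X_MAX, LUT_SIZE)
sigmoid_lut = np.round(1.0 / (1.0 + np.exp(-x_grid)) * Q_SCALE).astype(np.int32)

def lut_sigmoid(x: float) -> float:
    """Approximate sigmoid(x) by indexing into the precomputed table."""
    idx = int((x - X_MIN) / (X_MAX - X_MIN) * (LUT_SIZE - 1))
    idx = max(0, min(LUT_SIZE - 1, idx))          # saturate out-of-range inputs
    return sigmoid_lut[idx] / Q_SCALE

def neuron(inputs, weights, bias):
    """Multi-input neuron: multiply-accumulate followed by the LUT activation."""
    acc = float(np.dot(inputs, weights)) + bias
    return lut_sigmoid(acc)

print(neuron([0.5, -1.2, 0.3], [0.8, 0.4, -0.6], 0.1))
```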

Keywords: FPGA implementation, multi-input neuron, neural network, nn based space vector modulator.

339 Continuous Measurement of Spatial Exposure Based on Visual Perception in Three-Dimensional Space

Authors: Nanjiang Chen

Abstract:

In the backdrop of expanding urban landscapes, accurately assessing spatial openness is critical. Traditional visibility analysis methods grapple with discretization errors and inefficiencies, creating a gap in truly capturing the human experience of space. Addressing these gaps, this paper presents a continuous visibility algorithm, providing a potentially valuable approach to measuring urban spaces from a human-centric perspective. This study presents a methodological breakthrough by applying this algorithm to urban visibility analysis. Unlike conventional approaches, this technique allows for a continuous range of visibility assessment, closely mirroring human visual perception. By eliminating the need for predefined subdivisions in ray casting, it offers a more accurate and efficient tool for urban planners and architects. The proposed algorithm not only reduces computational errors but also demonstrates faster processing capabilities, validated through a case study in Beijing's urban setting. Its key distinction lies in its potential to benefit a broad spectrum of stakeholders, ranging from urban developers to public policymakers, aiding in the creation of urban spaces that prioritize visual openness and quality of life. This advancement in urban analysis methods could lead to more inclusive, comfortable, and well-integrated urban environments, enhancing the spatial experience for communities worldwide.

Keywords: Visual openness, spatial continuity, ray-tracing algorithms, urban computation.

338 Performance Assessment of Multi-Level Ensemble for Multi-Class Problems

Authors: Rodolfo Lorbieski, Silvia Modesto Nassar

Abstract:

Many supervised machine learning tasks require decision making across numerous different classes. Multi-class classification has several applications, such as face recognition, text recognition and medical diagnostics. The objective of this article is to analyze an adapted method of Stacking for multi-class problems, which combines ensembles within the ensemble itself. For this purpose, a training scheme similar to Stacking was used, but with three levels, where the final decision-maker (level 2) is trained by combining the outputs of a pair of meta-classifiers (level 1) from the tree-based and Bayesian families, which are in turn trained by pairs of base classifiers (level 0) of the same family. This strategy seeks to promote diversity among the ensembles forming the level-2 meta-classifier. Three performance measures were used: (1) accuracy, (2) area under the ROC curve, and (3) time, for three factors: (a) dataset, (b) experiment and (c) level. To compare the factors, a three-way ANOVA test was executed for each performance measure, considering 5 datasets by 25 experiments by 3 levels. A triple interaction between factors was observed only for time. Accuracy and area under the ROC curve presented similar results, showing a double interaction between level and experiment, as well as with the dataset factor. It was concluded that level 2 had an average performance above the other levels and that the proposed method is especially efficient for multi-class problems when compared to binary problems.
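A minimal sketch of stacking ensembles within an ensemble, using scikit-learn; the specific classifier families here are stand-ins chosen for illustration, not the exact ones used in the article.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import StackingClassifier
from sklearn.naive_bayes import GaussianNB, BernoulliNB
from sklearn.tree import DecisionTreeClassifier, ExtraTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

# Level 0: pairs of base classifiers from the same family feed each level-1 model.
bayes_pair = [("gnb", GaussianNB()), ("bnb", BernoulliNB())]
tree_pair = [("dt", DecisionTreeClassifier(random_state=0)),
             ("et", ExtraTreeClassifier(random_state=0))]

# Level 1: meta-classifiers, each itself a stacking ensemble.
meta_bayes = StackingClassifier(estimators=bayes_pair,
                                final_estimator=GaussianNB())
meta_tree = StackingClassifier(estimators=tree_pair,
                               final_estimator=DecisionTreeClassifier(random_state=0))

# Level 2: the final decision-maker combines the level-1 ensembles.
level2 = StackingClassifier(
    estimators=[("meta_bayes", meta_bayes), ("meta_tree", meta_tree)],
    final_estimator=LogisticRegression(max_iter=1000),
)

print(cross_val_score(level2, X, y, cv=5).mean())
```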

Keywords: Stacking, multi-layers, ensemble, multi-class.

337 A Software Framework for Predicting Oil-Palm Yield from Climate Data

Authors: Mohd. Noor Md. Sap, A. Majid Awan

Abstract:

Intelligent systems based on machine learning techniques, such as classification and clustering, are gaining widespread popularity in real-world applications. This paper presents work on developing a software system for predicting crop yield, for example oil-palm yield, from climate and plantation data. At the core of our system is a method for unsupervised partitioning of data for finding spatio-temporal patterns in climate data using kernel methods, which offer the strength to deal with complex data. This work is inspired by the notion that a non-linear transformation of the data into some high-dimensional feature space increases the possibility of linear separability of the patterns in the transformed space and therefore simplifies the exploration of the associated structure in the data. Kernel methods implicitly perform a non-linear mapping of the input data into a high-dimensional feature space by replacing the inner products with an appropriate positive definite function. In this paper we present a robust weighted kernel k-means algorithm incorporating spatial constraints for clustering the data. The proposed algorithm can effectively handle noise, outliers and auto-correlation in the spatial data, allowing effective and efficient data analysis by exploring patterns and structures in the data, and thus can be used for predicting oil-palm yield by analyzing the various factors affecting the yield.
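The weighted, spatially constrained variant proposed in the paper is not reproduced here; the sketch below shows plain kernel k-means with an RBF kernel, i.e. the baseline the paper builds on, with illustrative parameters.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def kernel_kmeans(X, n_clusters, gamma=1.0, n_iter=50, seed=0):
    """Plain kernel k-means with an RBF kernel (no spatial weights)."""
    rng = np.random.default_rng(seed)
    K = rbf_kernel(X, gamma=gamma)                 # n x n kernel matrix
    labels = rng.integers(n_clusters, size=len(X)) # random initial assignment
    for _ in range(n_iter):
        dist = np.zeros((len(X), n_clusters))
        for c in range(n_clusters):
            mask = labels == c
            nc = mask.sum()
            if nc == 0:
                dist[:, c] = np.inf
                continue
            # ||phi(x) - mean_c||^2 up to the constant K(x, x) term
            dist[:, c] = (-2.0 * K[:, mask].sum(axis=1) / nc
                          + K[np.ix_(mask, mask)].sum() / nc**2)
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels

X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 4.0])
print(np.bincount(kernel_kmeans(X, n_clusters=2)))
```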

Keywords: Pattern analysis, clustering, kernel methods, spatial data, crop yield

336 Virtual Conciliation in Colombia: Evaluation of Maturity Level within the Framework of E-Government

Authors: Jenny Paola Forero Pachón, Sonia Cristina Gamboa Sarmiento, Luis Carlos Gómez Flórez

Abstract:

The Colombian government has defined an e-government strategy to take advantage of Information Technologies (IT) in order to contribute to building a more efficient, transparent and participative State that provides better services to citizens and businesses. In this regard, the justice sector is one of the government sectors where IT has generated the most expectation, considering that the country has a backlog of judicial processes. This situation has led to the search for alternative forms of access to justice that speed up the process while keeping the cost low for citizens. To this end, the Colombian government has authorized the use of Alternative Dispute Resolution (ADR) methods, through which disputes can be resolved more quickly than in judicial processes while facilitating greater communication between the parties, without recourse to a judicial authority. One of these methods is conciliation, which includes a special modality, known as virtual conciliation, that takes advantage of IT: the conciliation is supported by information systems, applications or platforms, and communication between the parties is provided through them. This paper evaluates the maturity level of the virtual conciliation service within the framework of this strategy. The evaluation is carried out using Shahkooh's five-phase model for e-government. As a result, it is evident that, in the context of conciliation, maturity does not reach the level in the model required for it to be considered virtual conciliation; therefore, it is necessary to define strategies to maximize the potential of IT in this context.

Keywords: Alternative dispute resolution, e-government, evaluation of maturity, Shahkooh model, virtual conciliation.

335 Efficiency Based Model for Solar Urban Planning

Authors: Amado, M. P., Amado, A., Poggi, F., Correia de Freitas, J.

Abstract:

Today it is widely understood that global energy consumption patterns are directly related to the urban expansion and development process. This expansion is based on the natural growth of human activities and has left most urban areas totally dependent on external energy inputs derived from fossil fuels. This status quo of production, transportation, storage and consumption of energy has become inefficient and is set to become even more so when the continuous increases in energy demand are factored in. The territorial management of land use and related activities is a central component in the search for more efficient models of energy use, models that can meet current and future regional, national and European goals.

In this paper a methodology is developed and discussed with the aim of improving energy efficiency at the municipal level. The development of this methodology is based on the monitoring of energy consumption and its use patterns resulting from the natural dynamism of human activities in the territory and can be utilized to assess sustainability at the local scale. A set of parameters and indicators are defined with the objective of constructing a systemic model based on the optimization, adaptation and innovation of the current energy framework and the associated energy consumption patterns. The use of the model will enable local governments to strike the necessary balance between human activities and economic development and the local and global environment while safeguarding fairness in the energy sector.

Keywords: Solar urban planning, solar smart city, urban development, energy efficiency.

334 Research on the Optimization of the Facility Layout of Efficient Cafeterias for Troops

Authors: Qing Zhang, Jiachen Nie, Yujia Wen, Guanyuan Kou, Peng Yu, Kun Xia, Qin Yang, Li Ding

Abstract:

Background: A facility layout problem (FLP) is an NP-complete (non-deterministic polynomial) problem, for which it is hard to obtain an exact optimal solution. FLP has been widely studied in various limited spaces and workflows. For example, cafeterias for troops with many types of equipment cause chaotic processes when dining. Objective: This article tried to optimize the layout of a troops' cafeteria and to improve the overall efficiency of the dining process. Methods: First, the original cafeteria layout design scheme was analyzed from an ergonomic perspective and two new design schemes were generated. Next, three facility layout models were designed, and simulation was applied to compare the total time and density of troops between the schemes. Last, an experiment on the dining process with video observation and analysis verified the simulation results. Results: In the simulation, the dining time under the second new layout was shortened by 2.25% and 1.89% (p<0.0001, p=0.0001) compared with the other two layouts, while troop-flow density and interference were both greatly reduced in the two new layouts. In the experiment, the process completion time and the number of interferences were reduced as well, which verified the corresponding simulation results. Conclusion: The two new layout schemes were shown to be optimal through a series of simulations and space experiments. In future research, similar approaches could be applied while taking layout-design algorithm calculation into consideration.

Keywords: Troops’ cafeteria, layout optimization, dining efficiency, AnyLogic simulation, field experiment

333 An Algorithm Proposed for FIR Filter Coefficients Representation

Authors: Mohamed Al Mahdi Eshtawie, Masuri Bin Othman

Abstract:

Finite impulse response (FIR) filters have the advantages of linear phase, guaranteed stability, fewer finite precision errors, and efficient implementation. In contrast, they have the major disadvantage of needing a higher order (more coefficients) than their IIR counterparts for comparable performance. The high order demand imposes more hardware requirements, arithmetic operations, area usage, and power consumption when designing and fabricating the filter. Therefore, minimizing or reducing these parameters is a major goal in the digital filter design task. This paper presents an algorithm for modifying the values and the number of non-zero coefficients used to represent the FIR digital pulse shaping filter response. With this algorithm, the FIR filter frequency and phase response can be represented with a minimum number of non-zero coefficients, reducing the arithmetic complexity needed to compute the filter output. Consequently, the system characteristics, i.e. power consumption, area usage, and processing time, are also reduced. The proposed algorithm is more powerful when integrated with multiplierless techniques such as distributed arithmetic (DA) in designing high-order digital FIR filters. Here the use of DA eliminates the need for multipliers when implementing the multiply and accumulate unit (MAC), and the proposed algorithm reduces the number of adders and addition operations needed to compute the filter output through the minimization of the non-zero coefficient values.
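The proposed coefficient-reduction algorithm itself is not given in the abstract; the sketch below only illustrates the general idea of zeroing near-zero FIR coefficients and checking how the magnitude response degrades, with an assumed filter specification and threshold.

```python
import numpy as np
from scipy.signal import firwin, freqz

# Design a high-order pulse-shaping-style FIR filter (illustrative spec).
taps = firwin(numtaps=101, cutoff=0.2)

# Zero out coefficients whose magnitude is below a threshold; in practice the
# threshold would be chosen against a frequency-response specification.
threshold = 1e-3
sparse_taps = np.where(np.abs(taps) < threshold, 0.0, taps)
print("non-zero taps:", np.count_nonzero(taps), "->", np.count_nonzero(sparse_taps))

# Compare magnitude responses of the original and reduced-coefficient filters.
w, h_full = freqz(taps)
_, h_sparse = freqz(sparse_taps)
max_dev_db = np.max(np.abs(20 * np.log10(np.abs(h_full) + 1e-12)
                           - 20 * np.log10(np.abs(h_sparse) + 1e-12)))
print(f"max magnitude deviation: {max_dev_db:.2f} dB")
```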

Keywords: Pulse shaping Filter, Distributed Arithmetic, Optimization algorithm.

332 Using Genetic Algorithms to Outline Crop Rotations and a Cropping-System Model

Authors: Nicolae Bold, Daniel Nijloveanu

Abstract:

The cropping-system concept is a method used by farmers. It is an environmentally friendly method, protecting natural resources (soil, water, air, nutritive substances) and increasing production at the same time, taking into account particularities of the crops. Combining this powerful method with the concepts of genetic algorithms makes it possible to generate sequences of crops that form a rotation. Genetic algorithms have proved efficient in solving optimization problems, and their polynomial complexity allows them to be applied to more difficult and varied problems. In our case, the optimization consists in finding the most profitable rotation of crops. One of the expected results is to optimize the usage of resources in order to minimize costs and maximize profit. In order to achieve these goals, a genetic algorithm was designed. This algorithm finds several optimized cropping-system possibilities which have the highest profit and thus minimize the costs. The algorithm uses genetic-based operators (mutation, crossover) and structures (genes, chromosomes). A cropping-system possibility is represented as a chromosome, and a crop within the rotation is a gene within that chromosome. Results on the efficiency of this method are presented in a dedicated section. The implementation of this method would benefit the activity of farmers by giving them hints and helping them to use resources efficiently.
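A minimal sketch of such a genetic algorithm, where a chromosome is a crop rotation and a gene is a crop; the crop list, profit values and repeat penalty are invented for illustration and are not agronomic data from the paper.

```python
import random

# Hypothetical per-crop profit and a penalty for repeating a crop in
# consecutive years; values are illustrative only.
PROFIT = {"wheat": 3.0, "maize": 4.0, "soybean": 3.5, "clover": 1.5}
CROPS = list(PROFIT)
ROTATION_LEN, POP_SIZE, GENERATIONS = 6, 40, 200
REPEAT_PENALTY = 2.5

def fitness(rotation):
    profit = sum(PROFIT[c] for c in rotation)
    repeats = sum(1 for a, b in zip(rotation, rotation[1:]) if a == b)
    return profit - REPEAT_PENALTY * repeats

def crossover(a, b):
    cut = random.randrange(1, ROTATION_LEN)   # single-point crossover
    return a[:cut] + b[cut:]

def mutate(rotation, rate=0.1):
    return [random.choice(CROPS) if random.random() < rate else c for c in rotation]

random.seed(0)
population = [[random.choice(CROPS) for _ in range(ROTATION_LEN)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]     # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(best, round(fitness(best), 2))
```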

Keywords: Genetic algorithm, chromosomes, genes, cropping, agriculture.

331 Investigation Corn and Soybean Intercropping Advantages in Competition with Redroot Pigweed and Jimsonweed

Authors: M. Rezvani, F. Zaefarian, M. Aghaalikhani, H. Rahimian Mashhadi, E. Zand

Abstract:

The spatial variation in plant species associated with intercropping is intended to reduce resource competition between species and increase yield potential. A field experiment was carried out on corn (Zea mays L.) and soybean (Glycine max L.) intercropping in a replacement series, with weed contamination treatments consisting of: weed free, infestation of redroot pigweed, infestation of jimsonweed, and simultaneous infestation of redroot pigweed and jimsonweed, in Karaj, Iran during the 2007 growing season. The experimental design was a randomized complete block in a factorial arrangement with three replications. Significant (P≤0.05) differences were observed in yield in intercropping. Corn yield was higher in intercropping, but soybean yield was significantly reduced by corn when intercropped. However, total productivity and land use efficiency were high under the intercropping system even under contamination by either species of weed. The aggressivity of corn relative to soybean revealed the greater competitive ability of corn. A land equivalent ratio (LER) greater than 1 in all treatments indicated intercropping advantages and was highest in the 50:50 (corn/soybean) weed-free treatment. These findings suggest that intercropping corn and soybean increases total productivity per unit area and improves land use efficiency. Considering the experimental findings, corn-soybean intercropping (50:50) may be recommended for yield advantage, more efficient utilization of resources, and weed suppression as a biological control.
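The land equivalent ratio reported above is not defined in the abstract; the short sketch below shows how LER is conventionally computed, using hypothetical yields rather than the study's measurements.

```python
def land_equivalent_ratio(yield_corn_ic, yield_corn_sole,
                          yield_soy_ic, yield_soy_sole):
    """LER = sum of intercrop-to-sole-crop yield ratios; LER > 1 indicates
    an intercropping advantage in land use."""
    return yield_corn_ic / yield_corn_sole + yield_soy_ic / yield_soy_sole

# Hypothetical yields (t/ha), not the study's measured values.
print(land_equivalent_ratio(6.5, 8.0, 1.8, 3.0))   # 0.8125 + 0.6 = 1.4125 -> advantage
```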

Keywords: Corn, soybean, intercropping, redroot pigweed, jimsonweed.

330 Land Use/Land Cover Mapping Using Landsat 8 and Sentinel-2 in a Mediterranean Landscape

Authors: M. Vogiatzis, K. Perakis

Abstract:

Spatially explicit and up-to-date land use/land cover information is fundamental for spatial planning, land management, sustainable development, and sound decision-making. In the last decade, many satellite-derived land cover products at different spatial, spectral, and temporal resolutions have been developed, such as the European Copernicus land cover product. However, more efficient and detailed land use/land cover information is required at the regional or local scale. A typical Mediterranean basin with a complex landscape comprising various forest types, crops, artificial surfaces, and wetlands was selected to test and develop our approach. In this study, we investigate the improvement of the Copernicus land cover product (CLC2018) using Landsat 8 and Sentinel-2 pixel-based classification based on all available existing geospatial data (forest maps, LPIS, Natura 2000 habitats, cadastral parcels, etc.). We examined and compared the performance of the Random Forest classifier for land use/land cover mapping. In total, 10 land use/land cover categories were recognized in Landsat 8 and 11 in Sentinel-2A. A comparison of the overall classification accuracies for 2018 shows that the Landsat 8 classification accuracy was slightly higher than that of Sentinel-2A (82.99% vs. 80.30%). We concluded that the main land use/land cover types of CLC2018, even within a heterogeneous area, can be successfully mapped and updated according to the CLC nomenclature. Future research should be oriented toward integrating spatiotemporal information from seasonal bands and spectral indices in the classification process.
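A minimal sketch of the pixel-based Random Forest classification step with scikit-learn; the band values and labels below are synthetic stand-ins for Landsat 8 / Sentinel-2 samples, so only the pipeline shape is meaningful, not the printed accuracy.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for labelled pixels: rows are pixels, columns are band
# reflectances; real work would rasterise reference polygons onto the imagery.
rng = np.random.default_rng(0)
n_pixels, n_bands, n_classes = 3000, 6, 10
X = rng.random((n_pixels, n_bands))
y = rng.integers(n_classes, size=n_pixels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_train, y_train)
print("overall accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```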

Keywords: land use/land cover, random forest, Landsat-8 OLI, Sentinel-2A MSI, Corine land cover

329 Artificial Neural Network Modeling and Genetic Algorithm Based Optimization of Hydraulic Design Related to Seepage under Concrete Gravity Dams on Permeable Soils

Authors: Muqdad Al-Juboori, Bithin Datta

Abstract:

Hydraulic structures such as gravity dams are classified as essential structures and play a vital role in providing strong and safe water resource management. Three major aspects must be considered to achieve an effective design of such a structure: 1) the building cost, 2) safety, and 3) accurate analysis of seepage characteristics. Due to the complexity and non-linearity of the seepage process, many approximation theories have been developed; however, the application of these theories results in noticeable errors. The analytical solution, which involves a difficult conformal mapping procedure, can be applied only to simple and symmetrical problems. Therefore, the objectives of this paper are to: 1) develop a surrogate model, based on data numerically simulated using SEEPW software, to approximately simulate the seepage process related to a hydraulic structure, and 2) develop and solve a linked simulation-optimization model, based on the developed surrogate model, to describe the seepage occurring under a concrete gravity dam in order to obtain an optimum and safe design at minimum cost. The results show that the linked simulation-optimization model provides an efficient and optimum design of concrete gravity dams.
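A rough sketch of the linked simulation-optimization idea under stated assumptions: an MLP surrogate is fitted to stand-in design/response pairs (in the paper these come from SEEPW runs), and SciPy's differential evolution is used in place of the authors' genetic algorithm; the cost model, variable bounds and safety limit are invented for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import differential_evolution

# Stand-in training data: design variables (e.g. cutoff depth, floor length)
# versus a seepage response; here generated from an arbitrary analytic stand-in.
rng = np.random.default_rng(1)
X = rng.uniform([1.0, 5.0], [10.0, 40.0], size=(200, 2))   # depth, length
exit_gradient = 1.0 / (0.3 * X[:, 0] + 0.05 * X[:, 1]) + rng.normal(0, 0.01, 200)

surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                         random_state=0).fit(X, exit_gradient)

def cost(design):
    depth, length = design
    construction_cost = 5.0 * depth + 1.0 * length          # illustrative cost model
    gradient = surrogate.predict([design])[0]
    penalty = 1e3 * max(0.0, gradient - 0.25)                # assumed safety limit
    return construction_cost + penalty

result = differential_evolution(cost, bounds=[(1.0, 10.0), (5.0, 40.0)], seed=0)
print(result.x, result.fun)
```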

Keywords: Artificial neural network, concrete gravity dam, genetic algorithm, seepage analysis.

328 Evaluation of the Microscopic-Observation Drug-Susceptibility Assay Drugs Concentration for Detection of Multidrug-Resistant Tuberculosis

Authors: Anita, Sari Septiani Tangke, Rusdina Bte Ladju, Nasrum Massi

Abstract:

New diagnostic tools are urgently needed to interrupt the transmission of tuberculosis and multidrug-resistant tuberculosis. The microscopic-observation drug-susceptibility (MODS) assay is a rapid, accurate and simple liquid culture method to detect multidrug-resistant tuberculosis (MDR-TB). The MODS assay was evaluated to determine whether a lower, identical concentration of isoniazid and rifampin can be used for detection of MDR-TB. Direct drug-susceptibility testing was performed with the use of the MODS assay, and drug-sensitive control strains were tested daily. The concentrations used for both isoniazid and rifampin were the same: 0.16, 0.08 and 0.04 μg per milliliter. We tested 56 M. tuberculosis clinical isolates and the control strain M. tuberculosis H37Rv. All concentrations showed the same result. Of 53 M. tuberculosis clinical isolates, 14 were MDR-TB, 38 were susceptible to isoniazid and rifampin, and 1 was resistant to isoniazid only. Reference drug-susceptibility testing was performed using the proportion method with the Mycobacteria Growth Indicator Tube (MGIT) system. The result of the MODS assay using the lower concentrations was significant (P<0.001) compared with the reference method.

A lower, identical concentration of isoniazid and rifampin can thus be used to detect MDR-TB, making the assay cheaper to operate and easier to apply in resource-limited environments. However, additional studies evaluating MODS with lower, identical concentrations of isoniazid and rifampin must be conducted with a larger number of clinical isolates.

Keywords: Isoniazid, MODS assay, MDR-TB, Rifampin.

327 Comparative Study on Status and Development of Transient Flow Analysis Including Simple Surge Tank

Authors: I. Abuiziah, A. Oulhaj, K. Sebari, D. Ouazar

Abstract:

This paper addresses the modeling and simulation of transient phenomena in conveying pipeline systems based on the rigid column and full elastic methods. Transient analysis is important and is one of the more challenging and complicated flow problems in the design and operation of water pipeline systems. Transients can produce large pressure forces and rapid fluid accelerations in a water pipeline system; these disturbances may result in device failures, system fatigue or pipe ruptures, and even dirty water intrusion. Several methods have been introduced and used to analyze transient flow, and an accurate analysis and suitable protection devices should be used to protect water pipeline systems. The fourth-order Runge-Kutta method has been used to solve the dynamic and continuity equations in the rigid column method, while the method of characteristics is used to solve these equations in the full elastic method. The results show that the model is an efficient tool for transient flow analysis and that the two methods provide approximately identical results. Moreover, using a simple surge tank ("open surge tank") reduces the unfavorable effects of transients.
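A minimal sketch of a fourth-order Runge-Kutta integration of a textbook rigid-column surge-tank model (momentum plus continuity); the tunnel geometry, friction coefficient and initial conditions are illustrative values, not those of the paper.

```python
import numpy as np

# Textbook rigid-column model of a tunnel feeding an open surge tank after
# sudden valve closure; all geometry and friction values are illustrative.
G = 9.81
L, A = 500.0, 2.0        # tunnel length (m) and area (m^2)
A_S = 10.0               # surge tank area (m^2)
C_F = 0.002              # lumped friction coefficient (s^2/m^5)

def derivatives(state, q_demand=0.0):
    q, z = state                                  # tunnel flow (m^3/s), tank level (m)
    dq = (G * A / L) * (-z - C_F * q * abs(q))    # momentum (rigid column)
    dz = (q - q_demand) / A_S                     # continuity at the surge tank
    return np.array([dq, dz])

def rk4_step(state, dt):
    k1 = derivatives(state)
    k2 = derivatives(state + 0.5 * dt * k1)
    k3 = derivatives(state + 0.5 * dt * k2)
    k4 = derivatives(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

state, dt = np.array([4.0, 0.0]), 0.1             # initial flow, level at reservoir datum
levels = []
for _ in range(int(300 / dt)):                    # simulate 300 s of oscillation
    state = rk4_step(state, dt)
    levels.append(state[1])
print(f"maximum surge level: {max(levels):.2f} m")
```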

Keywords: Elastic method, Flow transient, Open surge tank, Pipeline, Protection devices, Numerical model, Rigid column method.

326 A Green Design for Assembly Model for Integrated Design Evaluation and Assembly and Disassembly Sequence Planning

Authors: Yuan-Jye Tseng, Fang-Yu Yu, Feng-Yi Huang

Abstract:

A green design for assembly model is presented to integrate design evaluation and assembly and disassembly sequence planning by evaluating the three activities in one integrated model. For an assembled product, an assembly sequence planning model is required for assembling the product at the start of the product life cycle. A disassembly sequence planning model is needed for disassembling the product at the end. In a green product life cycle, it is important to plan how a product can be disassembled, reused, or recycled, before the product is actually assembled and produced. Given a product requirement, there may be several design alternative cases to design the same product. In the different design cases, the assembly and disassembly sequences for producing the product can be different. In this research, a new model is presented to concurrently evaluate the design and plan the assembly and disassembly sequences. First, the components are represented by using graph based models. Next, a particle swarm optimization (PSO) method with a new encoding scheme is developed. In the new PSO encoding scheme, a particle is represented by a position matrix defining an assembly sequence and a disassembly sequence. The assembly and disassembly sequences can be simultaneously planned with an objective of minimizing the total of assembly costs and disassembly costs. The test results show that the presented method is feasible and efficient for solving the integrated design evaluation and assembly and disassembly sequence planning problem. An example product is implemented and illustrated in this paper.
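The paper's position-matrix encoding is not reproduced here; a common simplification is sketched instead, in which each particle is a vector of continuous random keys decoded into a sequence by sorting, and a toy transition-cost matrix stands in for the assembly and disassembly costs.

```python
import numpy as np

rng = np.random.default_rng(0)
N_PARTS, N_PARTICLES, ITERATIONS = 6, 20, 100

# Toy cost: a random transition cost between consecutive parts; in the paper
# the objective is the total of assembly and disassembly costs.
cost_matrix = rng.random((N_PARTS, N_PARTS))

def decode(position):
    """Random-key decoding: sorting the continuous position yields a sequence."""
    return np.argsort(position)

def sequence_cost(position):
    seq = decode(position)
    return sum(cost_matrix[seq[i], seq[i + 1]] for i in range(N_PARTS - 1))

# Standard PSO over the continuous keys.
pos = rng.random((N_PARTICLES, N_PARTS))
vel = np.zeros_like(pos)
pbest, pbest_cost = pos.copy(), np.array([sequence_cost(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(ITERATIONS):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    costs = np.array([sequence_cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[pbest_cost.argmin()].copy()

print("best sequence:", decode(gbest), "cost:", round(pbest_cost.min(), 3))
```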

Keywords: green design, assembly and disassembly sequence planning, green design for assembly, particle swarm optimization.

325 Thiosulfate Leaching of the Auriferous Ore from Castromil Deposit: A Case Study

Authors: Rui Sousa, Aurora Futuro, António Fiúza

Abstract:

The exploitation of gold ore deposits is highly dependent on efficient mineral processing methods, although current perspectives based on life-cycle assessment introduce difficulties that were unforeseen in the very recent past. Cyanidation is the most widely applied gold processing method, but the potential environmental problems derived from the use of cyanide as a leaching reagent have led to a demand for alternative methods. Ammoniacal thiosulfate leaching is one of the most important alternatives to cyanidation. In this article, experimental studies are presented that were carried out in order to assess the feasibility of thiosulfate as a leaching agent for the ore from the unexploited Portuguese gold mine of Castromil. It became clear that the process depends on the concentrations of ammonia, thiosulfate and copper. Based on this fact, a few leaching tests were performed in order to determine the best reagent prescription and the effects of different combinations of these concentrations. Higher thiosulfate concentrations cause a decrease in gold dissolution. Lower concentrations of ammonia require higher thiosulfate concentrations, and higher ammonia concentrations require lower thiosulfate concentrations. The addition of copper increases the gold dissolution ratio. Subsequently, alternative operating conditions were tested, such as variations in temperature and in the solid/liquid ratio, as well as the application of a pre-treatment before the leaching stage. Finally, thiosulfate leaching was compared to cyanidation. Thiosulfate leaching proved to be an important alternative, although a pre-treatment is required to increase the yield of gold dissolution.

Keywords: Gold, leaching, pre-treatment, thiosulfate.

324 Image Restoration in Non-Linear Filtering Domain using MDB approach

Authors: S. K. Satpathy, S. Panda, K. K. Nagwanshi, C. Ardil

Abstract:

This paper proposes a new technique based on a nonlinear Minmax Detector Based (MDB) filter for image restoration. The aim of image enhancement is to reconstruct the true image from the corrupted image. The process of image acquisition frequently leads to degradation, and the quality of the digitized image becomes inferior to the original image. Image degradation can be due to the addition of different types of noise to the original image. Image noise can be modeled in many ways, and impulse noise is one of them. Impulse noise generates pixels with gray values not consistent with their local neighborhood; it appears as a sprinkle of both light and dark, or only light, spots in the image. Filtering is a technique for enhancing the image. In linear filtering the value of an output pixel is a linear combination of neighborhood values, which can produce blur in the image. Thus a variety of nonlinear smoothing techniques have been developed, of which the median filter is one of the most popular. For a small neighborhood it is highly efficient, but for a large window, and in the case of high noise, it introduces more blurring into the image. The Centre Weighted Mean (CWM) filter has a better average performance than the median filter; however, original pixels may still be corrupted and noise remains substantial under high-noise conditions, so this technique also has a blurring effect on the image. To illustrate the superiority of the proposed approach, the proposed scheme has been simulated along with the standard ones and various restoration performance measures have been compared.
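The MDB filter itself is not specified in the abstract; the sketch below only contrasts a plain median filter with a simple detect-then-replace scheme (filtering only pixels flagged as impulses), which is the general spirit of detector-based filtering; the noise level and threshold are illustrative.

```python
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(0)
image = rng.integers(60, 200, size=(64, 64)).astype(float)

# Add salt-and-pepper (impulse) noise to 10% of pixels.
noisy = image.copy()
mask = rng.random(image.shape) < 0.10
noisy[mask] = rng.choice([0.0, 255.0], size=mask.sum())

# Plain 3x3 median filter: removes impulses but smooths every pixel.
plain_median = median_filter(noisy, size=3)

# Detector-based variant: flag pixels far from their local median (threshold
# is an illustrative choice) and replace only those, leaving the rest intact.
local_median = median_filter(noisy, size=3)
impulses = np.abs(noisy - local_median) > 40.0
detector_based = np.where(impulses, local_median, noisy)

def psnr(ref, img):
    mse = np.mean((ref - img) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

print(f"median PSNR: {psnr(image, plain_median):.1f} dB, "
      f"detector-based PSNR: {psnr(image, detector_based):.1f} dB")
```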

Keywords: Filtering, Minmax Detector Based (MDB), noise, centre weighted mean filter, PSNR, restoration.

323 Scale Time Offset Robust Modulation (STORM) in a Code Division Multiaccess Environment

Authors: David M. Jenkins Jr.

Abstract:

Scale Time Offset Robust Modulation (STORM) [1]–[3] is a high bandwidth waveform design that adds time-scale to embedded reference modulations using only time-delay [4]. In an environment where each user has a specific delay and scale, identification of the user with the highest signal power, and of that user's phase, is facilitated by the STORM processor. Both of these parameters are required in an efficient multiuser detection algorithm. In this paper, the STORM modulation approach is evaluated with a direct sequence spread quadrature phase shift keying (DS-QPSK) system. A misconception about STORM time-scale modulation is that a fine temporal resolution is required at the receiver. STORM is applied to a QPSK code division multiaccess (CDMA) system by modifying the spreading codes. Specifically, the in-phase code uses a typical spreading code, and the quadrature code uses a time-delayed and time-scaled version of the in-phase code. Consequently, the same temporal resolution is required in the receiver before and after the application of STORM. In this paper, the bit error performance of STORM in a synchronous CDMA system is evaluated and compared to theory, and the bit error performance of STORM incorporated in a single-user WCDMA downlink is presented to demonstrate the applicability of STORM in a modern communication system.

Keywords: Pseudonoise coded communication, Cyclic codes, Code division multiaccess

322 Mechanical Behavior of Recycled Mortars Manufactured from Moisture Correction Using the Halogen Light Thermogravimetric Balance as an Alternative to the Traditional ASTM C 128 Method

Authors: Diana Gómez-Cano, J. C. Ochoa-Botero, Roberto Bernal Correa, Yhan Paul Arias

Abstract:

To obtain high mechanical performance, the fresh conditions of a mortar are decisive. Measuring the absorption of aggregates used in mortar mixes is a fundamental requirement for proper design of the mixes prior to their placement in construction sites. In this sense, absorption is a determining factor in the design of a mix because it conditions the amount of water, which in turn affects the water/cement ratio and the final porosity of the mortar. Thus, this work focuses on the mechanical behavior of recycled mortars manufactured from moisture correction using the Thermogravimetric Balancing Halogen Light (TBHL) technique in comparison with the traditional ASTM C 128 International Standard method. The advantages of using the TBHL technique are favorable in terms of reduced consumption of resources such as materials, energy and time. The results show that in contrast to the ASTM C 128 method, the TBHL alternative technique allows obtaining a higher precision in the absorption values of recycled aggregates, which is reflected not only in a more efficient process in terms of sustainability in the characterization of construction materials, but also in an effect on the mechanical performance of recycled mortars.

Keywords: Alternative raw materials, halogen light, recycled mortar, resources optimization, water absorption.

321 Swarm Intelligence based Optimal Linear Phase FIR High Pass Filter Design using Particle Swarm Optimization with Constriction Factor and Inertia Weight Approach

Authors: Sangeeta Mandal, Rajib Kar, Durbadal Mandal, Sakti Prasad Ghoshal

Abstract:

In this paper, an optimal design of a linear phase digital high pass finite impulse response (FIR) filter using Particle Swarm Optimization with Constriction Factor and Inertia Weight Approach (PSO-CFIWA) is presented. In the design process, the filter length, pass band and stop band frequencies, and feasible pass band and stop band ripple sizes are specified. FIR filter design is a multi-modal optimization problem, and conventional gradient-based optimization techniques are not efficient for digital filter design. Given the filter specifications to be realized, the PSO-CFIWA algorithm generates a set of optimal filter coefficients and tries to meet the ideal frequency response characteristic. In this paper, for the given problem, the designs of optimal FIR high pass filters of different orders have been performed. The simulation results have been compared to those obtained by well-accepted algorithms such as the Parks-McClellan algorithm (PM) and the genetic algorithm (GA). The results show that the proposed optimal filter design approach using PSO-CFIWA outperforms PM and GA, not only in the accuracy of the designed filter but also in convergence speed and solution quality.
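The error function such an optimizer minimizes is not spelled out in the abstract; the sketch below shows a typical choice, the squared deviation of a candidate coefficient set's magnitude response from an ideal high-pass template, with assumed band edges; a PSO (or PM/GA) would minimize this over the coefficient vector.

```python
import numpy as np
from scipy.signal import freqz

# Ideal high-pass template with illustrative band edges (normalized to pi).
STOP_EDGE, PASS_EDGE = 0.35, 0.45

def highpass_error(coeffs, n_points=512):
    """Deviation from an ideal high-pass response; this is the kind of fitness
    a swarm-based optimizer would evaluate for each particle (coefficient set)."""
    w, h = freqz(coeffs, worN=n_points)
    wn = w / np.pi
    mag = np.abs(h)
    stop = wn <= STOP_EDGE
    passband = wn >= PASS_EDGE
    return np.sum(mag[stop] ** 2) + np.sum((mag[passband] - 1.0) ** 2)

# Evaluate a crude 21-tap starting point (all zeros except a centre spike).
candidate = np.zeros(21)
candidate[10] = 1.0
print(highpass_error(candidate))
```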

Keywords: FIR Filter; PSO-CFIWA; PSO; Parks and McClellan Algorithm; Evolutionary Optimization Technique; Magnitude Response; Convergence; High Pass Filter

320 Analysis of a Lignocellulose Degrading Microbial Consortium to Enhance the Anaerobic Digestion of Rice Straws

Authors: Supanun Kangrang, Kraipat Cheenkachorn, Kittiphong Rattanaporn, Malinee Sriariyanun

Abstract:

Rice straw is a lignocellulosic biomass which can be utilized as a substrate for biogas production. However, due to its properties and composition, rice straw is difficult to degrade with hydrolytic enzymes. One pretreatment method that modifies such properties of lignocellulosic biomass is the application of lignocellulose-degrading microbial consortia. The aim of this study is to investigate the effect of microbial consortia on enhancing biogas production. To select the most efficient consortium, cellulase enzymes were extracted and their activities were analyzed. The results suggested that the microbial consortium culture obtained from cattle manure is the best candidate compared to those from decomposed wood and horse manure. The microbial consortium isolated from cattle manure was then mixed with anaerobic sludge and used as the inoculum for biogas production. The optimal conditions for biogas production were investigated using response surface methodology (RSM). The tested parameters were the ratio of isolated microbial consortium to anaerobic sludge (MI:AS), the substrate-to-inoculum ratio (S:I), and temperature. The regression coefficient of the model was R2 = 0.7661, which is high enough to support the significance of the model. The highest cumulative biogas yield was 104.6 ml/g rice straw at the optimum MI:AS ratio, S:I ratio, and temperature of 2.5:1, 15:1, and 44°C, respectively.

Keywords: Lignocellulolytic biomass, microbial consortium, cellulase, biogas, Response Surface Methodology.

319 A Biomimetic Structural Form: Developing a Paradigm to Attain Vital Sustainability in Tall Architecture

Authors: Osama Al-Sehail

Abstract:

This paper argues for sustainability as a necessity in the evolution of tall architecture. It provides a different mode for dealing with sustainability in tall architecture, taking into consideration the speciality of its typology. To this end, the article develops a Biomimetic Structural Form as a paradigm for attaining Vital Sustainability. A Biomimetic Structural Form is derived from the amalgamation of biomimicry, an approach to sustainability that defines nature as a source of knowledge and inspiration for solving human problems, with a Structural Form, a catalyst for evolving tall architecture; it is a dynamic paradigm emerging from a conceptualizing and morphological process. A Biomimetic Structural Form is a flow system whose different forces and functions tend to be "better", more "fit", to "survive", and to be efficient. Through geometry and function, the two aspects of knowledge extracted from nature, the attributes of the Biomimetic Structural Form are formulated. Vital Sustainability is the survival level of sustainability in natural systems, through which a system enhances the performance of its internal workings and its interaction with the external environment. A Biomimetic Structural Form, in this context, is a medium for evolving tall architecture to emulate natural models in their ways of coexisting with the environment. As an integral part of this article, the sustainable super tall building 3Ts is discussed as a case study of applying the Biomimetic Structural Form.

Keywords: Biomimicry, design in nature, high-rise buildings, sustainability, structural form, tall architecture, vital sustainability.

318 Review of Affected Parameters on Flexural Behavior of Hollow Concrete Beams Reinforced by Steel/GFRP Rebars

Authors: Shahrad Ebrahimzadeh

Abstract:

Nowadays, researchers' main efforts aim at constantly evolving new, optimized, and efficient construction materials and methods for reinforced concrete beams. Because they use less material and offer higher structural efficiency than solid concrete beams with the same concrete area, hollow reinforced concrete beams (HRCBs) internally reinforced with steel rebars have been employed extensively in bridge structural members and high-rise buildings. Many experimental studies have investigated the behavior of hollow beams subjected to bending and found that the structural performance of HRCBs is critically affected by many design parameters. While properly designed HRCBs demonstrate behavior comparable to solid sections, inappropriate design leaves the beams extremely prone to brittle failure. Another potential issue that needs further investigation is the replacement of steel bars, which are susceptible to corrosion, with suitable alternative materials. Hence, to develop a reliable construction system, Glass Fiber Reinforced Polymer (GFRP) bars have been utilized as a non-corroding reinforcement. This study aims to critically review the different design parameters that affect the flexural performance of HRCBs and to identify the knowledge gaps towards the better design and more effective use of this construction system.

Keywords: Design parameters, experimental investigations, hollow reinforced concrete beams, steel, GFRP, flexural strength.

317 Feasibility of Integrating Heating Valve Drivers with KNX-standard for Performing Dynamic Hydraulic Balance in Domestic Buildings

Authors: Tobias Teich, Danny Szendrei, Markus Schrader, Franziska Jahn, Susan Franke

Abstract:

The increasing demand for sufficient and clean energy forces industrial and service companies to align their strategies towards efficient consumption. This trend also applies to the residential building sector, where large amounts of energy are consumed by house and facility heating. Many of the hot water heating systems in operation lack hydraulically balanced working conditions for heat distribution and transmission and therefore heat inefficiently. Through hydraulic balancing of heating systems, significant savings of primary and secondary energy can be achieved. This paper addresses the use of KNX technology (smart buildings) in residential buildings to ensure a dynamic adaptation of the hydraulic system's performance in order to increase the heating system's efficiency. The paper presents the procedure of segmenting the heating system into hydraulically independent units (meshes). Within these meshes, the heating valves are addressed and controlled by a central facility server, and feasibility criteria for such valve drivers are stated. The dynamic hydraulic balance is achieved by positioning these valves according to the heating loads generated from the temperature settings in the corresponding rooms. The energetic advantages of single-room heating control procedures, based on the application FacilityManager, are presented.

Keywords: building automation, dynamic hydraulic balance, energy savings, VPN-networks.

316 Transcritical CO2 Heat Pump Simulation Model and Validation for Simultaneous Cooling and Heating

Authors: Jahar Sarkar

Abstract:

In the present study, a steady-state simulation model has been developed to evaluate the system performance of a transcritical carbon dioxide heat pump for simultaneous water cooling and heating. Both the evaporator model (covering the two-phase and superheated zones) and the gas cooler model account for the highly variable heat transfer characteristics of CO2 and for pressure drop. The numerical simulation model of the transcritical CO2 heat pump has been validated against test data obtained from experiments on a heat pump prototype. Comparison between the test results and the model prediction for the variation of system COP with compressor discharge pressure shows modest agreement, with a maximum deviation of 15% and fairly similar trends. Comparisons for other operating parameters also show fairly similar deviations between the test results and the model predictions. Finally, simulation results are presented to study the effects of operating parameters such as heat exchanger fluid inlet temperature, discharge pressure, and compressor speed on the system performance of the CO2 heat pump, as suited to a dairy plant where simultaneous cooling at 4 °C and heating at 73 °C are required. Results show that the good heat transfer properties of CO2 in both the two-phase and supercritical regions and the efficient compression process contribute significantly to the high system COPs.

Keywords: CO2 heat pump, dairy system, experiment, simulation model, validation.

315 Thermal and Visual Performance of Solar Control Film

Authors: Norzita Jaafar, Nor Zaini Zakaria, Azni Zain Ahmed, Razidah Ismail

Abstract:

The use of solar control film on windows, as one of the passive solar strategies for buildings, has become important and is gaining recognition. Malaysia, located close to the equator, has a warm humid climate with long sunshine hours and abundant solar radiation throughout the year. Hence, appropriate solar control on windows is absolutely necessary to capture daylight whilst moderating thermal impact and eliminating glare problems. This is one of the energy-efficient strategies for achieving thermal and visual comfort in buildings. This study was therefore carried out to investigate the effect of window solar controls on the thermal and visual performance of naturally ventilated buildings. It was conducted via field data monitoring using a test building facility. Four types of window glazing systems were used with three types of solar control film. Data were analysed for thermal and visual impact with reference to the thermal and optical characteristics of the films. Results show that, for each glazing system, the surface temperature of the windows is influenced by the solar energy absorption property, the indoor air temperature by the solar energy transmittance and solar energy reflectance, and the daylighting by the visible light transmission and shading coefficient. Further investigations are underway to determine the mathematical relation between thermal energy and visual performance and the thermal and optical characteristics of solar control films.

Keywords: window, solar control film, natural ventilation, thermal performance, visual performance
