Search results for: product feature extraction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2593


103 Replacement of Commercial Anti-Corrosion Material with a More Effective and Cost Efficient Compound Based on Electrolytic System Simulation

Authors: Saeid Khajehmandali, Fattah Mollakarimi, Zohreh Seyf

Abstract:

There was a high rate of corrosion in the Pyrolysis Gasoline Hydrogenation (PGH) unit of Arak Petrochemical Company (ARPC), which caused operational problems in this plant. A commercial chemical had been used as an anti-corrosion agent in the depentanizer column overhead in order to control the corrosion rate. Injection of this commercial corrosion inhibitor caused operational problems such as fouling in some heat exchangers. R&D and operation specialists therefore proposed replacing this commercial material with a more effective, trouble-free, and well-known additive. First, the system was simulated with commercial simulation software as an electrolytic system to identify low-pH points inside the plant. After a comprehensive study of the situation and technical investigations, an ammonia/monoethanolamine solution was proposed as a neutralizer and corrosion inhibitor to be injected at a suitable point in the plant. For this purpose, the depentanizer column and its accessories were simulated again for the case of this solution injection. According to the simulation results, injection of the new anti-corrosion substance has no side effect on the C5 cut product or the operating conditions of the column. The corrosion rate is controlled if the pH remains in the range of 6.5 to 8. An actual plant test run was also carried out by injecting the ammonia/monoethanolamine solution at a rate of 0.6 kg/hr, and the iron content of water samples and the corrosion test coupons confirmed the simulation results. Now, the ammonia/monoethanolamine solution is injected at a suitable point inside the plant and the corrosion rate has decreased significantly.

Keywords: Corrosion, Pyrolysis Gasoline, Simulation, Corrosion test coupon.

102 Assessing Storage Stability and Mercury Reduction of Freeze-Dried Pseudomonas putida within Different Types of Lyoprotectant

Authors: A. A. M. Azoddein, Y. Nuratri, A. B. Bustary, F. A. M. Azli, S. C. Sayuti

Abstract:

Pseudomonas putida is a potential strain in biological treatment to remove mercury contained in the effluent of the petrochemical industry, due to its mercury reductase enzyme that is able to reduce ionic mercury to elemental mercury. Freeze-dried P. putida allows easy, inexpensive shipping and handling and high stability of the product. This study aimed to freeze-dry P. putida cells with the addition of lyoprotectants. The lyoprotectant was added into the cell suspension prior to freezing. The dried P. putida obtained was then mixed with synthetic mercury. The viability of recovered P. putida after freeze-drying was significantly influenced by the type of lyoprotectant. Among the lyoprotectants, Tween 80/sucrose was found to be the best. Sucrose was able to recover more than 78% (6.2E+09 CFU/ml) of the original cells (7.90E+09 CFU/ml) after freeze-drying and retained 5.40E+05 viable cells after 4 weeks of storage at 4°C without vacuum. Polyethylene glycol (PEG)-pretreated and broth-pretreated cells recovered more than 64% (5.0E+09 CFU/ml) and more than 0.1% (5.60E+07 CFU/ml), respectively, after freeze-drying. Freeze-dried P. putida cells in PEG and broth could not survive 4 weeks of storage. Freeze-drying also does not substantially change the growth pattern of P. putida, but an extension of the lag time by 1 hour was found after 3 weeks of storage. Additional time is required for freeze-dried P. putida cells to recover before they are introduced to more challenging conditions such as a mercury solution. The maximum mercury reduction of PEG-pretreated freeze-dried cells after freeze-drying and after 3 weeks of storage was 56.78% and 17.91%, respectively. The maximum mercury reduction of Tween 80/sucrose-pretreated freeze-dried cells after freeze-drying and after 3 weeks of storage was 26.35% and 25.03%, respectively. Freeze-dried P. putida showed lower mercury reduction compared to fresh P. putida grown on agar. The results of this study may be beneficial as an initial reference before commercializing freeze-dried P. putida.

Keywords: Pseudomonas putida, freeze-drying, PEG, Tween 80/sucrose, mercury, cell viability.

101 Enhancing Warehousing Operations in Cold Supply Chain through the Use of IoT and LiFi Technologies

Authors: S. El-Gamal, P. Hossam, A. Abd El Aziz, R. Mahmoud, A. Hassan, D. Hilal, E. Ayman, H. Haytham, O. Khamis

Abstract:

Several concerns fall upon the supply chain, especially cold supply chains. These concerns arise mainly in the distribution and storage phases. This research focuses on the storage area, which involves several activities, such as picking, that face many obstacles and challenges. The implementation of IoT solutions enables businesses to monitor the temperature of food items, which is perhaps the most critical parameter in cold chains. Therefore, the research at hand proposes a practical solution that would help in eliminating the problems related to ineffective picking of products, especially fish and seafood products, by using IoT technology, most notably LiFi technology, thus guaranteeing efficient picking, reducing waste, and consequently lowering costs. A prototype was specially designed and examined. This research is a single case study. Two methods of data collection were used: observation and semi-structured interviews. Semi-structured interviews were conducted with managers and a decision maker at one of the biggest retail stores, Carrefour, in Alexandria, Egypt, to validate the problem and the proposed practical solution using IoT and LiFi technology. A total of three interviews were conducted. A SWOT analysis was then carried out in order to highlight all the strengths and weaknesses of using the recommended LiFi solution in the picking process. According to the investigations, it was found that the use of IoT and LiFi technology is cost-effective and efficient, reduces human errors, and minimizes the percentage of product waste, thus saving money. As a result, increased customer satisfaction and profits could be achieved.

Keywords: Cold supply chain, IoT, LiFi, warehousing operation, picking process.

100 3-D Numerical Simulation of Scraped Surface Heat Exchanger with Helical Screw

Authors: Rabeb Triki, Hassene Djemel, Mounir Baccar

Abstract:

Surface scraping is a passive heat transfer enhancement technique that is used directly in scraped surface heat exchangers (SSHEs). The scraping action prevents the accumulation of the product on the inner wall, which intensifies the heat transfer and avoids the formation of dead zones. SSHEs are widely used in industry for several applications such as crystallization, sterilization, freezing, gelatinization, and many other continuous processes. They are designed to deal with products that are viscous, sticky or that contain particulate matter. This research work presents a three-dimensional numerical simulation of the coupled thermal and hydrodynamic behavior within an SSHE which includes an Archimedes screw instead of scraper blades. The finite-volume solver Fluent 15.0 was used to solve the continuity, momentum and energy equations using the multiple reference frame formulation. The process fluid investigated in this study is pure glycerin. Different geometrical parameters were studied in the case of steady, non-isothermal, laminar flow. In particular, attention is focused on the effect of the conicity of the rotor and the pitch of the Archimedes screw on the temperature and velocity distributions and on the heat transfer rate. The numerical investigations show that increasing the number of turns in the screw from five to seven improves the heat transfer coefficient, and that increasing the conicity of the rotor from 0.1 to 0.15 increases the heat transfer rate. Further studies should investigate the effect of different operating parameters (axial and rotational Reynolds numbers) on the hydrodynamic and thermal behavior of the SSHE.

Keywords: ANSYS-Fluent, hydrodynamic behavior, SSHE, thermal behavior.

99 Ingenious Use of Hypo Sludge in M25 Concrete

Authors: Abhinandan Singh Gill

Abstract:

Paper mill sludge is one of the major economic and environmental problems for the paper and board industry; millions of tonnes of sludge are produced worldwide. It is essential to dispose of these wastes safely without affecting human health, the environment, fertile land, water bodies or the economy, and without adversely affecting the strength, durability and other properties of building materials based on them. Moreover, in developing countries like India, where the availability of non-renewable resources is low and the need for building materials such as cement is large, it is essential to develop eco-efficient utilization of paper sludge. In functional terms, paper sludge primarily comprises cellulose fibers, calcium carbonate, china clay, a low amount of silica, and residual chemicals bound with water. The material is sticky and has a high moisture content, which makes it hard to dry. The manufacturing of paper usually produces large amounts of solid waste. Paper fibers are recycled in paper mills a limited number of times, until they become too weak to produce high-quality paper. Thereafter, the leftover short and weak fibers, referred to as low-quality paper fibers, are separated out and become paper sludge. The material is a by-product of the de-inking and re-pulping of paper. This hypo sludge includes all kinds of inks, dyes, coatings, etc. applied to the paper. This paper presents an overview of the published work on the use of hypo sludge in M25 concrete formulations as a supplementary cementitious material, exploring properties such as compressive strength and splitting tensile strength, parameters such as modulus of elasticity and density, applications, and, most importantly, the investigation of low-cost concrete using hypo sludge.

Keywords: Concrete, sludge waste, hypo sludge, supplementary cementitious material.

98 Application of Design Thinking for Technology Transfer of Remotely Piloted Aircraft Systems for the Creative Industry

Authors: V. Santamarina Campos, M. de Miguel Molina, B. de Miguel Molina, M. Á. Carabal Montagud

Abstract:

With this contribution, we want to show a successful example of the application of the Design Thinking methodology in the European project 'Technology transfer of Remotely Piloted Aircraft Systems (RPAS) for the creative industry'. The use of this methodology has allowed us to design and build a drone based on the real needs of prospective users. It has demonstrated that this is a powerful tool for generating innovative ideas in the field of robotics, by focusing its effectiveness on understanding and solving real user needs. In this way, with the support of an interdisciplinary team comprising creatives, engineers and economists, together with the collaboration of prospective users from three European countries, a non-linear work dynamic has been created. This teamwork has generated a sense of appreciation towards the creative industries, through continuously adaptive, inventive, and playful collaboration and communication, which has facilitated the development of prototypes. These have been designed to enable filming and photography in interior spaces, within 13 sectors of the European creative industries: Advertising, Architecture, Fashion, Film, Antiques and Museums, Music, Photography, Television, Performing Arts, Publishing, Arts and Crafts, Design and Software. Furthermore, it has married the real needs of the creative industries with what is technologically and commercially viable. As a result, a product of great value has been obtained, which offers new business opportunities for small companies across this sector.

Keywords: Design thinking, design for effectiveness, methodology, active toolkit, storyboards, storytelling, PAR, focus group, innovation, RPAS, indoor drone, robotics, TRL, aerial film, creative industries, end-users.

97 Identification of Igneous Intrusions in South Zallah Trough, Sirt Basin, Libya

Authors: Mohamed A. Saleem

Abstract:

Using mostly seismic data, this study intends to show some examples of igneous intrusions found in some areas of the Sirt Basin and to explore the period of their emplacement as well as the interrelationships between these sills. The study area is located in the south of the Zallah Trough, south-west Sirt Basin, Libya, between longitudes 18.35° E and 19.35° E and latitudes 27.8° N and 28.0° N. Based on a variety of criteria that are usually used as indicators of igneous intrusions, 12 igneous intrusions (sills) have been detected and analysed using 3D seismic data. One or more of the following were used as identification criteria: high-amplitude reflectors paired with abrupt reflector terminations, vertical offsets or what is described as a dike-like connection, violation of the host stratigraphy, the saucer form, and roughness. Because they lie between the host layers, the majority of these intrusions are classified as sills. Another distinguishing feature is the intersection geometry linking some of these sills. Each sill has been given a name, such as S-1, S-2, …, S-12, simply to distinguish the sills from each other. To avoid repetition of description, the common characteristics and some statistics of these sills are shown in summary tables, while the specific characteristics that are not common and have been noticed for individual sills are described separately. The sills S-1, S-2, and S-3 are approximately parallel to one another, and the shape of these sills is governed by the syncline structure of their host layers. The faults that dominate the strata (pre-Upper Cretaceous strata) have a significant impact on the sills, causing their discontinuity, while the upper layers form anticlines. S-1 and S-10 are the group's deepest and shallowest sills, respectively, with S-1 seated near the top of the basement and S-10 extending into the Upper Cretaceous sequence. The dramatic climb of sill S-4 can be seen in north-south profiles. The majority of the interpreted sills are influenced by a large number of normal faults that strike in various directions and propagate vertically from the surface to the top of the basement. This indicates that the sediment sequences had already been deposited before the sills intruded and that the faults are younger. The pre-Upper Cretaceous unit hosts sills S-1 to S-9, while sills S-10, S-11, and S-12 are hosted by the Cretaceous unit. Over the sills S-1, S-2, and S-3, which are the deepest sills, the pre-Upper Cretaceous surface shows slight forced folding; this forced folding is also noticed above the right and left tips of sills S-8 and S-6, respectively, while the absence of these marks in the overlying sequences supports the idea that the aforementioned sills were emplaced during the early Upper Cretaceous period.

Keywords: Sirt Basin, Zallah Trough, igneous intrusions, seismic data.

96 Revisiting Domestication and Foreignisation Methods: Translating the Quran by the Hybrid Approach

Authors: Aladdin Al-Tarawneh

Abstract:

The Quran, being the sacred book of Islam and considered the literal word of God (Allah) in Arabic, has been translated into many languages; however, the foreignising, or literal, approach excessively stains the quality and discredits the final product in the eyes of its receptors. Such an approach fails to capture the intended meaning of the Quran and to communicate it in any language. Therefore, this study proposes a different approach that combines several methods according to a hybrid model. Indeed, this study challenges the binary adherence that is widely followed in Translation Studies (TS) in general and in the translation of the Quran in particular. Drawing on the fact that the meaning of the Quran can be communicated in any language and that the translation itself is not sacred, this paper approaches the translation of the Quran by blending different methods, such as domestication and foreignisation, in a systematic way, avoiding the binary choice made by many translators. To reach this aim, the paper has a conceptual part that seeks to elucidate and clarify the main methods employed in TS, and to criticise and modify them in order to propose the new hybrid approach (the hybrid model) for translating the Quran – that is, the deductive method. To support and validate the outcome of the previous part, a comparative model is employed in order to highlight the differences between the suggested translation and other widely used ones – that is, the inductive method. By applying this methodology, the paper shows that there is a deficiency in communicating the original meaning of the Quran under the foreignising approach. In conclusion, the paper suggests that producing a Quran translation has to take into account the adoption of many techniques to express the meaning of the Quran as understood in the original, and to offer this understanding in English in the most native-like manner to serve the intended target readers.

Keywords: Quran translation, hybrid approach, domestication, foreignisation, hybrid model.

95 A Set Theory Based Factoring Technique and Its Use for Low Power Logic Design

Authors: Padmanabhan Balasubramanian, Ryuta Arisaka

Abstract:

Factoring Boolean functions is one of the basic operations in algorithmic logic synthesis. A novel algebraic factorization heuristic for single-output combinational logic functions is presented in this paper, developed on the basis of the set theory paradigm. The impact of factoring is analyzed mainly from a low-power design perspective for standard cell based digital designs. The physical implementations of a number of MCNC/IWLS combinational benchmark functions and sub-functions are compared before and after factoring, based on a simple technology mapping procedure utilizing only standard gate primitives (readily available as standard cells in a technology library) and not cells corresponding to optimized complex logic. The power results were obtained at the gate level by means of an industry-standard power analysis tool from Synopsys, targeting a 130 nm (0.13 µm) UMC CMOS library, for the typical case. The wire loads were inserted automatically and the simulations were performed with maximum input activity. The gate-level simulations demonstrate the advantage of the proposed factoring technique in comparison with other existing methods from a low-power perspective, for arbitrary examples. Though the benchmark experiments report mixed results, the mean savings in total power and dynamic power for the factored solution over a non-factored solution were 6.11% and 5.85%, respectively. In terms of leakage power, the average savings for the factored forms were significant, to the tune of 23.48%. The factored solution is expected to better its non-factored counterpart in terms of the power-delay product, as it is well known that factoring, in general, yields a delay-efficient multi-level solution.
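For readers unfamiliar with what algebraic factoring does, the toy sketch below shows a greedy single-literal factoring pass over a sum-of-products expression. It is not the set-theory-based heuristic proposed in the paper; it only illustrates how factoring reduces the literal count, which is the property that drives the reported power savings.

```python
# Illustrative sketch only: a greedy single-literal factoring pass for a
# sum-of-products (SOP) Boolean expression. It is NOT the paper's set-theory
# heuristic; it merely shows how factoring lowers the literal count.

def factor_sop(cubes):
    """cubes: list of cube sets, e.g. [{'a','b'},{'a','c'},{'d'}] for ab + ac + d."""
    if len(cubes) <= 1:
        return sop_str(cubes)
    # Pick the literal shared by the largest number of cubes.
    counts = {}
    for cube in cubes:
        for lit in cube:
            counts[lit] = counts.get(lit, 0) + 1
    best, freq = max(counts.items(), key=lambda kv: kv[1])
    if freq < 2:                      # nothing to share -> leave as SOP
        return sop_str(cubes)
    quotient  = [cube - {best} for cube in cubes if best in cube]
    remainder = [cube for cube in cubes if best not in cube]
    expr = best + "(" + factor_sop(quotient) + ")"
    if remainder:
        expr += " + " + factor_sop(remainder)
    return expr

def sop_str(cubes):
    return " + ".join("".join(sorted(c)) if c else "1" for c in cubes)

if __name__ == "__main__":
    # F = ab + ac + ad + bc  ->  a(b + c + d) + bc : 8 literals reduced to 6
    f = [{'a', 'b'}, {'a', 'c'}, {'a', 'd'}, {'b', 'c'}]
    print(factor_sop(f))
```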

Keywords: Factorization, Set theory, Logic function, Standard cell based design, Low power.

94 Development and Evaluation of a Nutraceutical Herbal Summer Drink

Authors: Munish Garg, Vinni Ahuja

Abstract:

In the past few years, the high consumption of soft drinks has attracted negative attention worldwide due to their possible adverse effects, leading health-conscious people to seek alternative nutraceutical or herbal health drinks. In the present study, a nutraceutical soft drink was developed utilizing some easily available and well-known traditional herbs with nutritional potential. The key ingredients selected were bael, amla, lemon juice, ashwagandha and poppy seeds, based on their routine household use in summer and the refreshing, cooling and energizing effects they have been known for since ages. After several trials, the final composition of the nutraceutical summer soft drink was selected as the most suitable combination from the taste, physicochemical, microbial and organoleptic points of view. Physicochemical analysis showed that the prepared drink contained optimum levels of titratable acidity and total soluble solids and an optimum pH, which were in accordance with commercial recommendations. No bacterial colonies were found in the product, so it was within microbial limits. In the nine-point hedonic scale sensory evaluation, the drink was strongly liked for its colour, taste, flavour and texture. The formulation was found to contain flavonoids (80 mg/100 ml), phenolics (103 mg/100 ml) and vitamin C (250 mg/100 ml) and to have antioxidant potential (75.52%), apart from providing several other essential vitamins, minerals and healthy components. The developed nutraceutical drink provides an economical and feasible option for consumers, with very good taste combined with potential health benefits. The drink is potentially capable of replacing the synthetic soft drinks available in the market.

Keywords: Herbal drink, nutraceuticals, summer drink, antioxidant.

93 Temperature Susceptibility of Multigrade Bitumen Asphalt and an Approach to Account for Temperature Variation through Deep Pavements

Authors: Brody R. Clark, Chaminda Gallage, John Yeaman

Abstract:

Multigrade bitumen asphalt is a quality asphalt product that is not utilised in many places globally. Multigrade bitumen is believed to be less sensitive to temperature, which gives it an advantage over conventional binders. Previous testing has shown that asphalt temperature changes greatly with depth, but currently the industry standard is to nominate a single temperature for design. For the detailed design of asphalt roads, asphalt layers should perhaps be divided into nominal layer depths, and different modulus and fatigue equations/values should be used to reflect the temperature of each respective layer. A compilation of previous laboratory testing conducted on multigrade bitumen asphalt beams under a range of temperatures and loading conditions was analysed. The samples tested included 0% or 15% recycled asphalt pavement (RAP) to determine what impact the recycled material has on the fatigue life and stiffness of the pavement. This paper investigated the temperature susceptibility of multigrade bitumen asphalt pavements compared to conventional binders by combining previous testing, which included a sweep of fatigue tests, the development of complex modulus master curves for each mix, and a study of how pavement temperature changes with depth. This investigation found that the final design of the pavement is greatly affected by the nominated pavement temperature and the respective material properties. This paper outlines a potential revision to the current design approach for asphalt pavements and proposes that further investigation is needed into pavement temperature and its incorporation into design.
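The layered design idea described in the abstract can be illustrated with a short snippet: split the asphalt layer into sublayers, assign each a mid-depth temperature from an assumed thermal gradient, and select a modulus per sublayer. The linear gradient and the exponential modulus-temperature relation used here are hypothetical placeholders, not the measured master curves or temperature profiles from the study.

```python
# Sketch of the layered-design idea: split an asphalt layer into sublayers,
# assign each a temperature from an assumed linear thermal gradient, and look
# up a modulus per sublayer. Gradient and modulus relation are placeholders.
from math import exp

def sublayer_temperatures(t_surface_c, t_bottom_c, n_layers):
    """Mid-depth temperature of each sublayer, assuming a linear gradient."""
    step = (t_bottom_c - t_surface_c) / n_layers
    return [t_surface_c + step * (i + 0.5) for i in range(n_layers)]

def modulus_mpa(temp_c, e_ref_mpa=3500.0, t_ref_c=25.0, slope=0.04):
    """Hypothetical exponential stiffness-temperature relation (placeholder)."""
    return e_ref_mpa * exp(-slope * (temp_c - t_ref_c))

if __name__ == "__main__":
    temps = sublayer_temperatures(t_surface_c=45.0, t_bottom_c=30.0, n_layers=5)
    for i, t in enumerate(temps, 1):
        print(f"sublayer {i}: T = {t:.1f} C, E = {modulus_mpa(t):.0f} MPa")
```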

Keywords: Asphalt, complex modulus, fatigue life, flexural stiffness, four-point bending, master curves, multigrade bitumen, thermal gradient.

92 Learners’ Perceptions of Tertiary Level Teachers’ Code Switching: A Vietnamese Perspective

Authors: Hoa Pham

Abstract:

The literature on language teaching and second language acquisition has been largely driven by monolingual ideology with a common assumption that a second language (L2) is best taught and learned in the L2 only. The current study challenges this assumption by reporting learners' positive perceptions of tertiary level teachers' code switching practices in Vietnam. The findings of this study contribute to our understanding of code switching practices in language classrooms from a learners' perspective. Data were collected from student participants who were working towards a Bachelor degree in English within the English for Business Communication stream through the use of focus group interviews. The literature has documented that this method of interviewing has a number of distinct advantages over individual student interviews. For instance, group interactions generated by focus groups create a more natural environment than that of an individual interview because they include a range of communicative processes in which each individual may influence or be influenced by others - as they are in their real life. The process of interaction provides the opportunity to obtain the meanings and answers to a problem that are "socially constructed rather than individually created" leading to the capture of real-life data. The distinct feature of group interaction offered by this technique makes it a powerful means of obtaining deeper and richer data than those from individual interviews. The data generated through this study were analysed using a constant comparative approach. Overall, the students expressed positive views of this practice indicating that it is a useful teaching strategy. Teacher code switching was seen as a learning resource and a source supporting language output. This practice was perceived to promote student comprehension and to aid the learning of content and target language knowledge. This practice was also believed to scaffold the students' language production in different contexts. However, the students indicated their preference for teacher code switching to be constrained, as extensive use was believed to negatively impact on their L2 learning and trigger cognitive reliance on the L1 for L2 learning. The students also perceived that when the L1 was used to a great extent, their ability to develop as autonomous learners was negatively impacted. This study found that teacher code switching was supported in certain contexts by learners, thus suggesting that there is a need for the widespread assumption about the monolingual teaching approach to be re-considered.

Keywords: Code switching, L1 use, L2 teaching, Learners’ perception.

91 Biodegradation of Malathion by Acinetobacter baumannii Strain AFA Isolated from Domestic Sewage in Egypt

Authors: Ahmed F. Azmy, Amal E. Saafan, Tamer M. Essam, Magdy A. Amin, Shaban H. Ahmed

Abstract:

Bacterial strains capable of degrading malathion were isolated from domestic sewage by an enrichment culture technique. Three bacterial strains were screened and identified as Acinetobacter baumannii (AFA), Pseudomonas aeruginosa (PS1), and Pseudomonas mendocina (PS2) based on morphological and biochemical identification and 16S rRNA sequence analysis. Acinetobacter baumannii AFA was the most efficient malathion-degrading bacterium, so it was used for the further biodegradation study. AFA was able to grow in mineral salt medium (MSM) supplemented with malathion (100 mg/l) as a sole carbon source, and within 14 days, 84% of the initial dose was degraded by the isolate, as measured by high-performance liquid chromatography. Strain AFA could also degrade other organophosphorus compounds, including diazinon, chlorpyrifos and fenitrothion. The effects of different culture conditions on the degradation of malathion, such as inoculum density, other carbon or nitrogen sources, temperature and shaking, were examined. Degradation of malathion and bacterial cell growth were accelerated when the culture media were supplemented with yeast extract, glucose and citrate. The optimum conditions for malathion degradation by strain AFA were an inoculum density of 1.5 x 10^12 CFU/ml at 30°C with shaking. Specific polymerase chain reaction primers were designed manually using a multiple sequence alignment of the corresponding carboxylesterase enzymes of Acinetobacter species. The sequencing result of the amplified PCR product and phylogenetic analysis showed a low degree of homology with the other carboxylesterase enzymes of Acinetobacter strains, so we suggest that this enzyme is a novel esterase. The isolated bacterial strains may have a potential role in the bioremediation of malathion-contaminated environments.

Keywords: Acinetobacter baumannii, biodegradation, Malathion, organophosphate pesticides.

90 Index t-SNE: Tracking Dynamics of High-Dimensional Datasets with Coherent Embeddings

Authors: G. Candel, D. Naccache

Abstract:

t-SNE is an embedding method that the data science community has widely used. It helps with two main tasks: displaying results by coloring items according to the item class or a feature value, and forensics, giving a first overview of the dataset distribution. Two interesting characteristics of t-SNE are the structure preservation property and its answer to the crowding problem, where all neighbors in high-dimensional space cannot be represented correctly in low-dimensional space. t-SNE preserves the local neighborhood, and similar items are nicely spaced by adjusting to the local density. These two characteristics produce a meaningful representation, where the cluster area is proportional to its size in number, and relationships between clusters are materialized by closeness in the embedding. This algorithm is non-parametric: the transformation from a high- to a low-dimensional space is described but not learned, and two initializations of the algorithm would lead to two different embeddings. In a forensic approach, analysts would like to compare two or more datasets using their embeddings. A naive approach would be to embed all datasets together. However, this process is costly as the complexity of t-SNE is quadratic, and it would be infeasible for too many datasets. Another approach would be to learn a parametric model over an embedding built with a subset of data. While this approach is highly scalable, points could be mapped at the exact same position, making them indistinguishable, and this type of model would be unable to adapt to new outliers or concept drift. This paper presents a methodology to reuse an embedding to create a new one, where cluster positions are preserved. The optimization process minimizes two costs, one relative to the embedding shape and the second relative to the match with the support embedding. The embedding-with-support process can be repeated more than once, with the newly obtained embedding. The successive embeddings can be used to study the impact of one variable over the dataset distribution or to monitor changes over time. This method has the same complexity as t-SNE per embedding, and memory requirements are only doubled. For a dataset of n elements sorted and split into k subsets, the total embedding complexity would be reduced from O(n²) to O(n²/k), and the memory requirement from n² to 2(n/k)², which enables computation on recent laptops. The method showed promising results on a real-world dataset, allowing the birth, evolution and death of clusters to be observed. The proposed approach facilitates identifying significant trends and changes, which empowers the monitoring of high-dimensional datasets' dynamics.
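A rough illustration of the embedding-reuse idea (not the authors' two-term cost optimization) is to compute a support embedding once and then initialize the t-SNE of a later batch at the 2-D positions of the nearest support points, so that cluster locations stay roughly coherent between runs. The data below are random placeholders; only standard scikit-learn calls are used.

```python
# Sketch of reusing a "support" embedding to keep cluster positions coherent:
# embed a support subset once, then start the t-SNE of a new batch from the
# 2-D position of each point's nearest support neighbor.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
support_X = rng.normal(size=(500, 20))   # placeholder high-dimensional data
new_X = rng.normal(size=(300, 20))       # a later batch to embed coherently

# 1) Reference ("support") embedding, computed once.
support_Y = TSNE(n_components=2, random_state=0).fit_transform(support_X)

# 2) Initialize each new point at its nearest support point's 2-D position.
nn = NearestNeighbors(n_neighbors=1).fit(support_X)
_, idx = nn.kneighbors(new_X)
init_Y = support_Y[idx[:, 0]] + rng.normal(scale=1e-4, size=(len(new_X), 2))

# 3) Run t-SNE on the new batch starting from that coherent initialization.
new_Y = TSNE(n_components=2, init=init_Y, random_state=0).fit_transform(new_X)
print(new_Y.shape)
```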

Keywords: Concept drift, data visualization, dimension reduction, embedding, monitoring, reusability, t-SNE, unsupervised learning.

89 Evaluation of Model-Based Code Generation for Embedded Systems–Mature Approach for Development in Evolution

Authors: Nikolay P. Brayanov, Anna V. Stoynova

Abstract:

The model-based development approach is gaining more support and acceptance. Its higher abstraction level simplifies the description of systems, allowing domain experts to do their best without particular knowledge of programming. The different levels of simulation support rapid prototyping, verification and validation of the product even before it exists physically. Nowadays the model-based approach is beneficial for modelling complex embedded systems as well as for generating code for many different hardware platforms. Moreover, it can be applied in safety-relevant industries such as automotive, which brings extra automation to the expensive device certification process and especially to software qualification. Using it, some companies report cost savings and quality improvements, but others claim no major changes or even cost increases. This publication examines the level of maturity and autonomy of the model-based approach for code generation. It is based on a real-life automotive seat heater (ASH) module, developed using tools from The MathWorks, Inc. The model, created with Simulink, Stateflow and MATLAB, is used for automatic generation of C code with Embedded Coder. To prove the maturity of the process, the Code Generation Advisor is used for automatic configuration. All additional configuration parameters are set to auto, when applicable, leaving the generation process to function autonomously. As a result of the investigation, the publication compares the quality of the generated embedded code with that of a manually developed one. The measurements show that, in general, the code generated by the automatic approach is not worse than the manual one. A deeper analysis of the technical parameters enumerates the disadvantages, some of which are identified as topics for our future work.

Keywords: Embedded code generation, embedded C code quality, embedded systems, model-based development.

88 Orthogonal Array Application and Response Surface Method Approach for Optimal Product Values: An Application for Oil Blending Process

Authors: Christopher C. Ihueze, Constance C. Obiuto, Christian E. Okafor, Charles C. Okpala

Abstract:

This paper presents a methodical approach for designing and optimizing process parameters in oil blending industries. Twenty-seven replicated experiments were conducted for the production of A-Z crown super oil (SAE20W/50), employing an L9 orthogonal array to establish the process response parameters. A power-law model was fitted to the experimental data, and the obtained model was optimized applying the central composite design (CCD) of response surface methodology (RSM). A quadratic model was found to be significant for the production of A-Z crown super oil. In the course of analyzing the batch productions of A-Z crown super oil, the study identified and specified four new lubricant formulations that conform to the ISO oil standard: L1: KV = 21.8293 cSt, BS200 = 9430.00 litres, Ad102 = 11024.00 litres, PVI = 2520 litres; L2: KV = 22.513 cSt, BS200 = 12430.00 litres, Ad102 = 11024.00 litres, PVI = 2520 litres; L3: KV = 22.1671 cSt, BS200 = 9430.00 litres, Ad102 = 10481.00 litres, PVI = 2520 litres; L4: KV = 22.8605 cSt, BS200 = 12430.00 litres, Ad102 = 10481.00 litres, PVI = 2520 litres. The analysis of variance showed that the quadratic model is significant for kinematic viscosity, while the R-sq statistic of 0.99936 showed that the variation in kinematic viscosity is explained by its relationship with the control factors. This study therefore established appropriate blending proportions of lubricant base oil and additives, and it recommends an optimal kinematic viscosity of 22.86 cSt for A-Z crown super oil (SAE20W/50).
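The response-surface step can be illustrated with a minimal least-squares fit of a full quadratic model in two coded factors. The design points and viscosity values below are invented for the example; the paper's actual L9/CCD data and coefficients are not reproduced.

```python
# Illustrative quadratic response-surface fit in the spirit of RSM.
# The blend data below are made up; they are not the paper's design points.
import numpy as np

# x1 = base oil volume, x2 = additive volume (coded units), y = KV (cSt)
x1 = np.array([-1, -1, 1, 1, 0, 0, 0, -1, 1])
x2 = np.array([-1, 1, -1, 1, 0, -1, 1, 0, 0])
y = np.array([21.9, 22.2, 22.5, 22.9, 22.3, 22.0, 22.6, 22.1, 22.7])

# Full quadratic model: y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

pred = X @ coef
r_sq = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print("coefficients:", np.round(coef, 4))
print("R-sq:", round(r_sq, 5))
```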

Keywords: Additives, control factors, kinematic viscosity, lubricant, orthogonal array, process parameter.

87 Life Cycle Assessment of Residential Buildings: A Case Study in Canada

Authors: Venkatesh Kumar, Kasun Hewage, Rehan Sadiq

Abstract:

Residential buildings consume significant amounts of energy and produce large amounts of emissions and waste. However, there is a substantial potential for energy savings in this sector, which needs to be evaluated over the life cycle of residential buildings. Life Cycle Assessment (LCA) methodology has been employed to study the primary energy use and associated environmental impacts of the different phases (i.e., product, construction, use, end of life, and beyond building life) of residential buildings. Four different residential building alternatives in Vancouver (BC, Canada) with a 50-year lifespan have been evaluated: High Rise Apartment (HRA), Low Rise Apartment (LRA), Single family Attached House (SAH), and Single family Detached House (SDH). The life cycle performance of the buildings is evaluated for embodied energy, embodied environmental impacts, operational energy, operational environmental impacts, total life cycle energy, and total life cycle environmental impacts. Estimation of operational energy and the LCA are performed using DesignBuilder software and Athena Impact Estimator software, respectively. The study results revealed that, over the life span of the buildings, the relationship between energy use and environmental impacts is identical. LRA is found to be the best alternative in terms of embodied energy use and embodied environmental impacts, while HRA showed the best life cycle performance in terms of minimum energy use and environmental impacts. A sensitivity analysis has also been carried out to study the influence of a building service lifespan of 50, 75, and 100 years on the relative significance of embodied energy and total life cycle energy. The life cycle energy requirements for SDH are found to be the most significant among the four types of residential buildings. Overall, the results disclose that the operation of these buildings accounts for 90% of the total life cycle energy, which far outweighs minor differences in embodied effects between the buildings.
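As a minimal sketch of the aggregation behind such comparisons, the snippet below sums embodied and operational energy over an assumed 50-year service life and reports the share of the operation phase. All figures are illustrative placeholders, not the study's results.

```python
# Minimal life-cycle energy aggregation (illustrative numbers only):
# total energy = embodied + annual operational energy over the service life,
# then the share of the operation phase in the total.
buildings = {
    # name: (embodied GJ, operational GJ per year) -- hypothetical values
    "HRA": (30000, 5200),
    "LRA": (22000, 5600),
    "SAH": (26000, 6100),
    "SDH": (28000, 7400),
}

service_life_years = 50
for name, (embodied, op_per_year) in buildings.items():
    operational = op_per_year * service_life_years
    total = embodied + operational
    share = 100.0 * operational / total
    print(f"{name}: total = {total:,} GJ, operation phase = {share:.1f}% of life-cycle energy")
```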

Keywords: Building simulation, environmental impacts, life cycle assessment, life cycle energy analysis, residential buildings.

86 Effect of Fire Retardant Painting Product on Smoke Optical Density of Burning Natural Wood Samples

Authors: Abdullah N. Olimat, Ahmad S. Awad, Faisal M. AL-Ghathian

Abstract:

Natural wood is used in many applications in Jordan, such as furniture, partition construction, and cupboards. Experimental work on the smoke produced by the combustion of certain wood samples was carried out. Smoke generated from the burning of natural wood is considered a major cause of death in furniture fires. The critical parameter for life safety in fires is the time available for escape, so the visual obscuration due to smoke released during a fire is taken into consideration. The effect of the smoke produced by burning wood depends on the amount of smoke released in case of fire, and the amount of smoke produced apparently affects the time available for the occupants to escape. To protect the lives of building occupants during fire growth, fire retardant painting products are tested. The tested samples of natural wood include Beech, Ash, Beech Pine, and White Beech Pine. A smoke density chamber manufactured by Fire Testing Technology has been used to perform the measurement of smoke properties. The test procedure was carried out according to ISO 5659. A radiant heat flux of 25 kW/m² was applied to the wood samples in a horizontal orientation under non-flaming conditions. The main objective of the current study is to carry out experimental tests on samples of natural woods to evaluate the capability of escape in case of fire and the fire safety requirements. Specific optical density, transmittance, thermal conductivity, and mass loss are the main measured parameters. Comparisons between painted and unpainted samples are also carried out for the selected woods.
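For context on the main measured parameter, the snippet below shows how specific optical density is commonly derived from light transmittance in single-chamber smoke tests of the ISO 5659 / ASTM E662 type, Ds = (V/(A·L))·log10(100/T). The geometry factor of about 132 assumes the standard chamber dimensions and is not taken from the paper.

```python
# How specific optical density is commonly obtained from percent light
# transmittance in single-chamber smoke tests (ISO 5659 / ASTM E662 style):
# Ds = (V / (A * L)) * log10(100 / T). The factor V/(A*L) ~= 132 below
# assumes the standard chamber geometry, not the paper's own constants.
from math import log10

def specific_optical_density(transmittance_percent, geometry_factor=132.0):
    """Ds from percent light transmittance for an assumed chamber geometry."""
    return geometry_factor * log10(100.0 / transmittance_percent)

if __name__ == "__main__":
    for t in (80.0, 20.0, 1.0):   # percent transmittance
        print(f"T = {t:5.1f}% -> Ds = {specific_optical_density(t):.1f}")
```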

Keywords: Optical density, specific optical density, transmittance, visibility.

85 Biaxial Testing of Fabrics - A Comparison of Various Testing Methodologies

Authors: O.B. Ozipek, E. Bozdag, E. Sunbuloglu, A. Abdullahoglu, E. Belen, E. Celikkanat

Abstract:

In the textile industry, besides conventional textile products, technical textile goods with added external functional properties are being developed for the technical textile market. These products, especially those produced with weaving technology, are widely preferred in areas such as the sports, geology, medical, automotive, construction and marine sectors. Such textile products are exposed to various stresses and large deformations under typical conditions of use. Here, sufficient and reliable data for determining the mechanical properties of such products cannot be obtained with uniaxial tensile tests, due mainly to the biaxial stress state. Therefore, the preferred method is biaxial tensile testing and analysis. These tests and analyses are applied to fabrics with different functional features in order to establish the characteristics and mechanical properties of the product. Planar biaxial tensile tests, cylindrical inflation and bulge tests are generally required for textile products used in the automotive, sailing and sports areas and in the construction industry, to minimize accidents throughout their service life. Airbags, seat belts and car tires in the automotive sector are also subject to the same biaxial stress states and can be characterized by the same types of experiments. In this study, the various biaxial test methods reported in the research literature are compared. Results and discussions are elaborated, focusing mainly on the design of a biaxial test apparatus to obtain applicable experimental data for developing a finite element model. Sample experimental results on a prototype system are presented.

Keywords: Biaxial Stress, Bulge Test, Cylindrical Inflation, Fabric Testing, Planar Tension.

84 The Effect of Complementary Irrigation in Different Growth Stages on Yield, Qualitative and Quantitative Indices of the Two Wheat (Triticum aestivum L.) Cultivars in Mazandaran

Authors: Abbas Ghanbari-Malidarreh

Abstract:

In most temperate wheat-growing regions, and especially in the climate of northern Iran, grain filling is affected by several physical and abiotic stresses. In this region, grain filling often occurs when temperatures are increasing and the moisture supply is decreasing. The experiment was designed as an RCBD with a split-plot arrangement and four replications. Four irrigation treatments were included: (I0) no irrigation (check); (I1) one irrigation (50 mm) at the heading stage; (I2) two irrigations (100 mm) at the heading and anthesis stages; and (I3) three irrigations (150 mm) at the heading, anthesis and early grain-filling stages. Two wheat cultivars (Milan and Shanghai) were grown in the experiment. Total rainfall was 453 mm during the growing season. The results indicated that biological yield, grain yield and harvest index were significantly affected by the irrigation levels. The I3 treatment produced higher tiller number per m², fertile tiller number per m², harvest index and biological yield. Milan produced more tillers per m² and fertile tillers per m², while Shanghai produced heavier tillers and a higher 1000-grain weight. Plant height differed significantly between the cultivars but was not statistically significant across irrigation levels. Milan produced higher grain yield, harvest index and biological yield. Grain yields showed that I1, I2, and I3 produced increases to 5228 (21%), 5460 (27%) and 5670 (29%) kg ha-1, respectively. There was an interaction of irrigation and cultivar on grain yield. The absence of irrigation reduced the 1000-grain weight from 45 to 40 g and reduced soil moisture extraction during the grain-filling stage. Current assimilation as a source of carbon for grain filling depends on the light-intercepting viable green surfaces of the plant after anthesis, which decline due to natural senescence and the effect of various stresses, while at the same time the demand of the growing grain is increasing. It is concluded from this research work that, under irrigation, the Milan cultivar could increase grain yield in comparison with the Shanghai cultivar, although the grain yield of Shanghai under irrigation was only slightly lower than that of Milan. Grain yield was also related to weather conditions, sowing date, plant density, location conditions and fertilizer management, because there was no significant difference in biological and straw yield. The best result was produced by the I1 treatment; the I2 and I3 treatments were not significantly different from I1. The grain yield of I1 indicated that the wheat was under soil moisture deficiency; therefore, I1 irrigation was better than I0.

Keywords: Anthesis, grain yield, supplementary irrigation, wheat.

83 Lexical Based Method for Opinion Detection on Tripadvisor Collection

Authors: Faiza Belbachir, Thibault Schienhinski

Abstract:

The massive development of online social networks allows users to post and share their opinions on various topics. With this huge volume of opinions, it is interesting to extract and interpret this information for different domains, e.g., product and service benchmarking, politics, and recommendation systems. This is why opinion detection is one of the most important research tasks. It consists of differentiating between opinion data and factual data. The difficulty of this task is to determine an approach which returns opinionated documents. Generally, two approaches are used for opinion detection, i.e., lexicon-based approaches and machine learning based approaches. In lexicon-based approaches, a dictionary of sentiment words is used, and the words are associated with weights. The opinion score of a document is derived from the occurrence of words from this dictionary. In machine learning approaches, a classifier is usually trained using a set of annotated documents containing sentiment, and features such as word n-grams, part-of-speech tags, and logical forms. The majority of these works are based on the documents' text to determine the opinion score but do not take into account whether these texts are really reliable. Thus, it is interesting to exploit other information to improve opinion detection. In our work, we develop a new way to consider the opinion score. We introduce the notion of a trust score: we determine opinionated documents but also whether these opinions are really trustworthy information in relation to the topics. For that, we use the SentiWordNet lexicon to calculate the opinion and trust scores, and we compute different features about users (the number of their comments, the number of their useful comments, and the average usefulness of their reviews). After that, we combine the opinion score and the trust score to obtain a final score. We applied our method to detect trusted opinions in the TripAdvisor collection. Our experimental results show that the combination of the opinion score and the trust score improves opinion detection.
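The scoring idea can be sketched in a few lines: a lexicon assigns sentiment weights to words, an opinion score is derived from the lexicon hits in a document, a trust score is computed from user features, and the two are combined. The tiny hand-made lexicon, the feature weights and the combination rule below are invented for illustration and stand in for SentiWordNet and the paper's actual scoring.

```python
# Toy illustration of the lexicon + trust-score idea. A tiny hand-made lexicon
# stands in for SentiWordNet; the trust features and weights are invented.
LEXICON = {"great": 0.8, "excellent": 0.9, "bad": -0.7, "terrible": -0.9,
           "clean": 0.5, "dirty": -0.6}

def opinion_score(text):
    """Average absolute sentiment weight of lexicon words found in the text."""
    hits = [abs(LEXICON[w]) for w in text.lower().split() if w in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

def trust_score(n_reviews, n_useful, avg_usefulness):
    """Invented combination of user features, kept in the [0, 1] range."""
    return 0.4 * min(n_reviews / 50, 1) + 0.4 * min(n_useful / 20, 1) + 0.2 * avg_usefulness

def final_score(text, user, alpha=0.5):
    """Blend opinion and trust scores; alpha is an illustrative weight."""
    return alpha * opinion_score(text) + (1 - alpha) * trust_score(*user)

review = "The room was clean and the staff excellent but breakfast was bad"
user = (32, 12, 0.7)   # (comments, useful comments, average usefulness) -- hypothetical
print(round(final_score(review, user), 3))
```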

Keywords: TripAdvisor, opinion detection, SentiWordNet, trust score.

82 Removal of Polycyclic Aromatic Hydrocarbons Present in Tyre Pyrolytic Oil Using Low Cost Natural Adsorbents

Authors: Neha Budhwani

Abstract:

Polycyclic aromatic hydrocarbons (PAHs) are formed during the pyrolysis of scrap tyres to produce tyre pyrolytic oil (TPO). Due to their carcinogenic, mutagenic, and toxic properties, PAHs are priority pollutants. Hence it is essential to remove PAHs from TPO before utilising TPO as a petroleum fuel alternative (to run engines). Agricultural wastes have a promising future as biosorbents due to their cost effectiveness, abundant availability, high biosorption capacity and renewability. Various low-cost adsorbents were prepared from natural sources. The uptake of PAHs present in tyre pyrolytic oil was investigated using various low-cost adsorbents of natural origin, including sawdust (shisham), coconut fiber, neem bark, chitin, and activated charcoal. Adsorption experiments for different PAHs, viz. naphthalene, acenaphthene, biphenyl and anthracene, were carried out at ambient temperature (25°C) and at pH 7. It was observed that, for any given PAH, the adsorption capacity increases with the lignin content. The Freundlich constants Kf and 1/n were evaluated, and it was found that the adsorption isotherms of the PAHs were in agreement with the Freundlich model, while the uptake capacity for PAHs followed the order: activated charcoal > sawdust (shisham) > coconut fiber > chitin. The partition coefficients in acetone-water, and the adsorption constants at equilibrium, could be linearly correlated with the octanol-water partition coefficients. It is observed that natural adsorbents are a good alternative for PAH removal. Sawdust of Dalbergia sissoo, a by-product of sawmills, was found to be a promising adsorbent for the removal of PAHs present in TPO. The adsorbents studied were found to be comparable to some conventional adsorbents.
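The Freundlich evaluation mentioned above is typically done by log-linearization, log10(q) = log10(Kf) + (1/n)·log10(C). The sketch below fits hypothetical equilibrium data this way; the concentrations and uptakes are placeholders, not measurements from the study.

```python
# Standard Freundlich isotherm fit, q = Kf * C^(1/n), via log-linearization.
# The equilibrium data below are placeholders, not the study's measurements.
import numpy as np

C = np.array([0.5, 1.0, 2.0, 5.0, 10.0])   # equilibrium concentration (mg/L)
q = np.array([1.2, 1.9, 3.0, 5.6, 8.8])    # uptake (mg/g), hypothetical

slope, intercept = np.polyfit(np.log10(C), np.log10(q), 1)
Kf, n_inv = 10 ** intercept, slope
print(f"Kf = {Kf:.2f} (mg/g)(L/mg)^(1/n), 1/n = {n_inv:.2f}")
```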

Keywords: Acenaphthene, anthracene, biphenyl, Coconut fiber, naphthalene, natural adsorbent, PAHs, TPO and wood powder (shisham).

81 Controlled Vocabularies and Information Retrieval: 1918 Pandemic’s Scientific Literature as an Example

Authors: M. Garcia-Alsina, J. Cobarsí

Abstract:

The role of controlled vocabularies in information retrieval is broadly recognized as a relevant feature. Besides, there is a standing demand that editors and databases should consider the effective introduction of controlled vocabularies in their procedures for indexing scientific literature. That is especially important because information retrieval is pointed out as a significant step in driving systematic literature reviews. Hence, a first question emerges: are controlled vocabularies currently considered? On the other hand, subject searching in catalogs is complex, mainly due to the dichotomy between keywords from authors and keywords based on controlled vocabularies. Finally, there is some demand to unify the terminology related to health in order to make the exploitation of medical histories and research easier. Considering these features, this paper focuses on controlled vocabularies related to the health field and their role in storing, classifying, and retrieving relevant literature. The objective is to know what role controlled vocabularies related to the health field play in indexing and retrieving research literature in databases such as Web of Science (WoS) and Scopus. This exploratory research is therefore grounded on two research questions: 1) Which terms are considered in specific controlled vocabularies of the health field? and 2) How are papers indexed in relevant databases so that they can be easily retrieved, considering authors' keywords versus specific health controlled vocabularies? This research takes as fieldwork the controlled vocabularies related to health and the scientific interest in the 1918 flu pandemic, also known, equivocally, as the 'Spanish flu'. This interest has been fostered by the emergence in the early 21st century of epidemics of pneumonic diseases caused by viruses. Searches about and with controlled vocabularies on the WoS and Scopus databases were conducted. The first results of this work in progress are surprising. There are different controlled vocabularies for the health field, in which the collected and preferred terms related to the '1918 pandemic' are identified. To summarize, 'Spanish influenza epidemic' and 'Spanish flu' are recorded as non-preferred terms; the preferred terms are 'influenza' or 'influenza pandemic, 1918-1919'. Although the controlled vocabularies are clear in their choice, most of the literature about the '1918 pandemic' is retrievable either by 'Spanish' or by '1918' disjunctively, and the dominant word for retrieving literature is 'Spanish' rather than '1918'. This is surprising considering the existence of suitable controlled vocabularies related to health topics and the modern guidelines of the World Health Organization concerning the naming of diseases, which point to other preferred terms. A first conclusion is the failure to use controlled vocabularies for a field such as health, and consequently in WoS and Scopus. This research opens further research questions about the role controlled vocabularies play in the instructions for authors that journals deliver to documents' authors.

Keywords: Controlled vocabularies, indexing, 1918 influenza, information retrieval, keywords, 1918 pandemic, scientific databases.

80 Carbamazepine Co-crystal Screening with Dicarboxylic Acids Co-Crystal Formers

Authors: Syarifah Abd Rahim, Fatinah Ab Rahman, Engku N. E. M. Nasir, Noor A. Ramle

Abstract:

Co-crystals are believed to improve the solubility and dissolution rates, and thus enhance the bioavailability, of poorly water-soluble drugs, particularly for the oral route of administration. Given the existence of poorly soluble drugs in the pharmaceutical industry, the screening of co-crystal formation using carbamazepine (CBZ) as a model drug compound with dicarboxylic acid co-crystal formers (CCF), namely fumaric (FA) and succinic (SA) acids, in ethanol has been studied. The co-crystal formations were studied by varying the mol ratio of CCF to CBZ to assess the effect of CCF concentration on the formation of the co-crystal. Solvent evaporation, slurry and cooling crystallization, representing the solution-based methods of co-crystal screening, were used. Based on the differential scanning calorimetry (DSC) analysis, the melting points of CBZ-SA at different ratios were in the range of 188-189 °C. For CBZ-FA form A and CBZ-FA form B, the melting points at different ratios were in the ranges of 174-175 °C and 185-186 °C, respectively. The product crystals from the screening were also characterized using X-ray powder diffraction (XRPD). The XRPD pattern profile analysis showed that the CBZ co-crystals with FA and SA were successfully formed for all ratios studied. The findings revealed that CBZ-FA co-crystals were formed in two different polymorphs. It was found that CBZ-FA form A and form B were formed from the evaporation and slurry crystallization methods, respectively. On the other hand, in the cooling crystallization method, CBZ-FA form A was formed at a lower mol ratio of CCF to CBZ, and vice versa. This study disclosed that different methods and mol ratios during co-crystal screening can affect the outcome, such as the polymorphic form of the co-crystal produced. Thus, careful attention is needed during screening, since co-crystal formation is currently one of the promising approaches considered in pharmaceutical research and development to improve poorly soluble drugs.

Keywords: Carbamazepine, co-crystal, co-crystal former, dicarboxylic acid.

79 Substitution of Phosphate with Liquid Smoke as a Binder on the Quality of Chicken Nugget

Authors: E. Abustam, M. Yusuf, M. I. Said

Abstract:

One of the functional properties of meat is the decrease in water holding capacity (WHC) during rigor mortis. In the pre-rigor state, WHC is higher than post-rigor. The decline in WHC has implications for other functional properties, such as cooking loss and yield, resulting in lower elasticity and compactness of processed meat products. In many cases, the addition of phosphate to meat improves its functional properties, such as WHC. Furthermore, liquid smoke has also been known to increase the WHC of fresh meat. For food safety reasons, liquid smoke was used in the present study as a substitute for phosphate in the production of chicken nuggets. This study aimed to determine the effect of substituting phosphate with liquid smoke on the quality of nuggets made from post-rigor chicken thigh and breast. The study was arranged as a completely randomized design in a 2x3 factorial pattern with three replications. Factor 1 was the thigh and breast parts of the chicken, and factor 2 was the level of liquid smoke substituted for phosphate (0%, 50%, and 100%). The thigh and breast of post-rigor broilers aged 40 days were used as the main raw materials for making the nuggets. Auxiliary materials besides the meat were phosphate, liquid smoke at a concentration of 10%, tapioca flour, salt, eggs and ice. The variables measured were flexibility, shear force value, cooking loss, elasticity level, and preference. The results of this study showed that the substitution of phosphate with 100% liquid smoke resulted in high-quality nuggets. Likewise, the breast part of the meat produced higher-quality nuggets than the thigh part. This is indicated by the high elasticity, low shear force value, low cooking loss, and high level of preference of the nuggets. It can be concluded that liquid smoke can be used as a binder in making nuggets from post-rigor chicken.

Keywords: Liquid smoke, nugget quality, phosphate, post-rigor.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1038
78 Bioleaching for Efficient Copper Ore Recovery

Authors: Zh. Karaulova, D. Baizhigitov

Abstract:

The oxidized ore section of the Aktogay deposit has been mined since 2015; by now, the reserves of easily enriched ore are decreasing, and a large amount of copper-poor, difficult-to-enrich ore has accumulated in the dumps of the KAZ Minerals Aktogay operation, which is unprofitable to process using traditional mining methods. Hence, another technology needs to be implemented, one that will significantly expand the raw material base of copper production in Kazakhstan and ensure the efficient use of natural resources. Heap and dump bacterial recovery are the most suitable technologies for processing low-grade secondary copper sulfide ores. The test objects were copper ores from the Aktogay deposit and the chemolithotrophic bacteria Leptospirillum ferrooxidans (L.f.), Acidithiobacillus caldus (A.c.), and Sulfobacillus acidophilus (S.a.), which represent the mixed cultures utilized in bacterial oxidation systems. They remain active in the 20-40 °C temperature range. Biocatalytic acceleration was achieved as a result of the bacteria oxidizing iron sulfides to form iron sulfate, which subsequently underwent further oxidation to ferric sulfate. The following results were achieved at the initial stage, where the goal was to grow the bacterial cultures and maintain their activity under laboratory conditions. The bacteria grew best within the pH range of 1.2-1.8, with light stirring and in an aerated environment. The optimal growth temperature was 30-33 °C; the growth rate decreased by one-half for each 4-5 °C fall in temperature below 30 °C. At best, the number of bacteria doubled every 24 hours. Typically, the maximum concentration of cells that can be grown in ferrous solution is about 10^7/ml. A further step was the adaptation of the microorganisms to the presence of the relevant metals, followed by mass production of the inoculum and its maintenance for further cultivation at factory scale. This was done by adding sulfide concentrate, allowing the bacteria to convert the ferrous sulfate as indicated by the Eh (> 600 mV), then diluting to double the volume and adding concentrate to reach the same metal level; the process was repeated until the desired metal level and volume were achieved. The final stage of bacterial recovery was the transportation and irrigation of the secondary sulfide copper ores of the oxidized ore section. In conclusion, the project was implemented at the Aktogay mine, although the bioleaching process is prolonged, and the bacterial recovery method can compete well with existing non-biological methods of extracting metals from ores.
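
A minimal sketch of the growth figures quoted above (doubling roughly every 24 hours near 30 °C, the rate halving for each 4-5 °C drop, and a practical ceiling of about 10^7 cells/ml) is given below; the simple exponential model, the 4.5 °C halving step, and the starting concentration are illustrative assumptions, not the authors' kinetics.

```python
# Hedged sketch of the quoted growth behavior: ~1 doubling per day near 30 degC,
# growth rate halved for every ~4.5 degC below 30 degC, capped at ~1e7 cells/ml.
def cells_after(days: float, temp_c: float, n0: float = 1e5,
                n_max: float = 1e7) -> float:
    """Approximate cell concentration (cells/ml) after `days` at `temp_c`."""
    doublings_per_day = 1.0 * 0.5 ** max(0.0, (30.0 - temp_c) / 4.5)
    n = n0 * 2 ** (doublings_per_day * days)
    return min(n, n_max)  # cap at the typical maximum concentration

print(f"{cells_after(3, 30):.2e} cells/ml")   # ~8e5 after 3 days at 30 degC
print(f"{cells_after(3, 21):.2e} cells/ml")   # slower growth at 21 degC
```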

Keywords: Bacterial recovery, copper ore, bioleaching, bacterial inoculum.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 56
77 Six Sigma Solutions and its Benefit-Cost Ratio for Quality Improvement

Authors: S. Homrossukon, A. Anurathapunt

Abstract:

This applied research presents the improvement of production quality using Six Sigma and an analysis of the benefit-cost ratio. The case of interest is the production of concrete tiles. This production had faced a high rate of nonconforming products caused by improper surface coating and had low process capability with respect to tile strength; surface coating and tile strength are the characteristics most critical to the quality of this product. The improvement followed the five stages of the Six Sigma methodology. After the improvement, the production yield reached the required target of 80%, and the proportion of defective products from the coating process was reduced markedly from 29.40% to 4.09%. The process capability based on strength quality, as required by the customer, increased from 0.87 to 1.08. The improvement saved material losses of 3.24 million baht, or 0.11 million dollars. The benefits of the improvement were calculated from (1) the reduction in the number of nonconforming tiles, valued at factory price, due to the improved surface coating and (2) the materials saved through the increase in process capability. The benefit-cost ratio of the overall improvement was as high as 7.03. The investment showed no return during the Define, Measure, Analyze, and early Improve stages, after which the ratio kept increasing. This is because the Define, Measure, and Analyze stages of Six Sigma yield no direct benefits, since these three stages mainly determine the causes of the problem and their effects rather than improve the process. Benefits first appear in the Improve stage and continue from there. Within each stage, the individual benefit-cost ratio was much higher than the cumulative one, because costs accumulate from the first stage of Six Sigma onward. Considering the benefit-cost ratio during the improvement project helps in making cost-saving decisions, both for similar activities during the improvement and for new projects. In conclusion, determining the behavior of the benefit-cost ratio throughout the Six Sigma implementation period provides useful data for managing quality improvement with optimal effectiveness; this is an additional outcome beyond the regular Six Sigma procedure.
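
The cumulative versus stage-wise behavior of the benefit-cost ratio described above can be illustrated with a short sketch; the per-stage costs and benefits below are hypothetical placeholders chosen only so that the totals roughly match the reported figures (3.24 million baht in benefits and an overall ratio near 7), and they are not the study's actual cost breakdown.

```python
# Illustrative sketch of cumulative vs. stage-wise benefit-cost ratio (BCR).
# Stage costs/benefits are hypothetical; only the pattern (no benefit until
# Improve, cumulative BCR below the stage-wise BCR) mirrors the abstract.
stages = [              # (stage, cost in million baht, benefit in million baht)
    ("Define",  0.050, 0.00),
    ("Measure", 0.100, 0.00),
    ("Analyze", 0.080, 0.00),
    ("Improve", 0.200, 1.80),
    ("Control", 0.031, 1.44),
]

cum_cost = cum_benefit = 0.0
for name, cost, benefit in stages:
    cum_cost += cost
    cum_benefit += benefit
    stage_bcr = benefit / cost            # ratio for this stage alone
    cum_bcr = cum_benefit / cum_cost      # ratio accumulated since Define
    print(f"{name:8s} stage BCR = {stage_bcr:5.2f}  cumulative BCR = {cum_bcr:5.2f}")
# Final cumulative BCR ~ 7.0, with benefits first appearing in the Improve stage.
```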

Keywords: Six Sigma Solutions, Process Improvement, Quality Management, Benefit-Cost Ratio.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2084
76 A Geographical Spatial Analysis on the Benefits of Using Wind Energy in Kuwait

Authors: Obaid AlOtaibi, Salman Hussain

Abstract:

Wind energy is associated with many geographical factors, including wind speed, climate change, surface topography, and environmental impacts, as well as several economic factors, most notably the advancement of wind technology and energy prices. It is the fastest-growing and least expensive method of generating electricity, and wind energy generation is directly related to the spatial characteristics of the wind. The feasibility of a wind energy conversion system is therefore assessed from the value of the energy obtained relative to the initial investment and the cost of operation and maintenance. In Kuwait, wind energy is an appropriate choice as a source of energy generation. It can be used for groundwater extraction in agricultural areas such as Al-Abdali in the north and Al-Wafra in the south, in fresh and brackish groundwater fields, or in remote and isolated locations such as border areas and projects far from conventional electricity services, in order to take advantage of alternative energy, reduce pollutants, and reduce energy production costs. The study covers the State of Kuwait with the exception of the metropolitan area. Climatic data were obtained from the readings of eight distributed monitoring stations affiliated with the Kuwait Institute for Scientific Research (KISR). The data were used to assess the daily, monthly, quarterly, and annual wind energy available for utilization. The researchers applied a suitability model using the ArcGIS program: a spatial analysis model that compares multiple locations on weighted criteria to choose the most suitable one. The study criteria were average annual wind speed, land use, topography, distance from the main road networks, and urban areas. Based on these criteria, four proposed locations for wind farm projects were selected according to weighted degrees of suitability (excellent, good, average, and poor). The most suitable locations, with an excellent rank (4), cover 8% of Kuwait’s area, distributed as follows: Al-Shqaya, Al-Dabdeba, and Al-Salmi (5.22%), Al-Abdali (1.22%), Umm al-Hayman (0.70%), and North Wafra and Al-Shaqeeq (0.86%). The study recommends that decision-makers consider proposed location No. 1 (Al-Shqaya, Al-Dabdaba, and Al-Salmi) as the most suitable location for the future development of wind farms in Kuwait, as this location is economically feasible.
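
A minimal sketch of the weighted-overlay idea behind such a suitability model is shown below; the criterion weights and the example site scores are hypothetical assumptions, and only the criteria names follow the abstract (the actual analysis was performed in ArcGIS).

```python
# Hedged sketch of a weighted-overlay suitability score. Weights and the
# candidate-site scores are hypothetical; criteria names follow the abstract.
WEIGHTS = {                  # hypothetical relative importance, sums to 1.0
    "wind_speed": 0.40,
    "land_use": 0.20,
    "topography": 0.15,
    "road_distance": 0.15,
    "urban_distance": 0.10,
}

def suitability(scores: dict[str, int]) -> float:
    """Weighted sum of criterion scores graded 1 (poor) to 4 (excellent)."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

# Hypothetical candidate site graded against the five criteria
site = {"wind_speed": 4, "land_use": 3, "topography": 4,
        "road_distance": 3, "urban_distance": 4}
score = suitability(site)
print(score, round(score))   # 3.65 -> rank 4 (excellent)
```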

Keywords: Kuwait, renewable energy, spatial analysis, wind energy.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 836
75 21st Century Biotechnological Research and Development Advancements for Industrial Development in India

Authors: Monisha Isaac

Abstract:

Biotechnology is the discipline concerned with the use of living organisms and biological systems to make products; it can also be defined as any application or technology that uses biological systems, organisms, or their processes for a specific purpose. In particular, it includes the use of cells and their components in new technologies and inventions. The tools developed can be applied in diverse fields such as agriculture, industry, research, and healthcare. The 21st century has seen dramatic development and advancement of biotechnology in India, and a significant increase in the Government of India’s outlays for biotechnology has been observed over the past decade. A sectoral break-up of biotechnology-based companies in India shows that most are agriculture-based, with interests ranging from tissue culture to biopesticides. Companies have paid major attention to health-related activities and to environmental biotechnology. The biopharmaceutical segment, which comprises vaccines, diagnostics, and recombinant products, is the largest and most reliable segment of the Indian biotech industry; India has developed its vaccine market and supplies vaccines to various countries. Then there are the bio-services, which mainly comprise contract research and manufacturing services. India has also made notable progress in bio-industries, including the manufacture of enzymes, biofuels, and biopolymers. Biotechnology likewise plays a significant role in agriculture, where traditional methods are being replaced by new technologies that focus mainly on GM crops, marker-assisted technologies, and the use of biotechnological tools to improve the quality of fertilizers and soil; this may still be a small contributor, but it has shown huge potential for growth. Finally, bioinformatics, a computational field that helps store, manage, and organize the extensive data gathered through experimental trials and design tools to interpret them, is becoming important in drug design.

Keywords: Biotechnology, advancement, agriculture, bio-services, bio-industries, bio-pharmaceuticals.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2030
74 The Application of Line Balancing Technique and Simulation Program to Increase Productivity in Hard Disk Drive Components

Authors: Alonggot Limcharoen, Jintana Wannarat, Vorawat Panich

Abstract:

This study aims to balance the number of operators (line balancing technique) in a production line of hard disk drive components in order to increase efficiency. At present, demand for hard disk drives has continuously declined, limiting the company’s revenue potential, so it is important to improve and develop the production process in order to maintain market share and compete on value and quality; an effective tool is needed to support this. In this research, the Arena simulation program was applied to analyze the process both before and after the improvement, so that the proposed changes could be verified before being applied to the real process. The RA production process where this study was conducted had 14 workstations with 35 operators in total. In the actual process, the average production time was 84.03 seconds per piece (from 30 time observations at each workstation), together with a performance rating assessed using the Westinghouse principles. The rating was 123%, and with an assumed allowance of 5%, the standard time was 108.53 seconds per piece. The takt time, calculated as the available working time in one day divided by customer demand, was 3.66 seconds per piece. From these figures, the proper number of operators was 30, meaning that five operators should be removed to improve the production process. After that, a simulation model of the actual process was created in the Arena program; to confirm model reliability, the simulation outputs were compared with those of the actual process, and the comparison indicated that the model was reliable. The number of workers and their job responsibilities were then remodeled in the Arena program. Lastly, the efficiency of the production process was enhanced from 70.82% to 82.63%, in line with the target.
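
The work-measurement arithmetic reported above can be reproduced with a short sketch; the observed time, rating, allowance, and takt time are taken from the abstract, while the standard-time and operator formulas used here are the usual work-measurement ones assumed for illustration rather than quoted from the paper.

```python
# Sketch of the standard-time / takt-time arithmetic reported in the abstract.
import math

observed_time = 84.03    # s per piece, mean of 30 observations per workstation
rating = 1.23            # Westinghouse performance rating (123%)
allowance = 0.05         # 5% allowance

normal_time = observed_time * rating
standard_time = normal_time * (1 + allowance)    # ~108.5 s (abstract: 108.53 s)
takt_time = 3.66         # s per piece = available working time / daily demand

operators = math.ceil(standard_time / takt_time)  # minimum operators required
print(f"standard time = {standard_time:.2f} s, operators = {operators}")
# -> operators = 30, i.e. 5 fewer than the current 35
```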

Keywords: Hard disk drive, line balancing, simulation, Arena program.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1138