Search results for: instability torque ripples reduction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1912


82 Determination of Some Organochlorine Pesticide Residues in Vegetable and Soil Samples from Alau Dam and Gongulong Agricultural Sites, Borno State, North Eastern Nigeria

Authors: Joseph Clement Akan, Lami Jafiya, Zaynab Muhammad Chellube, Zakari Mohammed, Fanna Inna Abdulrahman

Abstract:

Five vegetables (spinach, lettuce, cabbage, tomato, and onion) were freshly harvested from the Alau Dam and Gongulong agricultural areas for the determination of some organochlorine pesticide residues (o,p’-DDE, p,p’-DDD, o,p’-DDD, p,p’-DDT, α-BHC, γ-BHC, methoxychlor, lindane, endosulfan, dieldrin, and aldrin). Soil samples were also collected at different depths for the determination of the above pesticides. Sample collection and preparation were conducted using standard procedures. The concentrations of all the pesticides in the soil and vegetable samples were determined using a SHIMADZU GC/MS (GC-17A) equipped with an electron capture detector (ECD). The highest concentration was that of p,p’-DDD (132.4±13.45 µg/g), observed in the leaf of cabbage, while the lowest was that of p,p’-DDT (2.34 µg/g), observed in the root of spinach. Similar trends were observed at the Gongulong agricultural area, with p,p’-DDD having the highest concentration of 153.23 µg/g in the leaf of cabbage, while the lowest concentration was that of p,p’-DDT (12.45 µg/g), observed in the root of spinach. α-BHC, γ-BHC, methoxychlor, and lindane were detected in all the vegetable samples studied. The concentrations of all the pesticides in the soil samples were observed to be higher at a depth of 21-30 cm, while the lowest concentrations were observed at a depth of 0-10 cm. The concentrations of all the pesticides in the vegetable and soil samples from the two agricultural sites were observed to be at alarming levels, much higher than the maximum residue limits (MRLs) and acceptable daily intake values (ADIs). The levels of the pesticides observed in the vegetables and soil samples investigated are of such a magnitude as to call for special attention and laws to regulate the use and circulation of such chemicals. Routine monitoring of pesticide residues in these study areas is necessary for the prevention, control and reduction of environmental pollution, so as to minimize health risks.

Keywords: Alau Dam, Gongulong, Organochlorine, Pesticide Residues, Soil, Vegetables.

81 Mapping the Digital Landscape: An Analysis of Party Differences between Conventional and Digital Policy Positions

Authors: Daniel Schwarz, Jan Fivaz, Alessia Neuroni

Abstract:

Although digitization is a buzzword in almost every election campaign, the political parties leave voters largely in the dark about their specific positions on digital issues. In the run-up to the 2019 elections in Switzerland, the ‘Digitization Monitor’ project (DMP) was launched in order to change this situation. Within the framework of the DMP, all 4,736 candidates were surveyed about their digital policy positions and values. The DMP is designed as a digital policy supplement to the existing ‘smartvote’ voting advice application. This enabled a direct comparison of the digital policy attitudes according to the DMP with the topics of the ‘smartvote’ questionnaire, which are comprehensive in content but mainly related to conventional policy areas. This paper’s main research goal is to analyze and visualize possible differences between conventional and digital policy areas in terms of response patterns between and within political parties. The analysis is based on dimensionality reduction methods (multidimensional scaling and principal component analysis) for the visualization of inter-party differences, and on standard deviation as a measure of variation for the evaluation of intra-party unity. The results reveal that digital issues show a lower degree of inter-party polarization compared to conventional policy areas. Thus, the parties have more common ground on digitization issues than in conventional policy areas. In contrast, the study reveals a mixed picture regarding intra-party unity. Homogeneous parties show a lower degree of unity in digitization issues, whereas parties with heterogeneous positions in conventional areas have more united positions in digital areas. All things considered, the findings are encouraging, as less polarized conditions apply to the debate on digital development compared to conventional politics. For the future, it would be desirable for similar projects to the DMP to emerge in other countries to broaden the basis for these conclusions.
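
A minimal sketch (not the DMP code) of the analysis steps named above: dimensionality reduction of candidate responses with PCA and MDS, and intra-party unity measured by standard deviation. The response matrix, party labels and column layout are hypothetical.

    # Illustrative sketch: inter-party differences via PCA/MDS, intra-party unity via SD.
    # Data, parties and question set are hypothetical placeholders.
    import numpy as np
    import pandas as pd
    from sklearn.decomposition import PCA
    from sklearn.manifold import MDS

    rng = np.random.default_rng(0)
    # Hypothetical response matrix: 100 candidates x 20 policy questions, answers -2..2
    responses = pd.DataFrame(rng.integers(-2, 3, size=(100, 20)))
    parties = pd.Series(rng.choice(["A", "B", "C", "D"], size=100), name="party")

    # Inter-party differences: project candidates into two dimensions
    pca_coords = PCA(n_components=2).fit_transform(responses)
    mds_coords = MDS(n_components=2, random_state=0).fit_transform(responses)

    # Party positions as the mean of member coordinates (one way to visualise gaps)
    party_means = pd.DataFrame(pca_coords).groupby(parties).mean()

    # Intra-party unity: mean standard deviation of raw answers within each party
    intra_party_sd = responses.groupby(parties).std().mean(axis=1)
    print(party_means)
    print(intra_party_sd)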

Keywords: Comparison of political issue dimensions, digital awareness of candidates, digital policy space, party positions on digital issues.

80 Study of the Energy Efficiency of Buildings under Tropical Climate with a View to Sustainable Development: Choice of Material Adapted to the Protection of the Environment

Authors: Guarry Montrose, Ted Soubdhan

Abstract:

In the context of sustainable development and climate change, the adaptation of buildings to the climatic context in hot climates is a necessity if we want to improve living conditions in housing and reduce the risks to the health and productivity of occupants due to thermal discomfort in buildings. A wide variety of efficient solutions can be found, but at high cost. In developing countries, especially tropical countries, we need a technology with a very limited cost that is affordable for everyone, energy efficient, and protective of the environment. Biosourced insulation is a product based on plant fibers, animal products, or products from recyclable paper or clothing. Their development meets the objectives of maintaining biodiversity, reducing waste and protecting the environment. In tropical or hot countries, the aim is to protect the building from solar thermal radiation, a source of discomfort. The aim of this work is in line with the logic of energy control and environmental protection; the approach is to make the occupants of buildings comfortable, reduce their carbon dioxide (CO2) emissions, and decrease their energy consumption (energy efficiency). We chose to study the thermo-physical properties of banana leaves and sawdust, especially their thermal conductivities; direct measurements were made using the flash method and the hot plate method. We also measured the heat flow on both sides of each sample by the hot box method. The results from these different experiments show that these materials are very efficient when used as insulation. We also conducted a building thermal simulation, using banana leaves as one of the materials, in the DesignBuilder software. Air-conditioning load as well as CO2 release was used as the performance indicator. When the air-conditioned building cell is protected on the roof by banana leaves and integrated into the walls with solar protection of the glazing, it saves up to 64.3% of energy and avoids 57% of CO2 emissions.
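
A minimal sketch of how a steady-state hot-plate measurement could be turned into a thermal conductivity value using Fourier's law, one of the methods mentioned above. The sample dimensions, heat flow and temperature difference below are illustrative, not the paper's data.

    # Illustrative steady-state (hot plate) conductivity estimate via Fourier's law;
    # all numerical values are hypothetical, not measurements from the study.
    def thermal_conductivity(q_watts, area_m2, thickness_m, delta_T_kelvin):
        """lambda = (Q * d) / (A * dT), in W/(m.K)."""
        return q_watts * thickness_m / (area_m2 * delta_T_kelvin)

    # Hypothetical banana-leaf sample: 2.5 W through a 0.02 m thick, 0.01 m^2 sample,
    # with a 10 K temperature difference across it.
    lam = thermal_conductivity(q_watts=2.5, area_m2=0.01, thickness_m=0.02, delta_T_kelvin=10.0)
    print(f"estimated conductivity: {lam:.3f} W/(m.K)")  # ~0.5 for this made-up case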

Keywords: Plant fibers, tropical climates, sustainable development, waste reduction.

79 Effects of Free-Hanging Horizontal Sound Absorbers on the Cooling Performance of Thermally Activated Building Systems

Authors: L. Marcos Domínguez, Nils Rage, Ongun B. Kazanci, Bjarne W. Olesen

Abstract:

Thermally Activated Building Systems (TABS) have proven to be an energy-efficient solution to provide buildings with an optimal indoor thermal environment. This solution uses the structure of the building to store heat, reduce the peak loads, and decrease the primary energy demand. TABS require the heated or cooled surfaces to be as exposed as possible to the indoor space, but exposing the bare concrete surfaces has a diminishing effect on the acoustic qualities of the spaces in a building. Acoustic solutions capable of providing optimal acoustic comfort and allowing the heat exchange between the TABS and the room are therefore desirable. In this study, the effects of free-hanging units on the cooling performance of TABS and the occupants’ thermal comfort were measured in a full-scale TABS laboratory. The investigations demonstrate that the use of free-hanging sound absorbers is compatible with the performance of TABS and the occupants’ thermal comfort, but an appropriate acoustic design is needed to find the most suitable solution for each case. The results show a reduction of 11% in the cooling performance of the TABS when 43% of the ceiling area is covered with free-hanging horizontal sound absorbers, of 23% for a 60% ceiling coverage ratio, and of 36% for 80% coverage. Measurements in actual buildings showed an increase of the room operative temperature of 0.3 K when 50% of the ceiling surface is covered with horizontal panels, and of 0.8 to 1 K for a 70% coverage ratio. According to numerical simulations using a new TRNSYS Type, the use of comfort ventilation has a considerable influence on the thermal conditions in the room; if the ventilation is removed, the operative temperature increases by 1.8 K for a 60%-covered ceiling.
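
A small illustrative sketch of how the three reported (coverage, cooling-reduction) pairs could be interpolated to estimate the penalty at an intermediate coverage ratio. Only the three data points come from the abstract; the linear interpolation is an assumption for illustration.

    # Interpolating the reported cooling-performance reduction vs. ceiling coverage ratio.
    # The three measured points are from the abstract; linear behaviour in between is assumed.
    import numpy as np

    coverage = np.array([0.43, 0.60, 0.80])     # ceiling coverage ratio
    reduction = np.array([0.11, 0.23, 0.36])    # relative loss of TABS cooling performance

    def estimated_reduction(c):
        """Linear interpolation between the measured points (clipped at the ends)."""
        return float(np.interp(c, coverage, reduction))

    print(estimated_reduction(0.50))  # roughly 0.16 for 50% coverage under this assumption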

Keywords: Acoustic comfort, concrete core activation, full-scale measurements, thermally activated building systems, TRNSYS.

78 Modal Approach for Decoupling Damage Cost Dependencies in Building Stories

Authors: Haj Najafi Leila, Tehranizadeh Mohsen

Abstract:

Dependencies between the diverse factors involved in probabilistic seismic loss evaluation are recognized to be an imperative issue in acquiring accurate loss estimates. Dependencies among component damage costs can be taken into account by considering the two distinct limiting states of independent or perfectly dependent component damage states; however, to the best of our knowledge, there is no available procedure to account for loss dependencies at the story level. This paper presents a method called the "modal cost superposition method" for decoupling story damage costs subjected to earthquake ground motions, dealing with closed-form differential equations between damage cost and engineering demand parameters, which are solved as a coupled system considering all stories' cost equations by means of the introduced "substituted matrices of mass and stiffness". Costs are treated as probabilistic variables with definite statistical parameters of median and standard deviation and a presumed probability distribution. To supplement the proposed procedure and also to demonstrate the straightforwardness of its application, a benchmark study has been conducted. Acceptable compatibility has been proven between the estimated damage costs evaluated by the newly proposed modal approach and by the frequently used stochastic approach for the entire building; however, at the story level, the insufficiency of employing a modification factor for incorporating occurrence probability dependencies between stories has been revealed, due to the discrepant amounts of dependency between the damage costs of different stories. Also, a larger contribution of dependency to the occurrence probability of loss can be concluded from the greater compatibility of the loss results in the higher stories than in the lower ones, whereas reducing the portion of cost modes incorporated still provides an acceptable level of accuracy and avoids time-consuming calculations by including only a limited number of cost modes in high-mode situations.
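
The abstract does not give the actual form of the "substituted matrices of mass and stiffness"; the sketch below only illustrates the generic modal-superposition mechanics it alludes to (generalized eigenproblem, projection onto a limited number of modes, and recombination), using small hypothetical matrices and a hypothetical story-cost demand vector.

    # Generic modal-superposition sketch with hypothetical "substituted" M and K matrices;
    # the paper's actual construction of these matrices is not specified in the abstract.
    import numpy as np
    from scipy.linalg import eigh

    M = np.diag([2.0, 1.5, 1.0])                     # hypothetical substituted "mass" matrix
    K = np.array([[ 4.0, -2.0,  0.0],
                  [-2.0,  3.0, -1.0],
                  [ 0.0, -1.0,  1.0]])               # hypothetical substituted "stiffness" matrix

    # Generalized eigenproblem K.phi = lambda.M.phi gives the cost "modes"
    eigvals, modes = eigh(K, M)                      # modes are M-orthonormal

    # Hypothetical story-cost demand vector, projected onto the modes and recombined
    demand = np.array([1.0, 0.8, 0.5])
    modal_coords = modes.T @ M @ demand              # modal participation of the demand

    n_modes = 2                                      # truncate to the first few cost modes
    approx = modes[:, :n_modes] @ modal_coords[:n_modes]
    print(approx)                                    # superposed story contributions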

Keywords: Dependency, story-cost, cost modes, engineering demand parameter.

77 Matrix Based Synthesis of EXOR dominated Combinational Logic for Low Power

Authors: Padmanabhan Balasubramanian, C. Hari Narayanan

Abstract:

This paper discusses a new, systematic approach to the synthesis of an NP-hard class of non-regenerative Boolean networks, described by FON[FOFF]={mi}[{Mi}], where for every mj[Mj]∈{mi}[{Mi}] there exists another mk[Mk]∈{mi}[{Mi}] such that their Hamming distance HD(mj, mk)=HD(Mj, Mk)=O(n), where 'n' represents the number of distinct primary inputs. The method automatically ensures exact minimization for certain important self-dual functions with 2^(n-1) points in their one-set. The elements meant for grouping are determined from a newly proposed weighted incidence matrix. Then the binary value corresponding to the candidate pair is correlated with the proposed binary value matrix to enable direct synthesis. We recommend algebraic factorization operations as a post-processing step to enable a reduction in literal count. The algorithm can be implemented in any high-level language and achieves the best cost optimization for the problem dealt with, irrespective of the number of inputs. For other cases, the method is iterated to subsequently reduce the problem to one of O(n-1), O(n-2), ... and then solved. In addition, it leads to optimal results for problems exhibiting a higher degree of adjacency, with a different interpretation of the heuristic, and the results are comparable with other methods. In terms of literal cost, at the technology-independent stage, the circuits synthesized using our algorithm enabled net savings over AOI (AND-OR-Invert) logic, AND-EXOR logic (EXOR Sum-of-Products or ESOP forms) and AND-OR-EXOR logic by 45.57%, 41.78% and 41.78% respectively for the various problems. Circuit-level simulations were performed for a wide variety of case studies at 3.3 V and 2.5 V supply to validate the performance of the proposed method and the quality of the resulting synthesized circuits at two different voltage corners. Power estimation was carried out for a 0.35 micron TSMC CMOS process technology. In comparison with AOI logic, the proposed method enabled mean savings in power of 42.46%. With respect to AND-EXOR logic, the proposed method yielded power savings to the tune of 31.88%, while in comparison with AND-OR-EXOR level networks, average power savings of 33.23% were obtained.
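
A small illustrative check of the pairing condition described above: minterm pairs whose Hamming distance equals the number of inputs n (one reading of the HD = O(n) condition) are candidates for grouping. The weighted incidence matrix and binary value matrix of the actual algorithm are not reproduced here.

    # Illustrative minterm-pairing check for an n-input function; the paper's weighted
    # incidence matrix and binary value matrix are not reproduced here.
    from itertools import combinations

    def hamming_distance(a: int, b: int) -> int:
        return bin(a ^ b).count("1")

    def candidate_pairs(minterms, n_inputs):
        """Return minterm pairs whose Hamming distance equals the number of inputs."""
        return [(mj, mk) for mj, mk in combinations(minterms, 2)
                if hamming_distance(mj, mk) == n_inputs]

    # Hypothetical 3-input one-set: 0b000 and 0b111 differ in all three bits
    print(candidate_pairs([0b000, 0b011, 0b101, 0b111], n_inputs=3))  # [(0, 7)]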

Keywords: AOI logic, ESOP, AND-OR-EXOR, Incidence matrix, Hamming distance.

76 Snails and Fish as Pollution Biomarkers in Lake Manzala and Laboratory C: Laboratory Exposed Snails to Chemical Mixtures

Authors: Hanaa M. M. El-Khayat, Hoda Abdel-Hamid, Kadria M. A. Mahmoud, Hanan S. Gaber, Hoda M. A. Abu Taleb, Hassan E. Flefel

Abstract:

Snails are considered suitable diagnostic organisms for heavy metal-contaminated sites. Biomphalaria alexandrina snails are used in this work as pollution bioindicators after exposure to chemical mixtures consisting of heavy metals (HM), namely zinc (Zn), copper (Cu) and lead (Pb), and persistent organic pollutants, namely decabromodiphenyl ether 98% (D) and Aroclor 1254 (A). The impacts of these tested chemicals, individually and as mixtures, on liver and kidney functions, antioxidant enzymes, complete blood picture, and tissue histology were studied. Results showed that Cu was more toxic to snails than Zn and Pb, with LC50 values of 1.362, 213.198 and 277.396 ppm, respectively. Also, B. alexandrina snails exposed to the mixture of HM (¼ LC5 of Cu, Pb and Zn) showed the highest bioaccumulation of Cu and Zn in their whole tissue, the most significant increase in AST, ALT and ALP activities, and the highest significant levels of total protein, albumin and globulin. Results showed significant alterations in CAT activity in snail tissue extracts, while snail samples exposed to most experimental treatments showed a significant increase in GST activity. Snail samples exposed to HM mixtures showed a significant decrease in total hemocyte count, while snail samples exposed to mixtures containing A and D showed a significant increase in total hemocytes and hyalinocytes. Histopathological alterations in snail samples exposed to individual HM and their mixtures for 4 weeks included degeneration, edema, hypertrophy and vacuolation in the head-foot muscle, degeneration and necrotic changes in the digestive gland, and accumulation in most tested organs. Also, the hermaphrodite gland showed mature ova with irregular shape and a reduction in sperm number. In conclusion, the resulting damage and alterations in the studied parameters of B. alexandrina can be used as bioindicators of the presence of pollutants in its habitats.

Keywords: Biomphalaria, Zn, Cu, Pb, AST, ALT, ALP, total protein, albumin, globulin, CAT and Histopathology.

75 Alleviation of Adverse Effects of Salt Stress on Soybean (Glycine max. L.) by Using Osmoprotectants and Organic Nutrients

Authors: Ayman El Sabagh, Sobhy Sorour, Abd Elhamid Omar, Adel Ragab, Mohammad Sohidul Islam, Celaleddin Barutçular, Akihiro Ueda, Hirofumi Saneoka

Abstract:

Salinity is one of the major factors limiting crop production in arid environments. Despite its global importance, soybean production suffers from salinity stress, which damages plant development, so it is imperative to search for ways of enhancing the salinity tolerance of soybean plants. Therefore, in the current study we try to clarify the mechanisms that might be involved in the ameliorating effects of osmoprotectants, such as proline and glycine betaine, as well as compost application, on soybean plants grown under salinity stress. The experiment was conducted under greenhouse conditions at the Graduate School of Biosphere Science laboratory of Hiroshima University, Japan, in 2011. The experiment was designed as a split-split plot based on a randomized complete block design with four replications. The treatments can be summarized as follows: (i) salinity concentrations (0 and 15 mM), (ii) compost treatments (0 and 24 t ha-1), and (iii) exogenous proline and glycine betaine concentrations (0 mM and 25 mM each). Results indicated that salinity stress induced a reduction in the growth and physiological aspects (dry weight per plant, chlorophyll content, N and K+ content) of soybean plants compared with the unstressed plants. On the other hand, salinity stress led to increases in the electrolyte leakage ratio and in Na and proline contents. Regarding tolerance against salt stress, the improvement in salt tolerance resulting from proline, glycine betaine and compost was accompanied by improved K+ and proline accumulation, and by a significantly decreased electrolyte leakage ratio and Na+ content. These results clearly demonstrate that the harmful effects of salinity on the growth of soybean can be reduced. Consequently, exogenous osmoprotectants combined with compost can effectively address seasonal salinity stress problems and are a good strategy to increase the salinity resistance of soybean in drylands.

Keywords: Compost, glycine betaine, growth, proline, salinity tolerance, soybean.

74 Performance Analysis of Three Absorption Heat Pump Cycles, Full and Partial Loads Operations

Authors: B. Dehghan, T. Toppi, M. Aprile, M. Motta

Abstract:

The environmental concerns related to global warming and ozone layer depletion, along with the growing worldwide demand for heating and cooling, have brought increasing attention toward ecological and efficient Heating, Ventilation, and Air Conditioning (HVAC) systems. Furthermore, since space heating accounts for a considerable part of European primary/final energy use, it has been identified as one of the sectors with the most challenging targets for reductions in energy use. Heat pumps are commonly considered a technology able to contribute to the achievement of these targets. The current research focuses on the full-load operation and seasonal performance assessment of three gas-driven absorption heat pump cycles. To do this, investigations of gas-driven air-source ammonia-water absorption heat pump systems for small-scale space heating applications are presented. For each of the presented cycles, both the full-load performance under various temperature conditions and the seasonal performance are predicted by means of numerical simulations. It has been considered that small-capacity appliances are usually equipped with fixed-geometry restrictors, meaning that the solution mass flow rate is driven by the pressure difference across the associated restrictor valve. Results show that the gas utilization efficiency (GUE) of the cycles varies between 1.2 and 1.7 for both full and partial loads, and the vapor exchange (VX) cycle is found to achieve the highest efficiency. It is noticed that, for typical space heating applications, heat pumps operate over a wide range of capacities and thermal lifts. Thus, part of the novelty introduced in the paper is an investigation based on a seasonal performance approach, following the method prescribed in a recent European standard (EN 12309). The overall result is a modest variation in the seasonal performance of the analyzed cycles, from 1.427 (single-effect) to 1.493 (vapor-exchange).
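
EN 12309 prescribes a bin-based seasonal calculation; the sketch below only illustrates the general idea of weighting the gas utilization efficiency at different part-load points by the heat demand in each outdoor-temperature bin. The bin hours, loads and GUE values are hypothetical, not the paper's results or the standard's actual reference conditions.

    # Hypothetical bin-method sketch of a seasonal gas utilization efficiency (GUE):
    # the heat delivered in each temperature bin is weighted by the gas input needed there.
    bins = [
        # (hours in bin, heating load in kW, GUE at that operating point)
        (300, 8.0, 1.30),
        (700, 5.0, 1.50),
        (900, 2.5, 1.65),
    ]

    heat_delivered = sum(h * load for h, load, _ in bins)        # kWh of useful heat
    gas_input = sum(h * load / gue for h, load, gue in bins)     # kWh of gas, from GUE definition
    seasonal_gue = heat_delivered / gas_input
    print(round(seasonal_gue, 3))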

Keywords: Absorption cycles, gas utilization efficiency, heat pump, seasonal performance, vapor exchange cycle.

73 Enzyme Involvement in the Biosynthesis of Selenium Nanoparticles by Geobacillus wiegelii Strain GWE1 Isolated from a Drying Oven

Authors: Daniela N. Correa-Llantén, Sebastián A. Muñoz-Ibacache, Mathilde Maire, Jenny M. Blamey

Abstract:

The biosynthesis of nanoparticles by microorganisms, in contrast to chemical synthesis, is an environmentally friendly process with low energy requirements. In this investigation, we used the microorganism Geobacillus wiegelii, strain GWE1, an aerobic thermophile belonging to the genus Geobacillus, isolated from a drying oven. This microorganism has the ability to reduce selenite, evidenced by the change of color from colorless to red in the culture. The elemental composition of the particles was verified using transmission electron microscopy and energy-dispersive X-ray analysis. The nanoparticles have a defined spherical shape and selenium in the elemental state. Previous experiments showed that the presence of the whole microorganism was not necessary for the reduction of selenite. The results strongly suggested that an intracellular NADPH/NADH-dependent reductase mediates selenium nanoparticle synthesis under aerobic conditions. The enzyme was purified and identified by the mass spectrometry MALDI-TOF/TOF technique. The enzyme is a 1-pyrroline-5-carboxylate dehydrogenase. Histograms of nanoparticle sizes were obtained. The size distribution ranged from 40-160 nm, with 70% of nanoparticles less than 100 nm in size. Spectroscopic analysis showed that the nanoparticles are composed of elemental selenium. To analyse the effect of pH on the size and morphology of the nanoparticles, their synthesis was carried out at different pH values (4.0, 5.0, 6.0, 7.0, 8.0). For thermostability studies, samples were incubated at different temperatures (60, 80 and 100 ºC) for 1 h and 3 h. The size of all nanoparticles was less than 100 nm at pH 4.0; over 50% of the nanoparticles were less than 100 nm at pH 5.0; and at pH 6.0 and 8.0 over 90% of the nanoparticles were less than 100 nm in size. At neutral pH (7.0) the nanoparticles reached a size of around 120 nm and only 20% of them were less than 100 nm. When looking at the temperature effect, the nanoparticles did not show a significant difference in size when incubated for 0 to 3 h at 60 ºC. Meanwhile, at 80 °C the nanoparticle suspension lost its homogeneity. A change in size was observed from 0 h of incubation at 80 ºC, with a size range between 40-160 nm and 20% of the particles over 100 nm, while after 3 h of incubation the size range changed to 60-180 nm with 50% of the particles over 100 nm. At 100 °C the nanoparticles aggregated, forming nanorod structures. In conclusion, these results indicate that it is possible to modulate the size and shape of biologically synthesized nanoparticles by modulating pH and temperature.

Keywords: Genus Geobacillus, NADPH/NADH-dependent reductase, Selenium nanoparticles.

72 Phelipanche ramosa (L.) Pomel Control in Field Tomato Crop

Authors: Disciglio G., Lops F., Carlucci A., Gatta G., Tarantino A., Frabboni L., Carriero F., Cibelli F., Raimondo M. L., Tarantino E.

Abstract:

The tomato is a very important crop, whose cultivation in the Mediterranean basin is severely affected by the phytoparasitic weed Phelipanche ramosa. The semiarid regions of the world are considered the main areas where this parasitic weed is established, causing heavy infestation, as it is able to produce high numbers of seeds (up to 500,000 per plant) which remain viable for extended periods (more than 20 years). In this paper, the results obtained from eleven treatments for controlling this parasitic weed, including chemical, agronomic, biological and biotechnological methods, compared with the untreated control under two plowing depths (30 and 50 cm), are reported. A split-plot design with 3 replicates was adopted. In 2014 a trial was performed in Foggia province (southern Italy) on processing tomato (cv Docet) grown in a field infested by Phelipanche ramosa. Tomato seedlings were transplanted on May 5 on a clay-loam soil. During the growing cycle of the tomato crop, at 56-78 and 92 days after transplantation, the number of parasitic shoots emerged in each plot was recorded. At tomato harvest, on August 18, the major quantity-quality yield parameters were determined (marketable yield, mean fruit weight, dry matter, pH, soluble solids and color of fruits). All data were subjected to analysis of variance (ANOVA) and the means were compared by Tukey's test. None of the treatments studied provided complete control of Phelipanche ramosa. However, among the different methods tested, some of them, namely Fusarium, glyphosate, the Radicon biostimulant and the Red Setter tomato cv (an improved genotype obtained by TILLING technology), under deeper plowing (50 cm depth), proved to mitigate the virulence of the Phelipanche ramosa attacks. It is assumed that these effects can be improved by combining some of these treatments with each other, especially for a gradual and continuing reduction of the “seed bank” of the parasite in the soil.
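
A minimal sketch of the ANOVA and Tukey comparison mentioned above, using statsmodels on hypothetical emerged-shoot counts. The paper's actual split-plot structure, treatments and measurements are not reproduced; a simple one-way layout is assumed for illustration.

    # One-way ANOVA + Tukey HSD on hypothetical shoot counts (3 replicates per treatment).
    import numpy as np
    import pandas as pd
    from scipy import stats
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    rng = np.random.default_rng(1)
    treatments = ["untreated", "Fusarium", "glyphosate", "Red Setter"]
    data = pd.DataFrame({
        "treatment": np.repeat(treatments, 3),
        "shoots": np.concatenate([rng.poisson(lam, 3) for lam in (20, 12, 10, 8)]),
    })

    groups = [g["shoots"].values for _, g in data.groupby("treatment")]
    print(stats.f_oneway(*groups))                                # one-way ANOVA
    print(pairwise_tukeyhsd(data["shoots"], data["treatment"]))   # Tukey's test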

Keywords: Control methods, Phelipanche ramosa, tomato crop.

71 Some Issues of Measurement of Impairment of Non-Financial Assets in the Public Sector

Authors: Mariam Vardiashvili

Abstract:

The economic significance of the asset impairment process is quite large. Impairment reflects the reduction of the future economic benefits or service potential embodied in an asset. The assets owned by public sector entities bring economic benefits or are used for the delivery of free-of-charge services. Consequently, they are classified as cash-generating and non-cash-generating assets. IPSAS 21, Impairment of non-cash-generating assets, and IPSAS 26, Impairment of cash-generating assets, have been designed considering this specificity. When measuring the impairment of assets, it is important to select the relevant methods. For measurement of impaired non-cash-generating assets, IPSAS 21 recommends three methods: the depreciated replacement cost approach, the restoration cost approach, and the service units approach. The value in use of cash-generating assets (as per IPSAS 26) is measured by the discounted value of the cash flows expected to be received in the future. The article classifies assets in the public sector as non-cash-generating assets and cash-generating assets, and also deals with the factors which should be considered when evaluating the impairment of assets. The essence of impairment of non-financial assets and the methods of its measurement are formulated according to IPSAS 21 and IPSAS 26. The main emphasis is put on the different methods of measuring the value in use of impaired cash-generating and non-cash-generating assets and the methods of their selection. The traditional and the expected cash flow approaches for calculation of the discounted value are reviewed. The article also discusses the issues of recognition of impairment loss and its reflection in financial reporting. The article concludes that, regardless of the functional purpose of the impaired asset and whichever method is used for measuring it, the presentation of realistic information regarding the value of the assets should be ensured in the financial reporting. In the theoretical development of the issue, the methods of scientific abstraction, analysis and synthesis were used. The research was carried out with a systemic approach. The research process uses international accounting standards, theoretical research and publications of Georgian and foreign scientists.
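
A minimal sketch of the two discounting approaches reviewed above: a traditional single best-estimate cash flow stream versus a probability-weighted expected cash flow. The cash flows, scenario probabilities and discount rate are hypothetical, for illustration only.

    # Illustrative present-value comparison of the traditional and expected cash flow approaches.
    def present_value(cash_flows, rate):
        """Discount a list of yearly cash flows (year 1, 2, ...) at a single rate."""
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

    # Traditional approach: one most-likely stream of cash flows
    traditional = present_value([100.0, 100.0, 100.0], rate=0.05)

    # Expected cash flow approach: probability-weight several scenarios per year
    scenarios = [(0.3, 80.0), (0.5, 100.0), (0.2, 120.0)]   # (probability, cash flow)
    expected_cf = sum(p * cf for p, cf in scenarios)
    expected = present_value([expected_cf] * 3, rate=0.05)

    print(round(traditional, 2), round(expected, 2))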

Keywords: Non-cash-generating assets, cash-generating assets, recoverable value, recoverable service amount, value in use.

70 Energy Efficiency Approach to Reduce Costs of Ownership of Air Jet Weaving

Authors: Corrado Grassi, Achim Schröter, Yves Gloy, Thomas Gries

Abstract:

Air jet weaving is the most productive, but also the most energy-consuming, weaving method. Increasing energy costs and environmental impact are a constant challenge for the manufacturers of weaving machines. Current technological developments are concerned with low energy costs, low environmental impact, high productivity, and constant product quality. The high energy consumption of the method can be ascribed to the high demand for compressed air. An energy efficiency method is applied to air jet weaving technology. This method identifies and classifies the main relevant energy consumers and processes from the exergy point of view, and it leads to the identification of energy efficiency potentials during the weft insertion process. Starting from the design phase, energy efficiency is considered the central requirement to be satisfied. The initial phase of the method consists of an analysis of the state of the art of the main weft insertion components in order to prioritize the components and processes with high energy demand. The identified major components are investigated to reduce the high energy demand of the weft insertion process. During the interaction of the flow field coming from the relay nozzles within the profiled reed, only a minor part of the stream actually accelerates the weft yarn, resulting in large energy inefficiency. Different tools, such as FEM analysis, CFD simulation models and experimental analysis, are used in order to obtain a more energy-efficient design of the components involved in the filling insertion. A different concept for the metal strip of the profiled reed is developed. The developed metal strip allows a reduction of the machine's energy consumption. Based on a parametric and aerodynamic study, the designed reed transmits higher values of the flow power to the filling yarn. The innovative reed fulfills both the requirement of raising energy efficiency and compliance with the weaving constraints.

Keywords: Air jet weaving, aerodynamic simulation, energy efficiency, experimental measurements, power costs, weft insertion.

69 Experimental Investigation of the Impact of Biosurfactants on Residual-Oil Recovery

Authors: S. V. Ukwungwu, A. J. Abbas, G. G. Nasr

Abstract:

The increasing prices of natural gas and oil, with the attendant increase in energy demand on world markets in recent years, have stimulated interest in recovering residual oil saturation across the globe. In order to meet energy security needs, efforts have been made to develop new technologies for enhancing the recovery of oil and gas, utilizing techniques like CO2 flooding, water injection, hydraulic fracturing, surfactant flooding, etc. Surfactant flooding optimizes production but poses risks to the environment due to the toxic nature of the surfactants. Building on proven work that has utilized other types of bacteria to produce biosurfactants for enhancing oil recovery, this research uses a technique that combines biosurfactants to achieve enhanced oil recovery (EOR) through lowering the interfacial tension and contact angle. In this study, three biosurfactants were produced from three Bacillus species from freeze-dried cultures using sucrose 3% (w/v) as the carbon source. Two of the produced biosurfactants were screened with the TEMCO Pendant Drop Image Analysis system for reduction in interfacial tension (IFT) and contact angle. The interfacial tension was greatly reduced, from 56.95 mN.m-1 to 1.41 mN.m-1, when biosurfactants in the cell-free culture of Bacillus licheniformis were used, compared to 4.83 mN.m-1 for the cell-free culture of Bacillus subtilis. As a result, the cell-free culture of Bacillus licheniformis changed the wettability in the contact angle measurement to more water-wet, as the angle decreased from 130.75° to 65.17°. The influence of microbial treatment on crushed rock samples was also observed by qualitative wettability experiments. Samples treated with biosurfactants remained in the aqueous phase, indicating a water-wet system. These results could prove that biosurfactants can effectively change the chemistry of the wetting conditions against diverse surfaces, providing a desirable condition for efficient oil transport and in this way serving as a mechanism for EOR. The environmentally friendly nature of biosurfactants gives their industrial applications important advantages over chemically synthesized surfactants, with various possible structures, low toxicity, eco-friendliness and biodegradability.

Keywords: Bacillus, biosurfactant, enhanced oil recovery, residual oil, wettability.

68 An Experimental Investigation of Petrodiesel and Cotton Seed Biodiesel (CSOME) in Diesel Engine

Authors: P. V. Rao, Jaedaa Abdulhamid

Abstract:

Biodiesel is widely investigated to solve the twin problems of fossil fuel depletion and environmental degradation. The main objective of the present work is to compare the performance, emissions, and combustion characteristics of biodiesel derived from cotton seed oil in a diesel engine with the baseline results of petrodiesel fuel. Tests were conducted on a single-cylinder, four-stroke CIDI diesel engine at a speed of 1500 rpm and a fixed compression ratio of 17.5 under different load conditions. The performance parameters evaluated include brake thermal efficiency, brake specific fuel consumption, brake power, indicated mean effective pressure, mechanical efficiency, and exhaust gas temperature. Regarding the combustion study, cylinder pressure, rate of pressure rise, net heat release rate, cumulative heat release, mean gas temperature, mass fraction burned, and fuel line pressure were evaluated. The emission parameters such as carbon monoxide, carbon dioxide, unburnt hydrocarbons, oxides of nitrogen, and smoke opacity were also measured by a smoke meter and an exhaust gas analyzer and compared with the baseline results. The brake thermal efficiency of cotton seed oil methyl ester (CSOME) was lower than that of petrodiesel, and the brake specific fuel consumption was found to be higher. However, biodiesel resulted in a reduction of carbon dioxide, unburnt hydrocarbons, and smoke opacity at the expense of nitrogen oxides. Carbon monoxide emissions for biodiesel were higher at maximum output power. It was found that the combustion characteristics of cotton seed oil methyl ester closely followed those of standard petrodiesel. The experimental results suggest that biodiesel derived from cotton seed oil can be used as a good substitute for petrodiesel fuel in a conventional diesel engine without any modification.
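
A minimal sketch of how two of the performance parameters listed above, brake specific fuel consumption (BSFC) and brake thermal efficiency (BTE), are computed from measured quantities. The fuel flow, brake power and heating value below are hypothetical, not the test data.

    # Illustrative BSFC and BTE calculation; numbers are hypothetical.
    def bsfc_g_per_kwh(fuel_flow_kg_per_h, brake_power_kw):
        return 1000.0 * fuel_flow_kg_per_h / brake_power_kw

    def brake_thermal_efficiency(brake_power_kw, fuel_flow_kg_per_h, lhv_mj_per_kg):
        fuel_power_kw = fuel_flow_kg_per_h / 3600.0 * lhv_mj_per_kg * 1000.0
        return brake_power_kw / fuel_power_kw

    # Hypothetical operating point: 3.5 kW brake power, 1.0 kg/h of fuel, LHV ~ 37 MJ/kg
    print(bsfc_g_per_kwh(1.0, 3.5))                    # ~286 g/kWh
    print(brake_thermal_efficiency(3.5, 1.0, 37.0))    # ~0.34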

Keywords: Diesel engine, Cotton seed, Biodiesel, performance, combustion, emissions.

67 Flow Duration Curves and Recession Curves Connection through a Mathematical Link

Authors: Elena Carcano, Mirzi Betasolo

Abstract:

This study helps Public Water Bureaus give reliable answers to water concession requests. Rapidly increasing water requests can be supported provided that further uses of a river course are not totally compromised and environmental features are protected as well. Strictly speaking, a water concession can be considered a continuous drawing from the source and causes a mean annual streamflow reduction. Therefore, deciding whether a water concession is appropriate or inappropriate seems to be easily solved by comparing the generic demand to the mean annual streamflow value available. Still, the immediate shortcoming of such a comparison is that streamflow data are available only for a few catchments and, most often, limited to specific sites. Moreover, comparing the generic water demand to the mean daily discharge is far from being completely satisfactory, since the mean daily streamflow is greater than the water withdrawal for a long period of the year. Consequently, such a comparison appears to be of little significance for preserving the quality and the quantity of the river. In order to overcome this limitation, this study aims to complete the information provided by flow duration curves by introducing a link between Flow Duration Curves (FDCs) and recession curves, and aims to show the chronological sequence of flows with a particular focus on low-flow data. The analysis is carried out on 25 catchments located in North-Eastern Italy for which daily data are available. The results identify groups of catchments as hydrologically homogeneous, having the lower part of the FDCs (the streamflow interval corresponding to durations between 300 and 335 days, namely Q(300) to Q(335)) smoothly reproduced by a common recession curve. In conclusion, the results are useful to provide more reliable answers to water requests, especially for those catchments which show a similar hydrological response, and can be used for a focused regionalization approach on low-flow data. A mathematical link between flow duration curves and recession curves is herein provided, thus furnishing flow duration curve information with a temporal sequence of data. In such a way, by introducing assumptions on recession curves, the chronological sequence of low-flow data can also be attributed to FDCs, which are known to lack this information by nature.
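
A minimal sketch of the two objects linked above: a flow duration curve built from daily discharges, and a simple exponential recession curve attached to its low-flow tail. The synthetic daily series, the exponential form and the recession constant are assumptions for illustration, not the paper's model.

    # Illustrative FDC and exponential recession Q(t) = Q0 * exp(-t/k) on synthetic data.
    import numpy as np

    rng = np.random.default_rng(2)
    daily_q = rng.lognormal(mean=1.0, sigma=0.8, size=365)     # hypothetical daily flows (m3/s)

    # FDC: flows sorted in decreasing order against exceedance duration in days
    sorted_q = np.sort(daily_q)[::-1]
    def fdc_value(duration_days):
        """Flow equalled or exceeded on 'duration_days' days of the year, e.g. Q(300)."""
        return sorted_q[duration_days - 1]

    # Exponential recession: a chronological low-flow sequence spanning the FDC tail
    k = 25.0                                                   # hypothetical recession constant (days)
    t = np.arange(0, 35)
    recession = fdc_value(300) * np.exp(-t / k)                # roughly covers Q(300) ... Q(335)

    print(fdc_value(300), fdc_value(335), recession[-1])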

Keywords: Chronological sequence of discharges, recession curves, streamflow duration curves, water concession.

66 Library Aware Power Conscious Realization of Complementary Boolean Functions

Authors: Padmanabhan Balasubramanian, C. Ardil

Abstract:

In this paper, we consider the problem of logic simplification for a special class of logic functions, namely complementary Boolean functions (CBF), targeting low power implementation using static CMOS logic style. The functions are uniquely characterized by the presence of terms, where for a canonical binary 2-tuple, D(mj) ∪ D(mk) = { } and therefore we have | D(mj) ∪ D(mk) | = 0 [19]. Similarly, D(Mj) ∪ D(Mk) = { } and hence | D(Mj) ∪ D(Mk) | = 0. Here, 'mk' and 'Mk' represent a minterm and maxterm respectively. We compare the circuits minimized with our proposed method with those corresponding to the factored Reed-Muller (f-RM) form, the factored Pseudo Kronecker Reed-Muller (f-PKRM) form, and the factored Generalized Reed-Muller (f-GRM) form. We have opted for algebraic factorization of the Reed-Muller (RM) form and its different variants, using the factorization rules of [1], as it is simple and requires much less CPU execution time compared to Boolean factorization operations. This technique has enabled us to greatly reduce the literal count as well as the gate count needed for such RM realizations, which are generally prone to consuming more cells and subsequently more power. However, this leads to a drawback in terms of the design-for-test attribute associated with the various RM forms. Though we still preserve the definition of those forms, viz. realizing such functionality with only select types of logic gates (AND gates and XOR gates), the structural integrity of the logic levels is not preserved. This would consequently alter the testability properties of such circuits, i.e. it may increase, decrease or maintain the number of test input vectors needed for their exhaustive testability, subsequently affecting their generalized test vector computation. We do not consider the issue of design-for-testability here but instead focus on the power consumption of the final logic implementation, after realization with a conventional CMOS process technology (0.35 micron TSMC process). The quality of the resulting circuits, evaluated on the basis of an established cost metric, viz. power consumption, demonstrates average savings of 26.79% for the samples considered in this work, besides reductions in the number of gates and input literals of 39.66% and 12.98% respectively, in comparison with other factored RM forms.

Keywords: Reed-Muller forms, Logic function, Hamming distance, Algebraic factorization, Low power design.

65 On-Line Geometrical Identification of Reconfigurable Machine Tool using Virtual Machining

Authors: Alexandru Epureanu, Virgil Teodor

Abstract:

One of the main research directions in the CAD/CAM machining area is the reduction of machining time. Feedrate scheduling is one of the advanced techniques that allows keeping the uncut chip area constant and, as a consequence, keeping the main cutting force constant. There are two main ways to optimize the feedrate. The first consists of cutting force monitoring, which requires complex equipment for force measurement, after which the feedrate is set according to the cutting force variation. The second way is to optimize the feedrate by keeping the material removal rate constant with respect to the cutting conditions. In this paper, a new approach is proposed using an extended database that replaces the system model. The feedrate schedule is determined based on the identification of the reconfigurable machine tool and on the determination of the feed value with respect to the uncut chip section area, the contact length between tool and blank, and the geometrical roughness. The first stage consists of monitoring the blank and the tool to determine the actual profiles. The next stage is the determination of the programmed tool path that allows the target profile of the piece to be obtained. The graphic representation environment models the tool and blank regions and, after this, the tool model is positioned with respect to the blank model according to the programmed tool path. For each of these positions, the geometrical roughness value, the uncut chip area and the contact length between tool and blank are calculated. Each of these parameters is compared with its admissible value and, according to the result, the feed value is established. We can consider that this approach has the following advantages: in the case of complex cutting processes, the prediction of the cutting force is possible; the real cutting profile, which has deviations from the theoretical profile, is considered; limitation of the blank-tool contact length is possible; and it is possible to correct the programmed tool path so that the target profile can be obtained. Applying this method, data sets are obtained which allow feedrate scheduling so that the uncut chip area is constant and, as a result, the cutting force is constant, which allows the machine tool to be used more efficiently and the machining time to be reduced.
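
A schematic sketch of the feed-selection loop described above: for a given tool position, the uncut chip area, tool-blank contact length and geometrical roughness are compared with admissible values and the feed is reduced until all constraints are met. The three compute_* models and the limits are crude placeholders, not the paper's actual geometric calculations.

    # Placeholder models for the three checked parameters (not the paper's formulations).
    def compute_uncut_chip_area(depth_of_cut, feed):
        return depth_of_cut * feed                      # area ~ depth x feed (simplified)

    def compute_contact_length(depth_of_cut, feed):
        return 1.2 * depth_of_cut + 0.5 * feed          # simplified contact-length model

    def compute_geometric_roughness(nose_radius, feed):
        return feed ** 2 / (8.0 * nose_radius)          # classical kinematic estimate f^2/(8r)

    def schedule_feed(depth_of_cut, nose_radius, limits, f_max=0.4, f_min=0.02, step=0.02):
        """Largest feed (mm/rev) keeping chip area, contact length and roughness admissible."""
        feed = f_max
        while feed > f_min:
            ok = (compute_uncut_chip_area(depth_of_cut, feed) <= limits["chip_area"]
                  and compute_contact_length(depth_of_cut, feed) <= limits["contact_length"]
                  and compute_geometric_roughness(nose_radius, feed) <= limits["roughness"])
            if ok:
                return feed
            feed -= step                                # otherwise reduce the feed and re-check
        return f_min

    limits = {"chip_area": 0.3, "contact_length": 2.0, "roughness": 0.004}
    print(schedule_feed(depth_of_cut=1.5, nose_radius=0.8, limits=limits))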

Keywords: Reconfigurable machine tool, system identification, uncut chip area, cutting conditions scheduling.

64 Developing Manufacturing Process for the Graphene Sensors

Authors: Abdullah Faqihi, John Hedley

Abstract:

Biosensors play a significant role in the healthcare sector and in scientific and technological progress. Developing electrodes that are easy to manufacture and deliver better electrochemical performance is advantageous for diagnostics and biosensing. They can be implemented extensively in various analytical tasks such as drug discovery, food safety, medical diagnostics, process control, security and defence, in addition to environmental monitoring. The development of biosensors aims to create high-performance electrochemical electrodes for diagnostics and biosensing. A biosensor is a device that inspects the biological and chemical reactions generated by a biological sample. A biosensor carries out biological detection via a linked transducer and transmits the biological response as an electrical signal; stability, selectivity, and sensitivity are the dynamic and static characteristics that affect and dictate the quality and performance of biosensors. In this research, an experimental study of the laser scribing technique for processing graphene oxide inside a vacuum chamber is presented. The processing of graphene oxide (GO) was achieved using the laser scribing technique. The effect of laser scribing on the reduction of GO was investigated under two conditions: atmosphere and vacuum. A GO solution was coated onto a LightScribe DVD. The laser scribing technique was applied to reduce the GO layers and generate rGO. The morphological structures of rGO and GO were visualised and examined using scanning electron microscopy (SEM) and Raman spectroscopy. The first electrode was a traditional graphene-based electrode model, made under normal atmospheric conditions, whereas the second model was a graphene electrode fabricated under vacuum using a vacuum chamber. The purpose was to control the vacuum conditions, such as the air pressure and the temperature, during the fabrication process. The parameters assessed include the layer thickness and the processing environment. The results presented show high accuracy and repeatability, achieving low-cost productivity.

Keywords: Laser scribing, LightScribe DVD, graphene oxide, scanning electron microscopy.

63 A Risk Assessment Tool for the Contamination of Aflatoxins on Dried Figs based on Machine Learning Algorithms

Authors: Kottaridi Klimentia, Demopoulos Vasilis, Sidiropoulos Anastasios, Ihara Diego, Nikolaidis Vasileios, Antonopoulos Dimitrios

Abstract:

Aflatoxins are highly poisonous and carcinogenic compounds produced by species of the genus Aspergillus that can infect a variety of agricultural foods, including dried figs. Biological and environmental factors, such as the population, pathogenicity and aflatoxinogenic capacity of the strains, and the topography, soil and climate parameters of the fig orchards, are believed to have a strong effect on aflatoxin levels. Existing methods for aflatoxin detection and measurement, such as high-performance liquid chromatography (HPLC) and enzyme-linked immunosorbent assay (ELISA), can provide accurate results, but the procedures are usually time-consuming, sample-destructive and expensive. Predicting aflatoxin levels prior to crop harvest is useful for minimizing the health and financial impact of a contaminated crop. Consequently, there is interest in developing a tool that predicts aflatoxin levels based on topography and soil analysis data of fig orchards. This paper describes the development of a risk assessment tool for the contamination of dried figs with aflatoxins, based on the location and altitude of the fig orchards, the population of the fungus Aspergillus spp. in the soil, and soil parameters such as pH, saturation percentage (SP), electrical conductivity (EC), organic matter, particle size analysis (sand, silt, clay), concentration of the exchangeable cations (Ca, Mg, K, Na), extractable P and trace elements (B, Fe, Mn, Zn and Cu), by employing machine learning methods. In particular, our proposed method integrates three machine learning techniques, i.e., dimensionality reduction on the original dataset (Principal Component Analysis), metric learning (Mahalanobis Metric for Clustering) and the K-nearest Neighbors learning algorithm (KNN), into an enhanced model, with mean performance equal to 85% in terms of the Pearson Correlation Coefficient (PCC) between observed and predicted values.
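
A minimal sketch of the three-stage pipeline named above: PCA for dimensionality reduction, a Mahalanobis distance, and k-nearest-neighbour prediction. Here the Mahalanobis matrix is simply the inverse covariance of the PCA scores, standing in for the learned MMC metric of the paper; the features, target values and split are synthetic placeholders.

    # Illustrative PCA + Mahalanobis-distance KNN regression on synthetic data.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(3)
    X = rng.normal(size=(120, 15))                 # hypothetical soil/topography features
    y = rng.gamma(shape=2.0, scale=1.5, size=120)  # hypothetical aflatoxin levels

    pca = PCA(n_components=5).fit(X)
    Z = pca.transform(X)

    VI = np.linalg.inv(np.cov(Z, rowvar=False))    # inverse covariance as the Mahalanobis matrix
    knn = KNeighborsRegressor(n_neighbors=5, metric="mahalanobis", metric_params={"VI": VI})
    knn.fit(Z[:100], y[:100])

    pred = knn.predict(Z[100:])
    print(np.corrcoef(pred, y[100:])[0, 1])        # Pearson correlation on held-out samples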

Keywords: Aflatoxins, Aspergillus spp., dried figs, k-nearest neighbors, machine learning, prediction.

62 Innovation in “Low-Tech” Industries: Portuguese Footwear Industry

Authors: António Marques, Graça Guedes

Abstract:

The Portuguese footwear industry has had, in the last five years, a remarkable performance in export values, the trade balance and other economic indicators. After a long period of difficulties, with a strong reduction in companies and employees from 1994 until 2009, the Portuguese footwear industry changed its strategy and is now a success case among the international players of footwear. Only the Italian industry sells footwear at a higher value than the Portuguese, and the distance between them is decreasing year by year. This paper analyses how the Portuguese footwear companies innovate and make innovation, according to the classification proposed by the Oslo Manual. It also analyses the strategy followed in the innovation process and shows the linkage between the type of innovation and the innovation strategy. The research methodology was qualitative and the strategy for data collection was the case study. The qualitative data were analyzed with the MAXQDA software. The economic results of the footwear companies studied show differences between all of them, and these differences are related to the innovation strategy adopted. The companies focused on product and marketing innovation, oriented to their target market, have higher "turnover per worker" ratios than the companies focused on process innovation. However, all the footwear companies in this "low-tech" industry create value and contributed to a positive foreign trade of 1,310 million euros in 2013. The growth strategies implemented involve the participation of the sectorial organizations in several innovative projects. It is obvious that cooperation between all of them is a critical element in the performance achieved by the companies and the innovation observed. The Portuguese footwear sector has had in recent years an excellent performance (economic results, export values, trade balance, brands and international image), and this performance is strongly related to the innovation strategy followed, the type of innovation and the networks in the cluster. A simplified model, called the "Ace of Diamonds", is proposed by the authors; it explains how this performance was reached by the seven companies that participated in the study (two of them are the leaders in the sector) and whether this model can be used in other traditional and "low-tech" industries.

Keywords: Footwear industry, innovation strategy, low-tech industry, Oslo Manual.

61 The Use of Artificial Intelligence in Digital Forensics and Incident Response in a Constrained Environment

Authors: Dipo Dunsin, Mohamed C. Ghanem, Karim Ouazzane

Abstract:

Digital investigators often have a hard time spotting evidence in digital information. It has become hard to determine which source of proof relates to a specific investigation. A growing concern is that the various processes, technologies, and specific procedures used in the digital investigation are not keeping up with criminal developments. Therefore, criminals are taking advantage of these weaknesses to commit further crimes. In digital forensics investigations, artificial intelligence (AI) is invaluable in identifying crime. Providing objective data and conducting an assessment is the goal of digital forensics and digital investigation, which will assist in developing a plausible theory that can be presented as evidence in court. This research paper aims at developing a multiagent framework for digital investigations using specific intelligent software agents (ISAs). The agents communicate to address particular tasks jointly and keep the same objectives in mind during each task. The rules and knowledge contained within each agent depend on the investigation type. A criminal investigation is classified quickly and efficiently using the case-based reasoning (CBR) technique. The proposed framework is implemented using the Java Agent Development Framework, Eclipse, a Postgres repository, and a rule engine for agent reasoning. The proposed framework was tested using the Lone Wolf image files and datasets. Experiments were conducted using various sets of ISAs and VMs. There was a significant reduction in the time taken for the Hash Set Agent to execute. As a result of loading the agents, 5% of the time was lost, as the File Path Agent prescribed deleting 1,510, while the Timeline Agent found multiple executable files. In comparison, the integrity check carried out on the Lone Wolf image file using a digital forensic toolkit took approximately 48 minutes (2,880 s), whereas the MADIK framework accomplished this in 16 minutes (960 s). The framework is integrated with Python, allowing for further integration of other digital forensic tools, such as AccessData Forensic Toolkit (FTK), Wireshark, Volatility, and Scapy.
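
A minimal illustration of the case-based reasoning idea mentioned above: retrieve the most similar past case and reuse its classification for a new investigation. The case base, attributes and similarity function are invented for illustration and are not the framework's actual rules or agents.

    # Toy CBR retrieval: classify a new case by its most similar stored case.
    def similarity(case_a, case_b):
        """Fraction of matching attributes between two cases."""
        keys = case_a.keys() & case_b.keys()
        return sum(case_a[k] == case_b[k] for k in keys) / len(keys)

    case_base = [
        ({"media": "disk image", "artefact": "executables", "network": False}, "malware"),
        ({"media": "disk image", "artefact": "documents",   "network": False}, "fraud"),
        ({"media": "pcap",       "artefact": "sessions",    "network": True},  "intrusion"),
    ]

    def classify(new_case):
        best_case, label = max(case_base, key=lambda cb: similarity(cb[0], new_case))
        return label, similarity(best_case, new_case)

    print(classify({"media": "disk image", "artefact": "executables", "network": True}))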

Keywords: Artificial intelligence, computer science, criminal investigation, digital forensics.

60 Circular Economy Maturity Models: A Systematic Literature Review

Authors: D. Kreutzer, S. Müller-Abdelrazeq, I. Isenhardt

Abstract:

Resource scarcity, energy transition and the planned climate neutrality pose enormous challenges for manufacturing companies. In order to achieve these goals and a holistic sustainable development, the European Union has listed the circular economy as part of the Circular Economy Action Plan. In addition to a reduction in resource consumption, reduced emissions of greenhouse gases and a reduced volume of waste, the principles of the circular economy also offer enormous economic potential for companies, such as the generation of new circular business models. However, many manufacturing companies, especially small and medium-sized enterprises, do not have the necessary capacity to plan their transformation. They need support and strategies on the path to circular transformation because this change affects not only production but also the entire company. Maturity models offer an approach to determine the current status of companies’ transformation processes. In addition, companies can use the models to identify transformation strategies and thus promote the transformation process. While maturity models are established in other areas, e.g., IT or project management, only a few circular economy maturity models can be found in the scientific literature. The aim of this paper is to analyze the identified maturity models of the circular economy through a systematic literature review (SLR) and, besides other aspects, to check their completeness as well as their quality. For this purpose, circular economy maturity models at the company's (micro) level were identified from the literature, compared, and analyzed with regard to their theoretical and methodological structure. A specific focus was placed, on the one hand, on the analysis of the business units considered in the respective models and, on the other hand, on the underlying metrics and indicators in order to determine the individual maturity level of the entire company. The results of the literature review show, for instance, a significant difference in the number and types of indicators as well as their metrics. For example, most models use subjective indicators and very few objective indicators in their surveys. It was also found that there are rarely well-founded thresholds between the levels. Based on the generated results, concrete ideas and proposals for a research agenda in the field of circular economy maturity models are made.

Keywords: Circular economy, maturity model, maturity assessment, systematic literature review.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 221
59 Pilot Scale Investigation on the Removal of Pollutants from Secondary Effluent to Meet Botswana Irrigation Standards Using Roughing and Slow Sand Filters

Authors: Moatlhodi Wise Letshwenyo, Lesedi Lebogang

Abstract:

Botswana is an arid country that needs to start reusing wastewater as part of its water security plan. Pilot-scale slow sand filtration in combination with a roughing filter was investigated for the treatment of effluent from Botswana International University of Science and Technology to meet Botswana irrigation standards. The system was operated at hydraulic loading rates of 0.04 m/hr and 0.12 m/hr. The results show that the system was able to reduce turbidity from 262 Nephelometric Turbidity Units (NTU) to between 0 and 18 NTU, below the threshold limit of 30 NTU; the overall removal efficacy ranged between 61% and 100%. Removal efficiencies for suspended solids, Biochemical Oxygen Demand, and Chemical Oxygen Demand averaged 42.6%, 45.5%, and 77%, respectively, all within irrigation standards. Other physico-chemical parameters were within irrigation standards except for bicarbonate ion, which averaged 297.7±44 mg L-1 in the influent and 196.22±50 mg L-1 in the effluent, a reduction of 34.1% by the system but still above the limit of 92 mg L-1. Total coliforms, faecal coliforms, and Escherichia coli in the effluent initially averaged 1.1, 0.5, and 1.3 log counts, respectively, compared with corresponding influent log counts of 3.4, 2.7, and 4.1. As time passed, it was observed that only the roughing filter was able to reach reductions of 97.5%, 86%, and 100% for faecal coliforms, Escherichia coli, and total coliforms, respectively. These organism numbers were observed to increase in the slow sand filter effluent, suggesting multiplication within the tank. The water quality index value of 22.79 for the physico-chemical parameters suggests that the effluent is of excellent quality and can be used for irrigation purposes; however, the water quality index value for the microbial parameters (1,820) renders the quality unsuitable for irrigation. It is concluded that slow sand filtration in combination with a roughing filter is a viable option for the treatment of secondary effluent for reuse purposes, although further studies should be conducted, especially on the removal of microbial contaminants, using the system.
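
For readers unfamiliar with the two removal metrics quoted above, the short sketch below shows how percent removal (physico-chemical parameters) and log reduction (indicator organisms) are computed. The bicarbonate and E. coli figures are taken from the abstract; everything else is illustrative.

```python
# Percent removal and log reduction, the two metrics reported in the abstract.

def percent_removal(influent, effluent):
    """Removal efficiency as a percentage of the influent concentration."""
    return 100.0 * (influent - effluent) / influent

def log_reduction(influent_log, effluent_log):
    """Counts are already expressed as log10 values in the abstract."""
    return influent_log - effluent_log

print(f"Bicarbonate removal: {percent_removal(297.7, 196.22):.1f} %")      # ~34.1 %
print(f"E. coli log reduction: {log_reduction(4.1, 1.3):.1f} log10")
print(f"Equivalent removal: {100 * (1 - 10 ** -log_reduction(4.1, 1.3)):.1f} %")
```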

Keywords: Irrigation, roughing filter, slow sand filter, turbidity, water quality index.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 874
58 Tactile Sensory Digit Feedback for Cochlear Implant Electrode Insertion

Authors: Yusuf Bulale, Mark Prince, Geoff Tansley, Peter Brett

Abstract:

The cochlear implant (CI), whose implantation has become a routine procedure over recent decades, is an electronic device that provides a sense of sound for patients who are severely or profoundly deaf. The optimal success of implantation depends on the electrode technology and deep insertion techniques. However, the manual insertion procedure may cause mechanical trauma, which can lead to severe destruction of the delicate intracochlear structure. Accordingly, future improvement of cochlear electrode insertion requires reducing the excessive forces applied during implantation, which cause tissue damage and trauma. This study examined the tool-tissue interaction of a large-scale prototype digit embedded with a distributive tactile sensor, based upon a cochlear electrode, and a large-scale cochlea phantom simulating the human cochlea, which could inform the requirements for a small-scale digit. The digit, with distributive tactile sensors embedded in a silicon substrate, was inserted into the cochlea phantom to measure digit/phantom interaction and the position of the digit, in order to minimize tissue damage and trauma during cochlear electrode insertion. The digit provided tactile information from the digit-phantom insertion interaction, such as contact status, tip penetration, obstacles, relative shape and location, contact orientation, and multiple contacts. The tests demonstrated that even devices of such relatively simple design and low cost have the potential to improve cochlear implant surgery and other lumen mapping applications by providing tactile sensory feedback and thus controlling the insertion through sensing and control of the tip of the implant. With this approach, the surgeon could minimize tissue damage and potential damage to the delicate structures within the cochlea caused by current manual electrode insertion during cochlear implantation. The approach can also be applied to other minimally invasive surgery applications, as well as diagnosis and path navigation procedures.
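
To make the kind of feedback described above concrete, the following sketch interprets readings from a distributed tactile sensor array along the digit, reporting contact status, possible tip contact, and multiple contacts. The channel count, threshold, and readings are hypothetical; the actual digit uses a silicon-substrate sensor and its own processing, which the paper does not specify in code.

```python
# Illustrative interpretation of a distributive tactile sensor array along the
# digit: detect contact and locate the strongest contact channel. Values are
# hypothetical and stand in for the silicon-substrate sensor's real outputs.
CONTACT_THRESHOLD = 0.15  # normalised output above which a channel is "in contact"

def interpret(readings):
    """readings: list of normalised outputs from tip (index 0) to base."""
    in_contact = [i for i, r in enumerate(readings) if r > CONTACT_THRESHOLD]
    if not in_contact:
        return {"contact": False}
    strongest = max(in_contact, key=lambda i: readings[i])
    return {
        "contact": True,
        "tip_contact": 0 in in_contact,           # possible tip penetration or obstacle
        "multiple_contacts": len(in_contact) > 1,
        "strongest_channel": strongest,
    }

print(interpret([0.02, 0.05, 0.31, 0.44, 0.12]))  # contact near mid-digit
```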

Keywords: Cochlear electrode insertion, distributive tactile sensory feedback information, flexible digit, minimally invasive surgery, tool/tissue interaction.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2179
57 Steady State Rolling and Dynamic Response of a Tire at Low Frequency

Authors: Md Monir Hossain, Anne Staples, Kuya Takami, Tomonari Furukawa

Abstract:

Tire noise has a significant impact on ride quality and vehicle interior comfort, even at low frequency. Reducing tire noise is especially important due to strict state and federal environmental regulations. The primary sources of tire noise are low-frequency structure-borne noise and the noise that originates from the release of air trapped between the tire tread and the road surface during each revolution of the tire. The frequency response of the tire differs at low and high frequencies: at low frequencies, tension and bending moment are dominant, while at higher frequencies the internal structure and local deformation dominate. Here, we analyze tire response in terms of deformation and rolling velocity at low revolution frequency. An Abaqus finite element model is used to calculate the static and dynamic response of a rolling tire under different rolling conditions. The natural frequencies and mode shapes of the deformed tire are calculated with the FEA package, and a subspace-based steady-state dynamic analysis computes the dynamic response of the tire subjected to harmonic excitation. The analysis examined the dynamic response at the road nodes (the contact point between tire and road surface) and the side nodes of a static and a rolling tire when the tire was excited with a 200 N vertical load over a frequency range of 20 to 200 Hz. The results show that frequency has little effect on tire deformation up to 80 Hz, but between 80 and 200 Hz the radial and lateral displacement components of the road and side nodes exhibit significant oscillation. For the static analysis, the fluctuation was sharp and frequent and decreased with frequency; in contrast, the fluctuation was periodic in nature for the dynamic response of the rolling tire. In addition to the dynamic analysis, a steady-state rolling analysis was performed on the tire traveling at ground velocity with constant angular motion. The purpose of this computation was to demonstrate the effect of rotating motion on deformation and rolling velocity with respect to a fixed Newtonian reference point. The analysis showed significant variation in deformation and rolling velocity due to centrifugal and Coriolis acceleration with respect to a fixed Newtonian point on the ground.
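
As a much simplified analogue of a steady-state harmonic response sweep (not the subspace-based Abaqus analysis used in the paper), the sketch below evaluates the displacement amplitude of a single-degree-of-freedom system over the 20-200 Hz excitation range. The mass, stiffness, damping ratio, and 200 N load amplitude are illustrative assumptions chosen only to show the shape of such a sweep.

```python
# Single-DOF forced harmonic response over 20-200 Hz; an illustrative analogue
# of a steady-state dynamic sweep, not the tire FEA model from the paper.
import math

m, k, zeta = 9.0, 2.0e6, 0.05          # mass [kg], stiffness [N/m], damping ratio
F0 = 200.0                              # harmonic load amplitude [N]
wn = math.sqrt(k / m)                   # natural angular frequency [rad/s]

def amplitude(f_hz):
    """Steady-state displacement amplitude at excitation frequency f_hz."""
    w = 2 * math.pi * f_hz
    r = w / wn
    return (F0 / k) / math.sqrt((1 - r**2) ** 2 + (2 * zeta * r) ** 2)

for f in range(20, 201, 20):
    print(f"{f:3d} Hz -> {amplitude(f) * 1e3:.3f} mm")
```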

Keywords: Natural frequency, rotational motion, steady state rolling, subspace-based steady state dynamic analysis.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1321
56 Photocatalytic Active Surface of LWSCC Architectural Concretes

Authors: P. Novosad, L. Osuska, M. Tazky, T. Tazky

Abstract:

Current trends in the building industry are oriented towards the reduction of maintenance costs and the ecological benefits of buildings and building materials. Surface treatment of building materials with photocatalytically active titanium dioxide added to the concrete can offer a good solution in this context. Architectural concrete has one disadvantage: dust and fouling keep settling on its surface, diminishing its aesthetic value and increasing maintenance costs. The concrete surface, a silicate material with open porosity, fulfils the conditions for effective photocatalysis, in particular the self-cleaning of surfaces. This modern material is advantageous in particular for direct finishing and architectural concrete applications. If photoactive titanium dioxide is part of the top layers of road concrete on busy roads and of the facades of the buildings surrounding these roads, exhaust fumes can be degraded with the aid of sunlight; hence, the environmental load will decrease. It is clear that options for removing pollutants such as nitrogen oxides (NOx) must be found: not only do these gases present a health risk, they also cause degradation of the surfaces of concrete structures. The photocatalytic properties of titanium dioxide can, in the long term, contribute to the enhanced appearance of surface layers, eliminate harmful pollutants dispersed in the air, and facilitate the conversion of pollutants into less toxic forms (e.g., NOx to HNO3). This paper describes the verification of the photocatalytic properties of titanium dioxide and presents the results of mechanical and physical tests on samples of architectural lightweight self-compacting concretes (LWSCC). The very essence of using LWSCC is their rheological ability to flow into otherwise hard-to-access or inaccessible construction areas, or sections thereof where compacting the concrete would be a problem or where vibration is completely excluded. They are also able to create solid monolithic elements in a large variety of shapes, while the concrete at the same time withstands chemical aggression and the influences of the surrounding environment. Due to their viscosity, LWSCCs are able to take on the imprint of the formwork elements and thus create high-quality lightweight architectural concretes.

Keywords: Photocatalytic concretes, titanium dioxide, architectural concretes, LWSCC.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 767
55 TheAnalyzer: Clustering-Based System for Improving Business Productivity by Analyzing User Profiles to Enhance Human-Computer Interaction

Authors: D. S. A. Nanayakkara, K. J. P. G. Perera

Abstract:

E-commerce platforms have revolutionized the shopping experience, offering convenient ways for consumers to make purchases. To improve interactions with customers and optimize marketing strategies, it is essential for businesses to understand user behavior, preferences, and needs on these platforms. This paper focuses on recommending that businesses customize interactions with users based on their behavioral patterns, leveraging data-driven analysis and machine learning techniques. Businesses can improve engagement and boost the adoption of e-commerce platforms by aligning behavioral patterns with user goals of usability and satisfaction. We propose TheAnalyzer, a clustering-based system designed to enhance business productivity by analyzing user profiles and improving human-computer interaction. TheAnalyzer seamlessly integrates with business applications, collecting relevant data points from users' natural interactions without additional burdens such as questionnaires or surveys. It defines five key user analytics as features for its dataset, which are easily captured through users' interactions with e-commerce platforms. This research presents a study demonstrating the successful distinction of users into specific groups based on the five key analytics considered by TheAnalyzer. With the assistance of domain experts, customized business rules can be attached to each group, enabling TheAnalyzer to influence business applications and provide an enhanced, personalized user experience. The outcomes are evaluated quantitatively and qualitatively, demonstrating that utilizing TheAnalyzer's capabilities can optimize business outcomes, enhance customer satisfaction, and drive sustainable growth. The findings of this research contribute to the advancement of personalized interactions in e-commerce platforms. By leveraging user behavioral patterns and analyzing both new and existing users, businesses can effectively tailor their interactions to improve customer satisfaction and loyalty, and ultimately drive sales.
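
A minimal sketch of the pipeline the keywords describe (data standardization, dimensionality reduction, clustering) is shown below. The five feature names, the toy data, and the cluster count are hypothetical assumptions; the paper does not publish its exact analytics or configuration.

```python
# Standardise five behavioural analytics, reduce dimensionality, and cluster
# users into groups to which business rules could be attached. Data and
# feature meanings are illustrative, not TheAnalyzer's actual dataset.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# rows = users, columns = five illustrative behavioural analytics
X = np.array([
    [12, 3, 240, 0.8, 2],    # e.g. sessions, purchases, dwell time, cart ratio, returns
    [45, 9, 900, 0.6, 1],
    [ 3, 0,  60, 0.1, 0],
    [40, 8, 850, 0.7, 1],
    [ 5, 1,  90, 0.2, 0],
])

X_std = StandardScaler().fit_transform(X)          # data standardisation
X_2d = PCA(n_components=2).fit_transform(X_std)    # dimensionality reduction
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_2d)
print(labels)  # group assignments used to attach customised business rules
```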

Keywords: Data clustering, data standardization, dimensionality reduction, human-computer interaction, user profiling.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 228
54 Development of an Automatic Calibration Framework for Hydrologic Modelling Using Approximate Bayesian Computation

Authors: A. Chowdhury, P. Egodawatta, J. M. McGree, A. Goonetilleke

Abstract:

Hydrologic models are increasingly used as tools to predict stormwater quantity and quality from urban catchments. However, due to a range of practical issues, most models produce gross errors in simulating complex hydraulic and hydrologic systems, and difficulty in finding a robust approach to model calibration is one of the main issues. Though automatic calibration techniques are available, they are rarely used in common commercial hydraulic and hydrologic modelling software, e.g., MIKE URBAN. This is partly due to the large number of parameters and the large datasets needed in the calibration process. To overcome this practical issue, a framework for automatic calibration of a hydrologic model was developed on the R platform and is presented in this paper. The model was developed based on the time-area conceptualization. Four calibration parameters (initial loss, reduction factor, time of concentration, and time lag) were considered as the primary parameter set. Using these parameters, automatic calibration was performed using Approximate Bayesian Computation (ABC). ABC is a simulation-based technique for performing Bayesian inference when the likelihood is intractable or computationally expensive to compute. To test its performance and usefulness, the technique was used to simulate three small catchments in Gold Coast. For comparison, simulation outcomes for the same three catchments obtained with the commercial modelling software MIKE URBAN were used. The graphical comparison shows strong agreement, with the MIKE URBAN results lying within the upper and lower 95% credible intervals of the posterior predictions obtained via ABC. Statistical validation of the posterior runoff predictions using the coefficient of determination (CD), root mean square error (RMSE), and maximum error (ME) was found to be reasonable for the three study catchments. The main benefit of using ABC over MIKE URBAN is that ABC provides a posterior distribution for the runoff flow prediction, so the associated uncertainty in the predictions can be obtained, whereas MIKE URBAN provides only a point estimate. Based on the results of the analysis, the developed ABC framework appears to perform well for automatic calibration.
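
The rejection-sampling sketch below illustrates the core ABC idea of accepting parameter draws whose simulated output lies within a tolerance of the observations, which is what yields a posterior rather than a point estimate. The toy "runoff" model, priors, synthetic data, and tolerance are illustrative assumptions; the actual framework calibrates a time-area model (initial loss, reduction factor, time of concentration, time lag) on the R platform.

```python
# Rejection-sampling sketch of Approximate Bayesian Computation (ABC) for a
# toy runoff model; the model, priors, and tolerance are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
rain = rng.gamma(2.0, 2.0, size=50)                   # synthetic rainfall series

def runoff(rain, initial_loss, reduction_factor):
    """Toy runoff: subtract an initial loss, then scale by a reduction factor."""
    return np.clip(rain - initial_loss, 0, None) * reduction_factor

observed = runoff(rain, 1.0, 0.6) + rng.normal(0, 0.2, size=rain.size)

accepted = []
for _ in range(20000):
    il = rng.uniform(0, 3)                             # prior on initial loss
    rf = rng.uniform(0, 1)                             # prior on reduction factor
    sim = runoff(rain, il, rf)
    if np.sqrt(np.mean((sim - observed) ** 2)) < 0.4:  # distance below tolerance
        accepted.append((il, rf))

post = np.array(accepted)
print(post.shape[0], "accepted samples")
print("posterior means:", post.mean(axis=0))           # a full posterior, not a point estimate
```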

Keywords: Automatic calibration framework, approximate Bayesian computation, hydrologic and hydraulic modelling, MIKE URBAN software, R platform.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1740
53 The Impact of Financial System on Mixed Use Development – Unrest in UK and Sense of Safety in Mixed Use Development

Authors: Tamara Kelly

Abstract:

The past decade has witnessed good opportunities for city development schemes in the UK. The government encouraged the restoration of city centres to comprise mixed use developments with high-density residential apartments. Investments in regeneration areas were performing well according to analyses by the Property Databank (IPD). However, more recent IPD analysis has shown that since 2007, property in regeneration areas has been more vulnerable to the market downturn than other types of investment property. The early stages of a property market downturn may be felt most in regeneration areas, where funding, investor confidence, and occupier demand dissipate because the sector is considered more marginal or risky when development costs rise. Moreover, the Bank of England survey shows that lenders have sequentially tightened the availability of credit for commercial real estate since mid-2007, and a sharp reduction in the willingness of banks to lend on commercial property was recorded. The credit crunch has already affected commercial property, but its impact has been particularly severe in certain kinds of properties where residential development is extremely difficult, in particular city centre apartments and buy-to-let markets. Commercial property (retail, industrial, leisure, and mixed use) was also under pressure; in Birmingham, tens of mixed use plots were built to replace old factories in the heart of the city. The purpose of these developments was to enable young professionals to work and live in the same place. Thousands of people lost their jobs during the recession; moreover, lending became more difficult and the future of many developments is unknown. The recession cast its shadow over society: cuts in public spending by the government, inflation, rising tuition fees, and a sharp rise in unemployment generated anger and hatred among young people, causing vandalism and riots in many cities. Recent riots targeted many mixed use developments in the UK, where banks, shops, restaurants, and big stores were robbed and set on fire, leaving residents in horror and shock. This paper examines the impact of the recession and the riots on mixed use development in the UK.

Keywords: Diversity, mixed use development, outdoor comfort, public realm, safe places, safety by design.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1622