Search results for: Carbon dioxide reduction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2294

74 Modal Approach for Decoupling Damage Cost Dependencies in Building Stories

Authors: Haj Najafi Leila, Tehranizadeh Mohsen

Abstract:

Dependencies among the diverse factors involved in probabilistic seismic loss evaluation are recognized as an imperative issue in acquiring accurate loss estimates. Dependencies among component damage costs can be taken into account by treating component damage states as either fully independent or perfectly dependent; however, to the best of our knowledge, no procedure is available for taking account of loss dependencies at the story level. This paper presents a method, called the "modal cost superposition method", for decoupling story damage costs under earthquake ground motions. The method deals with closed-form differential equations between damage cost and engineering demand parameters, which are solved as a coupled system comprising all stories' cost equations by means of the introduced "substituted matrices of mass and stiffness". Costs are treated as probabilistic variables with definite statistics (median and standard deviation) and a presumed probability distribution. To supplement the proposed procedure and to demonstrate the straightforwardness of its application, a benchmark study was conducted. Acceptable compatibility was found between the damage costs estimated for the entire building by the proposed modal approach and by the frequently used stochastic approach; at the story level, however, the insufficiency of a single modification factor for incorporating occurrence-probability dependencies between stories was revealed, owing to the discrepant degrees of dependency between the damage costs of different stories. A greater dependency contribution to the occurrence probability of loss can also be concluded from the better agreement of loss results in the higher stories than in the lower ones, whereas reducing the number of cost modes incorporated still provides an acceptable level of accuracy and avoids the time-consuming calculations that arise when many cost modes must be included.
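
As a rough illustration of why the dependency assumption matters, the following Python sketch (not the modal cost superposition method itself) compares the total-building loss under the two bounding assumptions named above, independent versus perfectly dependent story damage costs, using hypothetical story medians and lognormal dispersions:

```python
# Hedged sketch, not the paper's algorithm: Monte Carlo comparison of total
# loss when story damage costs are independent vs. perfectly dependent.
# Story medians and dispersions below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
medians = np.array([120.0, 100.0, 80.0])   # hypothetical story cost medians
betas   = np.array([0.5, 0.5, 0.5])        # lognormal dispersions
n = 100_000

z_indep = rng.standard_normal((n, 3))                         # one normal per story
z_dep   = np.repeat(rng.standard_normal((n, 1)), 3, axis=1)   # one shared normal

loss_indep = (medians * np.exp(betas * z_indep)).sum(axis=1)
loss_dep   = (medians * np.exp(betas * z_dep)).sum(axis=1)

# Means agree, but perfect dependency inflates the spread of the total loss.
print(f"mean: {loss_indep.mean():.1f} vs {loss_dep.mean():.1f}")
print(f"std : {loss_indep.std():.1f} vs {loss_dep.std():.1f}")
```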

Keywords: Dependency, story-cost, cost modes, engineering demand parameter.

73 Matrix Based Synthesis of EXOR-Dominated Combinational Logic for Low Power

Authors: Padmanabhan Balasubramanian, C. Hari Narayanan

Abstract:

This paper discusses a new, systematic approach to the synthesis of an NP-hard class of non-regenerative Boolean networks, described by FON[FOFF]={mi}[{Mi}], where for every mj[Mj]∈{mi}[{Mi}] there exists another mk[Mk]∈{mi}[{Mi}] such that their Hamming distance HD(mj, mk)=HD(Mj, Mk)=O(n), where 'n' represents the number of distinct primary inputs. The method automatically ensures exact minimization for certain important self-dual functions with 2^(n-1) points in their one-set. The elements meant for grouping are determined from a newly proposed weighted incidence matrix. The binary value corresponding to each candidate pair is then correlated with the proposed binary value matrix to enable direct synthesis. We recommend algebraic factorization operations as a post-processing step to enable a reduction in literal count. The algorithm can be implemented in any high-level language and achieves the best cost optimization for the problem dealt with, irrespective of the number of inputs. For other cases, the method is iterated to subsequently reduce the problem to one of O(n-1), O(n-2), ..., which is then solved. In addition, it leads to optimal results for problems exhibiting a higher degree of adjacency, with a different interpretation of the heuristic, and the results are comparable with other methods. In terms of literal cost at the technology-independent stage, the circuits synthesized using our algorithm enabled net savings over AOI (AND-OR-Invert) logic, AND-EXOR logic (EXOR Sum-of-Products or ESOP forms) and AND-OR-EXOR logic of 45.57%, 41.78% and 41.78%, respectively, for the various problems. Circuit-level simulations were performed for a wide variety of case studies at 3.3 V and 2.5 V supply to validate the performance of the proposed method and the quality of the resulting synthesized circuits at two different voltage corners. Power estimation was carried out for a 0.35 micron TSMC CMOS process technology. In comparison with AOI logic, the proposed method enabled mean power savings of 42.46%. With respect to AND-EXOR logic, the proposed method yielded power savings of 31.88%, while in comparison with AND-OR-EXOR networks, average power savings of 33.23% were obtained.
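
To make the defining property of this function class concrete, here is a small, hedged Python sketch: it only verifies that every minterm in a one-set has a partner at Hamming distance n (i.e., its bitwise complement) and groups such pairs; the paper's weighted incidence matrix and binary value matrix are not reproduced.

```python
# Illustrative sketch only: group minterm pairs at Hamming distance n.
def hamming(a: int, b: int) -> int:
    """Hamming distance between two minterms encoded as integers."""
    return bin(a ^ b).count("1")

def complementary_pairs(minterms, n):
    """Return (mj, mk) pairs with HD(mj, mk) == n, i.e. bitwise complements."""
    mset = set(minterms)
    pairs, seen = [], set()
    for m in minterms:
        partner = m ^ ((1 << n) - 1)        # flip all n input bits
        if partner in mset and (partner, m) not in seen:
            assert hamming(m, partner) == n
            pairs.append((m, partner))
            seen.add((m, partner))
    return pairs

# A 3-input one-set in which every minterm has its complement present.
print(complementary_pairs([0b000, 0b111, 0b010, 0b101], n=3))  # [(0, 7), (2, 5)]
```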

Keywords: AOI logic, ESOP, AND-OR-EXOR, Incidence matrix, Hamming distance.

72 New Suspension Mechanism Using Camber Thrust for a Formula Car

Authors: Shinji Kajiwara

Abstract:

The basic abilities of a vehicle are to "run", "turn" and "stop". Safety and comfort during a drive on various road surfaces and at various speeds depend on the performance of these basic abilities. Stability and maneuverability are vital in automotive engineering: the stability of a vehicle is its ability to return to a stable state when faced with crosswinds and irregular road conditions, while its maneuverability is its ability to change direction swiftly in response to the driver's steering. Together, stability and maneuverability can be described as the driving stability of the vehicle. Since fossil-fueled vehicles are the main type of transportation today, the environmental factor in automotive engineering is also vital: improving the fuel efficiency of a vehicle reduces its overall carbon emissions and thus its contribution to global warming and greenhouse gases. Another main focus of automotive engineering is the safety performance of the vehicle, especially given the worrying daily increase in vehicle collisions; with better safety performance, every driver can drive with more confidence. This research focuses on the "turn" ability of a vehicle. Improving this ability raises the cornering limit of the vehicle and thereby its stability and maneuverability. In order to improve the cornering limit, a study must be conducted to find the balance between the steering system, the stability of the vehicle, higher lateral acceleration, and cornering limit detection. The aim of this research is to study and develop a new suspension system that will boost the lateral acceleration of the vehicle and ultimately improve its cornering limit; the environmental and stability factors of the new suspension system are also studied. The double wishbone suspension system is widely used in four-wheel vehicles, especially in high-cornering-performance sports cars and racing cars, because the double wishbone design allows the engineer to carefully control the motion of the wheel through such parameters as camber angle, caster angle, toe pattern, roll center height, scrub radius, and scuff. The development of the new suspension system will focus on its ability to optimize camber control and to improve the camber limit during cornering. The research will be carried out using a CAE analysis tool: a JSAE Formula machine equipped with both the double wishbone system and the new suspension system will be developed, and simulations and studies of the performance of both suspension systems will be conducted.

Keywords: Automobile, Camber Thrust, Cornering force, Suspension.

71 Snails and Fish as Pollution Biomarkers in Lake Manzala and Laboratory C: Laboratory-Exposed Snails to Chemical Mixtures

Authors: Hanaa M. M. El-Khayat, Hoda Abdel-Hamid, Kadria M. A. Mahmoud, Hanan S. Gaber, Hoda M. A. Abu Taleb, Hassan E. Flefel

Abstract:

Snails are considered suitable diagnostic organisms for heavy metal-contaminated sites. Biomphalaria alexandrina snails are used in this work as pollution bioindicators after exposure to chemical mixtures consisting of heavy metals (HM): zinc (Zn), copper (Cu) and lead (Pb); and persistent organic pollutants: decabromodiphenyl ether 98% (D) and Aroclor 1254 (A). The impacts of these chemicals, individually and in mixtures, on liver and kidney functions, antioxidant enzymes, the complete blood picture, and tissue histology were studied. Results showed that Cu was more highly toxic to snails than Zn and Pb, with LC50 values of 1.362, 213.198 and 277.396 ppm, respectively. B. alexandrina snails exposed to the HM mixture (¼ LC5 of Cu, Pb and Zn) showed the highest bioaccumulation of Cu and Zn in their whole tissue, the most significant increase in AST, ALT and ALP activities, and the highest significant levels of total protein, albumin and globulin. Results showed significant alterations in CAT activity in snail tissue extracts, while snails in most experimental groups showed a significant increase in GST activity. Snails exposed to HM mixtures showed a significant decrease in total hemocyte counts, while snails exposed to mixtures containing A and D showed a significant increase in total hemocytes and hyalinocytes. Histopathological examination of snails exposed to individual HM and their mixtures for 4 weeks showed degeneration, edema, hypertrophy and vacuolation in the head-foot muscle, degeneration and necrotic changes in the digestive gland, and accumulation in most tested organs; the hermaphrodite gland showed mature ova of irregular shape and a reduction in sperm number. In conclusion, the resulting damage and alterations in the studied parameters of B. alexandrina can be used as bioindicators of the presence of pollutants in its habitats.
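
For readers unfamiliar with how LC50 values such as those above are obtained, the following hedged Python sketch fits a two-parameter log-logistic dose-response curve to mortality data; the concentrations and mortalities in it are invented for illustration and are not the study's data.

```python
# Hypothetical LC50 estimation via a log-logistic dose-response fit.
import numpy as np
from scipy.optimize import curve_fit

def mortality(logc, log_lc50, slope):
    """Logistic mortality fraction as a function of log10 concentration."""
    return 1.0 / (1.0 + np.exp(-slope * (logc - log_lc50)))

conc = np.array([0.1, 0.5, 1.0, 2.0, 5.0])        # ppm (made-up data)
dead = np.array([0.05, 0.20, 0.45, 0.80, 0.98])   # observed mortality fraction

params, _ = curve_fit(mortality, np.log10(conc), dead, p0=[0.0, 2.0])
print(f"estimated LC50 ~ {10 ** params[0]:.2f} ppm")
```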

Keywords: Biomphalaria, Zn, Cu, Pb, AST, ALT, ALP, total protein, albumin, globulin, CAT, histopathology.

70 Alleviation of Adverse Effects of Salt Stress on Soybean (Glycine max L.) by Using Osmoprotectants and Organic Nutrients

Authors: Ayman El Sabagh, Sobhy Sorour, Abd Elhamid Omar, Adel Ragab, Mohammad Sohidul Islam, Celaleddin Barutçular, Akihiro Ueda, Hirofumi Saneoka

Abstract:

Salinity is one of the major factors limiting crop production in arid environments. Despite its global importance, soybean production suffers from salinity stress, which damages plant development, so it is imperative to search for ways to enhance the salinity tolerance of soybean plants. In the current study, we therefore try to clarify the mechanisms that might be involved in the ameliorating effects of osmoprotectants, such as proline and glycine betaine, as well as compost application, on soybean plants grown under salinity stress. The experiment was conducted under greenhouse conditions at the Graduate School of Biosphere Science Laboratory of Hiroshima University, Japan, in 2011. The experiment was designed as a split-split plot based on a randomized complete block design with four replications. The treatments were: (i) salinity concentrations (0 and 15 mM), (ii) compost treatments (0 and 24 t ha-1), and (iii) exogenous proline and glycine betaine concentrations (0 mM and 25 mM each). Results indicated that salinity stress induced reductions in the growth and physiological aspects (dry weight per plant, chlorophyll content, N and K+ content) of soybean plants compared with unstressed plants, and led to increases in the electrolyte leakage ratio and the Na+ and proline contents. A tolerance against salt stress was nevertheless observed: the improvement in salt tolerance resulting from proline, glycine betaine, and compost was accompanied by improved K+ and proline accumulation and by a significantly decreased electrolyte leakage ratio and Na+ content. These results clearly demonstrate that the harmful effects of salinity on soybean growth can be reduced. Consequently, exogenous osmoprotectants combined with compost effectively address the seasonal salinity stress problem and are a good strategy for increasing the salinity resistance of soybean in drylands.

Keywords: Compost, glycine betaine, growth, proline, salinity tolerance, soybean.

69 Performance Analysis of Three Absorption Heat Pump Cycles under Full and Partial Load Operation

Authors: B. Dehghan, T. Toppi, M. Aprile, M. Motta

Abstract:

The environmental concerns related to global warming and ozone layer depletion, along with the growing worldwide demand for heating and cooling, have brought increasing attention to ecological and efficient Heating, Ventilation, and Air Conditioning (HVAC) systems. Furthermore, since space heating accounts for a considerable part of European primary/final energy use, it has been identified as one of the sectors with the most challenging targets for energy use reduction, and heat pumps are commonly considered a technology able to contribute to the achievement of those targets. The current research focuses on the full-load operation and seasonal performance assessment of three gas-driven absorption heat pump cycles. To this end, investigations of gas-driven air-source ammonia-water absorption heat pump systems for small-scale space heating applications are presented. For each of the presented cycles, both the full-load performance under various temperature conditions and the seasonal performance are predicted by means of numerical simulations. It has been considered that small-capacity appliances are usually equipped with fixed-geometry restrictors, meaning that the solution mass flow rate is driven by the pressure difference across the associated restrictor valve. Results show that the gas utilization efficiency (GUE) of the cycles varies between 1.2 and 1.7 for both full and partial loads, with the vapor exchange (VX) cycle found to achieve the highest efficiency. It is noted that, for typical space heating applications, heat pumps operate over a wide range of capacities and thermal lifts. Part of the novelty introduced in the paper is thus the investigation based on a seasonal performance approach, following the method prescribed in a recent European standard (EN 12309). The overall result is a modest variation in the seasonal performance across the analyzed cycles, from 1.427 (single-effect) to 1.493 (vapor-exchange).
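
As a hedged illustration of the seasonal performance idea (a bin method in the spirit of EN 12309, not a standard-compliant implementation), the Python sketch below weights hypothetical per-bin gas utilization efficiencies by per-bin heat demand:

```python
# Seasonal GUE as delivered heat over gas input, summed over temperature bins.
# All hours, loads and per-bin GUE values below are hypothetical.
bins = [
    # (hours in bin, heat demand [kW], GUE at that condition)
    (300, 9.0, 1.30),
    (700, 6.5, 1.45),
    (900, 4.0, 1.55),
    (500, 2.0, 1.60),
]

heat = sum(h * q for h, q, _ in bins)           # delivered heat [kWh]
gas  = sum(h * q / gue for h, q, gue in bins)   # gas input [kWh]
print(f"seasonal GUE ~ {heat / gas:.3f}")
```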

Keywords: Absorption cycles, gas utilization efficiency, heat pump, seasonal performance, vapor exchange cycle.

68 Enzyme Involvement in the Biosynthesis of Selenium Nanoparticles by Geobacillus wiegelii Strain GWE1 Isolated from a Drying Oven

Authors: Daniela N. Correa-Llantén, Sebastián A. Muñoz-Ibacache, Mathilde Maire, Jenny M. Blamey

Abstract:

The biosynthesis of nanoparticles by microorganisms, in contrast to chemical synthesis, is an environmentally friendly process with low energy requirements. In this investigation, we used Geobacillus wiegelii strain GWE1, an aerobic thermophile belonging to the genus Geobacillus, isolated from a drying oven. This microorganism has the ability to reduce selenite, evidenced by a change of the culture from colorless to red. The elemental state and composition of the particles were verified using transmission electron microscopy and energy-dispersive X-ray analysis: the nanoparticles have a defined spherical shape and consist of elemental selenium. Previous experiments showed that the presence of the whole microorganism was not necessary for the reduction of selenite; the results strongly suggested that an intracellular NADPH/NADH-dependent reductase mediates selenium nanoparticle synthesis under aerobic conditions. The enzyme was purified and identified by the MALDI-TOF/TOF mass spectrometry technique as a 1-pyrroline-5-carboxylate dehydrogenase. Histograms of nanoparticle sizes were obtained: the size distribution ranged from 40-160 nm, with 70% of the nanoparticles less than 100 nm in size. Spectroscopic analysis confirmed that the nanoparticles are composed of elemental selenium. To analyze the effect of pH on the size and morphology of the nanoparticles, the synthesis was carried out at different pH values (4.0, 5.0, 6.0, 7.0, 8.0). For thermostability studies, samples were incubated at different temperatures (60, 80 and 100 ºC) for 1 h and 3 h. All nanoparticles were less than 100 nm in size at pH 4.0; over 50% were less than 100 nm at pH 5.0; and at pH 6.0 and 8.0 over 90% were less than 100 nm. At neutral pH (7.0), the nanoparticles reached a size of around 120 nm and only 20% of them were less than 100 nm. Regarding the temperature effect, the nanoparticles did not show a significant difference in size when incubated for up to 3 h at 60 ºC. At 80 ºC, however, the nanoparticle suspension lost its homogeneity: from 0 h of incubation a size range between 40-160 nm was observed, with 20% of the particles over 100 nm, while after 3 h of incubation the size range changed to 60-180 nm, with 50% over 100 nm. At 100 ºC, the nanoparticles aggregated, forming nanorod structures. In conclusion, these results indicate that it is possible to modulate the size and shape of biologically synthesized nanoparticles by modulating pH and temperature.
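
The size-distribution bookkeeping described above amounts to histogramming measured diameters and reporting the fraction below 100 nm; the short Python sketch below shows this on made-up diameters (the study's raw measurements are not reproduced here):

```python
# Histogram of particle diameters and fraction below 100 nm (made-up data).
import numpy as np

diam = np.array([45, 62, 71, 78, 85, 90, 95, 104, 118, 152])  # nm, hypothetical

counts, edges = np.histogram(diam, bins=np.arange(40, 180, 20))
for lo, hi, c in zip(edges[:-1], edges[1:], counts):
    print(f"{lo:3.0f}-{hi:3.0f} nm: {c}")
print(f"fraction < 100 nm: {np.mean(diam < 100):.0%}")
```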

Keywords: Genus Geobacillus, NADPH/NADH-dependent reductase, Selenium nanoparticles.

67 Phelipanche ramosa (L.) Pomel Control in Field Tomato Crop

Authors: Disciglio G., Lops F., Carlucci A., Gatta G., Tarantino A., Frabboni L., Carriero F., Cibelli F., Raimondo M. L., Tarantino E.

Abstract:

The tomato is a very important crop whose cultivation in the Mediterranean basin is severely affected by the phytoparasitic weed Phelipanche ramosa. The semiarid regions of the world are considered the main areas where this parasitic weed is established, causing heavy infestations, as it is able to produce large numbers of seeds (up to 500,000 per plant) that remain viable for extended periods (more than 20 years). This paper reports the results of eleven treatments for controlling this parasitic weed, including chemical, agronomic, biological and biotechnological methods, compared with an untreated control under two plowing depths (30 and 50 cm). A split-plot design with 3 replicates was adopted. In 2014, a trial was performed in Foggia province (southern Italy) on processing tomato (cv Docet) grown in a field infested by Phelipanche ramosa; tomato seedlings were transplanted on May 5 on a clay-loam soil. During the growing cycle of the tomato crop, at 56, 78 and 92 days after transplantation, the number of parasitic shoots emerged in each plot was recorded. At tomato harvest, on August 18, the major quantity and quality yield parameters were determined (marketable yield, mean fruit weight, dry matter, pH, soluble solids and fruit color). All data were subjected to analysis of variance (ANOVA) and the means were compared by Tukey's test. No treatment studied provided complete control of Phelipanche ramosa. However, among the different methods tested, some of them, namely Fusarium, glyphosate, the Radicon biostimulant and the Red Setter tomato cv (an improved genotype obtained by TILLING technology), under deeper plowing (50 cm depth), proved to mitigate the virulence of the Phelipanche ramosa attacks. It is assumed that these effects can be improved by combining some of these treatments with each other, especially for a gradual and continuing reduction of the parasite's "seed bank" in the soil.
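
For orientation, the statistical treatment named above (ANOVA followed by Tukey's test) can be sketched in Python as below; the trial's actual split-plot structure would need additional error terms, and the shoot counts here are invented:

```python
# Hedged sketch: one-way ANOVA plus Tukey's HSD on hypothetical shoot counts.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.DataFrame({
    "treatment": ["control"] * 3 + ["glyphosate"] * 3 + ["Fusarium"] * 3,
    "shoots":    [42, 39, 45,      21, 24, 19,          27, 30, 25],
})

anova = sm.stats.anova_lm(ols("shoots ~ C(treatment)", data=df).fit())
print(anova)
print(pairwise_tukeyhsd(df["shoots"], df["treatment"]))
```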

Keywords: Control methods, Phelipanche ramosa, tomato crop.

66 The Development and Testing of a Small Scale Dry Electrostatic Precipitator for the Removal of Particulate Matter

Authors: Derek Wardle, Tarik Al-Shemmeri, Neil Packer

Abstract:

This paper presents a small tube/wire type electrostatic precipitator (ESP). In the ESP's present form, the particle charging and collecting voltages and the airflow rates were individually varied throughout 200 ambient-temperature test runs, ranging from 10 to 30 kV in increments of 5 kV and from 0.5 m/s to 1.5 m/s, respectively. It was repeatedly observed that, at input air velocities of between 0.5 and 0.9 m/s and voltage settings of 20 kV to 30 kV, the collection efficiency remained above 95%. The outcomes of preliminary tests at combustion flue temperatures are, at present, inconclusive, although indications are that there is little or no drop in comparable performance under ideal test conditions. A limited set of similar tests was carried out with the collecting electrode grounded, having been disconnected from the static generator; the collection efficiency fell significantly, and for that reason this approach was not pursued further. The collection efficiencies during ambient-temperature tests were determined by mass balance between incoming and outgoing dry PM. The efficiencies of combustion-temperature runs were determined by analysing the difference in opacity of the flue gas at inlet and outlet compared to a reference light source. In addition, an array of Leit tabs (carbon-coated, electrically conductive adhesive discs) was placed at inlet and outlet for a number of four-day continuous ambient-temperature runs. Analysis of the discs' contamination was carried out using scanning electron microscopy and the ImageJ software, which confirmed collection efficiencies of over 99% and gave unequivocal support to all the previous tests; the average efficiency for these runs was 99.409%. Emissions collected from a woody biomass combustion unit, classified to a diameter of 100 µm, were used in all ambient-temperature trial runs apart from two, which collected airborne dust from within the laboratory. Sawdust and wood pellets were chosen for laboratory and field combustion trials. Video recordings were made of three ambient-temperature test runs in which the smoke from a wood smoke generator was drawn through the precipitator. Although these runs were visual indicators only, with no objective other than demonstration, they provided a strong argument for the device's claimed efficiency, as no emissions were visible at exit when the ESP was energised. The theoretical performance of ESPs, when applied to the geometry and configuration of the tested model, was compared to the actual performance and was shown to be in good agreement with it.
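
The theoretical ESP performance referred to at the end of the abstract is commonly estimated with the Deutsch-Anderson relation; whether the authors used exactly this form is an assumption on our part, and the migration velocity, collector area, and flow rate below are illustrative only:

```python
# Deutsch-Anderson estimate: eta = 1 - exp(-w * A / Q). Values are assumed.
import math

w = 0.05     # effective particle migration velocity [m/s] (assumed)
A = 0.2      # collecting electrode surface area [m^2] (assumed)
Q = 0.0025   # gas volumetric flow rate [m^3/s] (assumed)

eta = 1.0 - math.exp(-w * A / Q)
print(f"predicted collection efficiency: {eta:.1%}")   # ~98.2%
```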

Keywords: Electrostatic precipitators, air quality, particulates emissions, electron microscopy, ImageJ.

65 Some Issues of Measurement of Impairment of Non-Financial Assets in the Public Sector

Authors: Mariam Vardiashvili

Abstract:

The economic significance of the asset impairment process is considerable. Impairment reflects a reduction of the future economic benefits or service potential embodied in an asset. The assets owned by public sector entities either bring economic benefits or are used for the delivery of free-of-charge services, and they are consequently classified as cash-generating and non-cash-generating assets. IPSAS 21 - Impairment of Non-Cash-Generating Assets and IPSAS 26 - Impairment of Cash-Generating Assets have been designed with this specificity in mind. When measuring the impairment of assets, it is important to select the relevant methods. For measurement of impaired non-cash-generating assets, IPSAS 21 recommends three methods: the Depreciated Replacement Cost Approach, the Restoration Cost Approach, and the Service Units Approach. The value in use of cash-generating assets (as per IPSAS 26) is measured by the discounted value of the cash flows to be received in the future. The article classifies public sector assets as non-cash-generating and cash-generating assets and also deals with the factors which should be considered when evaluating the impairment of assets. The essence of impairment of non-financial assets and the methods of its measurement are formulated according to IPSAS 21 and IPSAS 26. The main emphasis is put on the different methods of measuring the value in use of impaired cash-generating and non-cash-generating assets and on how those methods are selected. The traditional and the expected cash flow approaches for the calculation of the discounted value are reviewed. The article also discusses the recognition of impairment loss and its reflection in financial reporting. The article concludes that, regardless of the functional purpose of the impaired asset and whichever method is used for measuring it, the presentation of realistic information regarding the value of the assets should be ensured in financial reporting. In the theoretical development of the issue, the methods of scientific abstraction, analysis and synthesis were used, and the research was carried out with a systemic approach. The research draws on international accounting standards and on theoretical research and publications of Georgian and foreign scientists.
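
The two discounting approaches reviewed in the article can be sketched as follows; the cash flows, probabilities, and discount rate are hypothetical, and the sketch is only a schematic of the traditional versus expected cash flow calculations, not a prescription from the standards:

```python
# Value in use: traditional (single best-estimate series) vs. expected cash
# flow (probability-weighted scenarios). All figures are hypothetical.
def present_value(cash_flows, rate):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

rate = 0.05
best_estimate = [100, 100, 100]            # traditional approach
scenarios = [([70, 70, 70], 0.3),          # (cash flow series, probability)
             ([100, 100, 100], 0.5),
             ([140, 140, 140], 0.2)]

expected = [sum(cf[t] * p for cf, p in scenarios) for t in range(3)]
print(f"traditional value in use: {present_value(best_estimate, rate):.2f}")
print(f"expected cash flow value: {present_value(expected, rate):.2f}")
```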

Keywords: Non-cash-generating assets, cash-generating assets, recoverable value, recoverable service amount, value in use.

64 Energy Efficiency Approach to Reduce Costs of Ownership of Air Jet Weaving

Authors: Corrado Grassi, Achim Schröter, Yves Gloy, Thomas Gries

Abstract:

Air jet weaving is the most productive, but also the most energy-consuming, weaving method. Increasing energy costs and environmental impact are a constant challenge for the manufacturers of weaving machines, and current technological developments are concerned with low energy costs, low environmental impact, high productivity, and constant product quality. The high energy consumption of the method can be ascribed to its high demand for compressed air. An energy efficiency method is applied to air jet weaving technology: it identifies and classifies the main relevant energy consumers and processes from the exergy point of view, and it leads to the identification of energy efficiency potentials during the weft insertion process. Starting from the design phase, energy efficiency is considered the central requirement to be satisfied. The initial phase of the method consists of an analysis of the state of the art of the main weft insertion components, in order to prioritize the most energy-demanding components and processes. The identified major components are then investigated to reduce the high energy demand of the weft insertion process. During the interaction of the flow field coming from the relay nozzles with the profiled reed, only a minor part of the stream actually accelerates the weft yarn, resulting in large energy inefficiency. Different tools, such as FEM analysis, CFD simulation models and experimental analysis, are used to develop a more energy-efficient design of the components involved in filling insertion. A new concept for the metal strip of the profiled reed is developed, which allows a reduction in machine energy consumption: based on a parametric and aerodynamic study, the designed reed transmits more of the flow power to the filling yarn. The innovative reed fulfills both the requirement of raising energy efficiency and compliance with the weaving constraints.

Keywords: Air jet weaving, aerodynamic simulation, energy efficiency, experimental measurements, power costs, weft insertion.

63 Geochemical Study of Natural Bitumen, Condensate and Gas Seeps from Sousse Area, Central Tunisia

Authors: A. Belhaj Mohamed, M. Saidi, N. Boucherb, N. Ourtani, A. Soltani, I. Bouazizi, M. Ben Jrad

Abstract:

Natural hydrocarbon seepage has helped petroleum exploration as a direct indicator of subsurface gas and/or oil accumulations. Surface macro-seeps are generally an indication of a fault in an active petroleum seepage system belonging to a total petroleum system. This paper describes a case study in which multiple analytical techniques were used to identify and characterize trace petroleum-related hydrocarbons and other volatile organic compounds in groundwater samples collected from the Sousse aquifer (Central Tunisia). The analytical techniques used for the water samples included gas chromatography-mass spectrometry (GC-MS), capillary GC with flame-ionization detection, compound-specific isotope analysis, and Rock-Eval pyrolysis. The objective of the study was to confirm the presence of gasoline and other petroleum products or other volatile organic pollutants in those samples, in order to assess the respective implication of each of the potentially responsible parties in the contamination of the aquifer; the degree of contamination at different depths in the aquifer was also of interest. The oil and gas seeps have been investigated using biomarker and stable carbon isotope analyses to perform oil-oil and oil-source rock correlations. The seepage gases are characterized by a high CH4 content, very low δ13C values for CH4 (-71.9‰), high C1/C1-5 ratios (0.95-1.0), light deuterium-hydrogen isotope ratios (-198‰), and light δ13C values for C2 and CO2 (-23.8‰ in both cases), indicating a thermogenic origin with a contribution of biogenic gas. An organic geochemistry study was carried out on more than ten oil seep samples, including light hydrocarbon and biomarker analyses (hopanes, steranes, n-alkanes, acyclic isoprenoids, and aromatic steroids) using GC and GC-MS. The studied samples fall into at least two distinct families, suggesting two different crude oil origins: the first family appears to be highly mature, shows evidence of chemical and/or biological degradation, and was derived from a clay-rich source rock deposited under suboxic conditions, sourced mainly by the lower Fahdene (Albian) source rocks; the second family was derived from a carbonate-rich source rock deposited under anoxic conditions, correlating well with the Bahloul (Cenomanian-Turonian) source rock.

Keywords: Biomarkers, oil and gas seeps, organic geochemistry, source rock.

62 Flow Duration Curves and Recession Curves Connection through a Mathematical Link

Authors: Elena Carcano, Mirzi Betasolo

Abstract:

This study helps public water bureaus give reliable answers to water concession requests. Rapidly increasing water requests can be supported provided that further uses of a river course are not totally compromised and environmental features are protected. Strictly speaking, a water concession can be considered a continuous withdrawal from the source, causing a reduction in mean annual streamflow. Deciding whether a water concession is appropriate therefore seems easily solved by comparing the generic demand to the mean annual streamflow available. Still, the immediate shortcoming of such a comparison is that streamflow data are available only for a few catchments and, most often, limited to specific sites. Moreover, comparing the generic water demand to the mean daily discharge is far from satisfactory, since the mean daily streamflow is greater than the water withdrawal for a long period of the year; such a comparison is therefore of little significance for preserving the quality and quantity of the river. To overcome this limit, this study aims to complete the information provided by flow duration curves by introducing a link between Flow Duration Curves (FDCs) and recession curves, and to show the chronological sequence of flows, with a particular focus on low flow data. The analysis is carried out on 25 catchments located in North-Eastern Italy for which daily data are available. The results identify groups of catchments as hydrologically homogeneous, with the lower part of their FDCs (the streamflow interval between Q(300) and Q(335), the flows equalled or exceeded on 300 and 335 days per year, respectively) smoothly reproduced by a common recession curve. In conclusion, the results are useful for providing more reliable answers to water requests, especially for those catchments which show a similar hydrological response, and can be used for a focused regionalization approach on low flow data. A mathematical link between flow duration curves and recession curves is herein provided, thus furnishing flow duration curve information with a temporal sequence of data. In such a way, by introducing assumptions on recession curves, a chronological sequence can also be attributed to the low flow portion of FDCs, which are known to lack this information by nature.
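
For readers unfamiliar with the Q(300)/Q(335) notation, the following hedged Python sketch builds a flow duration curve from a synthetic daily record and reads off those low-flow indices; it illustrates the definition only and uses none of the study's data:

```python
# FDC indices from one year of (synthetic) daily flows: Q(d) is the flow
# equalled or exceeded on d days per year.
import numpy as np

rng = np.random.default_rng(1)
daily_q = rng.lognormal(mean=1.0, sigma=0.8, size=365)   # synthetic record

sorted_q = np.sort(daily_q)[::-1]                        # descending order

def q_exceeded(days: int) -> float:
    """Flow equalled or exceeded on `days` days of the year."""
    return sorted_q[days - 1]

print(f"Q(300) = {q_exceeded(300):.2f}, Q(335) = {q_exceeded(335):.2f}")
```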

Keywords: Chronological sequence of discharges, recession curves, streamflow duration curves, water concession.

61 Library-Aware, Power-Conscious Realization of Complementary Boolean Functions

Authors: Padmanabhan Balasubramanian, C. Ardil

Abstract:

In this paper, we consider the problem of logic simplification for a special class of logic functions, namely complementary Boolean functions (CBF), targeting low power implementation using the static CMOS logic style. The functions are uniquely characterized by the presence of terms where, for a canonical binary 2-tuple, D(mj) ∪ D(mk) = { } and therefore | D(mj) ∪ D(mk) | = 0 [19]; similarly, D(Mj) ∪ D(Mk) = { } and hence | D(Mj) ∪ D(Mk) | = 0. Here, 'mk' and 'Mk' represent a minterm and a maxterm, respectively. We compare the circuits minimized with our proposed method with those corresponding to the factored Reed-Muller (f-RM) form, the factored Pseudo Kronecker Reed-Muller (f-PKRM) form, and the factored Generalized Reed-Muller (f-GRM) form. We have opted for algebraic factorization of the Reed-Muller (RM) form and its different variants, using the factorization rules of [1], as it is simple and requires much less CPU execution time compared to Boolean factorization operations. This technique has enabled us to greatly reduce the literal count as well as the gate count needed for such RM realizations, which are generally prone to consuming more cells and hence more power. However, this leads to a drawback in terms of the design-for-test attribute associated with the various RM forms. Though we still preserve the definition of those forms, viz. realizing such functionality with only select types of logic gates (AND and XOR gates), the structural integrity of the logic levels is not preserved. This consequently alters the testability properties of such circuits, i.e. it may increase, decrease, or maintain the number of test input vectors needed for their exhaustive testability, subsequently affecting their generalized test vector computation. We do not consider the issue of design-for-testability here, but instead focus on the power consumption of the final logic implementation, after realization with a conventional CMOS process technology (0.35 micron TSMC process). The quality of the resulting circuits, evaluated on the basis of an established cost metric, viz. power consumption, demonstrates average savings of 26.79% for the samples considered in this work, besides reductions in the number of gates and input literals of 39.66% and 12.98%, respectively, in comparison with the other factored RM forms.

Keywords: Reed-Muller forms, Logic function, Hamming distance, Algebraic factorization, Low power design.

60 On-Line Geometrical Identification of Reconfigurable Machine Tool Using Virtual Machining

Authors: Alexandru Epureanu, Virgil Teodor

Abstract:

One of the main research directions in the CAD/CAM machining area is the reduction of machining time. Feedrate scheduling is one of the advanced techniques that allows the uncut chip area, and consequently the main cutting force, to be kept constant. There are two main ways to optimize the feedrate. The first consists of cutting force monitoring, which requires complex equipment for force measurement, after which the feedrate is set according to the cutting force variation. The second is to optimize the feedrate by keeping the material removal rate constant for the given cutting conditions. This paper proposes a new approach using an extended database that replaces the system model. The feedrate schedule is determined based on the identification of the reconfigurable machine tool and on the determination of the feed value from the uncut chip section area, the contact length between tool and blank, and the geometrical roughness. The first stage consists of monitoring the blank and the tool to determine their actual profiles. The next stage is the determination of the programmed tool path that allows the piece's target profile to be obtained. The graphic representation environment models the tool and blank regions, after which the tool model is positioned relative to the blank model according to the programmed tool path. For each of these positions, the geometrical roughness value, the uncut chip area, and the contact length between tool and blank are calculated; each parameter is compared with its admissible value, and the feed value is established accordingly. This approach has the following advantages: for complex cutting processes, the prediction of the cutting force is possible; the real cutting profile, which deviates from the theoretical profile, is considered; the blank-tool contact length can be limited; and the programmed tool path can be corrected so that the target profile is obtained. Applying this method yields data sets that allow feedrate scheduling such that the uncut chip area, and as a result the cutting force, is constant, which allows the machine tool to be used more efficiently and reduces the machining time.
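
Conceptually, the feed-value selection step described above can be sketched as below; the chip-area function is a stand-in for the paper's graphic-environment computation, and all limits are hypothetical:

```python
# Hedged sketch: back the feed off until the computed uncut chip area is
# within its admissible value, position by position along the tool path.
def uncut_chip_area(position, feed):
    """Placeholder: in the real system this comes from intersecting the tool
    model with the actual blank profile at the given position."""
    return 0.002 * feed  # hypothetical linear model [mm^2 per mm/rev]

def schedule_feed(tool_path, a_admissible, f_min=0.05, f_max=0.5):
    feeds = []
    for pos in tool_path:
        f = f_max
        while f > f_min and uncut_chip_area(pos, f) > a_admissible:
            f *= 0.9                      # reduce feed until area is admissible
        feeds.append(round(f, 3))
    return feeds

print(schedule_feed(tool_path=range(5), a_admissible=0.0004))
```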

Keywords: Reconfigurable machine tool, system identification, uncut chip area, cutting conditions scheduling.

59 Developing a Manufacturing Process for Graphene Sensors

Authors: Abdullah Faqihi, John Hedley

Abstract:

Biosensors play a significant role in the healthcare sector and in scientific and technological progress. Developing electrodes that are easy to manufacture and deliver better electrochemical performance is advantageous for diagnostics and biosensing, as such electrodes can be implemented extensively in various analytical tasks such as drug discovery, food safety, medical diagnostics, process control, security and defence, and environmental monitoring. A biosensor is a device that detects the biological and chemical reactions generated by a biological sample: it carries out biological detection via a linked transducer and converts the biological response into an electrical signal. Stability, selectivity, and sensitivity are the dynamic and static characteristics that dictate the quality and performance of biosensors. This research presents an experimental study of the laser scribing technique for processing graphene oxide inside a vacuum chamber. The effect of laser scribing on the reduction of graphene oxide (GO) was investigated under two conditions: atmosphere and vacuum. GO solution was coated onto a LightScribe DVD, and the laser scribing technique was applied to reduce the GO layers and generate rGO. The micro-details of the morphological structures of rGO and GO were examined using scanning electron microscopy (SEM) and Raman spectroscopy. The first electrode was a traditional graphene-based electrode model, made under normal atmospheric conditions, whereas the second was a graphene electrode fabricated under vacuum in a vacuum chamber. The purpose was to control the vacuum conditions, such as the air pressure and the temperature, during the fabrication process. The parameters assessed include the layer thickness and the processing environment. The results presented show high accuracy and repeatability, achieving low-cost production.

Keywords: Laser scribing, LightScribe DVD, graphene oxide, scanning electron microscopy.

58 A Risk Assessment Tool for the Contamination of Aflatoxins on Dried Figs Based on Machine Learning Algorithms

Authors: Kottaridi Klimentia, Demopoulos Vasilis, Sidiropoulos Anastasios, Ihara Diego, Nikolaidis Vasileios, Antonopoulos Dimitrios

Abstract:

Aflatoxins are highly poisonous and carcinogenic compounds produced by species of the genus Aspergillus that can infect a variety of agricultural foods, including dried figs. Biological and environmental factors, such as the population, pathogenicity and aflatoxinogenic capacity of the strains, and the topography, soil and climate parameters of the fig orchards, are believed to have a strong effect on aflatoxin levels. Existing methods for aflatoxin detection and measurement, such as high-performance liquid chromatography (HPLC) and enzyme-linked immunosorbent assay (ELISA), can provide accurate results, but the procedures are usually time-consuming, sample-destructive and expensive. Predicting aflatoxin levels prior to crop harvest is useful for minimizing the health and financial impact of a contaminated crop. Consequently, there is interest in developing a tool that predicts aflatoxin levels based on topography and soil analysis data of fig orchards. This paper describes the development of a risk assessment tool for aflatoxin contamination of dried figs, based on the location and altitude of the fig orchards, the population of the fungus Aspergillus spp. in the soil, and soil parameters such as pH, saturation percentage (SP), electrical conductivity (EC), organic matter, particle size analysis (sand, silt, clay), concentrations of the exchangeable cations (Ca, Mg, K, Na), extractable P, and trace elements (B, Fe, Mn, Zn and Cu), by employing machine learning methods. In particular, our proposed method integrates three machine learning techniques, i.e., dimensionality reduction on the original dataset (Principal Component Analysis), metric learning (Mahalanobis Metric for Clustering) and the K-nearest Neighbors learning algorithm (KNN), into an enhanced model, with mean performance equal to 85% in terms of the Pearson Correlation Coefficient (PCC) between observed and predicted values.
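
A hedged sketch of this kind of pipeline is shown below: PCA for dimensionality reduction followed by a k-nearest-neighbours predictor, evaluated by PCC. The intermediate metric-learning step (Mahalanobis Metric for Clustering) is not reproduced here; packages such as metric-learn provide implementations. The features and targets are synthetic stand-ins for the orchard data:

```python
# PCA + KNN regression with a Pearson-correlation evaluation (synthetic data).
import numpy as np
from scipy.stats import pearsonr
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(42)
X = rng.normal(size=(120, 20))                              # stand-in soil/topography features
y = X[:, :3].sum(axis=1) + rng.normal(scale=0.5, size=120)  # stand-in aflatoxin level

model = make_pipeline(PCA(n_components=5), KNeighborsRegressor(n_neighbors=5))
model.fit(X[:90], y[:90])

pcc, _ = pearsonr(y[90:], model.predict(X[90:]))
print(f"PCC on held-out data: {pcc:.2f}")
```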

Keywords: Aflatoxins, Aspergillus spp., dried figs, k-nearest neighbors, machine learning, prediction.

57 Innovation in “Low-Tech” Industries: Portuguese Footwear Industry

Authors: António Marques, Graça Guedes

Abstract:

The Portuguese footwear industry has shown a remarkable performance over the last five years in export values, the trade balance, and other economic indicators. After a long period of difficulties, with a strong reduction in companies and employees from 1994 until 2009, the Portuguese footwear industry changed its strategy and is now a success case among the international footwear players: only the Italian industry sells footwear at a higher value than the Portuguese, and the distance between them is decreasing year by year. This paper analyses how Portuguese footwear companies innovate, according to the classification proposed by the Oslo Manual; it also analyses the strategy followed in the innovation process and shows the linkage between the type of innovation and the innovation strategy. The research methodology was qualitative, the strategy for data collection was the case study, and the qualitative data were analyzed with the MAXQDA software. The economic results of the footwear companies studied show differences among them, and these differences are related to the innovation strategy adopted: companies focused on product and marketing innovation, oriented to their target market, have higher turnover-per-worker ratios than companies focused on process innovation. Nevertheless, all the footwear companies in this "low-tech" industry create value, contributing to a positive foreign trade balance of 1,310 million euros in 2013. The growth strategies implemented involve the participation of the sectoral organizations in several innovative projects, and cooperation among all of them is clearly a critical element of the performance achieved by the companies and of the innovation observed. The Portuguese footwear sector has shown excellent performance in recent years (economic results, export values, trade balance, brands and international image), and this performance is strongly related to the innovation strategy followed, the type of innovation, and the networks in the cluster. A simplified model, called the "Ace of Diamonds", is proposed by the authors; it explains how this performance was reached by the seven companies that participated in the study (two of them leaders in the sector) and considers whether the model can be used in other traditional and "low-tech" industries.

Keywords: Footwear industry, innovation strategy, low-tech industry, Oslo Manual.

56 The Use of Artificial Intelligence in Digital Forensics and Incident Response in a Constrained Environment

Authors: Dipo Dunsin, Mohamed C. Ghanem, Karim Ouazzane

Abstract:

Digital investigators often have a hard time spotting evidence in digital information, and it has become hard to determine which source of proof relates to a specific investigation. A growing concern is that the various processes, technologies, and specific procedures used in digital investigation are not keeping up with criminal developments, and criminals are taking advantage of these weaknesses to commit further crimes. In digital forensics investigations, artificial intelligence (AI) is invaluable in identifying crime. The goal of digital forensics and digital investigation is to provide objective data and conduct an assessment that will assist in developing a plausible theory that can be presented as evidence in court. This research paper aims at developing a multi-agent framework for digital investigations using specific intelligent software agents (ISAs). The agents communicate to address particular tasks jointly and keep the same objectives in mind during each task; the rules and knowledge contained within each agent depend on the investigation type. A criminal investigation is classified quickly and efficiently using the case-based reasoning (CBR) technique. The proposed framework is implemented using the Java Agent Development Framework, Eclipse, a Postgres repository, and a rule engine for agent reasoning, and it was tested using the Lone Wolf image files and datasets. Experiments were conducted using various sets of ISAs and VMs. There was a significant reduction in the time taken for the Hash Set Agent to execute. As a result of loading the agents, 5% of the time was lost, as the File Path Agent prescribed deleting 1,510 files, while the Timeline Agent found multiple executable files. In comparison, the integrity check carried out on the Lone Wolf image file using a digital forensic toolkit took approximately 48 minutes (2,880 s), whereas the MADIK framework accomplished this in 16 minutes (960 s). The framework is integrated with Python, allowing for further integration of other digital forensic tools, such as AccessData Forensic Toolkit (FTK), Wireshark, Volatility, and Scapy.

Keywords: Artificial intelligence, computer science, criminal investigation, digital forensics.

55 Circular Economy Maturity Models: A Systematic Literature Review

Authors: D. Kreutzer, S. Müller-Abdelrazeq, I. Isenhardt

Abstract:

Resource scarcity, the energy transition, and the planned climate neutrality pose enormous challenges for manufacturing companies. In order to achieve these goals and a holistically sustainable development, the European Union has addressed the circular economy in its Circular Economy Action Plan. In addition to a reduction in resource consumption, reduced greenhouse gas emissions, and a reduced volume of waste, the principles of the circular economy also offer enormous economic potential for companies, such as the generation of new circular business models. However, many manufacturing companies, especially small and medium-sized enterprises, do not have the necessary capacity to plan their transformation; they need support and strategies on the path to circular transformation, because this change affects not only production but the entire company. Maturity models offer an approach to determine the current status of companies' transformation processes, and companies can also use them to identify transformation strategies and thus promote the transformation process. While maturity models are established in other areas, e.g., IT or project management, only a few circular economy maturity models can be found in the scientific literature. The aim of this paper is to analyze the identified circular economy maturity models through a systematic literature review (SLR) and, among other aspects, to check their completeness as well as their quality. For this purpose, circular economy maturity models at the company (micro) level were identified from the literature, compared, and analyzed with regard to their theoretical and methodological structure. A specific focus was placed, on the one hand, on the analysis of the business units considered in the respective models and, on the other hand, on the underlying metrics and indicators used to determine the maturity level of the entire company. The results of the literature review show, for instance, significant differences in the number and types of indicators as well as in their metrics; for example, most models use subjective indicators and very few objective indicators in their surveys. It was also found that well-founded thresholds between the levels are rare. Based on these results, concrete ideas and proposals for a research agenda in the field of circular economy maturity models are made.

Keywords: Circular economy, maturity model, maturity assessment, systematic literature review.

54 Pilot Scale Investigation on the Removal of Pollutants from Secondary Effluent to Meet Botswana Irrigation Standards Using Roughing and Slow Sand Filters

Authors: Moatlhodi Wise Letshwenyo, Lesedi Lebogang

Abstract:

Botswana is an arid country that needs to start reusing wastewater as part of its water security plan. Pilot-scale slow sand filtration in combination with a roughing filter was investigated for the treatment of effluent from the Botswana International University of Science and Technology to meet Botswana irrigation standards. The system was operated at hydraulic loading rates of 0.04 m/hr and 0.12 m/hr. The results show that the system was able to reduce turbidity from 262 Nephelometric Turbidity Units (NTU) to between 0 and 18 NTU, below the threshold limit of 30 NTU; the overall efficacy ranged between 61% and 100%. The removal efficiencies for suspended solids, Biochemical Oxygen Demand, and Chemical Oxygen Demand averaged 42.6%, 45.5%, and 77%, respectively, all within irrigation standards. Other physico-chemical parameters were within irrigation standards except for the bicarbonate ion, which averaged 297.7±44 mg L-1 in the influent and 196.22±50 mg L-1 in the effluent (a reduction of 34.1% by the system), still above the limit of 92 mg L-1. Total coliforms, fecal coliforms, and Escherichia coli in the effluent initially averaged 1.1, 0.5, and 1.3 log counts, respectively, compared with corresponding influent log counts of 3.4, 2.7, and 4.1. As time passed, it was observed that only the roughing filter was able to reach reductions of 97.5%, 86%, and 100% for fecal coliforms, Escherichia coli, and total coliforms, respectively; these organism numbers were observed to increase in the slow sand filter effluent, suggesting multiplication in the tank. A water quality index value of 22.79 for the physico-chemical parameters suggests that the effluent is of excellent quality and can be used for irrigation purposes; however, the water quality index value for the microbial parameters (1,820) renders the quality unsuitable for irrigation. It is concluded that slow sand filtration in combination with a roughing filter is a viable option for the treatment of secondary effluent for reuse purposes, although further studies should be conducted, especially on the removal of microbial parameters.
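
The removal efficiencies quoted above follow the usual mass-balance form, efficiency = (influent − effluent) / influent; a minimal Python check against the abstract's turbidity and bicarbonate figures:

```python
# Removal efficiency from influent/effluent concentrations.
def removal_efficiency(c_in: float, c_out: float) -> float:
    return (c_in - c_out) / c_in * 100.0

print(f"turbidity (262 -> 18 NTU):     {removal_efficiency(262, 18):.1f}%")
print(f"bicarbonate (297.7 -> 196.22): {removal_efficiency(297.7, 196.22):.1f}%")  # ~34.1%
```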

Keywords: Irrigation, roughing filter, slow sand filter, turbidity, water quality index.

53 Tactile Sensory Digit Feedback for Cochlear Implant Electrode Insertion

Authors: Yusuf Bulale, Mark Prince, Geoff Tansley, Peter Brett

Abstract:

A cochlear implant, for which implantation (CI) has become a routine procedure over the last decades, is an electronic device that provides a sense of sound for patients who are severely or profoundly deaf. The optimal success of the implantation depends on the electrode technology and on deep insertion techniques. However, the manual insertion procedure may cause mechanical trauma, which can lead to severe destruction of the delicate intracochlear structure. Accordingly, future improvement of cochlear electrode insertion requires a reduction of the excessive force applied during implantation, which causes tissue damage and trauma. This study examined the tool-tissue interaction of a large-scale prototype digit, embedded with distributive tactile sensors and based upon a cochlear electrode, and a large-scale cochlea phantom simulating the human cochlea, which could inform the requirements for a small-scale digit. The digit, with distributive tactile sensors embedded in a silicon substrate, was inserted into the cochlea phantom to measure the digit/phantom interaction and the position of the digit, in order to minimize tissue damage and trauma during cochlear electrode insertion. The digit provided tactile information from the digit-phantom insertion interaction, such as contact status, tip penetration, obstacles, relative shape and location, contact orientation, and multiple contacts. The tests demonstrated that even devices of such a relatively simple, low-cost design have the potential to improve cochlear implant surgery and other lumen mapping applications by providing tactile sensory feedback and thus controlling the insertion through sensing and control of the tip of the implant during insertion. With this approach, the surgeon could minimize the tissue damage and the potential damage to the delicate structures within the cochlea caused by current manual electrode insertion. The approach can also be applied to other minimally invasive surgery applications as well as to diagnosis and path navigation procedures.

Keywords: Cochlear electrode insertion, distributive tactile sensory feedback information, flexible digit, minimally invasive surgery, tool/tissue interaction.

52 Steady State Rolling and Dynamic Response of a Tire at Low Frequency

Authors: Md Monir Hossain, Anne Staples, Kuya Takami, Tomonari Furukawa

Abstract:

Tire noise has a significant impact on ride quality and vehicle interior comfort, even at low frequency. Reduction of tire noise is especially important due to strict state and federal environmental regulations. The primary sources of tire noise are low frequency structure-borne noise and the noise that originates from the release of trapped air between the tire tread and the road surface during each revolution of the tire. The frequency response of the tire differs between low and high frequencies: at low frequency, tension and bending moment dominate, while the internal structure and local deformation dominate at higher frequencies. Here, we analyze the tire response in terms of deformation and rolling velocity at low revolution frequency. An Abaqus finite element model is used to calculate the static and dynamic response of a rolling tire under different rolling conditions. The natural frequencies and mode shapes of the deformed tire are calculated with the FEA package, and a subspace-based steady-state dynamic analysis computes the response of the tire to harmonic excitation. The analysis considered the dynamic response at the road nodes (the contact point between tire and road surface) and the side nodes of a static and a rolling tire excited by a 200 N vertical load over a frequency range of 20 to 200 Hz. The results show that frequency has little effect on tire deformation up to 80 Hz, but between 80 and 200 Hz the radial and lateral displacement components of the road and side nodes exhibit significant oscillation. For the static analysis, the fluctuation was sharp and frequent and decreased with frequency; in contrast, the fluctuation was periodic in nature for the dynamic response of the rolling tire. In addition to the dynamic analysis, a steady-state rolling analysis was performed on the tire traveling at ground velocity with constant angular motion. The purpose of this computation was to demonstrate the effect of the rotating motion on deformation and rolling velocity with respect to a fixed Newtonian reference point. The analysis showed significant variation in deformation and rolling velocity due to centrifugal and Coriolis acceleration with respect to a fixed Newtonian point on the ground.
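
For intuition, the following Python sketch shows the modal-superposition calculation that a subspace-based steady-state dynamic analysis performs at its core: assembling the harmonic response at selected nodes from natural frequencies and mode shapes. The two-mode system, damping ratio, and mode shapes below are placeholder values, not the tire model's actual modal data.

```python
import numpy as np

# Minimal sketch of steady-state harmonic response by modal superposition.
# All modal data below are invented placeholders for illustration.

def harmonic_response(freqs_hz, modes, force, zeta, excite_hz):
    """Displacement amplitude at each DOF for harmonic forcing at excite_hz.

    freqs_hz : (m,) natural frequencies; modes : (n_dof, m) mass-normalized
    mode shapes; force : (n_dof,) amplitude of the applied load vector.
    """
    w = 2 * np.pi * excite_hz
    wn = 2 * np.pi * np.asarray(freqs_hz)
    # modal coordinates: q_i = (phi_i . F) / (wn_i^2 - w^2 + 2j*zeta*wn_i*w)
    q = (modes.T @ force) / (wn**2 - w**2 + 2j * zeta * wn * w)
    return np.abs(modes @ q)

# two-mode toy model: "road" (contact) and side DOFs
freqs = [85.0, 140.0]                      # Hz, placeholder natural freqs
phi = np.array([[1.0, 0.6], [0.4, -0.9]])  # placeholder mode shapes
load = np.array([200.0, 0.0])              # 200 N vertical load at road node

for f in (20, 80, 140, 200):
    print(f, harmonic_response(freqs, phi, load, zeta=0.05, excite_hz=f))
```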

Keywords: Natural frequency, rotational motion, steady state rolling, subspace-based steady state dynamic analysis.

51 TheAnalyzer: Clustering-Based System for Improving Business Productivity by Analyzing User Profiles to Enhance Human-Computer Interaction

Authors: D. S. A. Nanayakkara, K. J. P. G. Perera

Abstract:

E-commerce platforms have revolutionized the shopping experience, offering convenient ways for consumers to make purchases. To improve interactions with customers and optimize marketing strategies, it is essential for businesses to understand user behavior, preferences, and needs on these platforms. This paper focuses on recommending that businesses customize interactions with users based on their behavioral patterns, leveraging data-driven analysis and machine learning techniques. Businesses can improve engagement and boost the adoption of e-commerce platforms by aligning behavioral patterns with user goals of usability and satisfaction. We propose TheAnalyzer, a clustering-based system designed to enhance business productivity by analyzing user profiles and improving human-computer interaction. TheAnalyzer seamlessly integrates with business applications, collecting relevant data points from users' natural interactions without imposing additional burdens such as questionnaires or surveys. It defines five key user analytics as features for its dataset, each easily captured through users' interactions with e-commerce platforms. This research presents a study demonstrating the successful distinction of users into specific groups based on the five key analytics considered by TheAnalyzer. With the assistance of domain experts, customized business rules can be attached to each group, enabling TheAnalyzer to influence business applications and provide an enhanced, personalized user experience. The outcomes are evaluated quantitatively and qualitatively, demonstrating that TheAnalyzer's capabilities can optimize business outcomes, enhance customer satisfaction, and drive sustainable growth. The findings contribute to the advancement of personalized interactions in e-commerce platforms: by leveraging user behavioral patterns and analyzing both new and existing users, businesses can effectively tailor their interactions to improve customer satisfaction and loyalty, and ultimately drive sales.
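
The keywords suggest a pipeline of standardization, dimensionality reduction, and clustering; a minimal sketch of that generic pipeline is given below in Python using scikit-learn. The synthetic features, component count, and number of clusters are assumptions for illustration, since the abstract does not specify TheAnalyzer's exact algorithms.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Hedged sketch of a standardize -> reduce -> cluster pipeline of the kind
# the keywords describe. Feature semantics and cluster count are assumed.

rng = np.random.default_rng(0)
# rows = users; columns = five illustrative analytics, e.g. session length,
# pages per visit, cart adds, purchase frequency, return-visit rate
X = rng.gamma(shape=2.0, scale=1.0, size=(500, 5))

X_std = StandardScaler().fit_transform(X)         # zero mean, unit variance
X_red = PCA(n_components=2).fit_transform(X_std)  # reduce to 2 dimensions
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_red)

# each cluster could then be bound to a domain-expert business rule
for k in range(4):
    print(f"group {k}: {np.sum(labels == k)} users")
```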

Keywords: Data clustering, data standardization, dimensionality reduction, human-computer interaction, user profiling.

50 Development of an Automatic Calibration Framework for Hydrologic Modelling Using Approximate Bayesian Computation

Authors: A. Chowdhury, P. Egodawatta, J. M. McGree, A. Goonetilleke

Abstract:

Hydrologic models are increasingly used as tools to predict stormwater quantity and quality from urban catchments. However, due to a range of practical issues, most models produce gross errors in simulating complex hydraulic and hydrologic systems, and difficulty in finding a robust approach to model calibration is one of the main issues. Though automatic calibration techniques are available, they are rarely used in common commercial hydraulic and hydrologic modelling software, e.g., MIKE URBAN, partly because of the large number of parameters and the large datasets required in the calibration process. To overcome this practical issue, a framework for automatic calibration of a hydrologic model was developed on the R platform and is presented in this paper. The model was developed based on the time-area conceptualization. Four calibration parameters, namely initial loss, reduction factor, time of concentration, and time-lag, were considered as the primary set of parameters. Using these parameters, automatic calibration was performed using Approximate Bayesian Computation (ABC). ABC is a simulation-based technique for performing Bayesian inference when the likelihood is intractable or computationally expensive to compute. To test its performance and usefulness, the technique was used to simulate three small catchments on the Gold Coast; for comparison, simulation outcomes for the same three catchments from the commercial modelling software MIKE URBAN were used. The graphical comparison shows that the MIKE URBAN results lie within the upper and lower 95% credible intervals of the posterior predictions obtained via ABC. Statistical validation of the posterior runoff predictions using the coefficient of determination (CD), root mean square error (RMSE), and maximum error (ME) gave reasonable results for the three study catchments. The main benefit of using ABC over MIKE URBAN is that ABC provides a posterior distribution for the runoff flow prediction, so the associated uncertainty in predictions can be quantified, whereas MIKE URBAN provides only a point estimate. Based on the results of the analysis, the developed ABC framework appears to perform well for automatic calibration.
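
To illustrate the inference step, a minimal ABC rejection sampler is sketched below in Python, calibrating a single parameter of a toy runoff surrogate. The prior, discrepancy metric, and tolerance are illustrative choices; the study's actual model is the time-area conceptualization with four parameters.

```python
import numpy as np

# Minimal ABC rejection sketch for calibrating one hydrologic parameter.
# The "model" is a toy stand-in for a runoff simulation; prior, metric,
# and tolerance are illustrative assumptions.

rng = np.random.default_rng(42)

def simulate_runoff(reduction_factor, rainfall):
    """Toy surrogate: runoff = reduction_factor * rainfall, plus noise."""
    return reduction_factor * rainfall + rng.normal(0, 0.05, rainfall.size)

rainfall = np.linspace(0, 10, 50)
observed = simulate_runoff(0.6, rainfall)  # pseudo-observations

def abc_rejection(n_draws=20000, tol=0.5):
    posterior = []
    for _ in range(n_draws):
        theta = rng.uniform(0.0, 1.0)      # uniform prior on the parameter
        sim = simulate_runoff(theta, rainfall)
        rmse = np.sqrt(np.mean((sim - observed) ** 2))
        if rmse < tol:                     # keep draws close to the data
            posterior.append(theta)
    return np.array(posterior)

post = abc_rejection()
lo, hi = np.percentile(post, [2.5, 97.5])
print(f"posterior mean {post.mean():.3f}, "
      f"95% credible interval [{lo:.3f}, {hi:.3f}]")
```

Unlike a single calibrated point estimate, the accepted draws form a posterior distribution, which is exactly the uncertainty quantification advantage the abstract attributes to ABC.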

Keywords: Automatic calibration framework, approximate Bayesian computation, hydrologic and hydraulic modelling, MIKE URBAN software, R platform.

49 Exploring the Role of Hydrogen to Achieve the Italian Decarbonization Targets Using an Open-Source Energy System Optimization Model

Authors: A. Balbo, G. Colucci, M. Nicoli, L. Savoldi

Abstract:

Hydrogen is expected to become an undisputed player in the ecological transition throughout the next decades. The decarbonization potential offered by this energy vector provides various opportunities for the so-called "hard-to-abate" sectors, including the industrial production of iron and steel, glass, refineries, and heavy-duty transport. In this regard, Italy, within the framework of the decarbonization plans for the whole European Union, has been considering a wider use of hydrogen as an alternative to fossil fuels in hard-to-abate sectors. This work aims to assess and compare different options for the pathway to be followed in the development of the future Italian energy system, in order to meet the decarbonization targets established by the Paris Agreement and the European Green Deal, and to derive a techno-economic analysis of the required asset alternatives. To accomplish this objective, the energy system optimization model TEMOA-Italy is used, based on the open-source platform TEMOA and developed at PoliTo as a tool for technology assessment and energy scenario analysis. The adopted assessment strategy includes two different scenarios compared against a business-as-usual one, which assumes the application of current policies over a time horizon up to 2050. The studied scenarios are based on the up-to-date hydrogen-related targets and planned investments included in the National Hydrogen Strategy and in the Italian National Recovery and Resilience Plan, with the purpose of providing a critical assessment of what they propose. One scenario imposes decarbonization objectives for the years 2030, 2040, and 2050 without any other specific target; the second (inspired by the national objectives for the development of the sector) also promotes the deployment of the hydrogen value chain. These scenarios provide feedback on the applications hydrogen could have in the Italian energy system, including transport, industry, and synfuel production. Notably, even the decarbonization scenario in which hydrogen deployment is not imposed makes use of this energy vector, showing that its exploitation is necessary to meet the pledged targets by 2050. The distance of the planned policies from the optimal conditions for achieving the Italian objectives is clarified, revealing possible improvements at various steps of the decarbonization pathway, for which Carbon Capture and Utilization technologies appear to be a fundamental element. In line with the European Commission's open science guidelines, the transparency and robustness of the presented results are ensured by the adoption of an open-source, open-data model such as TEMOA-Italy.
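
For readers unfamiliar with energy system optimization models, the toy Python sketch below (using scipy) shows the kind of least-cost choice such a model resolves: meeting a fixed energy demand under a CO2 cap. The technologies, costs, emission factors, and targets are invented numbers, not TEMOA-Italy data.

```python
from scipy.optimize import linprog

# Toy linear program in the spirit of an energy system optimization model:
# minimize supply cost subject to demand and an emissions cap. All numbers
# below are placeholders chosen only to make the trade-off visible.

# decision variables: energy (PJ) from [gas, green hydrogen, blue hydrogen]
cost = [5.0, 12.0, 8.0]          # cost per PJ-equivalent, placeholder
emis = [56.0, 0.0, 10.0]         # ktCO2 per PJ, placeholder
demand, co2_cap = 100.0, 1500.0  # PJ demand and ktCO2 cap, placeholder

res = linprog(
    c=cost,
    A_ub=[emis], b_ub=[co2_cap],            # total emissions below the cap
    A_eq=[[1.0, 1.0, 1.0]], b_eq=[demand],  # supply meets demand exactly
    bounds=[(0, None)] * 3,
)
print(res.x)  # optimal mix; tightening co2_cap shifts supply to hydrogen
```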

Keywords: Decarbonization, energy system optimization models, hydrogen, open-source modeling, TEMOA.

48 The Impact of Financial System on Mixed Use Development – Unrest in UK and Sense of Safety in Mixed Use Development

Authors: Tamara Kelly

Abstract:

The past decade has witnessed good opportunities for city development schemes in the UK. The government encouraged the restoration of city centres to comprise mixed-use developments with high-density residential apartments. Investments in regeneration areas performed well according to analyses by the Investment Property Databank (IPD); however, more recent IPD analysis has shown that, since 2007, property in regeneration areas has been more vulnerable to the market downturn than other types of investment property. The early stages of a property market downturn may be felt most in regeneration, where funding, investor confidence, and occupier demand dissipate because the sector is considered more marginal or risky as development costs rise. Moreover, the Bank of England survey shows that lenders have sequentially tightened the availability of credit for commercial real estate since mid-2007, with a sharp reduction recorded in the willingness of banks to lend on commercial property. The credit crunch has already affected commercial property, but its impact has been particularly severe for properties where residential development is extremely difficult, in particular city centre apartments and the buy-to-let market. Commercial property (retail, industrial, leisure, and mixed use) was also under pressure; in Birmingham, tens of mixed-use plots were built to replace old factories in the heart of the city. The purpose of these developments was to enable young professionals to work and live in the same place. Thousands of people lost their jobs during the recession; moreover, lending became more difficult, and the future of many developments is unknown. The recession cast its shadow over society: cuts in government public spending, inflation, rising tuition fees, and a steep rise in unemployment generated anger that spread among youth, causing vandalism and riots in many cities. Recent riots targeted many mixed-use developments in the UK, where banks, shops, restaurants, and large stores were looted and set on fire, leaving residents in horror and shock. This paper examines the impact of the recession and the riots on mixed-use development in the UK.

Keywords: Diversity, mixed use development, outdoor comfort, public realm, safe places, safety by design.

47 Effect of Soil Tillage System upon the Soil Properties, Weed Control, Quality and Quantity Yield in Some Arable Crops

Authors: T. Rusu, P. I. Moraru, I. Bogdan, A. I. Pop, M. L. Sopterean

Abstract:

The paper presents the influence of conventional ploughing tillage technology in comparison with minimum tillage on soil properties, weed control, and yield for maize (Zea mays L.), soybean (Glycine hispida L.), and winter wheat (Triticum aestivum L.) in a three-year crop rotation. The research was conducted at the University of Agricultural Sciences and Veterinary Medicine Cluj-Napoca, Romania. The use of minimum soil tillage systems within a three-year rotation of maize, soybean, and wheat favored an increase in aggregate hydro-stability of 5.6-7.5% at 0-20 cm depth and 5-11% at 20-30 cm depth. The minimum soil tillage systems (paraplow, chisel, or rotary harrow) are versatile alternatives for basic soil preparation, germination bed preparation, and sowing on fields and for crops with moderate loosening requirements, offering optimized technologies for activating and rationalizing the soil's natural fertility, reducing erosion, increasing water accumulation capacity, and sowing within the optimal period. The soil tillage system influences the productivity elements of the cultivated species and, ultimately, the obtained yields: relative to the conventional system, the yields recorded under minimum tillage represented 89-97% for maize, 103-112% for soybean, and 93-99% for winter wheat. The results showed that yield reflects the influence of the soil tillage system on soil properties, the assurance of plant density, and weed control. Among the minimum tillage systems considered as options for replacing classic ploughing in winter wheat, the best quality indices were obtained with the paraplow, followed by the rotary harrow and the chisel. The paraplow variant gave quality indices close to those of the ploughed variant, with even higher protein and gluten contents: for the Ariesan variety, the highest protein content (12.50%) and gluten content (28.6%) were obtained with the paraplow variant.

Keywords: Minimum tillage, soil properties, yields quality.

46 Developing Improvements to Multi-Hazard Risk Assessments

Authors: A. Fathianpour, M. B. Jelodar, S. Wilkinson

Abstract:

This paper outlines approaches to multi-hazard risk assessment. There is currently confusion about how to assess multi-hazard impacts, so this study aims to determine which of the available options are the most useful. The paper uses an international literature search, an analysis of current multi-hazard assessments, and a case study to illustrate the effectiveness of the chosen method. Findings from this study will help those wanting to assess multi-hazards to adopt a straightforward approach, and the paper is significant in that it helps to interpret the various approaches and concludes with the preferred method. Many people in the world live in hazardous environments and are susceptible to disasters. Unfortunately, when a disaster strikes it is often compounded by additional cascading hazards, so that people confront more than one hazard simultaneously. Hazards include natural hazards (earthquakes, floods, etc.) and cascading human-made hazards, for example Natural Hazard Triggering Technological disasters (Natech) such as fire, explosion, and toxic release. Multi-hazards have a more destructive impact on urban areas than any one hazard alone. In addition, climate change is creating links between different disasters, for instance by causing landslide dams and debris flows that lead to more destructive incidents. Much of the prevailing literature deals with only one hazard at a time; however, sophisticated multi-hazard assessments have recently started to appear. Given that multi-hazards occur, it is essential to take multi-hazard risk assessment into consideration. This paper reviews the multi-hazard assessment methods published to date and categorizes the strengths and disadvantages of using these methods in risk assessment. Napier City is selected as a case study to demonstrate the necessity of using multi-hazard risk assessments. To assess the methods, the current multi-hazard risk assessment approaches were first described; next, their drawbacks were outlined; finally, the improvements made to current multi-hazard risk assessments to date were summarised. Generally, the main problem of multi-hazard risk assessment is making valid assumptions about the risk arising from the interactions of different hazards. Risk assessment studies have started to address multi-hazard situations, but drawbacks such as uncertainty and lack of data show the need for more precise risk assessment. It should be noted that ignoring, or only partially considering, multi-hazards in risk assessment will lead to risks being overestimated or overlooked in resilience and recovery action management.

Keywords: Cascading hazards, multi-hazard, risk assessment, risk reduction.

45 Financial Regulations in the Process of Global Financial Crisis and Macroeconomics Impact of Basel III

Authors: M. Okan Tasar

Abstract:

Basel III (or the Third Basel Accord) is a global regulatory standard on bank capital adequacy, stress testing, and market liquidity risk, agreed upon by the members of the Basel Committee on Banking Supervision in 2010-2011 and scheduled to be introduced from 2013 until 2018. Basel III is a comprehensive set of reform measures. These measures aim to: (1) improve the banking sector's ability to absorb shocks arising from financial and economic stress, whatever the source; (2) improve risk management and governance; and (3) strengthen banks' transparency and disclosures. Similarly, the reforms target: (1) bank-level, or micro-prudential, regulation, which will help raise the resilience of individual banking institutions to periods of stress; and (2) macro-prudential regulation, addressing system-wide risks that can build up across the banking sector as well as the pro-cyclical amplification of these risks over time. These two approaches to supervision are complementary, as greater resilience at the individual bank level reduces the risk of system-wide shocks. Regarding the macroeconomic impact of Basel III, the OECD estimates that the medium-term impact of Basel III implementation on GDP growth is in the range of -0.05 percent to -0.15 percent per year. Economic output is mainly affected by an increase in bank lending spreads, as banks pass a rise in bank funding costs, due to higher capital requirements, on to their customers. The estimated effects on GDP growth assume no active response from monetary policy; the impact of Basel III on economic output could be offset by a reduction (or delayed increase) in monetary policy rates of about 30 to 80 basis points. The aim of this paper is to create a framework, based on the recent regulations, for preventing financial crises; the lessons drawn from overcoming the global financial crisis can thus help address financial crises that may occur in future periods. The first part of the paper examines the effects of the global crisis on the banking system and the concept of financial regulation. The second part analyzes financial regulations in detail, particularly Basel III. The last section explores the possible macroeconomic impacts of Basel III.

Keywords: Banking Systems, Basel III, Financial regulation, Global Financial Crisis.
