Search results for: ultimate load reduction
81 Modal Approach for Decoupling Damage Cost Dependencies in Building Stories
Authors: Haj Najafi Leila, Tehranizadeh Mohsen
Abstract:
Dependencies between the diverse factors involved in probabilistic seismic loss evaluation are recognized as an imperative issue in acquiring accurate loss estimates. Dependencies among component damage costs can be taken into account by treating component damage states as either independent or perfectly dependent; however, to the best of our knowledge, no procedure is available for taking loss dependencies into account at the story level. This paper presents a method called the "modal cost superposition method" for decoupling story damage costs under earthquake ground motions. The method works with closed-form differential equations between damage cost and engineering demand parameters, which are solved as a coupled system of all stories' cost equations by means of the introduced "substituted matrices of mass and stiffness". Costs are treated as probabilistic variables with definite statistics (median and standard deviation) and a presumed probability distribution. To demonstrate the proposed procedure and the straightforwardness of its application, a benchmark study was conducted. For the building as a whole, the damage costs estimated by the proposed modal approach show acceptable compatibility with those of the frequently used stochastic approach; at the story level, however, a single modification factor proved insufficient for incorporating occurrence-probability dependencies between stories, because the amount of dependency between the damage costs of different stories varies. The closer agreement of loss results in the higher stories than in the lower ones also points to a larger dependency contribution to the occurrence probability of loss, while reducing the number of cost modes included still provides an acceptable level of accuracy and avoids the time-consuming calculations of retaining many modes.
Keywords: Dependency, story-cost, cost modes, engineering demand parameter.
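As a toy illustration of the modal superposition idea the abstract builds on, the following Python sketch solves a generalized eigenproblem for an assumed mass/stiffness pair and reconstructs a story-level vector from a truncated set of modes. The matrices and the demand vector are invented for illustration, not the paper's "substituted" matrices.

```python
import numpy as np
from scipy.linalg import eigh

# Illustrative 4-story lumped mass matrix and shear-building stiffness matrix
# (assumed values for this sketch only).
M = np.eye(4)
k = 100.0
K = k * np.array([[ 2., -1.,  0.,  0.],
                  [-1.,  2., -1.,  0.],
                  [ 0., -1.,  2., -1.],
                  [ 0.,  0., -1.,  1.]])

# Generalized eigenproblem K @ phi = lam * M @ phi; scipy returns the mode
# shapes M-orthonormalized, so phi.T @ M @ phi = I.
lam, phi = eigh(K, M)

# Expand an assumed story-level vector in the modal basis, then reconstruct it
# from a truncated number of modes: the superposition idea applied to cost.
demand = np.array([0.8, 1.0, 1.3, 1.7])
coeffs = phi.T @ M @ demand
for n_modes in (1, 2, 4):
    approx = phi[:, :n_modes] @ coeffs[:n_modes]
    err = np.linalg.norm(approx - demand) / np.linalg.norm(demand)
    print(f"{n_modes} modes -> relative reconstruction error {err:.3f}")
```

Truncating the modal set mirrors the abstract's observation that a limited number of cost modes can retain acceptable accuracy while avoiding lengthy calculations.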
80 Application of Micro-Tunneling Technique to Rectify Tilted Structures Constructed on Cohesive Soil
Authors: Yasser R. Tawfic, Mohamed A. Eid
Abstract:
Foundation differential settlement and the resulting tilt of the supported structure are an occasionally encountered engineering problem, caused by overloading, changes in the ground soil properties, or unsupported nearby excavations. Engineering thinking points directly toward the logical solution of uplifting the settled side. This can be achieved with deep foundation elements such as micro-piles and macro-piles, jacked piers and helical piers, jet-grouted mortar columns, compaction grout columns, cement or chemical grouting, or traditional pit underpinning with concrete and mortar. Although some of these techniques offer economical, fast and low-noise solutions, many of them are quite the contrary. For tilted structures with limited inclination, it may be much easier to induce a balancing settlement on the less-settled side, which must be done carefully and at a proper rate. This principle was applied in the stabilization of the Leaning Tower of Pisa by soil extraction from the ground surface. In this research, the authors introduce a new solution with a different point of view: the micro-tunneling technique is presented here as an intentional cause of ground deformation. In general, micro-tunneling is expected to induce only limited ground deformations; the researchers therefore propose applying the technique to form small unsupported holes in the ground so as to produce the target deformations. This is done in four phases: 1. One or more micro-tunnels, depending on the existing differential settlement value, are driven under the raised side of the tilted structure. 2. For each individual tunnel, the lining is pulled out from both sides (the jacking and receiving shafts) at a slow rate. 3. If required, according to calculations and site records, an additional surface load can be applied on the raised foundation side. 4. Finally, strengthening soil grouting is applied for stabilization after the adjustment. A finite element based numerical model is presented to simulate the proposed construction phases for different tunneling positions and tunnel groups. For each case, the surface settlements are calculated and the induced plasticity points are checked. These results show the impact of the suggested procedure on the tilted structure and its feasibility; comparison of the results also shows the importance of position selection and the gradual effect of tunnel groups. Thus, a new engineering solution is presented to one of the structural and geotechnical engineering challenges.
Keywords: Differential settlement, micro-tunnel, soil-structure interaction, tilted structures.
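The paper computes the induced surface settlement with an FEM model; as a first-order hand check, the classical Gaussian (Peck) settlement trough is often used for tunnels. A minimal sketch, with every parameter value an assumption for illustration:

```python
import numpy as np

# Classical Gaussian settlement trough (Peck) as a rough estimate of the
# surface settlement a micro-tunnel induces. All values below are assumed
# for illustration, not taken from the paper's FEM study.
D  = 0.6      # tunnel diameter, m (assumed)
z0 = 6.0      # tunnel axis depth, m (assumed)
VL = 0.02     # volume loss ratio (assumed; deliberately large here, since
              # the method *wants* a controlled settlement)
Kt = 0.5      # trough width parameter, typical for cohesive soil

i_w   = Kt * z0                                               # trough width, m
s_max = VL * (np.pi * D**2 / 4) / (i_w * np.sqrt(2 * np.pi))  # max settlement, m

x = np.linspace(-15, 15, 7)                  # offsets from the tunnel axis, m
s = s_max * np.exp(-x**2 / (2 * i_w**2))     # settlement profile
for xi, si in zip(x, s):
    print(f"x = {xi:6.1f} m  ->  settlement = {si * 1000:6.2f} mm")
```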
79 Matrix Based Synthesis of EXOR dominated Combinational Logic for Low Power
Authors: Padmanabhan Balasubramanian, C. Hari Narayanan
Abstract:
This paper discusses a new, systematic approach to the synthesis of an NP-hard class of non-regenerative Boolean networks, described by FON[FOFF]={mi}[{Mi}], where for every mj[Mj]∈{mi}[{Mi}] there exists another mk[Mk]∈{mi}[{Mi}] such that their Hamming distance HD(mj, mk)=HD(Mj, Mk)=O(n), where 'n' represents the number of distinct primary inputs. The method automatically ensures exact minimization for certain important self-dual functions with 2^(n-1) points in their one-set. The elements meant for grouping are determined from a newly proposed weighted incidence matrix. The binary value corresponding to each candidate pair is then correlated with the proposed binary value matrix to enable direct synthesis. We recommend algebraic factorization operations as a post-processing step to reduce the literal count. The algorithm can be implemented in any high-level language and achieves the best cost optimization for the problem dealt with, irrespective of the number of inputs. For other cases, the method is iterated to reduce the problem to one of O(n-1), O(n-2), ..., which is then solved. In addition, it leads to optimal results for problems exhibiting a higher degree of adjacency, with a different interpretation of the heuristic, and the results are comparable with other methods. In terms of literal cost at the technology-independent stage, the circuits synthesized using our algorithm enabled net savings over AOI (AND-OR-Invert) logic, AND-EXOR logic (EXOR Sum-of-Products or ESOP forms) and AND-OR-EXOR logic by 45.57%, 41.78% and 41.78% respectively for the various problems. Circuit-level simulations were performed for a wide variety of case studies at 3.3 V and 2.5 V supply to validate the performance of the proposed method and the quality of the resulting synthesized circuits at two different voltage corners. Power estimation was carried out for a 0.35-micron TSMC CMOS process technology. In comparison with AOI logic, the proposed method enabled mean power savings of 42.46%. With respect to AND-EXOR logic, the proposed method yielded power savings of 31.88%, while in comparison with AND-OR-EXOR level networks, average power savings of 33.23% were obtained.
Keywords: AOI logic, ESOP, AND-OR-EXOR, incidence matrix, Hamming distance.
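The pairing condition HD(mj, mk) = n (minterms that differ in every variable) is easy to test programmatically. A minimal sketch in Python, with an invented one-set standing in for an actual function specification:

```python
from itertools import combinations

def hamming(a: int, b: int, n: int) -> int:
    """Hamming distance between two n-variable minterms given as integers."""
    return bin((a ^ b) & ((1 << n) - 1)).count("1")

# Illustrative one-set (assumed, not from the paper): find minterm pairs whose
# mutual Hamming distance equals n, the condition the abstract describes.
n = 3
one_set = [0b000, 0b111, 0b001, 0b110]

pairs = [(mj, mk) for mj, mk in combinations(one_set, 2)
         if hamming(mj, mk, n) == n]
for mj, mk in pairs:
    # Such complementary pairs are exactly the EXOR-friendly groupings the
    # proposed weighted incidence matrix is used to discover.
    print(f"m{mj} & m{mk}: HD = {n} -> candidate EXOR grouping")
```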
78 Dependence of Densification, Hardness and Wear Behaviors of Ti6Al4V Powders on Sintering Temperature
Authors: Adewale O. Adegbenjo, Elsie Nsiah-Baafi, Mxolisi B. Shongwe, Mercy Ramakokovhu, Peter A. Olubambi
Abstract:
The sintering step in powder metallurgy (P/M) processes is very sensitive, as it determines to a large extent the properties of the final component produced. Spark plasma sintering has been used extensively over the past decade for consolidating a wide range of materials, including metallic alloy powders. This novel, non-conventional sintering method has proven advantageous, offering full densification of materials, high heating rates, low sintering temperatures, and short sintering cycles compared with conventional sintering methods. Ti6Al4V is regarded as the most widely used α+β alloy due to its impressive mechanical performance in service environments, especially in the aerospace and automobile industries, being a light metal alloy with the capacity for the fuel efficiency needed in these industries. The P/M route has been a promising method for fabricating parts made of Ti6Al4V alloy due to its reductions in cost and material loss and its ability to produce near-net and intricate shapes. However, the use of this alloy has been largely limited by its relatively poor hardness and wear properties. The effect of sintering temperature on the densification, hardness, and wear behavior of spark plasma sintered Ti6Al4V powders was investigated in the present study. Sintering of the alloy powders was performed over the 650-850 °C temperature range at a constant heating rate, applied pressure and holding time of 100 °C/min, 50 MPa and 5 min, respectively. Density measurements were carried out according to Archimedes' principle, and microhardness tests were performed on sectioned, as-polished surfaces at a load of 100 gf and a dwell time of 15 s. Dry sliding wear tests were performed at sliding loads of 5, 15, 25 and 35 N using a ball-on-disc tribometer with WC as the counterface material. Microstructural characterization of the sintered samples and wear tracks was carried out using SEM and EDX techniques. The density and hardness of the sintered samples increased with increasing sintering temperature; near-full densification (99.6% of the theoretical density) and a Vickers micro-indentation hardness of 360 HV were attained at 850 °C. The coefficient of friction (COF) and wear depth improved significantly with increased sintering temperature under all the loading conditions examined except 25 N, indicating better mechanical properties at high sintering temperatures. Worn surface analyses showed that the wear mechanism was a synergy of adhesive and abrasive wear, with the former prevalent.
Keywords: Hardness, powder metallurgy, Spark plasma sintering, wear.
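A minimal sketch of the Archimedes density measurement and the relative-density figure quoted above. The sample masses are invented for illustration; only the water density and the commonly quoted theoretical density of Ti6Al4V are standard handbook values:

```python
# Archimedes' principle as used for sintered-density measurement.
rho_water  = 0.9975   # g/cm^3 at ~23 degC (assumed lab temperature)
rho_theory = 4.43     # g/cm^3, commonly quoted theoretical density of Ti6Al4V

m_air   = 4.4210      # g, sample mass in air (assumed)
m_water = 3.4210      # g, apparent mass suspended in water (assumed)

# Buoyancy gives the sample volume: V = (m_air - m_water) / rho_water.
rho_sample  = m_air / (m_air - m_water) * rho_water
rel_density = rho_sample / rho_theory * 100
print(f"bulk density     = {rho_sample:.3f} g/cm^3")
print(f"relative density = {rel_density:.1f} % of theoretical")
```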
77 Snails and Fish as Pollution Biomarkers in Lake Manzala and Laboratory C: Laboratory Exposed Snails to Chemical Mixtures
Authors: Hanaa M. M. El-Khayat, Hoda Abdel-Hamid, Kadria M. A. Mahmoud, Hanan S. Gaber, Hoda M. A. Abu Taleb, Hassan E. Flefel
Abstract:
Snails are considered suitable diagnostic organisms for heavy metal-contaminated sites. Biomphalaria alexandrina snails are used in this work as pollution bioindicators after exposure to chemical mixtures consisting of heavy metals (HM): zinc (Zn), copper (Cu) and lead (Pb); and persistent organic pollutants: decabromodiphenyl ether 98% (D) and Aroclor 1254 (A). The impacts of the tested chemicals, individually and in mixtures, on liver and kidney functions, antioxidant enzymes, complete blood picture, and tissue histology were studied. Results showed that Cu was the most toxic to the snails, followed by Zn and Pb, with LC50 values of 1.362, 213.198 and 277.396 ppm, respectively. Also, B. alexandrina snails exposed to the HM mixture (¼ LC5 of Cu, Pb and Zn) showed the highest bioaccumulation of Cu and Zn in their whole tissue, the most significant increase in AST, ALT and ALP activities, and the most significant increases in total protein, albumin and globulin levels. Results showed significant alterations in CAT activity in snail tissue extracts, while snails exposed to most experimental treatments showed a significant increase in GST activity. Snails exposed to HM mixtures showed a significant decrease in total hemocyte count, while snails exposed to mixtures containing A and D showed a significant increase in total hemocytes and hyalinocytes. Histopathological examination of snails exposed to individual HM and their mixtures for 4 weeks showed degeneration, edema, hypertrophy and vacuolation in the head-foot muscle, degeneration and necrotic changes in the digestive gland, and accumulation in most tested organs. Also, the hermaphrodite gland showed mature ova with irregular shape and a reduction in sperm number. In conclusion, the resulting damage and alterations in the studied parameters of B. alexandrina can be used as bioindicators of the presence of pollutants in its habitats.
Keywords: Biomphalaria, Zn, Cu, Pb, AST, ALT, ALP, total protein, albumin, globulin, CAT, histopathology.
76 Environmental Impact of Sustainability Dispersion of Chlorine Releases in Coastal Zone of Alexandria: Spatial-Ecological Modeling
Authors: Mohammed El Raey, Moustafa Osman Mohammed
Abstract:
Spatial-ecological modeling relates sustainable dispersion to social development. Sustainability within a spatial-ecological model gives attention to urban environments in design review management, so as to comply with the Earth system. Naturally exchanged patterns of ecosystems have consistent, periodic cycles that preserve energy and material flows in the Earth system. The Probabilistic Risk Assessment (PRA) technique is utilized to assess the safety of an industrial complex; the other analytical approach is Failure Mode and Effect Analysis (FMEA) for critical components. The plant safety parameters are identified for engineering topology, as employed in the safety assessment of industrial ecology. In particular, the most severe accidental release of hazardous gas is postulated, analyzed and assessed for the industrial region. The IAEA safety assessment procedure is used to account for the duration and rate of the liquid chlorine discharge. The ecological model of plume dispersion width and chlorine gas concentration in the downwind direction is determined using the Gaussian plume model in urban and rural areas and presented with SURFER®. The predicted accident consequences are traced as risk contour concentration lines, and the local greenhouse effect is predicted with relevant conclusions. The spatial-ecological model is extended to multiple-factor distribution schemes for multi-criteria analysis. The input-output analysis is explored through the spillover effect, and Monte Carlo simulations are conducted for sensitivity analysis. The unique structure is balanced within "equilibrium patterns", such as a composite index for the biosphere with a collective structure of many distributed feedback flows; these dynamic structures have their own physical and chemical properties and enable a gradual, prolonged incremental pattern. While this spatial model structure argues from ecology, resource savings, static load design, financial and other pragmatic reasons, the outcomes are not decisive from an artistic/architectural perspective. The hypothesis is deployed to unify analytic and analogical spatial structure in developing urban environments using optimization loads, as an example of an integrated industrial structure in which the process is based on the engineering topology of systems ecology.
Keywords: Spatial-ecological modeling, spatial structure orientation impact, composite structure, industrial ecology.
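A minimal sketch of the ground-level, centreline Gaussian plume calculation the abstract refers to, using the standard reflected-source formula and Briggs rural class-D dispersion coefficients. The chlorine release rate, wind speed and release height are assumptions for illustration, not the study's source term:

```python
import numpy as np

# Ground-level, centreline concentration with ground reflection:
#   C(x) = Q / (pi * u * sigma_y * sigma_z) * exp(-H^2 / (2 * sigma_z^2))
Q = 2.0   # kg/s, continuous chlorine release rate (assumed)
u = 3.0   # m/s, wind speed (assumed)
H = 5.0   # m, effective release height (assumed)

def briggs_rural_D(x):
    """Briggs dispersion coefficients, rural terrain, stability class D."""
    sigma_y = 0.08 * x / np.sqrt(1 + 0.0001 * x)
    sigma_z = 0.06 * x / np.sqrt(1 + 0.0015 * x)
    return sigma_y, sigma_z

for x in (100.0, 500.0, 1000.0, 5000.0):      # downwind distances, m
    sy, sz = briggs_rural_D(x)
    c = Q / (np.pi * u * sy * sz) * np.exp(-H**2 / (2 * sz**2))
    print(f"x = {x:6.0f} m -> C = {c * 1e6:12.1f} mg/m^3")
```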
75 Alleviation of Adverse Effects of Salt Stress on Soybean (Glycine max. L.) by Using Osmoprotectants and Organic Nutrients
Authors: Ayman El Sabagh, Sobhy Sorour, Abd Elhamid Omar, Adel Ragab, Mohammad Sohidul Islam, Celaleddin Barutçular, Akihiro Ueda, Hirofumi Saneoka
Abstract:
Salinity is one of the major factors limiting crop production in arid environments. Despite its global importance, soybean production suffers from salinity stress, which damages plant development, so it is imperative to search for ways of enhancing the salinity tolerance of soybean plants. Therefore, the current study tries to clarify the mechanisms that might be involved in the ameliorating effects of osmoprotectants, such as proline and glycine betaine, as well as compost application, on soybean plants grown under salinity stress. The experiment was conducted under greenhouse conditions at the Graduate School of Biosphere Science laboratory of Hiroshima University, Japan, in 2011. The experiment was designed as a split-split plot based on a randomized complete block design with four replications. The treatments were: (i) salinity concentrations (0 and 15 mM), (ii) compost treatments (0 and 24 t ha-1), and (iii) exogenous proline and glycine betaine concentrations (0 mM and 25 mM each). Results indicated that salinity stress reduced growth and physiological traits (dry weight per plant, chlorophyll content, N and K+ content) of soybean plants compared with unstressed plants. On the other hand, salinity stress increased the electrolyte leakage ratio and the Na and proline contents. Improved tolerance against salt stress was observed: the improvement in salt tolerance resulting from proline, glycine betaine and compost was accompanied by improved K+ and proline accumulation, and by a significantly decreased electrolyte leakage ratio and Na+ content. These results clearly demonstrate that the harmful effects of salinity on soybean growth can be reduced. Consequently, exogenous osmoprotectants combined with compost can effectively address seasonal salinity stress and are a good strategy to increase the salinity resistance of soybean in drylands.
Keywords: Compost, glycine betaine, growth, proline, salinity tolerance, soybean.
74 Enzyme Involvement in the Biosynthesis of Selenium Nanoparticles by Geobacillus wiegelii Strain GWE1 Isolated from a Drying Oven
Authors: Daniela N. Correa-Llantén, Sebastián A. Muñoz-Ibacache, Mathilde Maire, Jenny M. Blamey
Abstract:
The biosynthesis of nanoparticles by microorganisms, in contrast to chemical synthesis, is an environmentally friendly process with low energy requirements. In this investigation, we used the microorganism Geobacillus wiegelii, strain GWE1, an aerobic thermophile belonging to the genus Geobacillus, isolated from a drying oven. This microorganism has the ability to reduce selenite, evidenced by the change of color from colorless to red in the culture. Elemental analysis and the composition of the particles were verified using transmission electron microscopy and energy-dispersive X-ray analysis: the nanoparticles have a defined spherical shape and an elemental selenium state. Previous experiments showed that the presence of the whole microorganism was not necessary for the reduction of selenite; the results strongly suggested that an intracellular NADPH/NADH-dependent reductase mediates selenium nanoparticle synthesis under aerobic conditions. The enzyme was purified and identified by the MALDI-TOF/TOF mass spectrometry technique as a 1-pyrroline-5-carboxylate dehydrogenase. Histograms of nanoparticle sizes were obtained: the size distribution ranged from 40-160 nm, with 70% of the nanoparticles below 100 nm in size. Spectroscopic analysis showed that the nanoparticles are composed of elemental selenium. To analyse the effect of pH on the size and morphology of the nanoparticles, their synthesis was carried out at different pH values (4.0, 5.0, 6.0, 7.0, 8.0); for thermostability studies, samples were incubated at different temperatures (60, 80 and 100 ºC) for 1 h and 3 h. All nanoparticles were smaller than 100 nm at pH 4.0; over 50% were smaller than 100 nm at pH 5.0; and at pH 6.0 and 8.0, over 90% were smaller than 100 nm. At neutral pH (7.0), the nanoparticles reached a size of around 120 nm, and only 20% were smaller than 100 nm. Regarding the temperature effect, the nanoparticles did not show a significant difference in size when incubated for up to 3 h at 60 ºC, whereas at 80 °C the nanoparticle suspension lost its homogeneity: a change in size was observed from 0 h of incubation at 80 ºC, with a size range of 40-160 nm and 20% of particles over 100 nm, while after 3 h of incubation the size range changed to 60-180 nm with 50% over 100 nm. At 100 °C the nanoparticles aggregated, forming nanorod structures. In conclusion, these results indicate that it is possible to modulate the size and shape of biologically synthesized nanoparticles by modulating pH and temperature.
Keywords: Genus Geobacillus, NADPH/NADH-dependent reductase, Selenium nanoparticles.
73 Phelipanche ramosa (L.) Pomel Control in Field Tomato Crop
Authors: Disciglio G., Lops F., Carlucci A., Gatta G., Tarantino A., Frabboni L., Carriero F., Cibelli F., Raimondo M. L., Tarantino E.
Abstract:
The tomato is a very important crop whose cultivation in the Mediterranean basin is severely affected by the phytoparasitic weed Phelipanche ramosa. The semiarid regions of the world are considered the main areas where this parasitic weed is established, causing heavy infestation, as it is able to produce high numbers of seeds (up to 500,000 per plant) that remain viable for extended periods (more than 20 years). In this paper, the results obtained from eleven treatments for controlling this parasitic weed (chemical, agronomic, biological and biotechnological methods), compared with an untreated control under two plowing depths (30 and 50 cm), are reported. A split-plot design with 3 replicates was adopted. In 2014, a trial was performed in Foggia province (southern Italy) on processing tomato (cv Docet) grown in a field infested by Phelipanche ramosa. Tomato seedlings were transplanted on May 5 into a clay-loam soil. During the growing cycle of the tomato crop, at 56, 78 and 92 days after transplantation, the number of parasitic shoots emerged in each plot was recorded. At tomato harvest, on August 18, the main quantitative and qualitative yield parameters were determined (marketable yield, mean fruit weight, dry matter, pH, soluble solids and fruit color). All data were subjected to analysis of variance (ANOVA), and the means were compared by Tukey's test. No treatment studied provided complete control of Phelipanche ramosa. However, among the different methods tested, some, namely Fusarium, glyphosate, Radicon biostimulant and the Red Setter tomato cv (an improved genotype obtained by TILLING technology), under deeper plowing (50 cm depth) proved to mitigate the virulence of the Phelipanche ramosa attacks. It is assumed that these effects can be improved by combining some of these treatments with each other, especially for a gradual and continuing reduction of the parasite's "seed bank" in the soil.
Keywords: Control methods, Phelipanche ramosa, tomato crop.
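A minimal sketch of the ANOVA-plus-Tukey workflow described above, run on made-up marketable-yield data for three hypothetical treatment groups (the names and numbers are assumptions, not the trial's measurements):

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Made-up marketable-yield data (t/ha, 3 replicates per treatment).
rng = np.random.default_rng(1)
treatments = {
    "untreated":  rng.normal(55, 4, 3),
    "glyphosate": rng.normal(62, 4, 3),
    "Fusarium":   rng.normal(60, 4, 3),
}

values = np.concatenate(list(treatments.values()))
groups = np.repeat(list(treatments.keys()), 3)

# One-way ANOVA, then Tukey's HSD for pairwise comparison of treatment means.
f_stat, p_val = stats.f_oneway(*treatments.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```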
72 Some Issues of Measurement of Impairment of Non-Financial Assets in the Public Sector
Authors: Mariam Vardiashvili
Abstract:
The economic significance of the asset impairment process is quite large. Impairment reflects a reduction of the future economic benefits or service potential embodied in an asset. The assets owned by public sector entities either bring economic benefits or are used to deliver free-of-charge services, and are accordingly classified as cash-generating and non-cash-generating assets. IPSAS 21, Impairment of Non-Cash-Generating Assets, and IPSAS 26, Impairment of Cash-Generating Assets, were designed with this specificity in mind. When measuring the impairment of assets, it is important to select the relevant methods. For measuring impaired non-cash-generating assets, IPSAS 21 recommends three methods: the depreciated replacement cost approach, the restoration cost approach, and the service units approach. Value in use of cash-generating assets (as per IPSAS 26) is measured as the discounted value of the cash flows to be received in the future. The article classifies public sector assets as non-cash-generating and cash-generating assets and also deals with the factors that should be considered when evaluating the impairment of assets. The essence of impairment of non-financial assets and the methods of measuring it are formulated according to IPSAS 21 and IPSAS 26. The main emphasis is placed on the different methods of measuring the value in use of impaired cash-generating and non-cash-generating assets and on how those methods are selected. The traditional and the expected cash flow approaches to calculating the discounted value are reviewed. The article also discusses the recognition of impairment loss and its reflection in financial reporting. The article concludes that, regardless of the functional purpose of the impaired asset and whichever method is used for measuring it, the presentation of realistic information regarding the value of the assets should be ensured in the financial reporting. In the theoretical development of the issue, the methods of scientific abstraction, analysis and synthesis were used, and the research was carried out with a systemic approach. The research draws on international accounting standards and on theoretical research and publications of Georgian and foreign scientists.
Keywords: Non-cash-generating assets, cash-generating assets, recoverable value, recoverable service amount, value in use.
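The two discounting approaches mentioned above differ only in which cash flow enters the present-value sum. A minimal sketch with assumed figures:

```python
# Traditional vs. expected cash flow approach, in miniature. The cash flows,
# probabilities and discount rate are illustrative assumptions.
rate = 0.06
years = (1, 2, 3)

# Traditional approach: a single "best estimate" cash flow per year.
best_estimate = (100.0, 100.0, 100.0)
pv_traditional = sum(cf / (1 + rate) ** t for cf, t in zip(best_estimate, years))

# Expected cash flow approach: probability-weighted scenarios per year.
scenarios = ((80.0, 0.3), (100.0, 0.5), (120.0, 0.2))  # (cash flow, probability)
expected_cf = sum(cf * p for cf, p in scenarios)       # = 98.0
pv_expected = sum(expected_cf / (1 + rate) ** t for t in years)

print(f"value in use, traditional approach: {pv_traditional:.2f}")
print(f"value in use, expected cash flows:  {pv_expected:.2f}")
```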
71 Energy Efficiency Approach to Reduce Costs of Ownership of Air Jet Weaving
Authors: Corrado Grassi, Achim Schröter, Yves Gloy, Thomas Gries
Abstract:
Air jet weaving is the most productive, but also the most energy-consuming, weaving method. Increasing energy costs and environmental impact are a constant challenge for the manufacturers of weaving machines, and current technological developments aim at low energy costs, low environmental impact, high productivity, and constant product quality. The high energy consumption of the method can be ascribed to its high demand for compressed air. An energy efficiency method is applied here to the air jet weaving technology. The method identifies and classifies the main relevant energy consumers and processes from the exergy point of view, leading to the identification of energy efficiency potentials in the weft insertion process. Starting from the design phase, energy efficiency is treated as the central requirement to be satisfied. The initial phase of the method is an analysis of the state of the art of the main weft insertion components, in order to prioritize the components and processes with high energy demand. The identified major components are then investigated with the aim of reducing the high energy demand of the weft insertion process. During the interaction of the flow field from the relay nozzles with the profiled reed, only a minor part of the stream actually accelerates the weft yarn, resulting in large energy inefficiency. Tools such as FEM analysis, CFD simulation models and experimental analysis are used to produce a more energy-efficient design of the components involved in filling insertion. A new concept for the metal strip of the profiled reed is developed; the developed strip reduces the machine's energy consumption. Based on a parametric and aerodynamic study, the redesigned reed transmits more of the flow power to the filling yarn. The innovative reed thus fulfills both the requirement of raising energy efficiency and compliance with the weaving constraints.
Keywords: Air jet weaving, aerodynamic simulation, energy efficiency, experimental measurements, power costs, weft insertion.
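Since the method's energy accounting starts from the compressed-air demand, a useful baseline is the minimum (isothermal) compressor power for a given free-air flow. A minimal sketch with assumed flow and pressure values, not figures from the paper:

```python
import math

# Ideal isothermal compression work per unit time: P = p_atm * q * ln(p2 / p1).
p_atm = 101_325.0          # Pa, ambient pressure
p_line = 6e5 + p_atm       # Pa, ~6 bar gauge line pressure (assumed)
q_free_air = 0.05          # m^3/s free-air delivery per machine (assumed)

power = p_atm * q_free_air * math.log(p_line / p_atm)
print(f"ideal isothermal compressor power: {power / 1000:.1f} kW")
```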
70 Predictive Semi-Empirical NOx Model for Diesel Engine
Authors: Saurabh Sharma, Yong Sun, Bruce Vernham
Abstract:
Accurate prediction of NOx emission is a continuous challenge in the field of diesel engine-out emission modeling. Performing experiments for every condition and scenario costs a significant amount of money and man-hours; therefore, a model-based development strategy has been implemented to address this issue. NOx formation is highly dependent on the burned-gas temperature and the O2 concentration inside the cylinder. Current empirical models are developed by calibrating parameters representing the engine operating conditions against measured NOx, which limits the prediction of purely empirical models to the region in which they were calibrated. An alternative solution is presented in this paper, which focuses on utilizing in-cylinder combustion parameters to form a predictive semi-empirical NOx model. The result of this work is a fast, predictive NOx model built from physical parameters and empirical correlation. The model is developed from steady-state data collected over the entire operating region of the engine and from a predictive combustion model developed in GT-Power (Gamma Technologies) using the DIPulse direct-injection combustion object. In this approach, the temperature in both the burned and unburned zones is considered during the combustion period, i.e., from Intake Valve Closing (IVC) to Exhaust Valve Opening (EVO). The oxygen concentration consumed in the burned zone and the trapped fuel mass are also considered in the reported model. Several statistical methods are used to construct the model, including individual machine learning methods and ensemble machine learning methods. A detailed validation of the model on multiple diesel engines is reported in this work: substantial numbers of cases are tested for different engine configurations over a large span of speed and load points, and sweeps of operating conditions such as Exhaust Gas Recirculation (EGR), injection timing and Variable Valve Timing (VVT) are also considered for the validation. The model shows very good predictability and robustness at both sea-level and altitude conditions under different ambient conditions. Its advantages, high accuracy and robustness across operating conditions, low computational time, and the small number of data points required for calibration, establish a platform where the model-based approach can be used for the engine calibration and development process. Moreover, this work aims at establishing a framework for future model development for other targets such as soot, Combustion Noise Level (CNL), and the NO2/NOx ratio.
Keywords: Diesel engine, machine learning, NOx emission, semi-empirical.
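A minimal sketch of the semi-empirical shape of such a model: physical in-cylinder quantities feeding an ensemble regressor. The data are synthetic and the Arrhenius-like target expression is an assumption for illustration, not the paper's correlation or calibration:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for burned-zone temperature, O2 concentration and
# trapped mass, with an assumed Arrhenius-like NOx target plus noise.
rng = np.random.default_rng(0)
n = 500
T_burn = rng.uniform(1800.0, 2600.0, n)   # K
o2     = rng.uniform(0.04, 0.16, n)       # burned-zone O2 mole fraction
m_trap = rng.uniform(0.4, 1.2, n)         # g, trapped mass

nox = (m_trap * np.sqrt(o2) * np.exp(-6000.0 / T_burn) * 1e4
       * (1 + 0.05 * rng.standard_normal(n)))

# Ensemble regressor trained on the physical features, echoing the reported
# mix of individual and ensemble machine learning methods.
X = np.column_stack([T_burn, o2, m_trap])
X_tr, X_te, y_tr, y_te = train_test_split(X, nox, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"held-out R^2: {model.score(X_te, y_te):.3f}")
```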
69 Experimental Investigation of the Impact of Biosurfactants on Residual-Oil Recovery
Authors: S. V. Ukwungwu, A. J. Abbas, G. G. Nasr
Abstract:
The increasing prices of natural gas and oil, with an attendant increase in energy demand on world markets in recent years, have stimulated interest in recovering residual oil saturation across the globe. In order to meet energy security needs, efforts have been made to develop new technologies for enhancing oil and gas recovery, using techniques such as CO2 flooding, water injection, hydraulic fracturing, and surfactant flooding. Surfactant flooding optimizes production but poses a risk to the environment due to the toxic nature of the chemicals. Building on proven records of using other types of bacteria to produce biosurfactants for enhancing oil recovery, this research uses a technique that combines biosurfactants to achieve a degree of EOR through lowering interfacial tension and contact angle. In this study, three biosurfactants were produced from three Bacillus species from freeze-dried cultures using 3% (w/v) sucrose as the carbon source. Two of the produced biosurfactants were screened with the TEMCO Pendant Drop Image Analysis system for reduction in IFT and contact angle. Interfacial tension was greatly reduced, from 56.95 mN/m to 1.41 mN/m, when biosurfactant in cell-free culture of Bacillus licheniformis was used, compared with 4.83 mN/m for the cell-free culture of Bacillus subtilis. Accordingly, the cell-free culture of Bacillus licheniformis changed the wettability toward more water-wet, decreasing the contact angle from 130.75° to 65.17°. The influence of microbial treatment on crushed rock samples was also observed by qualitative wettability experiments: samples treated with biosurfactants remained in the aqueous phase, indicating a water-wet system. These results demonstrate that biosurfactants can effectively change the wetting conditions of diverse surfaces, providing a desirable condition for efficient oil transport and thereby serving as a mechanism for EOR. The environmentally friendly character of biosurfactants gives their industrial application important advantages over chemically synthesized surfactants, with varied possible structures, low toxicity, eco-friendliness and biodegradability.
Keywords: Bacillus, biosurfactant, enhanced oil recovery, residual oil, wettability.
68 Flow Duration Curves and Recession Curves Connection through a Mathematical Link
Authors: Elena Carcano, Mirzi Betasolo
Abstract:
This study helps public water bureaus give reliable answers to water concession requests. Rapidly increasing water requests can be supported provided that further uses of a river course are not totally compromised and environmental features are protected. Strictly speaking, a water concession can be considered a continuous drawing from the source, causing a mean annual streamflow reduction. Deciding whether a water concession is appropriate therefore seems easily solved by comparing the generic demand to the mean annual streamflow available. Still, the immediate shortcoming of such a comparison is that streamflow data are available for only a few catchments and, most often, limited to specific sites. Moreover, comparing the generic water demand to the mean daily discharge is far from satisfactory, since the mean daily streamflow exceeds the water withdrawal for a long period of the year; such a comparison is therefore of little significance for preserving the quality and quantity of the river. To overcome this limit, this study completes the information provided by flow duration curves by introducing a link between Flow Duration Curves (FDCs) and recession curves, and shows the chronological sequence of flows with a particular focus on low-flow data. The analysis is carried out on 25 catchments located in North-Eastern Italy for which daily data are available. The results identify groups of catchments as hydrologically homogeneous, having the lower part of the FDCs (the streamflow interval between Q(300) and Q(335), i.e., the flows equalled or exceeded on 300 and 335 days per year) smoothly reproduced by a common recession curve. In conclusion, the results provide more reliable answers to water requests, especially for catchments that show a similar hydrological response, and can be used for a focused regionalization approach on low-flow data. A mathematical link between flow duration curves and recession curves is herein provided, thus furnishing flow duration curve information with a temporal sequence of data: by introducing assumptions on the recession curves, a chronological sequence can also be attributed to the FDCs, which by nature lack this information.
Keywords: Chronological sequence of discharges, recession curves, streamflow duration curves, water concession.
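A minimal sketch of how an FDC and the quantiles Q(300) and Q(335) are obtained from a daily series, here a synthetic series standing in for one of the gauged catchments:

```python
import numpy as np

# Flow duration curve: sort daily flows in descending order and attach
# exceedance probabilities (Weibull plotting position).
rng = np.random.default_rng(42)
q = rng.lognormal(mean=1.0, sigma=0.8, size=365)   # m^3/s, assumed daily flows

q_sorted = np.sort(q)[::-1]
exceed = np.arange(1, len(q_sorted) + 1) / (len(q_sorted) + 1)  # P(Q >= q)

def q_at_day(d):
    """Flow equalled or exceeded d days per year, e.g. Q(300), Q(335)."""
    return np.interp(d / 365, exceed, q_sorted)

for d in (300, 335):
    print(f"Q({d}) = {q_at_day(d):.2f} m^3/s")
```

The FDC alone says nothing about when these low flows occur; the paper's contribution is attaching a chronological sequence to this lower tail via a common recession curve.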
67 Library Aware Power Conscious Realization of Complementary Boolean Functions
Authors: Padmanabhan Balasubramanian, C. Ardil
Abstract:
In this paper, we consider the problem of logic simplification for a special class of logic functions, namely complementary Boolean functions (CBF), targeting low-power implementation in a static CMOS logic style. The functions are uniquely characterized by the presence of terms where, for a canonical binary 2-tuple, D(mj) ∩ D(mk) = { } and therefore |D(mj) ∩ D(mk)| = 0 [19]; similarly, D(Mj) ∩ D(Mk) = { } and hence |D(Mj) ∩ D(Mk)| = 0. Here, 'mk' and 'Mk' represent a minterm and a maxterm respectively. We compare the circuits minimized with our proposed method with those corresponding to the factored Reed-Muller (f-RM) form, the factored Pseudo Kronecker Reed-Muller (f-PKRM) form, and the factored Generalized Reed-Muller (f-GRM) form. We have opted for algebraic factorization of the Reed-Muller (RM) form and its different variants, using the factorization rules of [1], as it is simple and requires much less CPU execution time than Boolean factorization operations. This technique has enabled us to greatly reduce the literal count as well as the gate count needed for such RM realizations, which are generally prone to consuming more cells and subsequently more power. However, this entails a drawback in the design-for-test attribute associated with the various RM forms. Though we still preserve the definition of those forms, viz. realizing the functionality with only select types of logic gates (AND and XOR gates), the structural integrity of the logic levels is not preserved. This consequently alters the testability properties of such circuits, i.e., it may increase, decrease or maintain the number of test input vectors needed for their exhaustive testability, subsequently affecting their generalized test vector computation. We do not consider design-for-testability here but instead focus on the power consumption of the final logic implementation, realized with a conventional CMOS process technology (0.35-micron TSMC process). The quality of the resulting circuits, evaluated on the basis of an established cost metric, viz. power consumption, demonstrates average savings of 26.79% for the samples considered in this work, besides reductions in the number of gates and input literals by 39.66% and 12.98% respectively, in comparison with other factored RM forms.
Keywords: Reed-Muller forms, logic function, Hamming distance, algebraic factorization, low power design.
66 On-Line Geometrical Identification of Reconfigurable Machine Tool using Virtual Machining
Authors: Alexandru Epureanu, Virgil Teodor
Abstract:
One of the main research directions in the CAD/CAM machining area is the reduction of machining time. Feedrate scheduling is one of the advanced techniques that keeps the uncut chip area, and consequently the main cutting force, constant. There are two main ways to optimize the feedrate. The first consists of cutting force monitoring, which presumes complex equipment for force measurement, after which the feedrate is set according to the cutting force variation. The second is to optimize the feedrate by keeping the material removal rate constant for the given cutting conditions. This paper proposes a new approach, using an extended database that replaces the system model. The feedrate schedule is determined based on identification of the reconfigurable machine tool, with the feed value determined from the uncut chip section area, the contact length between tool and blank, and the geometrical roughness. The first stage consists of monitoring the blank and the tool to determine their actual profiles. The next stage is the determination of the programmed tool path that allows obtaining the target profile of the piece. The graphic representation environment models the tool and blank regions, and the tool model is then positioned relative to the blank model according to the programmed tool path. For each of these positions, the geometrical roughness value, the uncut chip area and the tool-blank contact length are calculated; each parameter is compared with its admissible value, and the feed value is established accordingly. This approach has the following advantages: for complex cutting processes, prediction of the cutting force is possible; the real cutting profile, which deviates from the theoretical profile, is considered; the blank-tool contact length can be limited; and the programmed tool path can be corrected so that the target profile is obtained. Applying this method yields data sets that allow feedrate scheduling such that the uncut chip area is constant and, as a result, the cutting force is constant, which allows more efficient use of the machine tool and a reduction of machining time.
Keywords: Reconfigurable machine tool, system identification, uncut chip area, cutting conditions scheduling.
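A minimal sketch of the constant-chip-area scheduling rule the approach rests on: once the virtual-machining model reports the actual depth of cut at each tool position, the feed is rescaled so the uncut chip section, and hence the cutting force, stays constant. All numerical values are assumptions for illustration:

```python
# Constant-chip-area feed scheduling in one dimension. In turning, the uncut
# chip cross-section is approximately feed * depth of cut.
target_area = 0.40                  # mm^2, desired uncut chip section (assumed)
feed_min, feed_max = 0.05, 0.60     # mm/rev machine limits (assumed)

# Depths of cut along the path, as the blank/tool models would report them.
depths_of_cut = [1.2, 1.6, 2.4, 2.0, 0.9]   # mm (assumed)
for a_p in depths_of_cut:
    feed = target_area / a_p                 # keep area = feed * a_p constant
    feed = min(max(feed, feed_min), feed_max)  # clamp to admissible values
    print(f"depth {a_p:.1f} mm -> scheduled feed {feed:.3f} mm/rev")
```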
65 Developing Manufacturing Process for the Graphene Sensors
Authors: Abdullah Faqihi, John Hedley
Abstract:
Biosensors play a significant role in the healthcare sector and in scientific and technological progress. Developing electrodes that are easy to manufacture and deliver better electrochemical performance is advantageous for diagnostics and biosensing; such electrodes can be implemented extensively in analytical tasks such as drug discovery, food safety, medical diagnostics, process control, security and defence, as well as environmental monitoring. The development of biosensors aims to create high-performance electrochemical electrodes for diagnostics and biosensing. A biosensor is a device that inspects the biological and chemical reactions generated by a biological sample: it carries out biological detection via a linked transducer and transmits the biological response as an electrical signal. Stability, selectivity and sensitivity are the dynamic and static characteristics that dictate the quality and performance of a biosensor. In this research, an experimental study of the laser scribing technique applied to graphene oxide inside a vacuum chamber is presented. The processing of graphene oxide (GO) was achieved using the laser scribing technique, and the effect of laser scribing on the reduction of GO was investigated under two conditions: atmosphere and vacuum. A GO solution was coated onto a LightScribe DVD, and laser scribing was applied to reduce the GO layers and generate rGO. The morphological micro-structures of rGO and GO were examined using scanning electron microscopy (SEM) and Raman spectroscopy. The first electrode was a traditional graphene-based electrode model made under normal atmospheric conditions, whereas the second was a graphene electrode fabricated under vacuum in a vacuum chamber, the purpose being to control the vacuum conditions, such as air pressure and temperature, during fabrication. The parameters assessed include the layer thickness and the processing environment. The results presented show high accuracy and repeatability, achieving low-cost productivity.
Keywords: Laser scribing, LightScribe DVD, graphene oxide, scanning electron microscopy.
64 A Risk Assessment Tool for the Contamination of Aflatoxins on Dried Figs based on Machine Learning Algorithms
Authors: Kottaridi Klimentia, Demopoulos Vasilis, Sidiropoulos Anastasios, Ihara Diego, Nikolaidis Vasileios, Antonopoulos Dimitrios
Abstract:
Aflatoxins are highly poisonous and carcinogenic compounds produced by species of the genus Aspergillus that can infect a variety of agricultural foods, including dried figs. Biological and environmental factors, such as the population, pathogenicity and aflatoxigenic capacity of the strains, and the topography, soil and climate parameters of the fig orchards, are believed to have a strong effect on aflatoxin levels. Existing methods for aflatoxin detection and measurement, such as high-performance liquid chromatography (HPLC) and enzyme-linked immunosorbent assay (ELISA), can provide accurate results, but the procedures are usually time-consuming, sample-destructive and expensive. Predicting aflatoxin levels prior to crop harvest is useful for minimizing the health and financial impact of a contaminated crop; consequently, there is interest in developing a tool that predicts aflatoxin levels based on topography and soil analysis data of fig orchards. This paper describes the development of a risk assessment tool for aflatoxin contamination of dried figs, based on the location and altitude of the fig orchards, the population of the fungus Aspergillus spp. in the soil, and soil parameters such as pH, saturation percentage (SP), electrical conductivity (EC), organic matter, particle size analysis (sand, silt, clay), concentrations of the exchangeable cations (Ca, Mg, K, Na), extractable P and trace elements (B, Fe, Mn, Zn and Cu), by employing machine learning methods. In particular, the proposed method integrates three machine learning techniques, i.e., dimensionality reduction on the original dataset (Principal Component Analysis), metric learning (Mahalanobis Metric for Clustering) and the k-nearest neighbors learning algorithm (KNN), into an enhanced model with mean performance equal to 85% in terms of the Pearson Correlation Coefficient (PCC) between observed and predicted values.
Keywords: aflatoxins, Aspergillus spp., dried figs, k-nearest neighbors, machine learning, prediction
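A minimal sketch of the three-stage pipeline named above, with a covariance-based Mahalanobis distance standing in for the learned MMC metric and synthetic data standing in for the soil and topography features:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the orchard feature table (soil chemistry, altitude,
# Aspergillus population, ...) and a binary contamination label.
X, y = make_classification(n_samples=200, n_features=20, n_informative=8,
                           random_state=0)

# Stage 1: dimensionality reduction (PCA).
X_red = PCA(n_components=5, random_state=0).fit_transform(X)

# Stage 2: a Mahalanobis metric in the reduced space. Here the inverse
# covariance is estimated directly (a simplification of the learned MMC
# metric; note it is fitted on all data, which a production pipeline would
# restrict to the training folds).
VI = np.linalg.inv(np.cov(X_red, rowvar=False))

# Stage 3: KNN under that metric.
knn = KNeighborsClassifier(n_neighbors=5, metric="mahalanobis",
                           metric_params={"VI": VI})
scores = cross_val_score(knn, X_red, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```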
63 Innovation in “Low-Tech” Industries: Portuguese Footwear Industry
Authors: António Marques, Graça Guedes
Abstract:
The Portuguese footwear industry has had, in the last five years, a remarkable performance in exportation values, the trade balance and other economic indicators. After a long period of difficulties, with a strong reduction in companies and employees from 1994 until 2009, the Portuguese footwear industry changed its strategy and is now a success case among the international players of footwear. Only the Italian industry sells footwear at a higher value than the Portuguese, and the distance between them is decreasing year by year. This paper analyses how Portuguese footwear companies innovate, according to the classification proposed by the Oslo Manual. It also analyses the strategy followed in the innovation process and shows the linkage between the type of innovation and the innovation strategy. The research methodology was qualitative, and the strategy for data collection was the case study; the qualitative data were analyzed with the MAXQDA software. The economic results of the footwear companies studied show differences between them, and these differences are related to the innovation strategy adopted. The companies focused on product and marketing innovation, oriented to their target market, have higher "turnover per worker" ratios than the companies focused on process innovation. Nevertheless, all the footwear companies in this "low-tech" industry create value and contributed to a positive foreign trade of 1,310 million euros in 2013. The growth strategies implemented involve the participation of the sectoral organizations in several innovative projects, and cooperation between all of them is clearly a critical element in the performance achieved by the companies and the innovation observed. The Portuguese footwear sector has had an excellent performance in recent years (economic results, exportation values, trade balance, brands and international image), strongly related to the innovation strategy followed, the type of innovation and the networks in the cluster. A simplified model, called the "Ace of Diamonds", is proposed by the authors; it explains how this performance was reached by the seven companies that participated in the study (two of them the leaders in the sector) and whether the model can be used in other traditional and "low-tech" industries.
Keywords: Footwear industry, innovation strategy, low-tech industry, Oslo Manual.
62 The Use of Artificial Intelligence in Digital Forensics and Incident Response in a Constrained Environment
Authors: Dipo Dunsin, Mohamed C. Ghanem, Karim Ouazzane
Abstract:
Digital investigators often have a hard time spotting evidence in digital information, and it has become hard to determine which source of proof relates to a specific investigation. A growing concern is that the various processes, technologies and specific procedures used in digital investigation are not keeping up with criminal developments; criminals therefore take advantage of these weaknesses to commit further crimes. In digital forensics investigations, artificial intelligence (AI) is invaluable in identifying crime. Providing objective data and conducting an assessment is the goal of digital forensics and digital investigation, which assists in developing a plausible theory that can be presented as evidence in court. This research paper aims at developing a multi-agent framework for digital investigations using specific intelligent software agents (ISAs). The agents communicate to address particular tasks jointly and keep the same objectives in mind during each task; the rules and knowledge contained within each agent depend on the investigation type. A criminal investigation is classified quickly and efficiently using the case-based reasoning (CBR) technique. The proposed framework is implemented using the Java Agent Development Framework, Eclipse, a Postgres repository, and a rule engine for agent reasoning, and was tested using the Lone Wolf image files and datasets. Experiments were conducted using various sets of ISAs and VMs. There was a significant reduction in the time taken for the Hash Set Agent to execute. As a result of loading the agents, 5% of the time was lost, as the File Path Agent prescribed deleting 1,510, while the Timeline Agent found multiple executable files. In comparison, the integrity check carried out on the Lone Wolf image file using a digital forensic toolkit took approximately 48 minutes (2,880 s), whereas the MADIK framework accomplished this in 16 minutes (960 s). The framework is integrated with Python, allowing further integration of other digital forensic tools such as AccessData Forensic Toolkit (FTK), Wireshark, Volatility, and Scapy.
Keywords: Artificial intelligence, computer science, criminal investigation, digital forensics.
61 Circular Economy Maturity Models: A Systematic Literature Review
Authors: D. Kreutzer, S. Müller-Abdelrazeq, I. Isenhardt
Abstract:
Resource scarcity, the energy transition and the planned climate neutrality pose enormous challenges for manufacturing companies. In order to achieve these goals and holistic sustainable development, the European Union has made the circular economy the subject of its Circular Economy Action Plan. In addition to reducing resource consumption, greenhouse gas emissions and waste volumes, the principles of the circular economy also offer enormous economic potential for companies, such as the generation of new circular business models. However, many manufacturing companies, especially small and medium-sized enterprises, do not have the capacity to plan their transformation; they need support and strategies on the path to circular transformation, because this change affects not only production but the entire company. Maturity models offer an approach to determining the current status of a company's transformation process, and companies can also use them to identify transformation strategies and thus promote the transformation. While maturity models are established in other areas, e.g., IT or project management, only a few circular economy maturity models can be found in the scientific literature. The aim of this paper is to analyze the identified circular economy maturity models through a systematic literature review (SLR) and, among other aspects, to check their completeness as well as their quality. For this purpose, circular economy maturity models at the company (micro) level were identified from the literature, compared, and analyzed with regard to their theoretical and methodological structure. A specific focus was placed on the business units considered in the respective models on the one hand, and on the underlying metrics and indicators used to determine the maturity level of the entire company on the other. The results of the literature review show, for instance, significant differences in the number and types of indicators as well as in their metrics; most models, for example, use subjective indicators and very few objective ones in their surveys. It was also found that well-founded thresholds between the levels are rare. Based on these results, concrete ideas and proposals for a research agenda in the field of circular economy maturity models are made.
Keywords: Circular economy, maturity model, maturity assessment, systematic literature review.
60 Pilot Scale Investigation on the Removal of Pollutants from Secondary Effluent to Meet Botswana Irrigation Standards Using Roughing and Slow Sand Filters
Authors: Moatlhodi Wise Letshwenyo, Lesedi Lebogang
Abstract:
Botswana is an arid country that needs to start reusing wastewater as part of its water security plan. Pilot-scale slow sand filtration in combination with a roughing filter was investigated for the treatment of effluent from the Botswana International University of Science and Technology to meet Botswana irrigation standards. The system was operated at hydraulic loading rates of 0.04 m/hr and 0.12 m/hr. The results show that the system reduced turbidity from 262 Nephelometric Turbidity Units (NTU) to between 18 and 0 NTU, below the threshold limit of 30 NTU; the overall efficacy ranged between 61% and 100%. Suspended solids, Biochemical Oxygen Demand, and Chemical Oxygen Demand removal efficiencies averaged 42.6%, 45.5%, and 77% respectively, all within irrigation standards. Other physico-chemical parameters were within irrigation standards except for bicarbonate ion, which averaged 297.7±44 mg/L in the influent and 196.22±50 mg/L in the effluent, above the limit of 92 mg/L; the system thus achieved an average reduction of 34.1%. Total coliforms, fecal coliforms, and Escherichia coli in the effluent initially averaged 1.1, 0.5, and 1.3 log counts respectively, compared with corresponding influent log counts of 3.4, 2.7 and 4.1. As time passed, it was observed that only the roughing filter achieved reductions, of 97.5%, 86% and 100% respectively for faecal coliforms, Escherichia coli, and total coliforms; organism numbers were observed to increase in the slow sand filter effluent, suggesting multiplication in the tank. A water quality index value of 22.79 for the physico-chemical parameters suggests that the effluent is of excellent quality and can be used for irrigation purposes. However, the water quality index value for the microbial parameters (1,820) renders the quality unsuitable for irrigation. It is concluded that slow sand filtration in combination with a roughing filter is a viable option for the treatment of secondary effluent for reuse purposes; however, further studies should be conducted, especially on the removal of microbial parameters by the system.
Keywords: Irrigation, roughing filter, slow sand filter, turbidity, water quality index.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 87459 Tactile Sensory Digit Feedback for Cochlear Implant Electrode Insertion
Authors: Yusuf Bulale, Mark Prince, Geoff Tansley, Peter Brett
Abstract:
The cochlear implant (CI), whose implantation has become a routine procedure over recent decades, is an electronic device that provides a sense of sound for patients who are severely or profoundly deaf. The success of implantation depends on the electrode technology and deep insertion techniques. However, manual insertion may cause mechanical trauma that can severely damage the delicate intracochlear structures. Accordingly, future improvement of cochlear electrode insertion requires a reduction of the excessive forces applied during implantation, which cause tissue damage and trauma. This study examined the tool-tissue interaction of a large prototype-scale digit, embedded with a distributive tactile sensor and modelled on a cochlear electrode, within a large prototype-scale cochlea phantom simulating the human cochlea; the findings could inform the requirements for a small-scale digit. The digit, with distributive tactile sensors embedded in a silicon substrate, was inserted into the cochlea phantom to measure the digit-phantom interaction and the position of the digit, in order to minimize tissue damage and trauma during electrode insertion. The digit provided tactile information from the digit-phantom insertion interaction, such as contact status, tip penetration, obstacles, relative shape and location, contact orientation and multiple contacts. The tests demonstrated that even devices of such relatively simple, low-cost design have the potential to improve cochlear implant surgery and other lumen mapping applications by providing tactile sensory feedback and thus allowing the insertion to be controlled through sensing at the tip of the implant. With this approach, the surgeon could minimize tissue damage and potential damage to the delicate structures within the cochlea caused by current manual electrode insertion. The approach can also be applied to other minimally invasive surgery applications as well as diagnosis and path navigation procedures.
Keywords: Cochlear electrode insertion, distributive tactile sensory feedback information, flexible digit, minimally invasive surgery, tool/tissue interaction.
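A minimal sketch of how readings from such a distributive tactile array might be reduced to contact status and contact location is given below. The array geometry, threshold and readings are hypothetical; the study's actual signal processing is not detailed in the abstract.

```python
# Sketch: interpreting a distributive tactile sensor array along a flexible
# digit during insertion. The array geometry, contact threshold and sample
# readings are hypothetical assumptions, not the study's actual processing.
import numpy as np

CONTACT_THRESHOLD = 0.15  # normalized output treated as "in contact"

def analyse_frame(readings, positions_mm):
    """Return contact status, estimated contact location (weighted centroid
    of active elements, in mm from the digit tip) and peak intensity."""
    readings = np.asarray(readings, dtype=float)
    active = readings > CONTACT_THRESHOLD
    if not active.any():
        return {"contact": False, "location_mm": None, "peak": 0.0}
    w = readings[active]
    centroid = float(np.average(np.asarray(positions_mm)[active], weights=w))
    return {"contact": True, "location_mm": centroid, "peak": float(w.max())}

# Example: eight sensing elements spaced 2 mm apart along the digit
positions = [0, 2, 4, 6, 8, 10, 12, 14]
frame = [0.02, 0.05, 0.40, 0.62, 0.33, 0.08, 0.01, 0.0]
print(analyse_frame(frame, positions))  # contact near 5.9 mm from the tip
```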
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 217958 Roughness and Hardness of 60/40 Cu-Zn Alloy
Authors: Pavana Manvikar, G K Purohit
Abstract:
The functional performance of machined components often depends on surface topography, hardness, and the nature of the stress and strain induced on the surface. Surfaces of metallic components obtained by turning, milling, etc., invariably contain irregularities such as machining marks, which are responsible for the above. Surface finishing/coating processes used to produce improved surface quality and textures are classified as chip-removal and chip-less processes. Burnishing is a chip-less cold working process carried out to improve surface finish, hardness and resistance to fatigue and corrosion, improvements not obtainable by other surface coating and surface treatment processes. It is a very simple but effective method which improves surface characteristics and is reported to introduce compressive residual stresses.
Of late, considerable attention has been paid to post-machining finishing operations such as burnishing. During burnishing, the micro-irregularities deform plastically: initially the crests are gradually flattened and zones of reduced deformation are formed. When all the crests are deformed, the valleys between the micro-irregularities start moving in the direction of the newly formed surface. The grain structure is thereby condensed, producing a smoother and harder surface with superior load-carrying and wear-resistance capabilities.
Burnishing can be performed on a lathe with a highly polished ball- or roller-type tool which is traversed under force over a rotating or stationary workpiece. Often, several passes are used to obtain the desired finish and hardness.
This paper presents the findings of an experimental investigation into the effect of the ball burnishing parameters (burnishing speed, feed, force and number of passes) on the surface roughness (Ra) and micro-hardness (Hv) of a 60/40 copper-zinc alloy, using a 2-level fractional factorial design of experiments (DoE). Mathematical models were developed to predict the surface roughness and hardness generated by burnishing in terms of the above process parameters. A ball-type tool, designed and constructed from a high-chrome steel (HRC 63, Ra 0.012 µm), was used for burnishing fine-turned cylindrical bars (Ra 0.68-0.78 µm, 145 Hv). The fitted models, in terms of the coded factors X1-X4, are:
Ra = 0.305 - 0.005X1 - 0.0175X2 + 0.0525X4 + 0.0125X1X4 - 0.02X2X4 - 0.0375X3X4
Hv = 160.625 - 2.375X1 + 5.125X2 + 1.875X3 + 4.375X4 - 1.625X1X4 + 4.375X2X4 - 2.375X3X4
The highest surface micro-hardness (175 Hv) was obtained at 400 rpm, 2 passes, 0.05 mm/rev and 15 kgf, and the best surface finish (0.20 µm) was achieved at 30 kgf, 0.1 mm/rev, 112 rpm and a single pass. In other words, surface finish improved by a factor of about 3.5 (350%) and micro-hardness by 21% compared with the as-machined condition.
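The fitted models can be exercised directly; the sketch below evaluates them at the corners of the 2-level design. It assumes X1-X4 correspond to speed, feed, force and number of passes in the order listed above and that the levels are coded to ±1; neither assumption is stated in the abstract.

```python
# Evaluate the fitted DoE models at coded factor levels (-1 or +1).
# The mapping of X1..X4 to speed, feed, force and number of passes follows
# the order listed in the abstract and is an assumption, as is the +/-1
# coding of the actual parameter levels.
from itertools import product

def ra(x1, x2, x3, x4):
    """Predicted surface roughness Ra (micrometres)."""
    return (0.305 - 0.005*x1 - 0.0175*x2 + 0.0525*x4
            + 0.0125*x1*x4 - 0.02*x2*x4 - 0.0375*x3*x4)

def hv(x1, x2, x3, x4):
    """Predicted surface micro-hardness (Hv)."""
    return (160.625 - 2.375*x1 + 5.125*x2 + 1.875*x3 + 4.375*x4
            - 1.625*x1*x4 + 4.375*x2*x4 - 2.375*x3*x4)

# Scan all 16 corner points of the 2-level design for the best predictions.
corners = list(product((-1, 1), repeat=4))
best_ra = min(corners, key=lambda x: ra(*x))
best_hv = max(corners, key=lambda x: hv(*x))
print(f"lowest predicted Ra : {ra(*best_ra):.4f} um at {best_ra}")
print(f"highest predicted Hv: {hv(*best_hv):.3f} at {best_hv}")
```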
Keywords: Ball burnishing, surface roughness, micro-hardness.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 253257 TheAnalyzer: Clustering-Based System for Improving Business Productivity by Analyzing User Profiles to Enhance Human-Computer Interaction
Authors: D. S. A. Nanayakkara, K. J. P. G. Perera
Abstract:
E-commerce platforms have revolutionized the shopping experience, offering convenient ways for consumers to make purchases. To improve interactions with customers and optimize marketing strategies, it is essential for businesses to understand user behavior, preferences and needs on these platforms. This paper focuses on recommending that businesses customize interactions with users based on their behavioral patterns, leveraging data-driven analysis and machine learning techniques. Businesses can improve engagement and boost the adoption of e-commerce platforms by aligning behavioral patterns with user goals of usability and satisfaction. We propose TheAnalyzer, a clustering-based system designed to enhance business productivity by analyzing user profiles and improving human-computer interaction. TheAnalyzer integrates seamlessly with business applications, collecting relevant data points from users' natural interactions without additional burdens such as questionnaires or surveys. It defines five key user analytics as features for its dataset, each easily captured through users' interactions with e-commerce platforms. This research presents a study demonstrating the successful distinction of users into specific groups based on the five key analytics considered by TheAnalyzer. With the assistance of domain experts, customized business rules can be attached to each group, enabling TheAnalyzer to influence business applications and provide an enhanced, personalized user experience. The outcomes are evaluated quantitatively and qualitatively, demonstrating that TheAnalyzer's capabilities can optimize business outcomes, enhance customer satisfaction and drive sustainable growth. The findings contribute to the advancement of personalized interactions on e-commerce platforms: by leveraging user behavioral patterns and analyzing both new and existing users, businesses can effectively tailor their interactions to improve customer satisfaction and loyalty, and ultimately drive sales.
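A minimal sketch of the core pipeline (standardize the five analytics, then cluster the users) follows. The feature names, cluster count and the choice of k-means are illustrative assumptions rather than the paper's exact design.

```python
# Sketch of a TheAnalyzer-style pipeline: standardize five user-analytics
# features, then cluster users. Feature names, cluster count and the use
# of k-means are illustrative assumptions, not the paper's exact design.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical analytics per user: sessions/week, avg session minutes,
# pages per session, cart adds, purchases.
X = np.array([
    [2, 5, 3, 0, 0],
    [7, 25, 12, 4, 2],
    [1, 2, 2, 0, 0],
    [9, 30, 15, 6, 3],
    [4, 12, 6, 1, 1],
])

X_std = StandardScaler().fit_transform(X)  # z-score standardization
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_std)
print(labels)  # group index per user; business rules attach to each group
```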
Keywords: Data clustering, data standardization, dimensionality reduction, human-computer interaction, user profiling.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 22956 Augmenting Navigational Aids: The Development of an Assistive Maritime Navigation Application
Abstract:
On the bridge of a ship, officers look for visual aids to navigation in order to reconcile the outside world with the position communicated by the digital navigation system. Aids to navigation include lighthouses, lightships, sector lights, beacons and buoys, among others. They are designed to help navigators calculate their position, establish their course or avoid dangers. In poor visibility and dense traffic areas, it can be very difficult to identify these critical aids. This paper presents the use of Augmented Reality (AR) as a means to present digital information about these aids to support navigation. To date, nautical mobile AR applications have been limited to the leisure industry; if proved viable, this prototype could facilitate the creation of similar applications to help commercial officers with navigation. Adopting a user-centered design approach, the team developed the prototype based on insights from initial research carried out on board several ships. The prototype, built on a Nexus 9 tablet with the Wikitude SDK, features a head-up display of the navigational aids (lights) in the area presented in AR, and a bird's-eye view mode presented on a simplified map. The application employs the aids-to-navigation data managed by Hydrographic Offices and the tablet's sensors: GPS, gyroscope, accelerometer, compass and camera. Sea trials on board a Navy ship and a commercial ship revealed the end-users' interest in using the application and the possibility of presenting further data in AR. The application calculates the GPS position of the ship and the bearing and distance to the navigational aids, all with a high level of accuracy. However, testing highlighted several issues which need to be resolved as the prototype is developed further. The prototype stretched the capabilities of Wikitude, loading over 500 objects during tests in a major port; this overloaded the display and required over 45 seconds to load the data, so extra filters for the navigational aids are being considered in order to declutter the screen. At night, the camera is not powerful enough to distinguish all the lights in the area. Also, magnetic interference from the bridge of the ship generated a continuous compass error in the AR display that varied between 5 and 12 degrees. Since the deviation was consistent over the whole testing duration, the team is now looking at allowing users to manually calibrate the compass. For the use of AR in professional maritime contexts, further development of existing AR tools and hardware is needed, and designers will need to apply a user-centered design approach to create better interfaces and display technologies for enhanced navigation solutions.
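The distance and bearing to each aid can be derived from the ship's GPS fix using standard great-circle formulas; the sketch below is illustrative, since the prototype's internal implementation is not described.

```python
# Sketch: great-circle distance (haversine) and initial bearing from the
# ship's GPS fix to a navigational aid. Coordinates below are illustrative.
import math

EARTH_RADIUS_M = 6371000.0

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Return (distance in metres, initial bearing in degrees from north)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2)
    dist = 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    bearing = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
    return dist, bearing

# Ship position and a buoy position (illustrative coordinates)
d, b = distance_and_bearing(50.8198, -1.0880, 50.7900, -1.1100)
print(f"{d:.0f} m at {b:.1f} deg true")
```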
Keywords: Compass error, GPS, maritime navigation, mobile augmented reality.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 79055 Voyage Analysis of a Marine Gas Turbine Engine Installed to Power and Propel an Ocean-Going Cruise Ship
Authors: Mathias U. Bonet, Pericles Pilidis, Georgios Doulgeris
Abstract:
A gas turbine-powered cruise liner is scheduled to transport pilgrim passengers from Lagos, Nigeria to the Islamic port city of Jeddah in Saudi Arabia. Since the gas turbine is an air-breathing machine, changes in the density and/or mass flow at the compressor inlet caused by variations in weather conditions degrade the performance of the power plant during the voyage. In practice, all deviations from the reference atmospheric conditions of 15 °C and 1.013 bar affect the power output and other thermodynamic parameters of the gas turbine cycle. This paper therefore evaluates how a simple-cycle marine gas turbine power plant would react under a variety of scenarios that may be encountered during a voyage as the ship sails across the Atlantic Ocean and the Mediterranean Sea before arriving at its designated port of discharge. The assessment also considers the effect of varying aerodynamic and hydrodynamic conditions which impair the efficient operation of the propulsion system through the increase in resistance resulting from projected levels of hull fouling. The investigated passenger ship is designed to run at a service speed of 22 knots and cover a distance of 5787 nautical miles. The performance evaluation consists of three separate voyages covering the weather conditions of the winter, spring and summer seasons. Real-time daily temperatures and the sea states for the selected transit route were obtained and used to simulate the voyage under the aforementioned operating conditions. Changes in engine firing temperature, power output, the total fuel consumed per voyage and other performance variables were predicted separately under both calm and adverse weather conditions. The data were obtained online from the UK Meteorological Office and UK Hydrographic Office websites, adopting the Beaufort scale to determine the magnitude of sea waves resulting from rough weather. The simulation of the gas turbine performance and the voyage analysis were carried out using the integrated Cranfield University computer codes 'Turbomatch' and 'Poseidon'. The project aims to develop a method for predicting the off-design behavior of a marine gas turbine installed and operated as the main prime mover for both propulsion and the powering of all auxiliary services on board a passenger cruise liner. It is also a techno-economic and environmental assessment that seeks to enable the forecast of marine gas turbine part- and full-load performance as it relates to the fuel requirement for a complete voyage.
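A first-order illustration of why compressor inlet conditions matter is sketched below under textbook simple-cycle assumptions (constant gas properties, fixed pressure ratio, component efficiencies and turbine entry temperature). It is not the Turbomatch/Poseidon model, and all numbers are illustrative.

```python
# First-order illustration of ambient temperature effects on a simple-cycle
# gas turbine. A textbook sketch, not the Turbomatch/Poseidon models:
# constant gamma and cp, fixed pressure ratio, component efficiencies and
# turbine entry temperature; mass flow scaled with inlet air density.
CP, GAMMA = 1005.0, 1.4            # J/(kg K), ratio of specific heats (air)
PR, TET = 18.0, 1550.0             # pressure ratio, turbine entry temp (K)
ETA_C, ETA_T = 0.86, 0.89          # compressor / turbine isentropic effcy.
P_REF, T_REF = 1.013e5, 288.15     # reference ambient: 1.013 bar, 15 C
M_REF = 100.0                      # illustrative design mass flow (kg/s)

def net_power_mw(t_amb_k, p_amb_pa=P_REF):
    """Net shaft power (MW) at the given compressor-inlet conditions."""
    tau = PR ** ((GAMMA - 1.0) / GAMMA)
    w_comp = CP * t_amb_k * (tau - 1.0) / ETA_C       # compressor work, J/kg
    w_turb = CP * TET * (1.0 - 1.0 / tau) * ETA_T     # turbine work, J/kg
    # mass flow scales with inlet density (roughly constant volumetric flow)
    m_dot = M_REF * (p_amb_pa / P_REF) * (T_REF / t_amb_k)
    return m_dot * (w_turb - w_comp) / 1e6

for t_c in (5.0, 15.0, 35.0):      # winter day, reference, hot summer day
    print(f"{t_c:4.0f} C inlet -> {net_power_mw(273.15 + t_c):6.1f} MW")
```

Even this crude model reproduces the trend the paper investigates: power falls noticeably on hot days because both the air mass flow and the net specific work shrink as compressor inlet temperature rises.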
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 85854 Development of an Automatic Calibration Framework for Hydrologic Modelling Using Approximate Bayesian Computation
Authors: A. Chowdhury, P. Egodawatta, J. M. McGree, A. Goonetilleke
Abstract:
Hydrologic models are increasingly used as tools to predict stormwater quantity and quality from urban catchments. However, due to a range of practical issues, most models produce gross errors in simulating complex hydraulic and hydrologic systems, and difficulty in finding a robust approach to model calibration is one of the main issues. Though automatic calibration techniques are available, they are rarely used in common commercial hydraulic and hydrologic modelling software, e.g., MIKE URBAN. This is partly due to the large number of parameters and large datasets needed in the calibration process. To overcome this practical issue, a framework for automatic calibration of a hydrologic model was developed on the R platform and is presented in this paper. The model was developed based on the time-area conceptualization. Four calibration parameters, namely initial loss, reduction factor, time of concentration and time-lag, were considered as the primary set of parameters. Using these parameters, automatic calibration was performed using Approximate Bayesian Computation (ABC). ABC is a simulation-based technique for performing Bayesian inference when the likelihood is intractable or computationally expensive to compute. To test its performance and usefulness, the technique was used to simulate three small catchments on the Gold Coast. For comparison, simulation outcomes for the same three catchments from the commercial modelling software MIKE URBAN were used. The graphical comparison shows that the MIKE URBAN results lie within the upper and lower 95% credible intervals of the posterior predictions obtained via ABC. Statistical validation of the posterior runoff predictions using the coefficient of determination (CD), root mean square error (RMSE) and maximum error (ME) was found reasonable for the three study catchments. The main benefit of ABC over MIKE URBAN is that ABC provides a posterior distribution for the runoff flow prediction, so the associated uncertainty in predictions can be quantified, whereas MIKE URBAN provides only a point estimate. Based on the results of the analysis, the developed ABC framework performs well for automatic calibration.
Keywords: Automatic calibration framework, approximate Bayesian computation, hydrologic and hydraulic modelling, MIKE URBAN software, R platform.
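The core of ABC rejection sampling is straightforward; the sketch below calibrates a deliberately simplified one-parameter runoff model against synthetic data. The actual framework runs on the R platform with four parameters, so this Python toy is illustrative only.

```python
# Sketch of ABC rejection sampling for model calibration. The paper's
# framework is implemented in R with four parameters (initial loss,
# reduction factor, time of concentration, time-lag); this toy version
# calibrates only a runoff reduction factor against synthetic data.
import numpy as np

rng = np.random.default_rng(1)

rain = rng.gamma(2.0, 5.0, size=50)             # synthetic rainfall series
observed = 0.6 * rain + rng.normal(0, 1.0, 50)  # "true" reduction factor 0.6

def simulate(reduction_factor):
    """Toy runoff model: runoff = reduction_factor * rainfall."""
    return reduction_factor * rain

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

# ABC rejection: draw from the prior, keep draws whose simulated output
# lies within a tolerance of the observations.
TOLERANCE = 1.5
prior_draws = rng.uniform(0.0, 1.0, size=20000)  # uniform prior on [0, 1]
accepted = [th for th in prior_draws
            if rmse(simulate(th), observed) < TOLERANCE]

post = np.array(accepted)
print(f"accepted {post.size} draws; posterior mean {post.mean():.3f}, "
      f"95% credible interval [{np.percentile(post, 2.5):.3f}, "
      f"{np.percentile(post, 97.5):.3f}]")
```

The accepted draws approximate the posterior, which is exactly what yields the credible intervals the paper compares against the MIKE URBAN point estimates.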
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 174053 The Impact of Financial System on Mixed Use Development – Unrest in UK and Sense of Safety in Mixed Use Development
Authors: Tamara Kelly
Abstract:
The past decade witnessed good opportunities for city development schemes in the UK. The government encouraged the restoration of city centres to comprise mixed use developments with high-density residential apartments. Investments in regeneration areas were performing well according to analyses by the Investment Property Databank (IPD). However, more recent IPD analysis has shown that, since 2007, property in regeneration areas has been more vulnerable to the market downturn than other types of investment property. The early stages of a property market downturn are felt most in regeneration, where funding, investor confidence and occupier demand dissipate because the sector is considered more marginal or risky as development costs rise. Moreover, the Bank of England survey shows that lenders sequentially tightened the availability of credit for commercial real estate from mid-2007, recording a sharp reduction in the willingness of banks to lend on commercial property. The credit crunch has already affected commercial property, but its impact has been particularly severe where residential development is extremely difficult, in particular city centre apartments and buy-to-let markets. Commercial property (retail, industrial, leisure and mixed use) was also under pressure; in Birmingham, tens of mixed use plots were built to replace old factories in the heart of the city, with the purpose of enabling young professionals to work and live in the same place. Thousands of people lost their jobs during the recession, lending became more difficult, and the future of many developments is unknown. The recession cast its shadow on society: cuts in public spending, inflation, rising tuition fees and a steep rise in unemployment generated anger among young people, leading to vandalism and riots in many cities. Recent riots targeted many mixed use developments in the UK, where banks, shops, restaurants and large stores were looted and set on fire, leaving residents in horror and shock. This paper examines the impact of the recession and the riots on mixed use development in the UK.
Keywords: Diversity, mixed use development, outdoor comfort, public realm, safe places, safety by design.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 162352 Effect of Soil Tillage System upon the Soil Properties, Weed Control, Quality and Quantity Yield in Some Arable Crops
Authors: T Rusu, P I Moraru, I Bogdan, A I Pop, M L Sopterean
Abstract:
The paper presents the influence of conventional ploughing tillage technology in comparison with minimum tillage upon soil properties, weed control and yield in the case of maize (Zea mays L.), soya-bean (Glycine hispida L.) and winter wheat (Triticum aestivum L.) in a three-year crop rotation. The research was conducted at the University of Agricultural Sciences and Veterinary Medicine Cluj-Napoca, Romania. The use of minimum soil tillage systems within the three-year rotation of maize, soya-bean and wheat favours an increase in aggregate hydro-stability of 5.6-7.5% at 0-20 cm depth and 5-11% at 20-30 cm depth. The minimum soil tillage systems (paraplow, chisel or rotary harrow) are polyvalent alternatives for basic soil preparation, germination bed preparation and sowing for fields and crops with moderate loosening requirements, being optimized technologies for activating and rationalizing the soil's natural fertility, reducing erosion, increasing the water accumulation capacity and completing sowing within the optimal period. The soil tillage system influences the productivity elements of the cultivated species and ultimately the yields obtained: relative to the conventional system, the yields registered under minimum tillage represented 89-97% for maize, 103-112% for soya-bean and 93-99% for winter wheat. The results show that yield reflects the influence of the soil tillage system on soil properties, the assurance of plant density and weed control. Among the minimum tillage systems considered as options for replacing classical ploughing for winter wheat, the best quality indices were obtained with the paraplow, followed by the rotary harrow and the chisel. The paraplow variants gave quality indices close to those of the ploughed variant, with protein and gluten contents that were even higher: for the Ariesan variety, the highest protein content (12.50%) and gluten content (28.6%) were obtained with the paraplow.
Keywords: Minimum tillage, soil properties, yield quality.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1919