Search results for: font distribution algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8356

316 3D-Mesh Robust Watermarking Technique for Ownership Protection and Authentication

Authors: Farhan A. Alenizi

Abstract:

Digital watermarking has evolved in the past years as an important means for data authentication and ownership protection. Image and video watermarking is well established in the field of multimedia processing; however, 3D object watermarking techniques have emerged as an important means for the same purposes, as 3D mesh models are in increasing use in scientific, industrial, and medical applications. Like image watermarking techniques, 3D watermarking can take place in either the spatial or the transform domain. Unlike image and video watermarking, where the frames have regular structures in both the spatial and temporal domains, 3D objects are represented as meshes that are basically irregular samplings of surfaces; moreover, meshes can undergo a large variety of alterations which may be hard to tackle. This makes the watermarking process more challenging. While transform-domain watermarking is preferable for images and videos, it is still difficult to implement for 3D meshes due to the huge number of vertices involved and the complicated topology and geometry, which make the spectral decomposition hard to perform, even though significant work has been done in the field. Spatial-domain watermarking has attracted significant attention in the past years; such methods can act either on the topology or on the geometry of the model. Exploiting the statistical characteristics of 3D mesh models, from both geometrical and topological aspects, has proven useful for hiding data; doing so with minimal surface distortion to the mesh has attracted significant research in the field. A blind 3D mesh watermarking technique is proposed in this research. The watermarking method depends on modifying the vertices' positions with respect to the center of the object. An optimal method will be developed to reduce the errors, minimizing the distortions that the 3D object may experience due to the watermarking process, and reducing the computational complexity due to the iterations and other factors. The technique relies on displacing the vertices' locations by modifying the variances of the vertices' norms. Statistical analyses were performed to establish the distributions that best fit each mesh, and hence to establish the bin sizes. Several optimization approaches were introduced in the realms of mesh local roughness, the statistical distributions of the norms, and the displacements of the mesh centers. To evaluate the algorithm's robustness against common geometry and connectivity attacks, the watermarked objects were subjected to uniform noise, Laplacian smoothing, vertex quantization, simplification, and cropping. Experimental results showed that the approach is robust in terms of both perceptual and quantitative qualities, and against both geometry and connectivity attacks. Moreover, the probability of true-positive detection versus the probability of false-positive detection was evaluated. To validate the accuracy of the test cases, receiver operating characteristic (ROC) curves were drawn, and they show robustness from this aspect as well. 3D watermarking is still a young field, but a promising one.
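
The embedding rule is only outlined above, so the following is a minimal sketch, not the authors' algorithm: it assumes equal-width bins over the vertex norms and an arbitrary bit convention (a widened spread of norms within a bin encodes 1, a narrowed spread encodes 0); the function and parameter names are hypothetical.

```python
import numpy as np

def embed_watermark(vertices, bits, strength=0.05):
    """Sketch: embed one bit per radial bin by nudging the spread
    (variance) of vertex norms measured from the object's center."""
    center = vertices.mean(axis=0)
    rel = vertices - center
    norms = np.linalg.norm(rel, axis=1)
    edges = np.linspace(norms.min(), norms.max(), len(bits) + 1)
    out = vertices.astype(float)
    for b, bit in enumerate(bits):
        mask = (norms >= edges[b]) & (norms <= edges[b + 1])
        if mask.sum() < 2:
            continue                                   # bin too sparse to carry a bit
        width = edges[b + 1] - edges[b]
        u = (norms[mask] - edges[b]) / width           # normalize bin norms to [0, 1]
        k = 1.0 + strength if bit else 1.0 - strength  # stretch or shrink the spread
        u_new = np.clip(0.5 + (u - 0.5) * k, 0.0, 1.0)
        scale = (edges[b] + u_new * width) / norms[mask]
        out[mask] = center + rel[mask] * scale[:, None]  # radial displacement only
    return out
```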

Keywords: watermarking, mesh objects, local roughness, Laplacian smoothing

Procedia PDF Downloads 159
315 Intermodal Strategies for Redistribution of Agrifood Products in the EU: The Case of Vegetable Supply Chain from Southeast of Spain

Authors: Juan C. Pérez-Mesa, Emilio Galdeano-Gómez, Jerónimo De Burgos-Jiménez, José F. Bienvenido-Bárcena, José F. Jiménez-Guerrero

Abstract:

Environmental costs and road congestion resulting from product distribution in Europe have led to the creation of various programs and studies seeking to reduce these negative impacts. In this regard, apart from other institutions, the European Commission (EC) has designed plans in recent years promoting a more sustainable transportation model, in an attempt to ultimately shift traffic from road to sea by using intermodality to achieve a modal rebalancing. This issue proves especially relevant for supply chains from peripheral areas of the continent, where the supply of certain agrifood products is high. In such cases, the most difficult challenge is managing perishable goods. This study focuses on new approaches that strengthen the modal shift, as well as the reduction of externalities. The problem is analyzed by attempting to promote an intermodal system (truck and short sea shipping) for transport, taking as a point of reference highly perishable products (vegetables) exported from southeast Spain, the leading supplier to Europe. Methodologically, this paper seeks to contribute to the literature by proposing a different and complementary approach to comparing the intermodal and road-only alternatives. For this purpose, multicriteria decision-making is utilized in a p-median model (P-M) adapted to the transport of perishables and to a mode-of-shipping selection problem, which must consider different variables: transit cost, including externalities, time, and frequency (including agile response time). This scheme avoids bias in decision-making processes. The results show that the influence of externalities as drivers of the modal shift is reduced when transit time is introduced as a decision variable. These findings confirm that general strategies, such as the EC's, based on environmental benefits lose their capacity for implementation when applied to complex circumstances. In general, the different estimations reveal that, in the case of perishables, intermodality would be a secondary and viable option only for very specific destinations (for example, Hamburg and nearby locations, the area of influence of London, Paris, and the Netherlands). Based on this framework, the general outlook on this subject should be modified. Perhaps governments should promote specific business strategies based on new trends in the supply chain, not only on the reduction of externalities, and find new approaches that strengthen the modal shift. A possible option is to redefine ports, conceptualizing them as digitalized redistribution and coordination centers and not only as areas of cargo exchange.
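
The paper's P-M formulation is not reproduced in the abstract; the sketch below only illustrates the multicriteria mode comparison it describes, with hypothetical costs, transit times, frequencies, and weights.

```python
# Hypothetical data: per-destination cost (EUR/t, externalities included),
# transit time (h) and weekly service frequency for each mode.
options = {
    ("Hamburg", "road"):       {"cost": 120.0, "time": 38.0, "freq": 14},
    ("Hamburg", "intermodal"): {"cost": 95.0,  "time": 60.0, "freq": 3},
}

def best_mode(dest, w_cost=0.5, w_time=0.4, w_freq=0.1):
    """Pick the mode with the lowest weighted, normalized score."""
    modes = {m: v for (d, m), v in options.items() if d == dest}
    max_cost = max(v["cost"] for v in modes.values())
    max_time = max(v["time"] for v in modes.values())
    max_freq = max(v["freq"] for v in modes.values())
    def score(v):
        return (w_cost * v["cost"] / max_cost
                + w_time * v["time"] / max_time
                + w_freq * (1 - v["freq"] / max_freq))  # rarer service is penalized
    return min(modes, key=lambda m: score(modes[m]))

print(best_mode("Hamburg"))  # with these toy weights, road wins once time matters
```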

Keywords: environmental externalities, intermodal transport, perishable food, transit time

Procedia PDF Downloads 96
314 Impact of Traffic Restrictions Due to COVID-19 on Emissions from Freight Transport in Mexico City

Authors: Oscar Nieto-Garzón, Angélica Lozano

Abstract:

In urban areas, on-road freight transportation creates several social and environmental externalities. It is therefore crucial that freight transport considers not only economic aspects, such as retailer distribution cost reduction and service improvement, but also environmental effects such as global CO2 and local emissions (e.g., particulate matter, NOx, CO) and noise. Inadequate infrastructure development, a high rate of urbanization, increasing motorization, and a lack of transportation planning are characteristics that urban areas in developing countries share. The Metropolitan Area of Mexico City (MAMC), the Metropolitan Area of São Paulo (MASP), and Bogota are three of the largest urban areas in Latin America where air pollution is often a problem associated with emissions from mobile sources. The effect of the lockdown due to COVID-19 was analyzed for these urban areas, comparing the same period (January to August) of the years 2016-2019 with 2020. A strong reduction in the concentration of primary criteria pollutants emitted by road traffic was observed at the beginning of 2020 and after the lockdown measures. The daily mean concentration of NOx decreased by 40% in the MAMC, 34% in the MASP, and 62% in Bogota. Daily mean ozone levels increased after the lockdown measures in all three urban areas: 25% in the MAMC, 30% in the MASP, and 60% in Bogota. These changes in emission patterns from mobile sources drastically changed the ambient atmospheric concentrations of CO and NOx. The CO/NOx ratio at morning hours is often used as an indicator of mobile-source emissions. In 2020, traffic from cars and light vehicles was significantly reduced due to the first lockdown, but buses and trucks had no restrictions. In theory, this implies a decrease in CO and NOx from cars and light vehicles, while the NOx levels from trucks are maintained (or lowered due to the congestion reduction). At rush hours, traffic was reduced by between 50% and 75%, so trucks could reach higher speeds, which would reduce their emissions. By means of an emission model, it was found that an increase in the average speed (75%) would reduce the emissions (CO, NOx, and PM) from diesel trucks by up to 30%. It was expected that the value of the CO/NOx ratio would change due to the lockdown restrictions. However, although there was a significant reduction in traffic, CO/NOx kept its trend, decreasing to 8-9 in 2020. Hence, traffic restrictions had no impact on the CO/NOx ratio, although they did reduce vehicle emissions of CO and NOx. Therefore, these emissions may not adequately represent the change in vehicle emission patterns, or this ratio may not be a good indicator of emissions generated by vehicles. From the comparison of the theoretical data with those observed during the lockdown, it follows that the real NOx reduction was lower than the theoretical reduction. The reasons could be that there are other sources of NOx emissions, so NOx emissions generated by diesel vehicles would be over-represented, or that CO emissions are underestimated. Further analysis needs to consider this ratio to evaluate the emission inventories and then to extend these results to the determination of emission control policies for non-mobile sources.
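
As an illustration of the CO/NOx indicator described above, here is a minimal sketch on synthetic hourly data; the 06:00-09:00 morning window and all numerical values are assumptions.

```python
import numpy as np

# Hypothetical hourly concentration series (ppm) for one station; real inputs
# would come from the monitoring network of each metropolitan area.
hours = np.arange(24)
co  = np.random.default_rng(0).uniform(0.4, 1.2, 24)
nox = np.random.default_rng(1).uniform(0.04, 0.12, 24)

morning = (hours >= 6) & (hours <= 9)          # assumed rush-hour window
ratio = co[morning].mean() / nox[morning].mean()
print(f"morning CO/NOx ratio: {ratio:.1f}")
```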

Keywords: COVID-19, emissions, freight transport, Latin American metropolises

Procedia PDF Downloads 135
313 Employing Remotely Sensed Soil and Vegetation Indices and Predicting by Long Short-Term Memory for Irrigation Scheduling Analysis

Authors: Elham Koohikerade, Silvio Jose Gumiere

Abstract:

In this research, irrigation is highlighted as crucial for improving both the yield and quality of potatoes due to their high sensitivity to soil moisture changes. The study presents a hybrid Long Short-Term Memory (LSTM) model aimed at optimizing irrigation scheduling in potato fields in Quebec City, Canada. This model integrates model-based and satellite-derived datasets to simulate soil moisture content, addressing the limitations of field data. Developed under the guidance of the Food and Agriculture Organization (FAO), the simulation approach compensates for the lack of direct soil sensor data, enhancing the LSTM model's predictions. The model was calibrated using indices such as Surface Soil Moisture (SSM), the Normalized Difference Vegetation Index (NDVI), the Enhanced Vegetation Index (EVI), and the Normalized Multi-band Drought Index (NMDI) to effectively forecast soil moisture reductions. Understanding soil moisture and plant development is crucial for assessing drought conditions and determining irrigation needs. This study validated the spectral characteristics of vegetation and soil using ECMWF Reanalysis v5 (ERA5) and Moderate Resolution Imaging Spectroradiometer (MODIS) data from 2019 to 2023, collected from agricultural areas in Dolbeau and Peribonka, Quebec. Parameters such as surface volumetric soil moisture (0-7 cm), NDVI, EVI, and NMDI were extracted from these images. A regional four-year dataset of soil and vegetation moisture was developed using a machine learning approach combining model-based and satellite-based datasets. The LSTM model predicts soil moisture dynamics hourly across different locations and times, with its accuracy verified through cross-validation and comparison with existing soil moisture datasets. The model effectively captures temporal dynamics, making it valuable for applications requiring soil moisture monitoring over time, such as anomaly detection and memory analysis. By identifying typical peak soil moisture values and observing distribution shapes, irrigation can be scheduled to maintain volumetric soil moisture (VSM) within values of 0.25 to 0.30 m³/m³, avoiding under- and over-watering. The strong correlations between parcels suggest that a uniform irrigation strategy might be effective across multiple parcels, with adjustments based on specific parcel characteristics and historical data trends. The application of the LSTM model to predict soil moisture and vegetation indices yielded mixed results. While the model effectively captures the central tendency and temporal dynamics of soil moisture, it struggles to accurately predict EVI, NDVI, and NMDI.
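
The abstract does not give the network architecture, so this is a minimal sketch of an LSTM regressor of the kind described, using Keras; the sequence length (24 steps), the four input features (SSM, NDVI, EVI, NMDI), the layer sizes, and the random stand-in data are all assumptions.

```python
import numpy as np
from tensorflow import keras

# Hypothetical shapes: sequences of 24 hourly steps with 4 features
# (SSM, NDVI, EVI, NMDI); the target is soil moisture at the next step.
X = np.random.rand(1000, 24, 4).astype("float32")
y = np.random.rand(1000, 1).astype("float32")

model = keras.Sequential([
    keras.layers.Input(shape=(24, 4)),
    keras.layers.LSTM(64),
    keras.layers.Dense(1),          # predicted volumetric soil moisture
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2)
```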

Keywords: irrigation scheduling, LSTM neural network, remotely sensed indices, soil and vegetation monitoring

Procedia PDF Downloads 40
312 Examination of Indoor Air Quality of Naturally Ventilated Dwellings During Winters in Mega-City Kolkata

Authors: Tanya Kaur Bedi, Shankha Pratim Bhattacharya

Abstract:

The US Environmental Protection Agency defines indoor air quality as "the air quality within and around buildings, especially as it relates to the health and comfort of building occupants". According to a 2021 report by the Energy Policy Institute at Chicago, residents of India, the country with the highest levels of air pollution in the world, lose about 5.9 years of life expectancy due to poor air quality, and yet the country has numerous dwellings dependent on natural ventilation. Currently, the urban population spends 90% of its time indoors, a scenario that raises concern for occupant health and well-being. The built environment can affect health directly and indirectly through immediate or long-term exposure to indoor air pollutants. Health effects associated with indoor air pollutants include eye/nose/throat irritation, respiratory diseases, heart disease, and even cancer. This study attempts to demonstrate the causal relationship between indoor air quality and its determining aspects. Detailed indoor air quality audits were conducted in residential buildings located in Kolkata, India, in the months of December and January 2021. According to the air pollution knowledge assessment city program in India, Kolkata is the second most polluted mega-city after Delhi. Although air pollution levels are alarming year-round, the winter months are the most crucial due to unfavorable environmental conditions: while emissions typically remain constant throughout the year, cold air is denser and moves more slowly than warm air, trapping pollution in place for much longer, so it is breathed in at a higher rate than in summer. The air pollution monitoring period was selected considering environmental factors and major pollution contributors like traffic and road dust. This study focuses on the relationship between the built environment and the spatial-temporal distribution of air pollutants in and around it. The measured parameters include temperature, relative humidity, air velocity, particulate matter, volatile organic compounds, formaldehyde, and benzene. A total of 56 rooms were audited, selectively targeting the dominant middle-income group. Data collection was conducted using a set of instruments positioned in the human breathing zone. The study assesses indoor air quality based on factors determining natural ventilation and air pollution dispersion, such as the surrounding environment, dominant wind, openable window to floor area ratio, windward or leeward side openings, natural ventilation type in the room (single-sided or cross-ventilation), floor height, residents' cleaning habits, etc.

Keywords: indoor air quality, occupant health, urban housing, air pollution, natural ventilation, architecture, urban issues

Procedia PDF Downloads 120
311 Influencing Factors on Stability of Shale with Silt Layers at Slopes

Authors: A. K. M. Badrul Alam, Yoshiaki Fujii, Nahid Hasan Dipu, Shakil Ahmed Razo

Abstract:

Shale rock masses often include silt layers, impacting slope stability in construction and mining. Analyzing their interaction is crucial for long-term stability. This study used an elastoplastic model, incorporating the stress transfer method and Coulomb's criterion, to assess a shale rock mass with silt layers. It computed the stress distribution, assessed failure potential, and identified vulnerable regions where nodal forces were calculated for a comprehensive analysis. A shale rock mass ranging from 14.75 to 16.75 meters thick, with silt layers varying from 0.36 to 0.5 meters, was considered in the model. Four silt layer conditions were examined: horizontal (SilHL), vertical (SilVL), inclined against the slope (SilincAGS), and inclined along the slope (SilincALO). Mechanical parameters such as uniaxial compressive strength (UCS), tensile strength (TS), Young's modulus (E), Poisson's ratio, and density were adjusted over a range of scenarios: UCS (0.5 to 5 MPa), TS (0.1 to 1 MPa), and E (6 to 60 MPa). In the elastic analysis of the shale rock masses, stress distributions vary based on layer properties. When the shale and silt layers have the same elasticity modulus (E), stress concentrates at corners. If the silt layer has a lower E than the shale, marginal changes in maximum stress (σmax) occur for SilHL, a decrease in σmax is evident for SilVL, and slight variations in σmax are observed for SilincAGS and SilincALO. In the elastoplastic analysis, overall strength decreases of 20%, 40%, 60%, 80%, and 90% were considered. For SilHL: (i) same E, UCS, and TS for silt layer and shale, UCS/TS ratio 5: the strength decrease led to shear (S), then tension-then-shear (T then S) failure; failure was noticeable at a 60% decrease, significant at 80%, with collapse at 90%. (ii) Lower E for the silt layer, same strength as shale: no significant differences. (iii) Lower E and UCS, silt layer strength 1/10: no significant differences. For SilVL: (i) same E, UCS, and TS, UCS/TS ratio 5: effects similar to SilHL. (ii) Lower E for the silt layer, same strength as shale: slip occurred. (iii) Lower E and UCS, silt layer strength 1/10: bitension failure was also observed, with larger slip. For SilincAGS: (i) same E, UCS, and TS, UCS/TS ratio 5: effects similar to SilHL. (ii) Lower E for the silt layer, same strength as shale: slip occurred. (iii) Lower E and UCS, silt layer strength 1/10: tension failure was also observed, with larger slip. For SilincALO: (i) same E, UCS, and TS, UCS/TS ratio 5: similar to SilHL, with tension failure. (ii) Lower E for the silt layer, same strength as shale: no significant differences; failure diverged. (iii) Lower E and UCS, silt layer strength 1/10: bitension failure was also observed with larger slip; failure diverged. Toppling failure was observed for the lower-E cases of SilVL and SilincAGS. The presence of silt interlayers in shale greatly impacts slope stability. Designing slopes requires careful consideration of the mechanical properties of both the silt and the shale. The temporal degradation of strength in these layers is a major concern. Thus, slope design must comprehensively analyze the immediate and long-term mechanical behavior of the interlayer silt and shale to effectively mitigate instability.
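
As a minimal illustration of the Coulomb criterion used in the analysis, the sketch below checks the shear-strength margin for a single stress state; the stress values, cohesion, and friction angle are hypothetical, and the study's full elastoplastic stress-transfer scheme is not reproduced.

```python
import math

def coulomb_margin(sigma_n, tau, cohesion, phi_deg):
    """Shear strength margin under Coulomb's criterion:
    tau_max = c + sigma_n * tan(phi). A negative margin means shear failure."""
    tau_max = cohesion + sigma_n * math.tan(math.radians(phi_deg))
    return tau_max - abs(tau)

# Hypothetical stress state on a silt layer (kPa) with assumed c and phi.
print(coulomb_margin(sigma_n=150.0, tau=95.0, cohesion=20.0, phi_deg=25.0))
```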

Keywords: shale rock masses, silt layers, slope stability, elasto-plastic model, temporal degradation

Procedia PDF Downloads 52
310 Adapting an Accurate Reverse-time Migration Method to USCT Imaging

Authors: Brayden Mi

Abstract:

Reverse time migration has been widely used in the petroleum exploration industry to reveal subsurface images and to detect rock and fluid properties since the early 1980s. The seismic technology involves the construction of a velocity model through interpretive model construction, seismic tomography, or full waveform inversion, followed by the reverse-time propagation of the acquired seismic data together with the original wavelet used in the acquisition. The methodology has matured from 2D, simple media to the point where it can handle full 3D imaging challenges in extremely complex geological conditions. Conventional ultrasound computed tomography (USCT) utilizes travel-time inversion to reconstruct the velocity structure of an organ. With the velocity structure, USCT data can be migrated with the "bent-ray" method. Its seismic counterpart is called Kirchhoff depth migration, in which the source of reflective energy is traced by ray-tracing and summed to produce a subsurface image. It is well known that ray-tracing-based migration has severe limitations in strongly heterogeneous media and irregular acquisition geometries. Reverse time migration (RTM), on the other hand, fully accounts for the wave phenomena, including multiple arrivals and turning rays due to a complex velocity structure. It has the capability to fully reconstruct the image detectable within its acquisition aperture. RTM algorithms typically require a rather accurate velocity model and demand high computing power, and so may not be applicable to the real-time imaging normally required in day-to-day medical operations. However, with the improvement of computing technology, this computational bottleneck may not present a challenge in the near future. Present-day RTM algorithms are typically implemented from a flat datum for the seismic industry, but they can be modified to accommodate any acquisition geometry and aperture, as long as sufficient illumination is provided. This flexibility of RTM can be conveniently exploited for USCT imaging if the spatial coordinates of the transmitters and receivers are known and enough data is collected to provide full illumination. This paper proposes an implementation of a full 3D RTM algorithm for USCT imaging that produces an accurate 3D acoustic image, based on the phase-shift-plus-interpolation (PSPI) method for wavefield extrapolation. In this method, each acquired dataset (shot) is propagated back in time, and a known ultrasound wavelet is propagated forward in time, using PSPI wavefield extrapolation and a piecewise-constant velocity model of the organ (breast). The imaging condition is then applied to produce a partial image. Although each image is subject to the limitation of its own illumination aperture, the stack of multiple partial images produces a full image of the organ, with a much-reduced noise level compared with the individual partial images.
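
The PSPI extrapolation itself is beyond a short sketch; the fragment below only illustrates the two steps named above: the zero-lag cross-correlation imaging condition applied per shot, and the stacking of partial images. Array shapes and function names are assumptions.

```python
import numpy as np

def imaging_condition(src_wf, rcv_wf):
    """Zero-lag cross-correlation imaging condition: sum over time of the
    forward-propagated source wavefield times the back-propagated data.
    src_wf, rcv_wf: arrays of shape (nt, nz, nx) for one shot."""
    return np.sum(src_wf * rcv_wf, axis=0)

def stack_shots(partial_images):
    """Stack per-shot partial images: independent illumination apertures
    add constructively while uncorrelated noise is averaged down."""
    return np.mean(partial_images, axis=0)
```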

Keywords: illumination, reverse time migration (RTM), ultrasound computed tomography (USCT), wavefield extrapolation

Procedia PDF Downloads 73
309 Identification of Electric Energy Storage Acceptance Types: Empirical Findings from the German Manufacturing Industry

Authors: Dominik Halstrup, Marlene Schriever

Abstract:

Industry, as one of the main energy consumers, is of critical importance along the way of transforming the energy system towards renewable energies. The distributed character of the energy transition demands that further flexibility be introduced to the grid. In order to shed further light on the acceptance of electric energy storage (ESS) from an industrial point of view, this study examines the German manufacturing industry. The analysis in this paper uses data from a survey of 101 manufacturing companies in Germany. As part of a two-stage research design, both qualitative and quantitative data were collected. Based on a literature review, an acceptance concept was developed and four user types were identified and incorporated in the questionnaire: (Dedicated) User, Impeded User, Forced User, and (Dedicated) Non-User. Both descriptive and bivariate analyses are deployed to identify the level of acceptance in the different organizations. After a factor analysis was conducted, variables were grouped to form independent acceptance factors. Out of the 22 organizations that show a positive attitude towards ESS, 5 have already implemented ESS; they can therefore be considered 'Dedicated Users'. The remaining 17 organizations have a positive attitude but have not implemented ESS yet. The results suggest that profitability plays an important role, as do load-management systems that are already in place. Surprisingly, 2 organizations have implemented ESS even though they have a negative attitude towards it. This is an example of a 'Forced User', where reasons of overriding importance or supporters with overriding authority might have forced the company to implement ESS. By far the biggest subset of the sample shows (critical) distance and can therefore be considered '(Dedicated) Non-Users'. The results indicate that the majority of the respondents have not yet thought ESS through for their own organization. For the majority of the sample, one can therefore not speak of critical distance but rather of a distance due to insufficient information and perceived unprofitability. This paper identifies the relative state of acceptance of ESS in the manufacturing industry, current reasons for hindrance, and perspectives for future growth of ESS in an industrial setting from a policy level. The interest currently generated by the media could be channeled into a more substantial and individual discussion about ESS in an industrial setting. If the current perception of profitability could be addressed and communicated accordingly, ESS and their use in, for instance, cooperative business models could become a topic for more organizations in Germany and other parts of the world. As price mechanisms tend to favor existing technologies, policy makers need to further assess the use of ESS and acknowledge its positive effects when integrated in an energy system. The subfields of generation, transmission, and distribution are becoming increasingly intertwined. New technologies and business models entering the market, such as ESS or cooperative arrangements, increase the number of stakeholders. Organizations need to find their place within this array of stakeholders.

Keywords: acceptance, energy storage solutions, German energy transition, manufacturing industry

Procedia PDF Downloads 224
308 Wound Healing Process Studied on DC Non-Homogeneous Electric Fields

Authors: Marisa Rio, Sharanya Bola, Richard H. W. Funk, Gerald Gerlach

Abstract:

Cell migration, wound healing, and regeneration are some of the physiological phenomena in which electric fields (EFs) have proven to have an important function. Physiologically, cells experience electrical signals in the form of transmembrane potentials, ion fluxes through protein channels, and electric fields at their surface. As soon as a wound is created, the disruption of the epithelial layers generates an electric field of ca. 40-200 mV/mm, directing cell migration towards the wound site and starting the healing process. In vitro electrotaxis experiments have shown that cells respond to DC EFs by polarizing and migrating towards one of the poles (cathode or anode). A standard electrotaxis experiment consists of an electrotaxis chamber where cells are cultured, a DC power source, and agar salt bridges that help keep toxic products from the electrodes from reaching the cell surface. The electric field strengths used in such an experiment are uniform and homogeneous. In contrast, the endogenous electric field around a wound tends to be multi-field and non-homogeneous. In this study, we present a custom device that enables electrotaxis experiments in non-homogeneous DC electric fields. Its main feature is the replacement of conventional metallic electrodes, separated from the electrotaxis channel by agarose gel bridges, with electrolyte-filled microchannels. The connection to the DC source is made by Ag/AgCl electrodes, encased in agarose gel and placed at the end of each microfluidic channel. An SU-8 membrane closes the fluidic channels and simultaneously serves as the single connection from each of them to the central electrotaxis chamber. The electric field distribution and current density were numerically simulated with the steady-state electric conduction module of ANSYS 16.0. The simulation data confirm the application of non-homogeneous EFs of physiological strength. To validate the biocompatibility of the device, the cellular viability of the photoreceptor-derived 661W cell line was assessed. The cells did not show any signs of apoptosis, damage, or detachment during stimulation. Furthermore, immunofluorescence staining, namely vinculin and actin labelling, allowed the assessment of adhesion efficiency and the orientation of the cytoskeleton, respectively. Cellular motility in the presence and absence of applied DC EFs was verified. The movement of individual cells was tracked for the duration of the experiments, confirming the EF-induced, cathodally-directed motility of the studied cell line. The in vitro monolayer wound assay, or "scratch assay", is a standard protocol to quantitatively assess cell migration in vitro. It encompasses the growth of a confluent cell monolayer followed by the mechanical creation of a scratch representing a wound. Wound dynamics were then monitored over time and compared between control conditions and applied electric fields to quantify cell population motility.

Keywords: DC non-homogeneous electric fields, electrotaxis, microfluidic biochip, wound healing

Procedia PDF Downloads 269
307 Antimicrobial Efficacy of Some Antibiotic Combinations Tested against Some Molecularly Characterized Multiresistant Staphylococcus Clinical Isolates in Egypt

Authors: Nourhan Hussein Fanaki, Hoda Mohamed Gamal El-Din Omar, Nihal Kadry Moussa, Eva Adel Edward Farid

Abstract:

The resistance of staphylococci to various antibiotics has become a major concern for health care professionals. The efficacy of combinations of selected glycopeptides (vancomycin and teicoplanin) with gentamicin or rifampicin, as well as that of the gentamicin/rifampicin combination, was studied against selected pathogenic staphylococci isolated in Egypt. The molecular distribution of genes conferring resistance to these four antibiotics was detected among the tested clinical isolates. Antibiotic combinations were studied using the checkerboard technique and the time-kill assay (in both the stationary and log phases). Induction of resistance to glycopeptides in staphylococci was attempted in the absence and presence of diclofenac sodium as an inducer. Transmission electron microscopy was used to study the effect of glycopeptides on the ultrastructure of the staphylococcal cell wall. Attempts were made to cure gentamicin resistance plasmids and to study the transfer of these plasmids by conjugation. Trials for the transformation of the successfully isolated gentamicin resistance plasmid into competent cells were carried out. The detection of genes conferring resistance to the tested antibiotics was performed using the polymerase chain reaction. The studied antibiotic combinations proved their efficacy, especially when tested during the log phase. Induction of resistance to glycopeptides in staphylococci was more promising in the presence of diclofenac sodium than in its absence. Transmission electron microscopy revealed thickening of the bacterial cell wall in staphylococcus clinical isolates due to the presence of the tested glycopeptides. Curing of gentamicin resistance plasmids was only successful in 2 out of 9 tested isolates, with a curing rate of 1 percent for each. Both isolates, when used as donors in conjugation experiments, yielded promising conjugation frequencies ranging between 5.4 × 10⁻² and 7.48 × 10⁻² colony-forming units/donor cell. Plasmid isolation was only successful in one of the two tested isolates; moreover, only a low transformation efficiency (59.7 transformants/microgram plasmid DNA) of such plasmids was obtained. Negative regulators of autolysis, such as arlR, lytR, and lrgB, as well as cell-wall-associated genes, such as pbp4 and/or pbp2, were detected in staphylococcus isolates with reduced susceptibility to the tested glycopeptides. Concerning rifampicin resistance genes, rpoBstaph was detected in 75 percent of the tested staphylococcus isolates. It could be concluded that the in vitro studies emphasized the usefulness of the combination of vancomycin or teicoplanin with gentamicin or rifampicin, as well as that of gentamicin with rifampicin, against staphylococci showing varying resistance patterns. However, further in vivo studies are required to ensure the safety and efficacy of such combinations. Diclofenac sodium can act as an inducer of resistance to glycopeptides in staphylococci, and cell-wall thickness is a major contributor to such resistance. Gentamicin resistance in these strains could be chromosomally or plasmid mediated. Multiple mutations in the rpoB gene could mediate staphylococcal resistance to rifampicin.
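
A checkerboard assay is commonly read out through the fractional inhibitory concentration index (FICI); the sketch below shows that standard calculation with hypothetical MIC values (the paper's actual MICs are not given in the abstract).

```python
def fic_index(mic_a_combo, mic_a_alone, mic_b_combo, mic_b_alone):
    """Fractional inhibitory concentration index for a checkerboard assay.
    Common reading: <= 0.5 synergy, > 4 antagonism, otherwise no interaction."""
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

# Hypothetical MIC values (ug/mL) for a glycopeptide/gentamicin pair.
print(fic_index(0.5, 2.0, 1.0, 8.0))   # 0.375 -> synergy
```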

Keywords: glycopeptides, combinations, induction, diclofenac, transmission electron microscopy, polymerase chain reaction

Procedia PDF Downloads 291
306 Comparison of Artificial Neural Networks and Statistical Classifiers in Olive Sorting Using Near-Infrared Spectroscopy

Authors: İsmail Kavdır, M. Burak Büyükcan, Ferhat Kurtulmuş

Abstract:

Table olives are a valuable product, especially in Mediterranean countries, and are usually consumed after some fermentation process. Defects that occur naturally or as a result of an impact while olives are still fresh may become more distinct after the processing period. Defective olives are not desired in either the table olive or the olive oil industry, as they affect final product quality and reduce market prices considerably. It is therefore critical to sort table olives before, or even after, processing according to their quality and surface defects. However, manual sorting has many drawbacks, such as high expense, subjectivity, tediousness, and inconsistency. Quality criteria for green olives were accepted as color and freedom from mechanical defects, wrinkling, surface blemishes, and rot. In this study, the aim was to classify fresh table olives using different classifiers and NIR spectroscopy readings, and to compare the classifiers. For this purpose, green olives (Ayvalik variety) were classified based on their surface properties (defect-free, bruised, and fly-defected) using FT-NIR spectroscopy and classification algorithms such as artificial neural networks, ident, and cluster. A Bruker multi-purpose analyzer (MPA) FT-NIR spectrometer (Bruker Optik GmbH, Ettlingen, Germany) was used for the spectral measurements. The spectrometer was equipped with InGaAs detectors (TE-InGaAs internal for reflectance and RT-InGaAs external for transmittance) and a 20-watt high-intensity tungsten-halogen NIR light source. Reflectance measurements were performed with a fiber optic probe (type IN 261) covering the wavelengths between 780 and 2500 nm, while transmittance measurements were performed between 800 and 1725 nm. Thirty-two scans were acquired for each reflectance spectrum in about 15.32 s, while 128 scans were obtained for transmittance in about 62 s. The resolution was 8 cm⁻¹ for both spectral measurement modes. Instrument control was done using OPUS software (Bruker Optik GmbH, Ettlingen, Germany). Classification was performed using three classifiers: backpropagation neural networks, and the ident and cluster classification algorithms, implemented via the Neural Network toolbox in Matlab and the ident and cluster modules in the OPUS software, respectively. Classifications were performed considering different scenarios: two quality conditions at once (good vs. bruised, good vs. fly-defected) and three quality conditions at once (good, bruised, and fly-defected), using both reflectance and transmittance spectrometer readings. Classification results obtained using the artificial neural network algorithm in discriminating good olives from bruised olives, from olives with fly defects, and from the olive group including both bruised and fly-defected olives achieved success rates ranging between 97 and 99%, 61 and 94%, and 58.67 and 92%, respectively. On the other hand, classification results for discriminating good olives from bruised ones and good olives from fly-defected olives using the ident method ranged between 75-97.5% and 32.5-57.5%, respectively; results obtained for the same classification applications using the cluster method ranged between 52.5-97.5% and 22.5-57.5%.
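
The study used the Matlab Neural Network toolbox and the OPUS ident/cluster modules; as a freely available analogue, here is a scikit-learn sketch of a backpropagation classifier on spectra, with random stand-in data and assumed dimensions.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

# Hypothetical data: one reflectance spectrum per olive and a class label
# (0 = good, 1 = bruised, 2 = fly defect).
X = np.random.rand(300, 500)        # 300 olives x 500 wavelength points
y = np.random.randint(0, 3, 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(scaler.transform(X_tr), y_tr)
print(f"test accuracy: {clf.score(scaler.transform(X_te), y_te):.2f}")
```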

Keywords: artificial neural networks, statistical classifiers, NIR spectroscopy, reflectance, transmittance

Procedia PDF Downloads 244
305 Hydrogen Production from Auto-Thermal Reforming of Ethanol Catalyzed by Tri-Metallic Catalyst

Authors: Patrizia Frontera, Anastasia Macario, Sebastiano Candamano, Fortunato Crea, Pierluigi Antonucci

Abstract:

The increase in world energy demand makes biomass an attractive energy source today, with a view to minimizing CO2 emissions and reducing global warming. Recently, COP-21, the international meeting on global climate change, defined the roadmap for sustainable worldwide development based on low-carbon fuels. Hydrogen is an energy vector able to substitute for conventional petroleum fuels. Ethanol for hydrogen production represents a valid alternative to fossil sources due to its low toxicity, low production costs, high biodegradability, high H2 content, and renewability. Ethanol conversion to hydrogen by a combination of partial oxidation and steam reforming reactions is generally called auto-thermal reforming (ATR). The ATR process is advantageous due to its low energy requirements and the reduced formation of carbonaceous deposits. The catalyst plays a pivotal role in the ATR process, especially with respect to process selectivity and carbonaceous deposit formation. Bimetallic or trimetallic catalysts, as well as catalysts with doped-promoter supports, may exhibit higher activity, selectivity, and deactivation resistance than the corresponding monometallic ones. In this work, NiMoCo/GDC, NiMoCu/GDC, and NiMoRe/GDC (where GDC is the gadolinia-doped ceria support and the metal composition is 60:30:10 for all catalysts) were prepared by the impregnation method. The support, gadolinia 0.2 doped ceria 0.8, was impregnated with metal precursors solubilized in aqueous ethanol solution (50%) at room temperature for 6 hours. After this, the catalysts were dried at 100°C for 8 hours and subsequently calcined at 600°C to obtain the metal oxides. Finally, the active catalysts were obtained by a reduction procedure (H2 atmosphere at 500°C for 6 hours). All samples were characterized by different analytical techniques (XRD, SEM-EDX, XPS, CHNS, H2-TPR, and Raman spectroscopy). Catalytic experiments (auto-thermal reforming of ethanol) were carried out in the temperature range 500-800°C under atmospheric pressure, using a continuous fixed-bed microreactor. Effluent gases from the reactor were analyzed by two Varian CP4900 chromatographs with TCD detectors. The analytical investigation focused on preventing coke deposition, metal sintering, and sulfur poisoning. Hydrogen productivity, ethanol conversion, and product distribution were measured and analyzed. At 600°C, all trimetallic catalysts show their best performance: H2 + CO reach almost 77 vol.% in the final gases. The NiMoCo/GDC catalyst shows the best selectivity to hydrogen with respect to the other trimetallic catalysts (41 vol.% at 600°C). On the other hand, NiMoCu/GDC and NiMoRe/GDC demonstrated higher sulfur poisoning resistance (up to 200 cc/min) than the NiMoCo/GDC catalyst. The correlation between the catalytic results and the surface properties of the catalysts will be discussed.

Keywords: catalysts, ceria, ethanol, gadolinia, hydrogen, nickel

Procedia PDF Downloads 152
304 Strategic Interventions to Address Health Workforce and Current Disease Trends, Nakuru, Kenya

Authors: Paul Moses Ndegwa, Teresia Kabucho, Lucy Wanjiru, Esther Wanjiru, Brian Githaiga, Jecinta Wambui

Abstract:

Health outcomes have improved in Kenya since 2013, following the adoption of the new constitution, which devolved governance and transferred administration and health planning functions to county governments. The 2018-2022 development agenda prioritized universal healthcare coverage, food security, and nutrition; however, the emergence of COVID-19 and the increase in non-communicable diseases pose a challenge and a constraint to an already overwhelmed health system. A study was conducted from July to November 2021 to establish the key challenges in achieving universal healthcare coverage within the county and best practices for improved non-communicable disease control. Fourteen health workers, including nurses, doctors, public health officers, clinical officers, and pharmaceutical technologists, were purposively engaged to provide critical information through questionnaires administered by a trained duo observing ethical procedures on confidentiality, and the data were then analyzed. Communicable diseases are major causes of morbidity and mortality; non-communicable diseases contribute to approximately 39% of deaths. More than 45% of the population does not have access to safe drinking water. The study noted geographic inequality in the distribution and use of health resources, as well as competing non-health priorities. Of the health workers, 56% are nurses, 13% clinical officers, 7% doctors, 9% public health workers, and 2% pharmaceutical technologists. Poor-quality data limit the validity of disease-burden estimates and research activities. Risk factors include unsafe water, poor sanitation and hand washing, unsafe sex, and malnutrition. The key challenge in achieving universal healthcare coverage is the rise in the relative contribution of non-communicable diseases. The study recommends: improving targeted disease control with effective and equitable resource allocation; developing strong infectious disease control mechanisms; improving the quality of data for decision making; strengthening electronic data-capture systems; increasing investment in the health workforce to improve health service provision and achieve universal health coverage; creating a favorable environment to retain health workers; filling staffing gaps, such as the shortage of doctors (7%); developing a multi-sectoral approach to health workforce planning and management; investing in mechanisms that generate contextual evidence on current and future health workforce needs; ensuring the retention of a qualified, skilled, and motivated health workforce; and delivering integrated, people-centered health services.

Keywords: multi-sectoral approach, equity, people-centered, health workforce retention

Procedia PDF Downloads 113
303 Evaluation of Cryoablation Procedures in Treatment of Atrial Fibrillation from 3 Years' Experiences in a Single Heart Center

Authors: J. Yan, B. Pieper, B. Bucsky, B. Nasseri, S. Klotz, H. H. Sievers, S. Mohamed

Abstract:

Cryoablation is increasingly applied for the interventional treatment of paroxysmal (PAAF) or persistent atrial fibrillation (PEAF). In cardiac surgery, this procedure is often combined with coronary artery bypass graft (CABG) and valve operations. Three different methods, differing in extent and mechanism, are practiced in our heart center: lone left atrial cryoablation, Cox-Maze IV, and Cox-Maze III. 415 patients (68 ± 0.8 years, 68.2% male) with predisposed atrial fibrillation who initially required either coronary or valve operations were enrolled and divided into 3 matched groups according to the deployed procedure: CryoLA group (cryoablation of the lone left atrium, n=94), Cox-Maze IV group (n=93), and Cox-Maze III group (n=8). All patients additionally received closure of the left atrial appendage (LAA) and regularly underwent three-year ambulant follow-up assessments (3, 6, 9, 12, 18, 24, 30, and 36 months). The burden of atrial fibrillation was assessed directly by means of a cardiac monitor (Reveal XT, Medtronic) or a 3-day Holter electrocardiogram. AF attack frequencies and their circadian patterns were systematically analyzed. Furthermore, anticoagulants and regular rate-/rhythm-controlling medications were evaluated and listed in terms of anti-rate and anti-rhythm regimens. Concerning PAAF treatment, the Cox-Maze IV procedure provided a therapeutically acceptable effect, as did lone left atrium (LA) cryoablation (5.25 ± 5.25% vs. 10.39 ± 9.96% AF burden, p > 0.05). Interestingly, the Cox-Maze III method presented a better short-term effect in PEAF therapy than lone LA cryoablation and Cox-Maze IV (0.25 ± 0.23% vs. 15.31 ± 5.99% and 9.10 ± 3.73% AF burden within the first year, p < 0.05), but this therapeutic advantage was lost during the ongoing follow-ups (26.65 ± 24.50% vs. 8.33 ± 8.06% and 15.73 ± 5.88% in the third follow-up year). In this way, lone LA cryoablation established its antiarrhythmic efficacy, and 69.5% of patients were released from vitamin K antagonists, while Cox-Maze IV liberated 67.2% of patients from continuous anticoagulant medication. For all 3 procedures, AF recurrences mostly presented as attacks of less than 60 min duration (p > 0.05). Regarding the circadian distribution of recurrence attacks, weighted over the ongoing follow-ups, lone LA cryoablation achieved and stabilized its antiarrhythmic effects over time, which was especially observed in the treatment of PEAF, while the antiarrhythmic effects of Cox-Maze IV and III weakened progressively; the same pattern was observable in the circadian rhythm of recurring AF attacks. Furthermore, the strategy of rate control was applied much more often than rhythm control to support and maintain the therapeutic successes obtained. Based on the experience in our heart center, lone LA cryoablation presented effects equivalent to the Cox-Maze IV and III procedures in the treatment of AF. These therapeutic successes were especially evident in patients suffering from persistent AF (PEAF). Additional supportive strategies, such as a rate control regimen, should be initiated and implemented according to appropriate criteria to improve the therapeutic effects of cryoablation.

Keywords: AF-burden, atrial fibrillation, cardiac monitor, COX MAZE, cryoablation, Holter, LAA

Procedia PDF Downloads 202
302 Reconstruction of Signal in Plastic Scintillator of PET Using Tikhonov Regularization

Authors: L. Raczynski, P. Moskal, P. Kowalski, W. Wislicki, T. Bednarski, P. Bialas, E. Czerwinski, A. Gajos, L. Kaplon, A. Kochanowski, G. Korcyl, J. Kowal, T. Kozik, W. Krzemien, E. Kubicz, Sz. Niedzwiecki, M. Palka, Z. Rudy, O. Rundel, P. Salabura, N.G. Sharma, M. Silarski, A. Slomski, J. Smyrski, A. Strzelecki, A. Wieczorek, M. Zielinski, N. Zon

Abstract:

The J-PET scanner, which allows for single-bed imaging of the whole human body, is currently under development at the Jagiellonian University. The J-PET detector improves the TOF resolution due to the use of fast plastic scintillators. Since registration of the waveform of signals with duration times of a few nanoseconds is not feasible, a novel front-end electronics allowing for sampling in the voltage domain at four thresholds was developed. To take full advantage of these fast signals, a novel scheme for recovering the signal waveform, based on ideas from Tikhonov regularization (TR) and compressive sensing methods, is presented. The prior distribution of the sparse representation is evaluated based on a linear transformation of the training set of signal waveforms using principal component analysis (PCA) decomposition. Besides the advantage of including additional information from the training signals, a further benefit of the TR approach is that the signal recovery problem has an optimal solution which can be determined explicitly. Moreover, from Bayes' theory, the properties of the regularized solution, especially its covariance matrix, may easily be derived. This step is crucial for introducing and proving the formula for calculating the signal recovery error. It has been proven that the average recovery error is approximately inversely proportional to the number of samples at voltage levels. The method was tested using signals registered by means of a single detection module of the J-PET detector, built out of a 30 cm long BC-420 plastic scintillator strip. It is demonstrated that the experimental and theoretical functions describing the recovery errors in the J-PET scenario are largely consistent. The specificity and limitations of the signal recovery method in this application are discussed. It is shown that the PCA basis offers a high level of information compression and an accurate recovery with just eight samples, from four voltage levels, for each signal waveform. Moreover, it is demonstrated that using the recovered signal waveform, instead of the samples at four voltage levels alone, improves the spatial resolution of the hit position reconstruction. The experiment shows that the spatial resolution evaluated based on information from four voltage levels, without recovery of the signal waveform, is equal to 1.05 cm. After applying the information from the four voltage levels to the recovery of the signal waveform, the spatial resolution improves to 0.94 cm. Moreover, this result is only slightly worse than the one evaluated using the original raw signal, for which the spatial resolution is 0.93 cm. This is very important information, since limiting the number of threshold levels in the electronic devices to four leads to a significant reduction in the overall cost of the scanner. The developed recovery scheme is general and may be incorporated in any other investigation where prior knowledge about the signals of interest may be utilized.
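
The explicitly determinable optimal solution mentioned above has the standard Tikhonov form. In the notation below, which is an assumption since the abstract does not define the operators, b collects the samples at the four voltage levels, A is the sampling operator acting on the waveform representation x, λ is the regularization weight, and L encodes the PCA-derived prior from the training signals:

```latex
\hat{x} \;=\; \arg\min_{x}\; \|Ax - b\|_2^2 \;+\; \lambda\,\|Lx\|_2^2,
\qquad
\hat{x} \;=\; \bigl(A^{\top}A + \lambda L^{\top}L\bigr)^{-1} A^{\top} b .
```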

Keywords: plastic scintillators, positron emission tomography, statistical analysis, Tikhonov regularization

Procedia PDF Downloads 445
301 Spatial and Temporal Variability of Meteorological Drought Including Atmospheric Circulation in Central Europe

Authors: Andrzej Wałęga, Marta Cebulska, Agnieszka Ziernicka-Wojtaszek, Wojciech Młocek, Agnieszka Wałęga, Tommaso Caloiero

Abstract:

Drought is one of the natural phenomena influencing many aspects of human activity, such as food production, agriculture, industry, and the ecological conditions of the environment. In the area of the Polish Carpathians, there are periods with a deficit of rainwater and an increasing frequency of dry months, especially in the cold half of the year. The aim of this work is a spatial and temporal analysis of drought, expressed as the SPI, in a heterogeneous area of the Polish Carpathians and of the highland region in the central part of Europe, based on long-term precipitation data. Also, to the best of our knowledge, for the first time in this work, drought characteristics analyzed via the SPI are discussed based on the atmospheric circulation calendar. The study region is the Upper Vistula Basin, located in the southern and south-eastern part of Poland. In this work, monthly precipitation from 56 rainfall stations was analyzed from 1961 to 2022. The 3-, 6-, 9-, and 12-month Standardized Precipitation Index (SPI) values were used as indicators of meteorological drought. For the 3-month SPI, the main climatic mechanisms determining extreme droughts were defined based on the calendar of synoptic circulations. The Mann-Kendall test was used to detect trends in extreme droughts. Statistically significant trends in the SPI were observed at 52.7% of all analyzed stations, and in most cases a positive trend was observed. Statistically significant trends were more frequently observed at stations located in the western part of the analyzed region. Long-term droughts, represented by the 12-month SPI, occurred at all stations but not in all years. Short-term droughts (3-month SPI) were most frequent in the winter season, droughts at the 6- and 9-month scales in winter and spring, and 12-month droughts in winter and autumn, respectively. The spatial distribution of drought was highly diverse. The most intensive drought occurred in 1984, with the 6-month SPI covering 98% of the analyzed region and the 9- and 12-month SPI covering 90% of the entire region. Droughts exhibit a seasonal pattern, with a dominant 10-year periodicity for all analyzed variants of the SPI. Additionally, Fourier analysis revealed a 2-year periodicity for the 3-, 6-, and 9-month SPI and a 31-year periodicity for the 12-month SPI. The results provide insights into the typical climatic conditions in Poland, with strong seasonality in precipitation. The study highlighted that short-term extreme droughts, represented by the 3-month SPI, are often caused by anticyclonic situations with high-pressure wedges Ka and Wa, and by the anticyclonic West type, as observed in 52.3% of cases. These findings are crucial for understanding the spatial and temporal variability of short- and long-term extreme droughts in Central Europe, particularly for the agriculture sector dominant in the northern part of the analyzed region, where drought frequency is highest.
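
For orientation, here is a minimal sketch of an SPI-style computation; operational SPI fits a gamma distribution per calendar month and handles zero-precipitation months explicitly, which this toy version omits.

```python
import numpy as np
from scipy import stats

def spi(precip, window=3):
    """Toy SPI: aggregate monthly precipitation over `window` months,
    fit a gamma distribution to the sums, and map the fitted
    probabilities to standard-normal z-scores."""
    p = np.convolve(precip, np.ones(window), mode="valid")  # rolling sums
    a, loc, scale = stats.gamma.fit(p, floc=0)
    cdf = stats.gamma.cdf(p, a, loc=loc, scale=scale)
    return stats.norm.ppf(cdf)   # negative values indicate drought

monthly = np.random.default_rng(0).gamma(2.0, 30.0, 12 * 62)  # 1961-2022 stand-in
print(spi(monthly)[:6])
```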

Keywords: atmospheric circulation, drought, precipitation, SPI, the Upper Vistula Basin

Procedia PDF Downloads 74
300 Inflation and Deflation of Aircraft's Tire with Intelligent Tire Pressure Regulation System

Authors: Masoud Mirzaee, Ghobad Behzadi Pour

Abstract:

An aircraft tire is designed to tolerate extremely heavy loads for a short duration. The number of tires increases with the weight of the aircraft, as the load needs to be distributed more evenly. Generally, aircraft tires work at high pressure, up to 200 psi (14 bar; 1,400 kPa) for airliners and higher for business jets. For most aircraft categories, tire assemblies are recommended to be filled with compressed nitrogen, which supports the aircraft's weight on the ground, provides a mechanism for controlling the aircraft during taxi, takeoff, and landing, and provides traction for braking. Accurate tire pressure is a key factor that enables tire assemblies to perform reliably under high static and dynamic loads. With regard to ambient temperature change, when the temperatures at the origin and destination airports differ, tire pressure should be adjusted so that the tire is inflated to the specified operating pressure at the colder airport. This adjustment, which may exceed the normal 5 percent over-inflation limit applicable at constant ambient temperature, is required so that the inflation pressure remains sufficient to support the load of the specified aircraft configuration; without it, a tire assembly would be significantly under- or over-inflated at the destination. Owing to the rise of human error in the aviation industry, exorbitant costs are imposed on airlines for providing consumable parts such as aircraft tires. An intelligent system that adjusts aircraft tire pressure based on the weight, load, temperature, and weather conditions of the origin and destination airports could significantly reduce aircraft maintenance costs and fuel consumption, and further mitigate the environmental issues related to air pollution. The intelligent tire pressure regulation system (ITPRS) contains a processing computer, an 1800 psi nitrogen bottle, and distribution lines. The nitrogen bottle's inlet and outlet valves are installed in the main landing gear area and are connected through nitrogen lines to the main wheel and nose wheel assemblies. Nitrogen control and monitoring are performed by the computer, which adjusts the pressure according to calculations based on the received parameters, including the temperatures of the origin and destination airports, the weight of cargo and passengers, the fuel quantity, and the wind direction. Correct tire inflation and deflation are essential in assuring that tires can withstand the centrifugal forces and heat of normal operations, with an adequate margin of safety for unusual operating conditions such as rejected takeoffs and hard landings. ITPRS will increase the performance of the aircraft in all phases of taxi, takeoff, and landing. Moreover, this system will reduce human errors, consumable material usage, and the stresses imposed on the aircraft body.
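
The temperature correction at the heart of such a system follows from the isochoric gas law, P2 = P1 · T2/T1 in absolute units. A minimal sketch, assuming a sea-level ambient pressure of 14.7 psi for the gauge-to-absolute conversion:

```python
def adjusted_pressure(p_origin_psi, t_origin_c, t_dest_c):
    """Isochoric gas-law estimate of the gauge pressure a tire inflated at
    the origin will show at the destination: P2 = P1 * T2 / T1 (absolute).
    Gauge pressure is converted to absolute assuming ~14.7 psi ambient."""
    ambient = 14.7
    p1 = p_origin_psi + ambient
    p2 = p1 * (t_dest_c + 273.15) / (t_origin_c + 273.15)
    return p2 - ambient

# Inflated to 200 psi at 30 degC; what would it read at -5 degC?
print(f"{adjusted_pressure(200.0, 30.0, -5.0):.1f} psi")
```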

Keywords: avionic system, improve efficiency, ITPRS, human error, reduced cost, tire pressure

Procedia PDF Downloads 247
299 Achieving Sustainable Agriculture with Treated Municipal Wastewater

Authors: Reshu Yadav, Himanshu Joshi, S. K. Tripathi

Abstract:

Fresh water is a scarce resource, essential for humans and ecosystems, but its distribution is uneven. Agricultural production accounts for 70% of all surface water withdrawals. It is projected that, against an expansion of the area equipped for irrigation of 0.6% per year, global potential irrigation water demand will rise by 9.5% during 2021-25. This demand would, on the one hand, have to compete against sharply rising urban water demand; on the other, it would also face the threat of climate change, as temperatures rise and crop yields could drop by 10-30% in many large areas. The huge demand for irrigation, combined with fresh water scarcity, encourages exploring the reuse of wastewater as a resource. However, the use of such wastewater is often linked to safety issues when it is applied injudiciously or with poor safeguards while irrigating food crops. Paddy is one of the major crops globally and among the most important in South Asia and Africa. In many parts of the world, the use of municipal wastewater has been promoted as a viable option in this regard. In developing and fast-growing countries like India, steadily increasing wastewater generation rates may allow this option to be considered quite seriously. In view of this, a pilot field study was conducted at the Jagjeetpur municipal sewage treatment plant situated in the Haridwar town of Uttarakhand state, India. The objectives of the present study were to study the effect of treated wastewater on the production of various paddy varieties (Sharbati, PR-114, PB-1, Menaka, PB-1121, and PB-1509) and the emission of greenhouse gases (CO2, CH4, and N2O) as compared to the same varieties grown in control plots irrigated with fresh water. Of late, the concept of water footprint assessment has emerged, which enumerates the various types of water footprints of an agricultural commodity from its production to processing stages. Paddy, the most water-demanding staple crop of Uttarakhand state, displayed a high green water footprint value of 2966.538 m3/ton. Most of the wastewater-irrigated varieties displayed up to a 6% increase in production, except Menaka and PB-1121, which showed reductions in production (6% and 3%, respectively) due to pest and insect infestation. The treated wastewater was observed to be rich in nitrogen (55.94 mg/l nitrate), phosphorus (54.24 mg/l), and potassium (9.78 mg/l), thus rejuvenating the soil quality and not requiring any external nutritional supplements. The percentage increases of greenhouse gases under irrigation with treated municipal wastewater, compared to the control plots, were 0.4%-8.6% (CH4), 1.1%-9.2% (CO2), and 0.07%-5.8% (N2O). The variety Sharbati displayed the maximum production (5.5 ton/ha) and emerged as the most resistant variety against pests and insects. The emission values of CH4, CO2, and N2O were 729.31 mg/m2/d, 322.10 mg/m2/d, and 400.21 mg/m2/d under stagnant-water conditions. This study highlighted a successful possibility of reusing wastewater for non-potable purposes, offering the potential to exploit this resource to replace or reduce the existing use of fresh water sources in the agricultural sector.
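In standard water footprint accounting, the green water footprint quoted above is the green crop water use divided by yield. A minimal sketch follows; the evapotranspiration figure is back-calculated purely for illustration and is not reported in the abstract.

```python
# Hedged sketch of green water footprint accounting, WF_green = CWU / Y.
# The ET figure below is back-calculated for illustration, not study data.
et_green_mm = 1631.6     # growing-season green evapotranspiration, mm (assumed)
yield_ton_ha = 5.5       # paddy yield reported for Sharbati, ton/ha

cwu_m3_per_ha = 10.0 * et_green_mm        # 1 mm of water over 1 ha = 10 m3
wf_green = cwu_m3_per_ha / yield_ton_ha   # m3 per ton of paddy
print(f"green water footprint: {wf_green:.1f} m3/ton")   # ~2966.5 m3/ton
```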

Keywords: greenhouse gases, nutrients, water footprint, wastewater irrigation

Procedia PDF Downloads 319
298 Case-Based Reasoning for Modelling Random Variables in the Reliability Assessment of Existing Structures

Authors: Francesca Marsili

Abstract:

The reliability assessment of existing structures with probabilistic methods is becoming an increasingly important and frequent engineering task. However, probabilistic reliability methods are based on exhaustive knowledge of the stochastic modeling of the variables involved in the assessment; at the moment, standards for the modeling of variables are absent, which represents an obstacle to the dissemination of probabilistic methods. The framework according to which probability distribution functions (PDFs) are established is Bayesian statistics, which uses Bayes' theorem: a prior PDF for the considered parameter is established based on information derived from the design stage and on qualitative judgments based on the engineer's past experience; the prior model is then updated with the results of investigations carried out on the considered structure, such as material testing and the determination of action and structural properties. The application of Bayesian statistics raises two kinds of problems: 1. The results of the updating depend on the engineer's previous experience; 2. The updating of the prior PDF can be performed only if the structure has been tested and quantitative data that can be statistically manipulated have been collected; performing tests is always an expensive and time-consuming operation, and furthermore, if the considered structure is an ancient building, destructive tests could compromise its cultural value and therefore should be avoided. In order to solve these problems, an interesting research path is to investigate Artificial Intelligence (AI) techniques that can be useful for automating the modeling of variables and for updating material parameters without performing destructive tests. Among these, one that deserves particular attention in relation to the object of this study is Case-Based Reasoning (CBR). In this application, cases are represented by existing buildings where material tests have already been carried out and updated PDFs for the material mechanical parameters have been computed through a Bayesian analysis. Each case is thus composed of a qualitative description of the material under assessment and the posterior PDFs that describe its material properties. The problem to be solved is the definition of PDFs for the material parameters involved in the reliability assessment of the considered structure. A CBR system is a good candidate for automating the modeling of variables because: 1. Engineers already estimate material properties based on the experience collected during the assessment of similar structures, or on similar cases collected in the literature or in databases; 2. Material tests carried out on structures can easily be collected from laboratory databases or from the literature; 3. The system will provide the user with a reliable probabilistic description of the variables involved in the assessment, which will also serve as a tool in support of the engineer's qualitative judgments. Automated modeling of variables can help spread the probabilistic reliability assessment of existing buildings in common engineering practice and help target the best intervention and further tests on the structure; CBR represents a technique which may help to achieve this.
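The Bayesian step each stored case encodes can be made concrete with the simplest conjugate model. A minimal sketch follows, assuming a normal prior on a material strength with known test scatter; the abstract does not specify its likelihood model, and all values here are illustrative.

```python
# Hedged sketch of the Bayesian updating a CBR case would store: a normal
# prior on a strength parameter updated with test results, assuming the
# measurement scatter sigma is known. All figures are illustrative.
import numpy as np

mu_0, tau_0 = 30.0, 5.0       # prior mean and std from design documents (MPa)
sigma = 4.0                   # assumed known testing scatter (MPa)
tests = np.array([24.5, 27.0, 25.8])   # hypothetical core test results (MPa)

n = tests.size
post_var = 1.0 / (1.0 / tau_0**2 + n / sigma**2)          # conjugate update
post_mean = post_var * (mu_0 / tau_0**2 + tests.sum() / sigma**2)
print(f"posterior: mean {post_mean:.1f} MPa, std {post_var**0.5:.2f} MPa")
```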

Keywords: reliability assessment of existing buildings, Bayesian analysis, case-based reasoning, historical structures

Procedia PDF Downloads 336
297 Investigations on the Application of Avalanche Simulations: A Survey Conducted among Avalanche Experts

Authors: Korbinian Schmidtner, Rudolf Sailer, Perry Bartelt, Wolfgang Fellin, Jan-Thomas Fischer, Matthias Granig

Abstract:

This study focuses on the evaluation of snow avalanche simulations, based on a survey carried out among avalanche experts. In recent decades, the application of avalanche simulation tools has gained recognition within the realm of hazard management. Traditionally, avalanche runout models were used to predict extreme avalanche runout and prepare avalanche maps. This has changed rather dramatically with the application of numerical models. For safety applications such as road safety, simulation tools are now being coupled with real-time meteorological measurements to predict frequent avalanche hazard. That places new demands on model accuracy and requires the simulation of physical processes that could previously be ignored. These simulation tools are based on a deterministic description of the avalanche movement, allowing certain quantities of the avalanche flow (e.g., pressure, velocities, flow heights, runout lengths) to be predicted. Because of the highly variable regimes of flowing snow, no uniform rheological law describing the motion of an avalanche is known. Therefore, analogies are drawn to the fluid-dynamical laws of other materials. To transfer these constitutive laws to snow flows, certain assumptions and adjustments have to be imposed. Besides these limitations, there are high uncertainties regarding the initial and boundary conditions. Further challenges arise when implementing the underlying flow-model equations in an algorithm executable by a computer. This implementation is constrained by the choice of adequate numerical methods and their computational feasibility; hence, model development is compelled to introduce further simplifications and the related uncertainties. In light of these issues, many questions arise about avalanche simulations: their assets and drawbacks, their potential for improvement, and their application in practice. To address these questions, a survey was conducted among experts in the field of avalanche science (e.g., researchers, practitioners, engineers) from various countries. In the questionnaire, special attention is paid to the experts' opinions regarding the influence of certain variables on the simulation result, their uncertainty, and the reliability of the results. Furthermore, the survey tested to what degree a simulation result influences decision-making in a hazard assessment. A discrepancy was found between the large uncertainty of the simulation input parameters and the relatively high reliability attributed to the results. This contradiction can be explained by taking into account how the experts employ the simulations. The credibility of the simulations is the result of a rather thorough simulation study, in which different assumptions are tested and the results of different flow models are compared, along with the use of supplemental data such as chronicles, field observations, and silent witnesses, which are regarded as essential for the hazard assessment and for sanctioning simulation results. As the importance of avalanche simulations within hazard management grows along with their further development, studies focusing on modeling practice could contribute to a better understanding of how knowledge of the avalanche process can be gained by running simulations.

Keywords: expert interview, hazard management, modeling, simulation, snow avalanche

Procedia PDF Downloads 324
296 The Effect of Whole-Body Vertical Rhythm Training on Fatigue, Physical Activity, and Quality of Life in Middle-Aged and Elderly Hemodialysis Patients

Authors: Yen-Fen Shen, Meng-Fan Li

Abstract:

The study aims to investigate the effect of whole-body vertical rhythmic training on fatigue, physical activity, and quality of life among middle-aged and elderly hemodialysis patients. The study adopted a quasi-experimental research method and recruited 43 long-term hemodialysis patients from a medical center in northern Taiwan, with 23 and 20 participants in the experimental and control groups, respectively. The experimental group received full-body vertical rhythmic training as an intervention, while the control group received standard hemodialysis care without any intervention. Both groups completed the measurements, using the "Fatigue Scale", the "Physical Activity Scale", and the "Chinese version of the Kidney Disease Quality of Life Questionnaire", before and after the study. The experimental group underwent a 10-minute full-body vertical rhythmic training session three times per week, for eight weeks, before receiving regular hemodialysis treatment. The data were analyzed with SPSS 25 software, using descriptive statistics such as frequency distribution, percentages, means, and standard deviations, as well as inferential statistics, including chi-square, independent-samples t-tests, and paired-samples t-tests. The study results are summarized as follows: 1. There were no significant differences in demographic variables, fatigue, physical activity, or quality of life between the experimental and control groups at pre-test. 2. After the intervention of the full-body vertical rhythmic training, the experimental group showed significantly better results in the categories of "feeling tired and fatigued in the lower back", "physical functioning role limitation", "bodily pain", "social functioning", "mental health", and "impact of kidney disease on life quality". 3. The paired-samples t-test results revealed that the control group experienced significant pre-test to post-test differences in the categories of feeling tired and fatigued in the lower back, bodily pain, social functioning, mental health, and impact of kidney disease on life quality, with scores indicating a decline in life quality. Conversely, the experimental group only showed a significant worsening in "bodily pain" and the impact of kidney disease on life quality, with smaller change values compared to the control group. Additionally, there was an improvement in the condition of "feeling tired and fatigued in the lower back" for the experimental group. Conclusion: The intervention of the full-body vertical rhythmic training had a certain positive effect on the quality of life of the experimental group. While it may not entirely enhance patients' quality of life, it can mitigate the negative impact of kidney disease on certain aspects of the body. The study provides clinical practice, nursing education, and research recommendations based on the results and discusses the limitations of the research.
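The paired-samples comparison the abstract relies on is straightforward to reproduce. Below is a minimal sketch with invented pre/post scores, not the study's data, using the common scipy implementation.

```python
# Hedged sketch of a paired-samples t-test on pre/post scores; the
# numbers are invented for illustration, not the study's data.
import numpy as np
from scipy import stats

pre = np.array([62, 55, 70, 48, 66, 59, 73, 51])    # pre-test QoL scores
post = np.array([68, 60, 71, 55, 70, 58, 80, 57])   # post-test QoL scores

t_stat, p_value = stats.ttest_rel(pre, post)        # paired-samples t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print(f"mean change: {np.mean(post - pre):+.1f} points")
```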

Keywords: hemodialysis, full-body vertical rhythmic training, fatigue, physical activity, quality of life

Procedia PDF Downloads 23
295 Expressing Locality in Learning English: A Study of English Textbooks for Junior High School Year VII-IX in Indonesia Context

Authors: Agnes Siwi Purwaning Tyas, Dewi Cahya Ambarwati

Abstract:

This paper concerns language learning as habit formation and a constructive process that simultaneously exercises an oppressive power in constructing learners. As a locus of discussion, the investigation problematizes the transfer of the English language to Indonesian junior high school students through the English textbooks 'Real Time: An Interactive English Course for Junior High School Students Year VII-IX'. English has long performed as a global language, and non-native speakers are expected to master it if they desire to become internationally recognized individuals. Generally, English teachers teach the language in accordance with the nature of language learning, in which they are trained and expected to teach within the culture of the target language. This provides a potential soft cultural penetration of a foreign ideology through language transmission. In the context of Indonesia, learning English as an international language is considered dilemmatic. Most English textbooks in Indonesia incorporate cultural elements of the target language, which to some extent may challenge sensitivity towards local cultural values. On the other hand, local teachers demand more English textbooks for junior high school students that can facilitate the cultural dissemination of both local and global values and promote learners' familiarity with both cultures, to avoid misunderstanding and confusion. This also aims to support language learning as a bidirectional process rather than an instrument of oppression. However, sensitizing and localizing this foreign language is not sufficient to restrain its soft infiltration. In due course, domination persists, making English an authoritative language and positioning the locality as 'the other'. This critical premise has led to a discursive analysis of how the cultural elements of the target language are presented in the textbooks and whether the local characteristics of Indonesia are able to gradually reduce the degree of the foreign oppressive ideology. The three textbooks examined were written by a non-Indonesian author, edited by two Indonesian editors, and published by a local commercial publishing company, PT Erlangga. The analysis examines the cultural characteristics in the form of names, terminologies, places, objects, and imagery (not the linguistic aspect) of both cultural domains, English and Indonesian. Comparisons and categorizations were made to identify the cultural traits of each language and support the contextual analysis. In the analysis, 128 foreign elements and 27 local elements were found in the textbook for grade VII, 132 foreign elements and 23 local elements in the textbook for grade VIII, and 144 foreign elements and 35 local elements in the grade IX textbook, demonstrating the unequal distribution of the two cultures. Even though the ideal pedagogical approach to English learning moves in a different direction by inserting local elements, the learners are continuously exposed to the culture of the target language and pressed to internalize its values, which tends to marginalize their native culture.

Keywords: bidirectional process, English, local culture, oppression

Procedia PDF Downloads 265
294 Gathering Space after Disaster: Understanding the Communicative and Collective Dimensions of Resilience through Field Research across Time in Hurricane Impacted Regions of the United States

Authors: Jack L. Harris, Marya L. Doerfel, Hyunsook Youn, Minkyung Kim, Kautuki Sunil Jariwala

Abstract:

Organizational resilience refers to the ability to sustain business or general work functioning despite wide-scale interruptions. We focus on organizations and businesses as pillars of their communities and on how they attempt to sustain work when a natural disaster impacts their surrounding regions and economies. While it may be more common to think of resilience as a trait possessed by an organization, an emerging area of research recognizes that for organizations and businesses, resilience is a set of processes constituted through communication, social networks, and organizing. Indeed, five processes (robustness, rapidity, resourcefulness, redundancy, and external availability through social media) have been identified as critical to organizational resilience. These organizing mechanisms involve multi-level coordination, where individuals intersect with groups, organizations, and communities. Because such interactions are often networks of people and organizations coordinating material resources, information, and support, they necessarily require some way to coordinate despite being displaced. Little is known, however, about whether physical and digital spaces can substitute for one another. We are thus guided by the question: is digital space sufficient when disaster creates a scarcity of physical space? This study presents a cross-case comparison based on field research from four different regions of the United States that were impacted by Hurricanes Katrina (2005), Sandy (2012), Maria (2017), and Harvey (2017). These four cases are used to extend the science of resilience by examining multi-level processes enacted by individuals, communities, and organizations that together contribute to the resilience of disaster-struck organizations, businesses, and their communities. Using field research about organizations and businesses impacted by the four hurricanes, we coded data from interviews, participant observations, field notes, and document analysis drawn from New Orleans (post-Katrina), coastal New Jersey (post-Sandy), Houston, Texas (post-Harvey), and the lower Keys of Florida (post-Maria). This paper identifies an additional organizing mechanism, networked gathering spaces, where citizens and organizations alike coordinate and facilitate information sharing, material resource distribution, and social support. Findings show that digital space alone is not a sufficient substitute to effectively sustain organizational resilience during a disaster. Because the data are qualitative, we expand on this finding with specific ways in which organizations, and the people who lead them, worked around the problem of scarce space. We propose that gatherings after disaster are a sixth mechanism that contributes to organizational resilience.

Keywords: communication, coordination, disaster management, information and communication technologies, interorganizational relationships, resilience, work

Procedia PDF Downloads 171
293 Numerical Simulation of Hydraulic Fracture Propagation in Marine-continental Transitional Tight Sandstone Reservoirs by Boundary Element Method: A Case Study of Shanxi Formation in China

Authors: Jiujie Cai, Fengxia Li, Haibo Wang

Abstract:

After years of research, offshore oil and gas development has now shifted to unconventional reservoirs, where multi-stage hydraulic fracturing technology is widely used. However, the simulation of complex hydraulic fractures in tight reservoirs faces geological and engineering difficulties, such as large burial depths, sand-shale interbeds, and complex stress barriers. The objective of this work is to simulate hydraulic fracture propagation in the tight sandstone matrix of marine-continental transitional reservoirs, with the Shanxi Formation in the Tianhuan syncline of the Dongsheng gas field as the research target. The characteristic parameters of vertical rock samples with rich bedding were clarified through rock mechanics experiments. The influence of rock mechanical parameters, the vertical stress difference between the pay zone and the bedding layer, and fracturing parameters (such as injection rate, fracturing fluid viscosity, and the number of perforation clusters within a single stage) on fracture initiation and propagation was investigated. In this paper, a 3-D fracture propagation model was built to investigate complex fracture propagation morphology by the boundary element method, considering the strength of the bonding surfaces between layers, the vertical stress difference, and fracturing parameters (such as injection rate, fluid volume, and viscosity). The results indicate that, with a vertical stress difference of 3 MPa and considering the effect of the weak bonding surface between layers, the fracture height can break through into the upper interlayer when the overlying bedding layer is 6-9 m thick. The fracture propagates within the pay zone when the overlying interlayer is thicker than 13 m. The difference in fluid volume distribution between clusters can exceed 20% when the stress difference between the clusters in a stage exceeds 2 MPa. Fracture clusters in high-stress zones cannot initiate when the stress difference within the stage exceeds 5 MPa. Simulated fracture heights are much greater if the effect of the weak bonding surfaces between layers is neglected. Increasing the injection rate, increasing the fracturing fluid viscosity, and reducing the number of clusters within a single stage can promote fracture height propagation through the layers. Optimizing the perforation positions and reducing the number of perforations can promote the uniform expansion of fractures. Typical curves for fracture height estimation were established for the tight sandstone of the Lower Permian Shanxi Formation. The model results show good consistency with the micro-seismic monitoring results of hydraulic fracturing in Well 1HF.
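The boundary element machinery behind such a simulator can be illustrated in its simplest form. Below is a minimal sketch, not the authors' 3-D model: a constant displacement-discontinuity solve for a uniformly pressurized plane-strain crack, checked against Sneddon's analytical opening w(x) = (4p/E') sqrt(c^2 - x^2). All material and loading values are illustrative.

```python
# Hedged sketch of a constant displacement-discontinuity boundary element
# solve for a uniformly pressurized plane-strain crack, verified against
# Sneddon's closed-form opening. Values illustrative, not study data.
import numpy as np

E, nu = 30e9, 0.25             # Young's modulus (Pa), Poisson ratio (assumed)
Ep = E / (1.0 - nu**2)         # plane-strain modulus E'
p = 5e6                        # net fluid pressure inside the crack (Pa)
c = 10.0                       # crack half-length (m)
n = 200                        # number of boundary elements

edges = np.linspace(-c, c, n + 1)
xm = 0.5 * (edges[:-1] + edges[1:])   # element midpoints (collocation points)
a = c / n                             # element half-length

# Normal stress at x_i due to unit opening of the element centered at s_j:
# (E'/4pi) * [1/(x_i - s_j - a) - 1/(x_i - s_j + a)]
X = xm[:, None] - xm[None, :]
A = (Ep / (4.0 * np.pi)) * (1.0 / (X - a) - 1.0 / (X + a))

w = np.linalg.solve(A, -p * np.ones(n))   # traction condition sigma = -p

w_exact = (4.0 * p / Ep) * np.sqrt(c**2 - xm**2)
print(f"max opening  BEM: {w.max()*1e3:.3f} mm   Sneddon: {w_exact.max()*1e3:.3f} mm")
```

With a couple of hundred elements the mid-crack opening agrees with Sneddon's solution to within a few percent; a production simulator adds the 3-D geometry, layering, and fluid coupling described above.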

Keywords: fracture propagation, boundary element method, fracture height, offshore oil and gas, marine-continental transitional reservoirs, rock mechanics experiment

Procedia PDF Downloads 125
292 Analysis of Electric Mobility in the European Union: Forecasting 2035

Authors: Domenico Carmelo Mongelli

Abstract:

The context is one of great uncertainty in the 27 countries of the European Union, which has adopted an epochal measure: the elimination of internal combustion engines for the traction of road vehicles starting from 2035, with complete replacement by electric vehicles. While there is great concern at various levels about unpreparedness for this change, the scientific community has not yet produced comprehensive studies of the problem: the literature deals with single aspects of the issue and, moreover, addresses it at the level of individual countries, losing sight of the global implications for the entire EU. The aim of this research is to fill these gaps: the technological, plant-engineering, environmental, economic, and employment aspects of the energy transition in question are addressed and connected to each other, comparing the current situation with the different scenarios that could exist in 2035 and in the following years until the total disposal of the internal-combustion vehicle fleet in the EU. The methodology consists of an analysis of the entire life cycle of electric vehicles and batteries, using specific databases, and a dynamic simulation, using specific calculation codes, of the application of the results of this analysis to the entire EU electric vehicle fleet from 2035 onwards. Energy balance sheets will be drawn up (to evaluate the net energy saved), plant balance sheets (to determine the surplus demand for power and electrical energy and the sizing of new renewable-source plants to cover electricity needs), economic balance sheets (to determine the investment costs of this transition, the savings during the operation phase, and the payback times of the initial investments), environmental balance sheets (determining, under the different energy-mix scenarios anticipated for 2035, the reductions in CO2eq and the environmental effects resulting from the increase in lithium production for batteries), and employment balance sheets (estimating how many jobs will be lost and recovered in the reconversion of the automotive industry, related industries, and the refining, distribution, and sale of petroleum products, and how many will be created by technological innovation, the increased demand for electricity, and the construction and management of on-street charging columns). New algorithms for forecast optimization are developed, tested, and validated. Compared to other published material, this research adds an overall picture of the energy transition, capturing the advantages and disadvantages of its different aspects and evaluating the magnitudes involved and possible improvements within an organic overall treatment of the topic. The results make it possible to identify the strengths and weaknesses of the energy transition, to determine possible solutions to mitigate these weaknesses, and to simulate and then evaluate their effects, establishing the most suitable solutions to make this transition feasible.
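One of the economic-balance quantities named above, the payback time of an initial investment against operating savings, can be sketched minimally as follows; the CAPEX, saving, and discount-rate figures are illustrative assumptions, not results from the study.

```python
# Hedged sketch of a discounted payback computation; all figures are
# illustrative assumptions, not results from the study.
def payback_years(capex: float, annual_saving: float, rate: float = 0.0) -> int:
    """Years until cumulative discounted savings cover the investment."""
    balance, year = -capex, 0
    while balance < 0 and year < 100:
        year += 1
        balance += annual_saving / (1.0 + rate) ** year
    return year

# e.g. extra purchase cost of an EV vs. assumed fuel/maintenance savings
print(payback_years(capex=12_000.0, annual_saving=1_500.0, rate=0.04))
```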

Keywords: engines, Europe, mobility, transition

Procedia PDF Downloads 60
291 Distributed Energy Resources in Low-Income Communities: A Public Policy Proposal

Authors: Rodrigo Calili, Anna Carolina Sermarini, João Henrique Azevedo, Vanessa Cardoso de Albuquerque, Felipe Gonçalves, Gilberto Jannuzzi

Abstract:

The diffusion of distributed energy resources (DER) has caused structural changes in the relationship between consumers and electrical systems. Photovoltaic distributed generation (PVDG), in particular, is an essential strategy for achieving the 2030 Agenda goals, especially SDG 7 and SDG 13. However, most projects involving this technology in Brazil are restricted to the wealthiest classes of society and have not yet reached the low-income population, a concern aligned with theories of energy justice. In the pursuit of energy equality, one of the policies adopted by governments is the social electricity tariff (SET), which provides discounts on energy tariffs/bills. However, merely granting this benefit may not be effective, and it is possible to combine it with DER technologies such as PVDG. Thus, this work aims to evaluate the economic viability of a policy replacing the social electricity tariff (the current policy aimed at the low-income population in Brazil) with PVDG projects. To this end, a proprietary methodology was developed that included mapping the stakeholders, identifying critical variables, simulating policy options, and carrying out an analysis in the Brazilian context. The simulation answered two key questions: in which municipalities low-income consumers would have lower bills with PVDG than with the SET, and which consumers in a given city would see increased subsidies, which in Brazil are currently provided both for solar energy and for the social tariff. An economic model was created to verify the feasibility of the proposed policy in each municipality in the country, considering geographic factors (the tariff of a particular distribution utility, the solar radiation of a specific location, etc.). To validate these results, four sensitivity analyses were performed: varying the simultaneity factor between generation and consumption, varying the tariff readjustment rate, zeroing CAPEX, and exempting state tax. The behind-the-meter modality of generation proved more promising than the construction of a shared plant; however, it is more complex to adopt because of issues related to the infrastructure of the most vulnerable communities (e.g., precarious electrical networks and the need to reinforce roofs). The shared-plant modality still presents many opportunities, since the risk of investing in such a policy can be mitigated; furthermore, it can be an alternative because it reduces the risk of default, allows greater control of users, and facilitates operation and maintenance. Finally, it was also found that in some regions of Brazil, continuing the SET presents more economic benefits than replacing it with PVDG. Nevertheless, the proposed policy offers many opportunities. In future work, the model may include other parameters, such as the cost of engaging low-income populations and business risk. In addition, other renewable sources of distributed generation can be studied for this purpose.
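The core of the municipal comparison can be sketched as a simple monthly bill calculation under each policy. The tariff, discount, consumption, and generation figures below are illustrative assumptions, not values from the study.

```python
# Hedged sketch of the bill comparison at the heart of the policy model;
# tariff, discount, consumption, and PV-credit values are all assumptions.
def bill_set(kwh: float, tariff: float = 0.80, discount: float = 0.40) -> float:
    """Monthly bill under a flat social-tariff discount (simplified)."""
    return kwh * tariff * (1.0 - discount)

def bill_pvdg(kwh: float, pv_kwh: float, tariff: float = 0.80,
              min_charge: float = 15.0) -> float:
    """Monthly bill with behind-the-meter PV credits (net metering)."""
    net_kwh = max(kwh - pv_kwh, 0.0)
    return net_kwh * tariff + min_charge   # minimum grid charge assumed

kwh = 120.0   # assumed monthly consumption of a low-income household
pv = 110.0    # assumed monthly PVDG generation credited to the household
print(f"SET bill:  R$ {bill_set(kwh):.2f}")        # R$ 57.60
print(f"PVDG bill: R$ {bill_pvdg(kwh, pv):.2f}")   # R$ 23.00
```

The municipal-level decision then hinges on which bill is lower and on the present value of the subsidy behind each option.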

Keywords: low income, subsidy policy, distributed energy resources, energy justice

Procedia PDF Downloads 111
290 Design, Control and Implementation of 300Wp Single Phase Photovoltaic Micro Inverter for Village Nano Grid Application

Authors: Ramesh P., Aby Joseph

Abstract:

Micro inverters provide a module-embedded solution for harvesting energy from small-scale solar photovoltaic (PV) panels. In addition to higher modularity and reliability (25 years of life), the micro inverter has inherent advantages such as the avoidance of long DC cables, the elimination of module mismatch losses, the minimization of partial-shading effects, and improved safety and flexibility in installations. Due to these benefits, renewable energy technology based on the solar PV micro inverter is becoming more widespread in village nano grid applications, ensuring grid independence for rural communities and areas without access to electricity. While the primary objective of this paper is to discuss the problems related to rural electrification, the concept can also be extended to urban installations with grid connectivity. This work presents a comprehensive analysis of the power circuit design, control methodologies, and prototyping of a 300 Wₚ single-phase PV micro inverter. It investigates two different topologies: on the one hand, a single-stage flyback/forward PV micro inverter configuration, and on the other, a double-stage configuration comprising a DC-DC converter and an H-bridge DC-AC inverter. The work covers power decoupling techniques that reduce the input filter capacitor size needed to buffer the double-line-frequency (100 Hz) ripple energy and eliminate the use of electrolytic capacitors. The double-line oscillation propagated back to the PV module degrades the maximum power point tracking (MPPT) performance, and the grid current is also distorted. To mitigate this issue, an independent MPPT control algorithm is developed in this work to reject the propagation of the double-line ripple both to the PV side, improving MPPT performance, and to the grid side, improving current quality. The power hardware accepts a wide input voltage variation and consists of suitably rated MOSFET switches, galvanically isolated gate drivers, high-frequency magnetics, and film capacitors with a long lifespan. The digital controller hardware platform, with external peripheral interfaces, is built around the floating-point microcontroller TMS320F2806x from Texas Instruments. The firmware governing the operation of the PV micro inverter is written in C and was developed in the Code Composer Studio integrated development environment (IDE). In this work, prototype hardware for the single-phase PV micro inverter in the double-stage configuration was developed, and a comparative analysis of the two configurations, with experimental results, is presented.
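The abstract does not detail the ripple-rejecting MPPT algorithm; one plausible reading, sketched minimally below, is a perturb-and-observe tracker that acts only on PV voltage and current averaged over exactly one double-line period (10 ms for a 50 Hz grid), so the 100 Hz oscillation cancels out of the power comparison.

```python
# Hedged sketch: perturb & observe MPPT on measurements averaged over one
# 100 Hz ripple period, so the double-line oscillation does not fool the
# tracker. One plausible reading of the scheme, not the authors' firmware.
import numpy as np

FS = 20_000                  # sampling rate, Hz (assumed)
F_RIPPLE = 100               # double-line frequency for a 50 Hz grid
N_WIN = FS // F_RIPPLE       # samples in one ripple period (10 ms)

def ripple_free(samples: np.ndarray) -> float:
    """Average over exactly one ripple period to cancel the 100 Hz term."""
    return float(np.mean(samples[-N_WIN:]))

def po_step(v_avg: float, i_avg: float, state: tuple, dv: float = 0.5) -> tuple:
    """One perturb & observe update on ripple-free averages.
    state = (previous voltage, previous power, voltage reference)."""
    v_prev, p_prev, v_ref = state
    p = v_avg * i_avg
    # keep perturbing in the same direction if power rose, else reverse
    step = dv if (p > p_prev) == (v_avg > v_prev) else -dv
    return (v_avg, p, v_ref + step)
```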

Keywords: double line oscillation, micro inverter, MPPT, nano grid, power decoupling

Procedia PDF Downloads 133
289 Loss Quantification of Archaeological Sites in a Watershed Due to the Use and Occupation of Land

Authors: Elissandro Voigt Beier, Cristiano Poleto

Abstract:

The main objective of this research is to assess loss through the quantification of material culture (archaeological fragments) in rural areas: sites economically exploited by mechanized cultivation of seasonal and permanent crops in a hydrographic subsystem of the Camaquã River in the state of Rio Grande do Sul, Brazil. The study area consists of several micro-basins ranging in area from 1,000 m² to 10,000 m², all with numerous occurrences and outcrops of archaeological material at high density in an intensively farmed environment. The first stage of the research aimed to identify the dispersion of archaeological material through a field survey, plotting points with the Global Positioning System (GPS) within each basin; use was also made of the relevant literature on the region, helping to understand the former landscape and the occupation preferences of ancient peoples through the settlements observed in the field. The mapping was followed by the production of cartographic products of land elevation for the region, which contributed to understanding the distribution of the materials and the definition and extent of their dispersal, as well as the reworking of in situ material by mechanization resulting from human activity; density maps of the finds were also prepared, linking natural environments conducive to ancient occupation with current human occupation. The third stage of the project comprises the systematic collection of archaeological material without alteration of, or interference with, the subsurface of the indigenous settlements; the material was then prepared and treated in the laboratory to remove excess soil, cleaned following established methodology, and measured and quantified. Approximately 15,000 archaeological fragments belonging to different periods of the region's ancient history were identified, all collected outside their environmental and historical context and considerably altered and modified. The material was identified and cataloged considering features such as object weight, size, and type of material (lithic, ceramic, bone, historical porcelain) and its association with ancient history, disregarding attributes such as the individual lithology and functionality of each object. As preliminary results, we can point to the alteration of materials by heavy mechanization and the consequent soil disturbance processes, which transport archaeological materials. Therefore, as a next step, an estimate of potential losses will be sought through a mathematical model. This process is expected to yield a reliable, high-accuracy model that can be applied to lower-density archaeological sites without significant error.

Keywords: degradation of heritage, quantification in archaeology, watershed, use and occupation of land

Procedia PDF Downloads 276
288 Fabrication of High Energy Hybrid Capacitors from Biomass Waste-Derived Activated Carbon

Authors: Makhan Maharjan, Mani Ulaganathan, Vanchiappan Aravindan, Srinivasan Madhavi, Jing-Yuan Wang, Tuti Mariana Lim

Abstract:

There is great interest in exploiting sustainable, low-cost, renewable resources as carbon precursors for energy storage applications, and research on energy storage devices has been growing rapidly due to the mismatch between power supply and demand from renewable energy sources. This paper reports the synthesis of porous activated carbon from biomass waste and evaluates its performance in supercapacitors. In this work, we employed orange peel (a waste material) as the starting material and synthesized activated carbon by pyrolysis of KOH-impregnated orange peel char at 800 °C in an argon atmosphere. The resultant orange-peel-derived activated carbon (OP-AC) exhibited a high BET surface area of 1,901 m2 g-1, which is the highest surface area so far reported for orange peel. The pore size distribution (PSD) curve exhibits pores centered at a width of 11.26 Å, indicating dominant microporosity. The OP-AC was studied as the positive electrode in combination with different negative electrode materials, such as pre-lithiated graphite (LiC6) and Li4Ti5O12, to make different hybrid capacitors. The lithium-ion capacitor (LIC) fabricated from OP-AC with pre-lithiated graphite delivered a high energy density of ~106 Wh kg–1. The energy density of the OP-AC||Li4Ti5O12 capacitor was ~35 Wh kg–1. For comparison, OP-AC||OP-AC capacitors were studied in both aqueous (1 M H2SO4) and organic (1 M LiPF6 in EC-DMC) electrolytes, delivering energy densities of 6.6 Wh kg-1 and 16.3 Wh kg-1, respectively. The cycling retentions obtained at a current density of 1 A g–1 were ~85.8, ~87.0, ~82.2, and ~58.8% after 2,500 cycles for the OP-AC||OP-AC (aqueous), OP-AC||OP-AC (organic), OP-AC||Li4Ti5O12, and OP-AC||LiC6 configurations, respectively. In addition, characterization was performed by elemental and proximate composition, thermogravimetry, field emission scanning electron microscopy (FE-SEM), Raman spectroscopy, X-ray diffraction (XRD), Fourier transform infrared spectroscopy, X-ray photoelectron spectroscopy (XPS), and N2 sorption isotherms. The morphological features from FE-SEM exhibited well-developed porous structures. Two typical broad peaks observed in the XRD pattern of the synthesized carbon imply an amorphous graphitic structure. The ID/IG ratio of 0.86 in the Raman spectra indicates a high degree of graphitization in the sample. The C 1s band in XPS displays well-resolved peaks related to carbon atoms in various chemical environments; for instance, characteristic binding energies appeared at ~283.83, ~284.83, ~286.13, ~288.56, and ~290.70 eV, corresponding to sp2 graphitic C, sp3 graphitic C, C-O, C=O, and π-π*, respectively. The characterization studies revealed the synthesized carbon to be a promising electrode material for energy storage devices. These findings open up the possibility of developing high-energy LICs from abundant, low-cost, renewable biomass waste.
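For the symmetric OP-AC||OP-AC cells, the reported energy densities follow from the standard symmetric-EDLC relation E = C_sp V^2 / (8 x 3.6) in Wh per kg of total electrode mass, where C_sp is the single-electrode specific capacitance in F/g. A back-of-envelope sketch follows; the capacitance values are assumptions chosen to show the voltage effect, not figures from the paper.

```python
# Hedged back-of-envelope for symmetric EDLC energy density:
# E [Wh/kg electrode] = C_sp [F/g] * V^2 / (8 * 3.6); the factor 8 folds in
# the series connection of two electrodes and the total electrode mass.
def edlc_energy_wh_kg(c_sp_f_per_g: float, v_max: float) -> float:
    return c_sp_f_per_g * v_max**2 / (8.0 * 3.6)

# assumed single-electrode capacitances, not values reported in the paper
print(f"aqueous, 1.0 V: {edlc_energy_wh_kg(190.0, 1.0):.1f} Wh/kg")
print(f"organic, 2.7 V: {edlc_energy_wh_kg(64.0, 2.7):.1f} Wh/kg")
# the wider organic voltage window more than offsets its lower capacitance,
# consistent with the 6.6 vs 16.3 Wh/kg figures reported above
```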

Keywords: lithium-ion capacitors, orange peel, pre-lithiated graphite, supercapacitors

Procedia PDF Downloads 241
287 Ways for a University to Conduct Research Evaluation: The National Research University Higher School of Economics Example

Authors: Svetlana Petrikova, Alexander Yu Kostinskiy

Abstract:

Management of research evaluation at the Higher School of Economics (HSE) originates from the HSE Academic Fund, created in 2004 to facilitate and support academic research and to present its results to the international academic community. As a means of inspiring applicants, science projects went through a competitive selection process evaluated by a group of experts. The rapid development of HSE, the number of projects submitted to each Academic Fund competition, and the need to coordinate expert evaluation resulted in the founding of the Office for Research Evaluation in 2013. The Office's primary objective is the management of the research evaluation of science projects. The standards for conducting the evaluation are defined as follows: the exercise of a process approach and the unification of the department's functioning; the uniformity of the regulatory, organizational, and methodological framework; the development of a proper online evaluation system; the broad involvement of external Russian and international experts and the renouncement of using the university's own employees; the development of an algorithm for matching experts to science projects; the methodical use of open and closed international and Russian databases to extend the expert database; and the transparency of evaluation results, with free access to assessments while maintaining expert confidentiality. The management of the research evaluation of projects is based on a single standard, organization, and financing. The standard way of conducting research evaluation at HSE is based upon the Regulations on Basic Principles for Research Evaluation at HSE. These Regulations have been developed since the establishment of the Office for Research Evaluation and are based on conventional corporate standards for regulatory document management. The management system for research evaluation is implemented on the basis of a process approach, which means deploying work as a process: an aggregation of interrelated and interacting activities that transform inputs into outputs. The inputs are, first, the client asking for the assessment to be conducted and defining the conditions for organizing and carrying it out, and second, the applicant with an application suitable for the competition; the output is the assessment delivered to the client. In exercising the process approach, the main parties or subjects of the assessment are identified and the ways they interact are established. The parties to expert assessment are: the Ordering Party, the department of the university taking the decision to subject a project to expert assessment; the Providing Party, the department of the university authorized by the Ordering Party to provide such assessment; and the Performing Party, the legal and natural entities that have expertise in the area of research evaluation. Experts assess projects in accordance with the criteria and expert opinion forms approved by the Ordering Party. The objects of assessment are generally applications or HSE competition project reports. Assessments are mainly conducted for internal needs, i.e., most ordering parties are HSE branches and departments, but assessments can also be conducted for external clients. The financing of research evaluation at HSE is based on the established corporate culture and traditions of HSE.
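The abstract names, but does not specify, the algorithm for matching experts to projects. A minimal sketch of one plausible approach, assuming keyword-overlap scoring and a Hungarian-method assignment via scipy; the experts, projects, and keyword sets are invented examples.

```python
# Hedged sketch of expert-project matching: score each expert against each
# project by keyword overlap, then maximize total overlap with the
# Hungarian method. Names and keyword sets are invented examples.
import numpy as np
from scipy.optimize import linear_sum_assignment

experts = {"E1": {"ml", "nlp"}, "E2": {"econ", "stat"}, "E3": {"nlp", "stat"}}
projects = {"P1": {"nlp", "stat"}, "P2": {"econ"}, "P3": {"ml"}}

e_names, p_names = list(experts), list(projects)
score = np.array([[len(experts[e] & projects[p]) for p in p_names]
                  for e in e_names])

rows, cols = linear_sum_assignment(-score)   # negate to maximize overlap
for r, c in zip(rows, cols):
    print(f"{p_names[c]} -> {e_names[r]} (overlap {score[r, c]})")
```

A production system would also enforce conflict-of-interest and workload constraints, which fit naturally as forbidden or penalized entries in the score matrix.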

Keywords: expert assessment, management of research evaluation, process approach, research evaluation

Procedia PDF Downloads 253