Search results for: slow sand filter
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2208

258 Development of a Turbulent Boundary Layer Wall-pressure Fluctuations Power Spectrum Model Using a Stepwise Regression Algorithm

Authors: Zachary Huffman, Joana Rocha

Abstract:

Wall-pressure fluctuations induced by the turbulent boundary layer (TBL) developed over aircraft are a significant source of aircraft cabin noise. Since the power spectral density (PSD) of these pressure fluctuations is directly correlated with the amount of sound radiated into the cabin, the development of accurate empirical models that predict the PSD has been an important ongoing research topic. The sound emitted can be derived from the pressure-fluctuation term in the Reynolds-averaged Navier-Stokes (RANS) equations. Therefore, early TBL empirical models (including those from Lowson, Robertson, Chase, and Howe) were primarily derived by simplifying and solving the RANS equations for pressure fluctuations and adding appropriate scales. Most subsequent models (including the Goody, Efimtsov, Laganelli, Smol’yakov, and Rackl and Weston models) were derived by modifying these early models or from physical principles. Overall, these models have had varying levels of accuracy; in general, they are most accurate under the specific Reynolds and Mach numbers for which they were developed, while being less accurate under other flow conditions. Despite this, recent research into alternative methods for deriving such models has been rather limited. More recent studies have demonstrated that an artificial neural network model was more accurate than traditional models and could be applied more generally, but the accuracy of other machine learning techniques has not been explored. In the current study, an original model is derived using a stepwise regression algorithm in the statistical programming language R and TBL wall-pressure fluctuation PSD data gathered at the Carleton University wind tunnel. The theoretical advantage of a stepwise regression approach is that it automatically filters out redundant or uncorrelated input variables (through the process of feature selection), and it is computationally faster than machine learning approaches. 
The main disadvantage is the potential risk of overfitting. The accuracy of the developed model is assessed by comparing it to independently sourced datasets.
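The stepwise idea described above can be illustrated with a minimal greedy forward-selection sketch. The paper's actual model is built in R on Carleton wind-tunnel data, neither of which is reproduced here; the synthetic data, variable names, and the residual-sum-of-squares stopping rule below are all illustrative assumptions.

```python
import numpy as np

def forward_select(X, y, max_features=None, tol=1e-6):
    """Greedy forward selection: repeatedly add the feature that most reduces RSS."""
    n, p = X.shape
    selected, remaining = [], list(range(p))

    def rss(cols):
        # Least-squares fit with an intercept plus the chosen columns.
        A = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        r = y - A @ beta
        return r @ r

    best = rss([])
    while remaining and (max_features is None or len(selected) < max_features):
        scores = {c: rss(selected + [c]) for c in remaining}
        c_best = min(scores, key=scores.get)
        if best - scores[c_best] < tol * best:
            break  # no meaningful improvement: stop adding features
        selected.append(c_best)
        remaining.remove(c_best)
        best = scores[c_best]
    return selected

# Synthetic demo: y depends only on features 0 and 2; feature 1 is pure noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] - 3.0 * X[:, 2] + 0.01 * rng.normal(size=200)
print(sorted(forward_select(X, y, max_features=2)))  # → [0, 2]
```

The feature-selection property mentioned in the abstract shows up here directly: the uncorrelated column is never chosen because adding it barely reduces the residual sum of squares.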

Keywords: aircraft noise, machine learning, power spectral density models, regression models, turbulent boundary layer wall-pressure fluctuations

Procedia PDF Downloads 118
257 Application of Micro-Tunneling Technique to Rectify Tilted Structures Constructed on Cohesive Soil

Authors: Yasser R. Tawfic, Mohamed A. Eid

Abstract:

Foundation differential settlement and the resulting tilting of supported structures is an occasionally encountered engineering problem. It may be caused by overloading, changes in ground soil properties, or unsupported nearby excavations. Engineering thinking points directly toward the logical solution of uplifting the settled side. This can be achieved with deep foundation elements such as micro-piles and macro-piles™, jacked piers and helical piers, jet-grouted soil-crete columns, compaction grout columns, cement or chemical grouting, or traditional pit underpinning with concrete and mortar. Although some of these techniques offer economical, fast, and low-noise solutions, many of them are quite the contrary. For tilted structures with limited inclination, it may be much easier to induce a balancing settlement on the less-settled side, which must be done carefully at a proper rate. This principle was applied in the stabilization of the Leaning Tower of Pisa by soil extraction from the ground surface. In this research, the authors attempt to introduce a new solution with a different point of view: the micro-tunneling technique is presented here as an intentional cause of ground deformation. In general, micro-tunneling is expected to induce limited ground deformations. Thus, the researchers propose to apply the technique to form small unsupported holes in the ground to produce the target deformations. This is done in four phases: (1) application of one or more micro-tunnels, depending on the existing differential settlement value, under the raised side of the tilted structure; (2) for each individual tunnel, pulling the lining out from both sides (from the jacking and receiving shafts) at a slow rate; (3) if required, according to calculations and site records, applying an additional surface load on the raised foundation side; and (4) finally, applying strengthening soil grouting for stabilization after adjustment. 
A finite-element-based numerical model is presented to simulate the proposed construction phases for different tunneling positions and tunnel groups. For each case, the surface settlements are calculated and the induced plasticity points are checked. These results show the impact of the suggested procedure on the tilted structure and its feasibility. Comparison of the results also shows the importance of position selection and the gradual effect of tunnel groups. Thus, a new engineering solution is presented to one of the challenges of structural and geotechnical engineering.

Keywords: differential settlement, micro-tunneling, soil-structure interaction, tilted structures

Procedia PDF Downloads 188
256 Blockchain Platform Configuration for MyData Operator in Digital and Connected Health

Authors: Minna Pikkarainen, Yueqiang Xu

Abstract:

The integration of digital technology with existing healthcare processes has been painfully slow; a huge gap exists between the strictly regulated field of official medical care and the quickly moving field of health and wellness technology. We claim that the promises of preventive healthcare can only be fulfilled when this gap is closed and health care and self-care become a seamless continuum: "correct information, in the correct hands, at the correct time, allowing individuals and professionals to make better decisions", which we call the connected health approach. Currently, issues related to security, privacy, consumer consent, and data sharing are hindering the implementation of this new paradigm of healthcare. This could be solved by following the MyData principles, which state that individuals should have the right and practical means to manage their data and privacy. MyData infrastructure enables decentralized management of personal data, improves interoperability, makes it easier for companies to comply with tightening data protection regulations, and allows individuals to change service providers without proprietary data lock-ins. This paper tackles today’s unprecedented challenges of enabling and stimulating multiple healthcare data providers and stakeholders to participate more actively in the digital health ecosystem. First, the paper systematically proposes the MyData approach for the healthcare and preventive health data ecosystem. In this research, the work targets health and wellness ecosystems. Each ecosystem consists of key actors, such as (1) the individual (citizen or professional controlling/using the services), i.e., the data subject; (2) services providing personal data (e.g., startups providing data collection apps or devices); (3) health and wellness services utilizing the aforementioned data; and (4) services authorizing access to this data under the individual’s explicit consent. 
Second, the research extends the existing four archetypes of orchestrator-driven healthcare data business models for the healthcare industry and proposes a fifth type of healthcare data model, the MyData Blockchain Platform. This new architecture is developed using the Action Design Research approach, a prominent research methodology in the information systems domain. The key novelty of the paper is to expand the health data value chain architecture and design from centralization and pseudo-decentralization to full decentralization, enabled by blockchain, resulting in the MyData Blockchain Platform. The study not only broadens the healthcare informatics literature but also contributes to the theoretical development of the digital healthcare and blockchain research domains with a systemic approach.

Keywords: blockchain, health data, platform, action design

Procedia PDF Downloads 79
255 Controlling the Release of Cyt C and L-Dopa from pNIPAM-AAc Nanogel-Based Systems

Authors: Sulalit Bandyopadhyay, Muhammad Awais Ashfaq Alvi, Anuvansh Sharma, Wilhelm R. Glomm

Abstract:

The release of drugs from nanogels and nanogel-based systems can occur under the influence of external stimuli such as temperature, pH, and magnetic fields. pNIPAm-AAc nanogels respond to the combined action of both temperature and pH, the former being mostly determined by hydrophilic-to-hydrophobic transitions above the volume phase transition temperature (VPTT), while the latter is controlled by the degree of protonation of the carboxylic acid groups. These nanogel-based systems are promising candidates in the field of drug delivery. Combining nanogels with magneto-plasmonic nanoparticles (NPs) introduces imaging and targeting modalities along with stimuli-response in one hybrid system, thereby incorporating multifunctionality. Fe@Au core-shell NPs possess an optical signature in the visible spectrum owing to the localized surface plasmon resonance (LSPR) of the Au shell, and superparamagnetic properties stemming from the Fe core. Although several synthesis methods exist to control the size and physico-chemical properties of pNIPAm-AAc nanogels, there is no comprehensive study of the effect of incorporating one or more layers of NPs into these nanogels. In addition, effective determination of the VPTT of the nanogels is a challenge, which complicates their use in biological applications. Here, we have modified the swelling-collapse properties of pNIPAm-AAc nanogels by combining them with Fe@Au NPs using different solution-based methods. The hydrophilic-hydrophobic transition of the nanogels above the VPTT has been confirmed to be reversible. Further, an analytical method has been developed to deduce the average VPTT, which is found to be 37.3°C for the nanogels and 39.3°C for nanogel-coated Fe@Au NPs. An opposite swelling-collapse behaviour is observed for the latter, where the Fe@Au NPs act as bridge molecules pulling together the gelling units. 
Thereafter, Cyt C, a model protein drug, and L-Dopa, a drug used in the clinical treatment of Parkinson’s disease, were loaded separately into the nanogels and nanogel-coated Fe@Au NPs using a modified breathing-in mechanism. This gave high loading and encapsulation efficiencies (L-Dopa: ~9% and 70 µg/mg of nanogels; Cyt C: ~30% and 10 µg/mg of nanogels, respectively). The release kinetics of L-Dopa, monitored using UV-vis spectrophotometry, was observed to be rather slow (over several hours), with the highest release occurring under a combination of high temperature (above the VPTT) and acidic conditions. The release of L-Dopa from nanogel-coated Fe@Au NPs was the fastest, accounting for the release of almost 87% of the initially loaded drug in ~30 hours. The chemical structure of the drug, the drug incorporation method, the location of the drug, and the presence of Fe@Au NPs largely determine the drug release mechanism and kinetics of these nanogels and nanogel-coated Fe@Au NPs.
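The two figures quoted per drug follow the standard definitions of encapsulation efficiency (percentage of the initially added drug that is taken up) and loading capacity (mass of drug per mass of carrier). A minimal arithmetic sketch, assuming those standard definitions and using purely hypothetical masses chosen only to reproduce the same form of numbers:

```python
def encapsulation_efficiency(drug_encapsulated_ug, drug_added_ug):
    """EE% = encapsulated drug mass / initially added drug mass x 100."""
    return 100.0 * drug_encapsulated_ug / drug_added_ug

def loading_capacity(drug_encapsulated_ug, carrier_mg):
    """Drug loading in micrograms of drug per milligram of carrier."""
    return drug_encapsulated_ug / carrier_mg

# Hypothetical example: 1 mg of nanogel takes up 70 ug of drug
# out of 778 ug initially added.
print(round(encapsulation_efficiency(70, 778), 1))  # 9.0 (EE ~9%)
print(loading_capacity(70, 1.0))                    # 70.0 (70 ug/mg)
```

The 778 µg figure is back-calculated for illustration only; the abstract does not report the initially added drug masses.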

Keywords: controlled release, nanogels, volume phase transition temperature, l-dopa

Procedia PDF Downloads 309
254 Prediction of Alzheimer's Disease Based on Blood Biomarkers and Machine Learning Algorithms

Authors: Man-Yun Liu, Emily Chia-Yu Su

Abstract:

Alzheimer's disease (AD) is the public health crisis of the 21st century. AD is a degenerative brain disease and the most common cause of dementia, imposing a heavy cost on the healthcare system. Unfortunately, the cause of AD is poorly understood; furthermore, the treatments available so far can only alleviate symptoms rather than cure or stop the progress of the disease. Currently, there are several ways to diagnose AD: medical imaging can be used to distinguish between AD, other dementias, and early-onset AD, and cerebrospinal fluid (CSF) analysis can also be used. Compared with other diagnostic tools, a blood (plasma) test has advantages as an approach to population-based disease screening because it is simpler, less invasive, and cost-effective. In our study, we used the blood biomarker dataset of the Alzheimer's Disease Neuroimaging Initiative (ADNI), funded by the National Institutes of Health (NIH), for data analysis and to develop a prediction model. We used independent analysis of datasets to identify plasma protein biomarkers predicting early-onset AD. First, to compare basic demographic statistics between the cohorts, we used SAS Enterprise Guide for data preprocessing and statistical analysis. Second, we used logistic regression, neural networks, and decision trees in SAS Enterprise Miner to validate the biomarkers. The data generated from ADNI contained 146 blood biomarkers from 566 participants. Participants included cognitively normal (healthy) individuals, individuals with mild cognitive impairment (MCI), and patients suffering from Alzheimer's disease (AD). Participants' samples were separated into two groups: healthy versus MCI and healthy versus AD. We used the two groups to compare important biomarkers of AD and MCI. In preprocessing, we used a t-test to filter the features (retaining 41 and 47 features for the two comparisons, healthy versus AD and healthy versus MCI) before applying machine learning algorithms. We then built models with four machine learning methods; the best AUC values for the two groups were 0.991 and 0.709, respectively. 
We want to stress that a simple, less invasive, and common blood (plasma) test may also enable early diagnosis of AD. In our opinion, the results provide evidence that blood-based biomarkers might be an alternative diagnostic tool before further examination with CSF and medical imaging. A comprehensive study of the differences in blood-based biomarkers between AD patients and healthy subjects is warranted. Early detection of AD progression will allow physicians the opportunity for early intervention and treatment.
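The t-test filtering step described above can be sketched in a few lines. This is not the authors' SAS pipeline; it is a dependency-free illustration on synthetic data that keeps columns whose Welch t statistic between the two groups exceeds a threshold (a |t| cutoff is used instead of a p-value purely to keep the sketch self-contained; the threshold and group sizes are assumptions):

```python
import numpy as np

def welch_t(a, b):
    """Two-sample Welch t statistic (unequal variances)."""
    va, vb = a.var(ddof=1), b.var(ddof=1)
    return (a.mean() - b.mean()) / np.sqrt(va / len(a) + vb / len(b))

def filter_features(group_a, group_b, t_threshold=3.0):
    """Keep biomarker columns whose |t| between the groups exceeds the threshold."""
    keep = []
    for j in range(group_a.shape[1]):
        if abs(welch_t(group_a[:, j], group_b[:, j])) > t_threshold:
            keep.append(j)
    return keep

# Synthetic demo: 3 "biomarkers"; only column 1 truly differs between groups.
rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 1.0, size=(100, 3))
ad = rng.normal(0.0, 1.0, size=(100, 3))
ad[:, 1] += 2.0  # strong group difference in biomarker 1

print(filter_features(healthy, ad))  # column 1 should pass the threshold
```

The surviving columns would then be fed to the downstream classifiers (logistic regression, neural network, decision tree), exactly as the abstract's two-stage workflow describes.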

Keywords: Alzheimer's disease, blood-based biomarkers, diagnostics, early detection, machine learning

Procedia PDF Downloads 302
253 Thorium Resources of Georgia – Is It Its Future Energy?

Authors: Avtandil Okrostsvaridze, Salome Gogoladze

Abstract:

In light of the exhaustion of hydrocarbon reserves, the search for new energy resources is a problem of vital importance for modern civilization. In this time of energy resource crisis, the radioactive element thorium (232Th) is considered a main energy resource for the future of our civilization. Modern industry uses thorium in high-temperature and high-tech tools, but the most important property of thorium is that, like uranium, it can be used as fuel in nuclear reactors. Thorium has a number of advantages compared to uranium: its concentration in the earth's crust is 4-5 times higher than that of uranium; extraction and enrichment of thorium are much cheaper; it is less radioactive; complete destruction of its waste products is possible; and thorium yields much more energy than uranium. Nowadays, developed countries, among them India and China, have started intensive work on the creation of thorium nuclear reactors and an intensive search for thorium reserves. It is not excluded that in the next 10 years these reactors will completely replace uranium reactors. Thorium ore mineralization is genetically related to alkaline-acidic magmatism, and thorium accumulations occur under both endogenic and exogenous conditions. Unfortunately, little is known about the reserves of this element in Georgia, as planned prospecting-exploration works for thorium have never been carried out here. 
However, three ore occurrences of this element have been detected: (1) in the Greater Caucasus Kakheti segment, in the hydrothermally altered rocks of the Lower Jurassic clay shales, where thorium concentrations vary between 51 and 3882 g/t; (2) in the eastern periphery of the Dzirula massif, in the hydrothermally altered rocks of the Cambrian quartz-diorite gneisses, where thorium concentrations vary between 117 and 266 g/t; and (3) in the active contact zone of the Eocene volcanites and a syenitic intrusive in the Vakijvari ore field of the Guria region, where thorium concentrations vary between 185 and 428 g/t. In addition, the geological settings of the areas where thorium occurrences were identified provide a theoretical basis for the possible accumulation of thorium ores of practical importance. Besides, the Black Sea Guria region magnetite sand, which is transported from the Vakijvari ore field, should contain significant reserves of thorium. As the research shows, monazite (a thorium-bearing mineral) is included in the magnetite in the form of the thinnest inclusions. In world-class thorium deposits, concentrations of this element vary within the limits of 50-200 g/t. Accordingly, on the basis of these data, the thorium resources found in Georgia should be considered prospective ore deposits. Generally, we consider that a complex investigation of thorium should be included in the sphere of strategic interests of the state, because the future energy of Georgia will probably be thorium.

Keywords: future energy, Georgia, ore field, thorium

Procedia PDF Downloads 469
252 Secondhand Clothing and the Future of Fashion

Authors: Marike Venter de Villiers, Jessica Ramoshaba

Abstract:

In recent years, the fashion industry has been associated with the exploitation of both people and resources. This is largely due to the emergence of the fast fashion concept, which entails rapid and continual style changes in which clothes quickly lose their appeal, become out of fashion, and are then disposed of. This cycle often entails appalling working conditions in sweatshops with low wages and child labor, and a significant amount of textile waste that ends up in landfills. Although awareness of the negative implications of 'mindless' fashion production and consumption is growing, fast fashion remains a popular choice among the youth. This is especially prevalent in South Africa, a poverty-stricken country where a vast number of young adults are unemployed and living in poverty. Despite this poverty, the celebrity-conscious culture and the fashion products frequently portrayed on increasingly intrusive social media platforms in South Africa pressure consumers to purchase fashion and luxury products. Young adults are therefore more vulnerable to the temptation to purchase fast fashion products. A possible solution to the detrimental effects that the fast fashion industry has on the environment is the revival of the secondhand clothing trend. Although the popularity of secondhand clothing has gained momentum among selected consumer segments, its adoption rate remains slow. The main purpose of this study was to explore consumers' perceptions of the secondhand clothing trend and to gain insight into the factors that inhibit its adoption. This study also aimed to investigate whether consumers are aware of the negative implications of the fast fashion industry and their likelihood of shifting their clothing purchases to secondhand clothing. By means of a quantitative study, fifty young females were asked to complete a semi-structured questionnaire. 
The researcher approached females between the ages of 18 and 35 in a face-to-face setting. The results indicated that although respondents were aware of the negative consequences of fast fashion, they lacked detailed insight into its pertinent effects on the environment. Further, a number of factors inhibit their decision to buy from secondhand stores: first, the latest trends are not always available in secondhand stores; second, the convenience of shopping at a chain store outweighs the inconvenience of searching for and finding a secondhand store; and lastly, they perceived secondhand clothing to pose a hygiene risk. The findings of this study provide fashion marketers and secondhand clothing stores with insight into how they can incorporate the secondhand clothing trend into their strategies and marketing campaigns in an attempt to make the fashion industry more sustainable.

Keywords: eco-friendly fashion, fast fashion, secondhand clothing

Procedia PDF Downloads 111
251 Plastic Waste Sorting by the People of Dakar

Authors: E. Gaury, P. Mandausch, O. Picot, A. R. Thomas, L. Veisblat, L. Ralambozanany, C. Delsart

Abstract:

In Dakar, demographic and spatial growth was accompanied by a 50% increase in household waste between 1988 and 2008. In addition, a change in the nature of household waste was observed between 1990 and 2007, with the share of plastic increasing by 15% between 2004 and 2007. Plastics represent the seventh most-produced category of household waste per year in Senegal, with a 9% share of household and similar waste. Waste management in the city of Dakar is a complex process involving a multitude of formal and informal actors with different perceptions and objectives. The objective of this study was to understand the motivations that could lead to sorting behavior, as well as the perception of plastic waste sorting within the Dakar population (households and institutions). The research question of this study was as follows: what factors may play a role in the sorting action? In an attempt to answer this, two approaches were developed: (1) an exploratory qualitative study using semi-structured interviews with two groups of individuals concerned with the sorting of plastic waste: on the one hand, the experts in charge of waste management, and on the other, the households producing plastic waste. This study served as the basis for formulating the hypotheses and thus for the quantitative analysis. (2) A quantitative study using a questionnaire survey of households producing plastic waste in order to test the previously formulated hypotheses. The objective was to obtain quantitative results representative of the population of Dakar with respect to the behavior and processes inherent in the adoption of plastic waste sorting. The exploratory study shows that the perception of state responsibility varies between institutions and households. Public institutions perceive this as a shared responsibility because the problem of plastic waste affects many sectors (health, environmental education, etc.). 
Their involvement is geared more toward raising awareness and educating young people. As state action is limited, the emergence of private companies in this sector seems logical, as they are setting up collection networks to develop a recycling activity. The state plays a moral support role in these activities and encourages companies to do more. The quantitative analysis of how the population of Dakar understands the action of sorting plastic waste demonstrated the attitudes and constraints inherent in its adoption. Cognitive attitude, knowledge, and visible consequences were shown to correlate positively with sorting behavior. Thus, it would seem that the population of Dakar is more sensitive to what they see and what they know when adopting sorting behavior. It was also shown that the strongest constraints that could slow down sorting behavior were the complexity of the process, the time required, and the lack of infrastructure in which to deposit plastic waste.

Keywords: behavior, Dakar, plastic waste, waste management

Procedia PDF Downloads 66
250 Inverterless Grid Compatible Micro Turbine Generator

Authors: S. Ozeri, D. Shmilovitz

Abstract:

Micro-turbine generators (MTGs) are small power plants that consist of a high-speed gas turbine driving an electrical generator. MTGs may be fueled by either natural gas or kerosene and may also use sustainable and recycled green fuels such as biomass, landfill, or digester gas. Typical ratings of MTGs range from 20 kW up to 200 kW. The primary use of MTGs is as backup for sensitive load sites such as hospitals, and they are also considered a feasible power source for distributed generation (DG), providing on-site generation in proximity to remote loads. MTGs have the compressor, the turbine, and the electrical generator mounted on a single shaft. For this reason, the electrical energy is generated at high frequency and is incompatible with the power grid. Therefore, MTGs must also contain a power conditioning unit to generate an AC voltage at the grid frequency. Presently, this power conditioning unit consists of a rectifier followed by a DC/AC inverter, both rated at the MTG's full power. The losses of the power conditioning unit account for some 3-5%. Moreover, the full-power processing stage is a bulky and costly piece of equipment that also lowers overall system reliability. In this study, we propose a new type of power conditioning stage in which only a small fraction of the power is processed. A low-power converter is used only to program the rotor current (i.e., the excitation current, which is substantially lower). Thus, the MTG's output voltage is shaped to the desired amplitude and frequency by proper programming of the excitation current. The control is realized by causing the rotor current to track the electrical frequency (which is related to the shaft frequency) with a difference that is exactly equal to the line frequency. Since the phasor of the rotation speed and the phasor of the rotor magnetic field are multiplied, the spectrum of the MTG generator voltage contains the sum and difference components. 
The desired difference component is at the line frequency (50/60 Hz), whereas the unwanted sum component is at about twice the electrical frequency of the stator. The unwanted high-frequency component can be filtered out by a low-pass filter, leaving only the low-frequency output. This approach allows elimination of the large power conditioning unit incorporated in conventional MTGs. Instead, a much smaller and cheaper fractional-power stage can be used. The proposed technology is also applicable to other high-speed generator sets such as aircraft power units.
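The sum-and-difference claim above is just the product-to-sum trigonometric identity, sin(a)·sin(b) = ½cos(a−b) − ½cos(a+b), and can be checked numerically. The frequencies below are illustrative only (not from the paper): a 1000 Hz stator quantity mixed with a 950 Hz rotor quantity yields exactly the 50 Hz difference tone that the low-pass filter would keep, plus the 1950 Hz sum tone it would reject.

```python
import numpy as np

# One second of signal sampled at 10 kHz -> FFT bin resolution of exactly 1 Hz.
fs = 10_000
t = np.arange(fs) / fs
f_stator, f_rotor = 1_000.0, 950.0  # illustrative frequencies, not from the paper

# Product of the two rotating quantities contains sum and difference tones:
# sin(a) * sin(b) = 0.5*cos(a - b) - 0.5*cos(a + b)
v = np.sin(2 * np.pi * f_stator * t) * np.sin(2 * np.pi * f_rotor * t)

spectrum = np.abs(np.fft.rfft(v))
peaks = np.argsort(spectrum)[-2:]   # two strongest bins (bin index = Hz here)
print(sorted(peaks.tolist()))       # → [50, 1950]
```

Both tones fall exactly on FFT bins because the window holds an integer number of cycles, so the two spectral peaks land precisely at the difference (50 Hz) and sum (1950 Hz) frequencies.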

Keywords: gas turbine, inverter, power multiplier, distributed generation

Procedia PDF Downloads 214
249 Effect of Plant Growth Promoting Rhizobacteria on the Germination and Early Growth of Onion (Allium cepa)

Authors: Dragana R. Stamenov, Simonida S. Djuric, Timea Hajnal Jafari

Abstract:

Plant growth promoting rhizobacteria (PGPR) are a heterogeneous group of bacteria that can be found in the rhizosphere, at root surfaces, and in association with roots, enhancing the growth of the plant directly and/or indirectly. Increased crop productivity associated with the presence of PGPR has been observed in a broad range of plant species, such as raspberry, chickpea, legumes, cucumber, eggplant, pea, pepper, radish, tobacco, tomato, lettuce, carrot, corn, cotton, millet, bean, and cocoa. However, until now there has not been much research on the influence of PGPR on the growth and yield of onion. Onion (Allium cepa L.), of the Liliaceae family, is a species of great economic importance, widely cultivated all over the world. The aim of this research was to examine the influence of the plant growth promoting bacteria Pseudomonas sp. Dragana, Pseudomonas sp. Kiš, Bacillus subtilis, and Azotobacter sp. on the seed germination and early growth of onion (Allium cepa). PGPR Azotobacter sp., Bacillus subtilis, Pseudomonas sp. Dragana, and Pseudomonas sp. Kiš, from the collection of the Faculty of Agriculture, Novi Sad, Serbia, were used as inoculants. The number of cells in 1 ml of the inoculum was 10⁸ CFU/ml. The control variant was not inoculated. The effect of PGPR on the seed germination and hypocotyl length of Allium cepa was evaluated under controlled conditions, on filter paper in the dark at 22°C, while the effect on plant length and mass was evaluated under semi-controlled conditions, in 10 L vegetative pots. Seed treated with fungicide and untreated seed were used. After seven days, the percentage of germination was determined. After seven and fourteen days, hypocotyl length was measured. Fourteen days after germination, the length and mass of the plants were measured. Application of Pseudomonas sp. Dragana and Kiš and Bacillus subtilis had a negative effect on onion seed germination, while the use of Azotobacter sp. gave positive results. 
On average, the application of all investigated inoculants had a positive effect on the measured parameters of plant growth. Azotobacter sp. had the greatest effect on hypocotyl length and on the length and mass of the plant. On average, better results were achieved with untreated seeds compared with treated seeds. The results of this study show that PGPR can be used in the production of onion.

Keywords: germination, length, mass, microorganisms, onion

Procedia PDF Downloads 209
248 Risk Based Inspection and Proactive Maintenance for Civil and Structural Assets in Oil and Gas Plants

Authors: Mohammad Nazri Mustafa, Sh Norliza Sy Salim, Pedram Hatami Abdullah

Abstract:

Civil and structural assets normally have an average design life of more than 30 years. In addition, these assets are normally subject to a slow degradation process. Because repair and strengthening work for these assets is normally not dependent on plant shutdown, their maintenance and integrity restoration are mostly done on an "as required" and "run to failure" basis. However, unlike in other industries, the exposure in an oil and gas environment is harsher as a result of corrosive soil and groundwater, chemical spills, frequent wetting and drying, icing and de-icing, steam and heat, etc. Due to this type of exposure, and given the increasing level of structural defects and rectification in line with the increasing age of plants, asset integrity assessment requires a more defined scope and procedures based on risk and asset criticality. This leads to the establishment of a risk-based inspection and proactive maintenance procedure for civil and structural assets. To date, there are hardly any procedures and guidelines for the integrity assessment and systematic inspection and maintenance of (onshore) civil and structural assets. Group Technical Solutions has developed a procedure and guideline that take into consideration credible failure scenarios, asset risk and criticality from a process safety and structural engineering perspective, structural importance, and modeling and analysis, among others. Detailed inspection that includes destructive and non-destructive tests (DT and NDT) and structural monitoring is also performed to quantify defects, assess their severity and impact on integrity, and identify the timeline for integrity restoration. Each defect and its credible failure scenario are assessed against the risk to people, the environment, reputation, and production loss. This technical paper is intended to share the established procedure and guideline and their execution in oil and gas plants. 
In line with the overall roadmap, the procedure and guideline will form part of specialized solutions to increase production and meet the "Operational Excellence" target while extending the service life of civil and structural assets. As a result of implementation, the management of civil and structural assets is now done more systematically, and the "fire-fighting" mode of maintenance is gradually being phased out and replaced by a proactive and preventive approach. This technical paper will also set the criteria and pose the challenge to the industry for innovative repair and strengthening methods for civil and structural assets in the oil and gas environment, in line with safety, constructability, and the continuous modification and revamping of plant facilities to meet production demand.

Keywords: assets criticality, credible failure scenario, proactive and preventive maintenance, risk based inspection

Procedia PDF Downloads 373
247 Specific Earthquake Ground Motion Levels That Would Affect Medium-To-High Rise Buildings

Authors: Rhommel Grutas, Ishmael Narag, Harley Lacbawan

Abstract:

Construction of high-rise buildings is a means to address the increasing population in Metro Manila, Philippines. The existence of the Valley Fault System within the metropolis and of other nearby active faults poses threats to a densely populated city. Distant, shallow, large-magnitude earthquakes have the potential to generate slow, long-period vibrations that would affect medium-to-high-rise buildings. Heavy damage and building collapse are consequences of prolonged shaking of the structure. If the ground and the building have almost the same period, a resonance effect would cause prolonged shaking of the building. Microzoning the long-period ground response would aid in the seismic design of medium-to-high-rise structures. The shear-wave velocity structure of the subsurface is an important parameter in evaluating ground response. Borehole drilling is one of the conventional methods of determining shear-wave velocity structure; however, it is an expensive approach. As an alternative geophysical exploration method, microtremor array measurements can be used to infer the structure of the subsurface. A microtremor array measurement system was used to survey fifty sites around Metro Manila, including some municipalities of Rizal and Cavite. Measurements were carried out during the day under good weather conditions. The team was composed of six persons for the deployment and simultaneous recording of the microtremor array sensors. The instruments were laid on the ground away from sewage systems and leveled using the adjustment legs and bubble level. A total of four sensors were deployed for each site: three at the vertices of an equilateral triangle, with one sensor at the centre. The circular arrays were set up with a maximum side length of approximately four kilometers, and the shortest side length for the smallest array was approximately 700 meters. Each recording lasted twenty to sixty minutes. 
From the recorded data, f-k analysis was applied to obtain phase velocity curves, and an inversion technique was applied to construct the shear-wave velocity structure. This project provided a microzonation map of the metropolis and a profile showing the long-period response of the deep sedimentary basin underlying Metro Manila, which would be suitable for local administrators in their land use planning and in the earthquake-resistant design of medium-to-high-rise buildings.
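The resonance concern described above can be illustrated with a standard back-of-the-envelope check. The quarter-wavelength site formula T = 4H/Vs and the rule-of-thumb building period of roughly 0.1 s per storey are textbook approximations, and the sediment thickness and velocity below are hypothetical, not survey results:

```python
def site_period(thickness_m: float, vs_m_s: float) -> float:
    """Fundamental site period via the quarter-wavelength rule, T = 4H/Vs."""
    return 4.0 * thickness_m / vs_m_s

def building_period(storeys: int) -> float:
    """Rule-of-thumb fundamental period of a building, ~0.1 s per storey."""
    return 0.1 * storeys

# Hypothetical deep sedimentary basin: 800 m of sediments, Vs = 800 m/s.
t_site = site_period(800.0, 800.0)   # 4.0 s -> a long-period site
# A 40-storey tower has a comparable period, so resonance is a concern.
t_bldg = building_period(40)         # 4.0 s
print(t_site, t_bldg)
```

When the two periods coincide, as here, ground motion is amplified at exactly the frequency the structure responds to most, which is why microzoning the long-period response matters for tall-building design.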

Keywords: earthquake, ground motion, microtremor, seismic microzonation

Procedia PDF Downloads 450
246 Acceleration of Adsorption Kinetics by Coupling Alternating Current with Adsorption Process onto Several Adsorbents

Authors: A. Kesraoui, M. Seffen

Abstract:

Applications of adsorption onto activated carbon for water treatment are well known. The process has been demonstrated to be widely effective for removing dissolved organic substances from wastewaters, but this treatment has a major drawback: its high operating cost. The main goal of our research work is to improve the retention capacity of Tunisian biomass for the depollution of industrial wastewater and the retention of pollutants considered toxic. The biosorption process is based on the retention of molecules and ions onto a solid surface composed of biological materials. The evaluation of these materials is important in order to propose them as an alternative to the generally expensive adsorption process used to remove organic compounds. Indeed, these materials are very abundant in nature and are low in cost. The biosorption process is certainly effective at removing pollutants, but its kinetics are slow. Improving biosorption rates is a challenge in making this process competitive with oxidation and adsorption onto lignocellulosic fibers. In this context, alternating current appears as a new, original, and very interesting means of accelerating chemical reactions. Our main goal is to accelerate the retention of dyes (indigo carmine, methylene blue) and phenol by using this new alternative: alternating current. The adsorption experiments were performed in a batch reactor by adding the adsorbents to 150 mL of pollutant solution at the desired concentration and pH. The electrical part of the setup comprises a current source that delivers an alternating voltage of 2 to 15 V, connected to a voltmeter that allows us to read the voltage. In a 150 mL cell, two zinc electrodes were immersed at a distance of 4 cm from each other. 
Thanks to alternating current, we succeeded in improving the performance of activated carbon by increasing the speed of the indigo carmine adsorption process and reducing the treatment time. We also studied the influence of alternating current on the biosorption rate of methylene blue onto Luffa cylindrica fibers and onto the hybrid material (Luffa cylindrica-ZnO). The results showed that alternating current accelerated the biosorption of methylene blue onto both Luffa cylindrica and the Luffa cylindrica-ZnO hybrid material and increased the adsorbed amount of methylene blue on both adsorbents. To improve the removal of phenol, we coupled alternating current with biosorption onto the two adsorbents: Luffa cylindrica and the hybrid material (Luffa cylindrica-ZnO). Here too, alternating current improved the performance of the adsorbents by increasing the speed of the adsorption process and the adsorption capacity and by reducing the processing time.
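One common way to quantify the reported acceleration is to compare rate constants from a kinetic model. The pseudo-second-order model below is a standard choice in biosorption studies, but the capacity and rate-constant values are illustrative assumptions, not the authors' measurements:

```python
def pso_uptake(t_min: float, qe_mg_g: float, k2: float) -> float:
    """Pseudo-second-order uptake: q(t) = k2*qe^2*t / (1 + k2*qe*t)."""
    return (k2 * qe_mg_g**2 * t_min) / (1.0 + k2 * qe_mg_g * t_min)

qe = 50.0          # equilibrium capacity, mg/g (hypothetical)
k2_plain = 0.002   # rate constant without AC, g/(mg.min) (hypothetical)
k2_ac = 0.008      # rate constant with alternating current (hypothetical)

# From the model, the time to reach 90% of equilibrium is t90 = 9/(k2*qe),
# so a four-fold larger k2 cuts the treatment time by a factor of four.
t90_plain = 9.0 / (k2_plain * qe)   # 90 min
t90_ac = 9.0 / (k2_ac * qe)         # 22.5 min -> faster kinetics under AC
print(t90_plain, t90_ac)
```

Fitting k2 to uptake data measured with and without the applied voltage would give a single number that summarises the acceleration effect.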

Keywords: adsorption, alternating current, dyes, modeling

Procedia PDF Downloads 135
245 Intervention Program for Emotional Management in Disruptive Situations Through Self-Compassion and Compassion

Authors: M. Bassas, J. Grané-Morcillo, J. Segura, J.M. Soldevila

Abstract:

Mental health prevention is key in a society where, according to the World Health Organization, suicide is the fourth leading cause of death worldwide. Compassion is closely linked to personal growth, which shows once again that prevention-based therapies remain an urgent social need. In this sense, a growing body of research demonstrates how cultivating a compassionate mind can help alleviate and prevent a variety of psychological problems. The early 21st century has seen a boom in third-generation compassion-based therapies, although there is a lack of empirical evidence of their efficacy. This study proposes a psychotherapy method (the ‘Being Method’) whose central axis revolves around emotional management through the cultivation of compassion. The objective of this research was therefore to analyze the effectiveness of this method with regard to the emotional changes experienced when we focus on what concerns us through the filter of compassion. The Being Method was born from the influence of Buddhist philosophy and of contemporary psychology based mainly on Western rationalist currents. A quantitative cross-sectional study was carried out with a sample of women between 18 and 53 years old (n=47; Mage=36.02; SDage=11.86) interested in personal growth, in which the following six measuring instruments were administered: Peace of Mind Scale (PoM), Rosenberg Self-Esteem Scale (RSES), Subjective Happiness Scale (SHS), two scales of the Compassionate Action and Engagement Scales (CAES), Coping Response Inventory for Adults (CRI-A), and Cognitive-Behavioral Strategies Evaluation Scale (MOLDES). Following an experimental approach, participants were divided into an experimental group and a control group. A longitudinal analysis was also carried out through a pre-post program comparison. 
Pre-post comparison outcomes indicated significant differences (p<.05) before versus after the therapy in Peace of Mind, Self-esteem, Happiness, Self-compassion (A-B), and Compassion (A-B), in several mental molds, as well as in several coping strategies. Between-group tests also showed significantly higher means in the experimental group. These outcomes highlight the effectiveness of the therapy, which improved all the analyzed dimensions. The social, clinical and research implications are discussed.

Keywords: being method, compassion, effectiveness, emotional management, intervention program, personal growth therapy

Procedia PDF Downloads 18
244 Urban Waste Management for Health and Well-Being in Lagos, Nigeria

Authors: Bolawole F. Ogunbodede, Mokolade Johnson, Adetunji Adejumo

Abstract:

A high population growth rate, reactive infrastructure provision, and the inability of physical planning to cope with the pace of development are responsible for the wastewater crisis in the Lagos Metropolis. The septic tank is still the most prevalent wastewater holding system. Unfortunately, there is a dearth of septage treatment infrastructure. Public wastewater treatment statistics relative to the 23 million people in Lagos State are worrisome: 1.85 billion cubic meters of wastewater is generated daily, and only 5% of the 26 million population is connected to the public sewerage system. This is compounded by inadequate budgetary allocation and erratic power supply over the last two decades. This paper explores a community participatory wastewater management alternative in the Oworonshoki Municipality of Lagos. The study is underpinned by decentralized wastewater management systems in built-up areas. The initiative addresses the five steps of the wastewater issue, namely generation, storage, collection, processing, and disposal, through participatory decision-making in two Oworonshoki Community Development Association (CDA) areas. Drone-assisted mapping highlighted building footprints. Structured interviews and focus group discussions with landlord associations in the CDA areas provided a collaborative platform for decision-making. Water stagnation in primary open drainage channels and in the natural retention ponds of fringing wetlands is traceable to frequent climate-change-induced tidal influences in recent decades. A rising water table, resulting in septic-tank leakage and water pollution, is reported to be responsible for the increase in waterborne illnesses documented in primary health centers. This is in addition to the unhealthy dumping of solid wastes in the drainage channels. The effect of uncontrolled disposal renders surface waters and underground water systems unsafe for human and recreational use, destroys biotic life, and poisons the fragile sand barrier-lagoon urban ecosystems. 
A cluster decentralized system was conceptualized to service 255 households. Stakeholders agreed on a public-private partnership initiative for efficient wastewater service delivery.

Keywords: health, infrastructure, management, septage, well-being

Procedia PDF Downloads 148
243 Nuclear Near Misses and Their Learning for Healthcare

Authors: Nick Woodier, Iain Moppett

Abstract:

Background: It is estimated that one in ten patients admitted to hospital will suffer an adverse event in their care. While the majority of these will result in low harm, patients are being significantly harmed by the processes meant to help them. Healthcare, therefore, seeks to make improvements in patient safety by taking learning from other industries that are perceived to be more mature in their management of safety events. Of particular interest to healthcare are ‘near misses,’ those events that almost happened but for an intervention. Healthcare does not have any guidance as to how best to manage and learn from near misses to reduce the chances of harm to patients. The authors, as part of a larger study of near-miss management in healthcare, sought to learn from the UK nuclear sector to develop principles for how healthcare can identify, report, and learn from near misses to improve patient safety. The nuclear sector was chosen as an exemplar due to its status as an ultra-safe industry. Methods: A Grounded Theory (GT) methodology, augmented by a scoping review, was used. Data collection included interviews, scenario discussion, field notes, and the literature. The review protocol is accessible online. The GT aimed to develop theories about how the nuclear sector manages near misses, with a focus on defining them and clarifying how best to support reporting and analysis to extract learning. Near misses related to radiation release or exposure were the focus. Results: Eight nuclear interviews contributed to the GT, spanning nuclear power, decommissioning, weapons, and propulsion. The scoping review identified 83 articles across a range of safety-critical industries, with only six focused on nuclear. The GT identified that the nuclear sector has a particular focus on precursors and low-level events, with regulation supporting their management. 
Exploration of definitions highlighted the importance of several interventions in a sequence of events, interventions that should not rely solely on humans, as humans cannot be assumed to be robust barriers. Regarding reporting and analysis, no consistent methods were identified, but for learning, operating experience learning groups were identified as an exemplar. The safety culture across the nuclear sector, however, was heard to vary, which undermined the reporting of near misses and other safety events. Some parts of the industry described their focus on near misses as new and said that, despite potential risks existing, progress to mitigate hazards is slow. Conclusions: Healthcare often sees ‘nuclear,’ as well as other ultra-safe industries such as ‘aviation,’ as homogeneous. However, the findings here suggest significant differences in safety culture and maturity across the various parts of the nuclear sector. Healthcare can take learning from some aspects of near-miss management in nuclear, such as how near misses are defined and how learning is shared through operating experience networks. However, healthcare also needs to recognise that variability exists across industries and that, comparably, it may be more mature in some areas of safety.

Keywords: culture, definitions, near miss, nuclear safety, patient safety

Procedia PDF Downloads 85
242 Effects of Temperature and the Use of Bacteriocins on Cross-Contamination from Animal Source Food Processing: A Mathematical Model

Authors: Benjamin Castillo, Luis Pastenes, Fernando Cerdova

Abstract:

The contamination of food by microbial agents is a common problem in industry, especially in the elaboration of animal source products. Incorrect handling of the machinery or of the raw materials can cause a decrease in production or an epidemiological outbreak due to intoxication. In order to improve food product quality, different methods have been used to reduce, or at least slow down, the growth of pathogens, especially spoilage, infectious, or toxigenic bacteria. These methods are usually carried out at low temperatures and with short processing times (abiotic agents), along with the application of antibacterial substances such as bacteriocins (biotic agents), in a controlled and efficient way that fulfills the purpose of bacterial control without damaging the final product. The objective of the present study is therefore to design a secondary mathematical model that allows the prediction of the impact of both the biotic and abiotic factors associated with animal source food processing. To accomplish this objective, the authors propose a three-dimensional differential equation model whose components are bacterial growth; the release, production, and artificial incorporation of bacteriocins; and changes in the pH level of the medium, with all three dimensions constantly influenced by the temperature of the medium. Secondly, this model is adapted to an idealized situation of cross-contamination in animal source food processing, with the study agents being both the animal product and the contact surface. Thirdly, stochastic simulations and a parametric sensitivity analysis are compared with reference data. The main result obtained from the analysis and simulations of the mathematical model is that, although bacterial growth can be stopped at lower temperatures, even lower ones are needed to eradicate it. 
However, this can be not only expensive but also counterproductive in terms of the quality of the raw materials; on the other hand, higher temperatures accelerate bacterial growth. In other respects, the use of bacteriocins is an effective alternative in the short and medium term. Moreover, a low pH level is an indicator of bacterial growth, since many spoilage bacteria are lactic acid bacteria. Lastly, processing times are a secondary agent of concern when the rest of the aforementioned agents are under control. Our main conclusion is that adapting a mathematical model to the context of the industrial process can generate new tools that predict bacterial contamination, the impact of bacterial inhibition, and processing method times. In addition, the proposed mathematical modeling provides a logistic framework of broad application, which can be replicated for non-meat food products, other pathogens, or even contamination by cross-contact of allergenic foods.
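A minimal sketch of the three-dimensional model described above, assuming logistic bacterial growth, a bacteriocin kill term, and acidification proportional to the cell count; the equations, the linear temperature dependence, and every parameter value are illustrative stand-ins for the authors' model, not a reproduction of it:

```python
def simulate(temp_c: float, hours: float = 48.0, dt: float = 0.01):
    """Euler integration of a toy (bacteria N, bacteriocin B, pH) system."""
    # Growth rate rises with temperature (illustrative linear ramp that
    # vanishes near refrigeration temperature).
    mu = 0.05 * max(temp_c - 4.0, 0.0)          # 1/h
    K, kill, prod, acid = 1e9, 0.8, 1e-12, 2e-10  # made-up parameters
    n, b, ph = 1e3, 0.0, 6.5                    # initial state
    t = 0.0
    while t < hours:
        growth = mu * n * (1.0 - n / K)         # logistic growth
        n += dt * (growth - kill * b * n)       # bacteriocin kills cells
        b += dt * prod * n                      # bacteriocin released as cells grow
        ph -= dt * acid * n                     # lactic acid lowers the pH
        ph = max(ph, 3.5)                       # crude saturation floor
        t += dt
    return n, b, ph

cold = simulate(4.0)    # near refrigeration: growth essentially stopped
warm = simulate(25.0)   # room temperature: growth proceeds, pH drops
print(cold[0], warm[0])
```

Running the two scenarios shows the qualitative behaviour the abstract reports: low temperature halts growth without eradicating the population, while higher temperature lets the culture approach carrying capacity and acidify the medium.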

Keywords: bacteriocins, cross-contamination, mathematical model, temperature

Procedia PDF Downloads 121
241 Comparison of Methodologies to Compute the Probabilistic Seismic Hazard Involving Faults and Associated Uncertainties

Authors: Aude Gounelle, Gloria Senfaute, Ludivine Saint-Mard, Thomas Chartier

Abstract:

The long-term deformation rates of faults are not fully captured by Probabilistic Seismic Hazard Assessment (PSHA). PSHA approaches that use catalogues to develop area or smoothed-seismicity sources are limited by the data available to constrain future earthquake activity rates. The integration of faults in PSHA can at least partially address the long-term deformation. However, careful treatment of fault sources is required, particularly in low strain rate regions, where estimated seismic hazard levels are highly sensitive to assumptions concerning fault geometry, segmentation, and slip rate. When integrating faults in PSHA, various constraints on earthquake rates from geologic and seismologic data have to be satisfied; for low strain rate regions, where such data are scarce, this is especially challenging. Integrating faults in PSHA requires converting the geologic and seismologic data into fault geometries and slip rates, and then into earthquake activity rates. Several approaches exist for translating slip rates into earthquake activity rates. In the most frequently used approach, the background earthquakes are handled with a truncated approach, in which earthquakes with a magnitude lower than or equal to a threshold magnitude (Mw) occur in the background zone, at a rate defined by the earthquake catalogue, while magnitudes higher than the threshold are located on the fault, at a rate defined using the average slip rate of the fault. As highlighted by several studies, seismic events with magnitudes stronger than the selected threshold may potentially occur in the background and not only on the fault, especially in regions of slow tectonic deformation. It is also known that several sections of a fault, or several faults, can rupture during a single fault-to-fault rupture. 
It is then essential to apply a consistent modelling procedure that allows a large set of possible fault-to-fault ruptures to occur aleatorily in the hazard model while reflecting the individual slip rate of each section of the fault. In 2019, a tool named SHERIFS (Seismic Hazard and Earthquake Rates in Fault Systems) was published. The tool uses a methodology that calculates the earthquake rates in a fault system in which the slip-rate budget of each fault is converted into rupture rates for all possible single-fault and fault-to-fault ruptures. The objective of this paper is to compare the SHERIFS method with another frequently used model, to analyse the impact on the seismic hazard, and, through sensitivity studies, to better understand the influence of key parameters and assumptions. For this application, a simplified but realistic case study was selected in an area of moderate to high seismicity (southeast France) where the fault is assumed to have a low strain rate.
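The slip-rate-to-activity-rate conversion discussed above can be illustrated with the simplest "characteristic earthquake" moment budget. The rigidity, fault dimensions, slip rate, and magnitude below are hypothetical, and SHERIFS itself distributes the budget over many single-fault and fault-to-fault ruptures rather than a single characteristic event:

```python
def seismic_moment(mw: float) -> float:
    """Scalar seismic moment in N.m from moment magnitude:
    M0 = 10**(1.5*Mw + 9.05) (Hanks-Kanamori convention)."""
    return 10 ** (1.5 * mw + 9.05)

def annual_rate(slip_rate_mm_yr: float, length_km: float, width_km: float,
                mw: float, mu: float = 3.0e10) -> float:
    """Annual rate of Mw events that spends the fault's moment-rate budget,
    moment_rate = mu * area * slip_rate."""
    area = (length_km * 1e3) * (width_km * 1e3)          # m^2
    moment_rate = mu * area * (slip_rate_mm_yr * 1e-3)   # N.m per year
    return moment_rate / seismic_moment(mw)

# Hypothetical slow fault: 0.1 mm/yr slip, 30 km x 15 km, characteristic Mw 6.5.
rate = annual_rate(0.1, 30.0, 15.0, 6.5)
print(f"rate = {rate:.2e} /yr, recurrence ~ {1.0 / rate:.0f} yr")
```

This single-magnitude budget makes the sensitivity to slip rate obvious: halving the slip rate halves the activity rate, which is why low-strain-rate faults dominate the epistemic uncertainty in such studies.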

Keywords: deformation rates, faults, probabilistic seismic hazard, PSHA

Procedia PDF Downloads 38
240 Biogas Potential of Deinking Sludge from Wastepaper Recycling Industry: Influence of Dewatering Degree and High Calcium Carbonate Content

Authors: Moses Kolade Ogun, Ina Korner

Abstract:

To improve sustainable resource management in the wastepaper recycling industry, studies into the valorization of the wastes generated by the industry are necessary. The industry produces different residues, among which is deinking sludge (DS). DS is generated by the deinking process and constitutes a major fraction of the residues generated by the European pulp and paper industry. The traditional treatment of DS by incineration is capital intensive due to the energy required for dewatering and the need for a complementary fuel source given the low calorific value of DS. This could be replaced by a biotechnological approach. This study therefore investigated the biogas potential of different DS streams (different dewatering degrees) and the influence of the high calcium carbonate content of DS on its biogas potential. A dewatered DS (solid fraction) sample from a filter press and the filtrate (liquid fraction) were collected from a partner wastepaper recycling company in Germany. The solid fraction and the liquid fraction were mixed in proportion to produce DS with different water contents (55–91% fresh mass). Spiked DS samples using deionized water, cellulose, and calcium carbonate were prepared to simulate DS with varying calcium carbonate content (0–40% dry matter). Seeding sludge was collected from an existing biogas plant treating sewage sludge in Germany. Biogas potential was studied using a 1-liter batch test system under mesophilic conditions and ran for 21 days. Specific biogas potentials in the range of 133–230 NL/kg organic dry matter were observed for the DS samples investigated. It was found that an increase in the liquid fraction leads to an increase in the specific biogas potential and a reduction in the absolute biogas potential (NL-biogas/fresh mass). By comparing the absolute biogas potential curve and the specific biogas potential curve, an optimal dewatering degree corresponding to a water content of about 70% fresh mass was identified. 
This degree of dewatering is a compromise when factors such as biogas yield, reactor size, energy required for dewatering, and operating cost are considered. No inhibitory influence on the biogas potential of DS was observed due to its reportedly high calcium carbonate content. This study confirms that DS is a potential bioresource for biogas production. Further optimization, such as nitrogen supplementation to offset the high C/N ratio of DS, can increase biogas yield.
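The trade-off between specific and absolute biogas potential can be sketched numerically. The linear interpolation between the reported extremes (133–230 NL/kg oDM over 55–91% water) and the organic fraction of the dry matter assumed below are illustrative simplifications; the ~70% optimum in the study is a compromise across yield, reactor size, and dewatering cost, not the maximum of either curve:

```python
def specific_potential(water_pct: float) -> float:
    """NL biogas per kg organic dry matter, interpolated linearly between
    the reported extremes: 133 at 55% water and 230 at 91% water."""
    return 133.0 + (230.0 - 133.0) * (water_pct - 55.0) / (91.0 - 55.0)

def absolute_potential(water_pct: float,
                       organic_fraction_of_dm: float = 0.5) -> float:
    """NL biogas per kg fresh mass: specific yield x organic dry matter
    content. The 50% organic fraction of DM is a hypothetical value."""
    dm = 1.0 - water_pct / 100.0
    return specific_potential(water_pct) * dm * organic_fraction_of_dm

# Scan dewatering degrees: specific yield rises with water content while
# the per-kg-fresh-mass yield falls, which is the reported trade-off.
for w in (55, 70, 91):
    print(w, round(specific_potential(w), 1), round(absolute_potential(w), 1))
```

Plotting both curves against water content, together with dewatering energy cost, reproduces the kind of comparison from which the 70% compromise was read off.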

Keywords: biogas, calcium carbonate, deinking sludge, dewatering, water content

Procedia PDF Downloads 139
239 Liquid Food Sterilization Using Pulsed Electric Field

Authors: Tanmaya Pradhan, K. Midhun, M. Joy Thomas

Abstract:

Increasing the shelf life and improving the quality are important objectives for the success of the packaged liquid food industry. One of the methods by which this can be achieved is by deactivating the micro-organisms present in the liquid food through pasteurization. Pasteurization is done by heating, but serious disadvantages such as reductions in food quality, flavour, taste, and colour were observed because of the heat treatment, which led to the development of alternative methods such as treatment using UV radiation, high pressure, nuclear irradiation, and pulsed electric fields. In recent years, the use of the pulsed electric field (PEF) for inactivation of the microbial content in food has been gaining popularity. PEF uses a very high electric field for a short time to inactivate microorganisms, which requires a high-voltage pulsed power source. Pulsed power sources used for PEF treatment usually operate in the range of 5 kV to 50 kV. Different pulse shapes are used, such as exponentially decaying and square wave pulses. Exponentially decaying pulses are generated by high-power switches with only turn-on capability and therefore discharge the total energy stored in the capacitor bank. These pulses have a sudden onset, and therefore a high rate of rise, but a very slow decay, which yields extra heat that is ineffective in microbial inactivation. Square pulses can be produced by the incomplete discharge of a capacitor with the help of a switch having both on/off control, or by using a pulse forming network. In this work, a pulsed power based system is designed with the help of high-voltage capacitors and solid-state switches (IGBTs) for the inactivation of pathogenic micro-organisms in liquid foods such as fruit juices. The high-voltage generator is based on the Marx generator topology, which can produce variable amplitude, frequency, and pulse width according to the requirements. 
The liquid food is treated in a chamber where the pulsed electric field is produced between stainless steel electrodes using the pulsed output voltage of the supply. Preliminary bacterial inactivation tests were performed on orange juice inoculated with Escherichia coli bacteria. With the help of the developed pulsed power source and the chamber, the inoculated orange juice was PEF-treated. The voltage was varied to obtain a peak electric field of up to 15 kV/cm. For a total treatment time of 200 µs, a 30% reduction in the bacterial count was observed. The detailed results and analysis will be presented in the final paper.
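The treatment parameters quoted above follow from simple relations. The electrode gap and pulse width in this sketch are assumptions chosen for illustration; the abstract does not state them:

```python
import math

def peak_field_kv_cm(voltage_kv: float, gap_cm: float) -> float:
    """Peak electric field between parallel-plate electrodes, E = V/d."""
    return voltage_kv / gap_cm

def pulses_needed(total_treatment_us: float, pulse_width_us: float) -> int:
    """Number of pulses needed to accumulate the target exposure time."""
    return math.ceil(total_treatment_us / pulse_width_us)

def log_reduction(survival_fraction: float) -> float:
    """Microbial log reduction from the surviving fraction of the count."""
    return -math.log10(survival_fraction)

# A 15 kV/cm peak field across a hypothetical 1 cm gap requires 15 kV.
print(peak_field_kv_cm(15.0, 1.0))
# 200 us of total exposure from hypothetical 2 us square pulses -> 100 pulses.
print(pulses_needed(200.0, 2.0))
# A 30% reduction in count (70% survival) is only ~0.15 log10 reduction.
print(round(log_reduction(0.70), 2))
```

Expressing the result as a log reduction makes clear how far the reported 30% kill is from the 5-log targets typical of commercial pasteurization, which motivates the further optimization the authors defer to the final paper.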

Keywords: Escherichia coli bacteria, high voltage generator, microbial inactivation, pulsed electric field, pulsed forming line, solid-state switch

Procedia PDF Downloads 155
238 Walkability with the Use of Mobile Apps

Authors: Dimitra Riza

Abstract:

This paper examines different ways of exploring a city using smartphone applications while walking, and the way this new attitude changes our perception of the urban environment. By referring to various examples of such applications, we consider the options and possibilities that open up with new technologies, their advantages and disadvantages, as well as ways of experiencing and interpreting the urban environment. The widespread use of smartphones has given access to information, maps, knowledge, etc., at all times and in all places. City tourism marketing takes advantage of this and promotes a city's attractions through technology. Mobile-mediated walking tours provide new possibilities and modify the way we used to explore cities, for instance by giving directions to find destinations easily, by displaying our exact location on the map, or by creating our own tours through picking points of interest and interconnecting them into a route. These apps are interactive, as they filter the user's interests, movements, etc. Discovering a city on foot and visiting interesting sites and landmarks has become very easy and has been revolutionized with the help of navigational and other applications. In contrast to the re-invention of the city suggested by Baudelaire's flâneur in the 19th century, or to the construction of situations by the Situationists in the 1960s, the new technological means do not allow people to "get lost", as they follow and record our moves. In the case of strolling or drifting around the city, the option of "getting lost" is desired, as the goal is not wayfinding or the destination but the experience of walking itself. Getting lost is not always about dislocation; it is about getting a feeling for the urban environment, freely, while experiencing it. 
So, on the one hand, walking is considered a physical and embodied experience, as the observer becomes an actor and participates with all his senses in the city's activities. On the other hand, the use of a screen turns out to be a disembodied experience of the urban environment, as we perceive it in a fragmented and distanced way. Relations with the city become similar to those of Alberti's isolated viewer, detached from any urban stage. The smartphone, even when we are present, acts as a mediator: we interact directly with it and indirectly with the environment. Contrary to the flâneur and the Situationists, who discovered the city with their own bodies, today the body itself is detached from that experience. While contemporary cities are becoming more walkable, the new technological applications tend to open up all possibilities for exploring them by suggesting multiple routes. Exploration becomes easier, but perception changes.

Keywords: body, experience, mobile apps, walking

Procedia PDF Downloads 392
237 Federated Knowledge Distillation with Collaborative Model Compression for Privacy-Preserving Distributed Learning

Authors: Shayan Mohajer Hamidi

Abstract:

Federated learning has emerged as a promising approach for distributed model training while preserving data privacy. However, the challenges of communication overhead, limited network resources, and slow convergence hinder its widespread adoption. On the other hand, knowledge distillation has shown great potential in compressing large models into smaller ones without significant loss in performance. In this paper, we propose an innovative framework that combines federated learning and knowledge distillation to address these challenges and enhance the efficiency of distributed learning. Our approach, called Federated Knowledge Distillation (FKD), enables multiple clients in a federated learning setting to collaboratively distill knowledge from a teacher model. By leveraging the collaborative nature of federated learning, FKD aims to improve model compression while maintaining privacy. The proposed framework utilizes a coded teacher model that acts as a reference for distilling knowledge to the client models. To demonstrate the effectiveness of FKD, we conduct extensive experiments on various datasets and models. We compare FKD with baseline federated learning methods and standalone knowledge distillation techniques. The results show that FKD achieves superior model compression, faster convergence, and improved performance compared to traditional federated learning approaches. Furthermore, FKD effectively preserves privacy by ensuring that sensitive data remains on the client devices and only distilled knowledge is shared during the training process. In our experiments, we explore different knowledge transfer methods within the FKD framework, including Fine-Tuning (FT), FitNet, Correlation Congruence (CC), Similarity-Preserving (SP), and Relational Knowledge Distillation (RKD). We analyze the impact of these methods on model compression and convergence speed, shedding light on the trade-offs between size reduction and performance. 
Moreover, we address the challenges of communication efficiency and network resource utilization in federated learning by leveraging the knowledge distillation process. FKD reduces the amount of data transmitted across the network, minimizing communication overhead and improving resource utilization. This makes FKD particularly suitable for resource-constrained environments such as edge computing and IoT devices. The proposed FKD framework opens up new avenues for collaborative and privacy-preserving distributed learning. By combining the strengths of federated learning and knowledge distillation, it offers an efficient solution for model compression and convergence speed enhancement. Future research can explore further extensions and optimizations of FKD, as well as its applications in domains such as healthcare, finance, and smart cities, where privacy and distributed learning are of paramount importance.
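The knowledge transfer at the heart of FKD is typically implemented as a temperature-scaled distillation loss. The pure-Python sketch below shows the classic KL-divergence form on toy logits; it is a generic illustration of knowledge distillation, independent of the paper's specific coded-teacher construction or federated aggregation:

```python
import math

def softmax(logits: list[float], temperature: float = 1.0) -> list[float]:
    """Temperature-scaled softmax over a list of logits (numerically stable)."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits: list[float],
                      teacher_logits: list[float],
                      temperature: float = 2.0) -> float:
    """KL(teacher || student) on softened distributions, scaled by T^2 so
    gradients keep a comparable magnitude across temperatures."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return temperature ** 2 * kl

teacher = [4.0, 1.0, -2.0]
aligned = [3.9, 1.1, -2.0]   # student close to the teacher -> small loss
off = [-2.0, 1.0, 4.0]       # student disagrees -> large loss
print(distillation_loss(aligned, teacher) < distillation_loss(off, teacher))
```

In a federated setting, each client would minimise this loss against the shared teacher's softened outputs on its local data, so only distilled knowledge, not raw data, crosses the network.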

Keywords: federated learning, knowledge distillation, knowledge transfer, deep learning

Procedia PDF Downloads 50
236 Comparison of Iodine Density Quantification through Three Material Decomposition between Philips iQon Dual Layer Spectral CT Scanner and Siemens Somatom Force Dual Source Dual Energy CT Scanner: An in vitro Study

Authors: Jitendra Pratap, Jonathan Sivyer

Abstract:

Introduction: Dual energy/Spectral CT scanning permits simultaneous acquisition of two x-ray spectra datasets and can complement radiological diagnosis by allowing tissue characterisation (e.g., uric acid vs. non-uric acid renal stones), enhancing structures (e.g., boosting the iodine signal to improve contrast resolution), and quantifying substances (e.g., iodine density). However, the latter has shown inconsistent results between the two main modes of dual energy scanning (dual source vs. dual layer). Therefore, the present study aimed to determine which technology is more accurate in quantifying iodine density. Methods: Twenty vials with known concentrations of iodine solutions were made using Optiray 350 contrast media diluted in sterile water. The iodine concentrations ranged from 0.1 mg/ml to 1.0 mg/ml in 0.1 mg/ml increments and from 1.5 mg/ml to 4.5 mg/ml in 0.5 mg/ml increments, followed by further concentrations at 5.0 mg/ml, 7 mg/ml, 10 mg/ml and 15 mg/ml. The vials were scanned using Dual Energy scan mode on a Siemens Somatom Force at 80kV/Sn150kV and 100kV/Sn150kV kilovoltage pairings. The same vials were scanned using Spectral scan mode on a Philips iQon at 120 kVp and 140 kVp. The images were reconstructed at 5 mm thickness and 5 mm increment using the Br40 kernel on the Siemens Force and the B filter on the Philips iQon. Post-processing was performed on vendor-specific software: Siemens Syngo VIA (VB40) for the Dual Energy data and Philips Intellispace Portal (Ver. 12) for the Spectral data. For each vial and scan mode, the iodine concentration was measured by placing an ROI in the coronal plane. Intraclass correlation analysis was performed on both datasets. Results: The iodine concentrations were reproduced with a high degree of accuracy by the Dual Layer CT scanner. Although the Dual Source images showed a greater degree of deviation in measured iodine density for all vials, the dataset acquired at 80kV/Sn150kV had the higher accuracy of the two pairings.
Conclusion: Spectral CT scanning by the dual layer technique has higher accuracy for quantitative measurements of iodine density compared to the dual source technique.
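The abstract reports intraclass correlation without specifying the form. One common choice, the one-way random-effects ICC(1,1), can be sketched as follows; the data layout and function name are assumptions for illustration, not the authors' analysis code:

```python
def icc_oneway(measurements):
    """One-way random-effects ICC(1,1).
    `measurements` is a list of lists: one inner list of k repeated
    iodine readings (mg/ml) per vial."""
    n = len(measurements)       # number of vials
    k = len(measurements[0])    # repeated readings per vial
    grand = sum(sum(m) for m in measurements) / (n * k)
    vial_means = [sum(m) / k for m in measurements]
    # Between-vial and within-vial mean squares
    ms_between = k * sum((vm - grand) ** 2 for vm in vial_means) / (n - 1)
    ms_within = sum((x - vm) ** 2
                    for m, vm in zip(measurements, vial_means)
                    for x in m) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Perfectly reproduced readings give ICC = 1; noise pulls it below 1.
icc = icc_oneway([[1.0, 1.2], [2.0, 1.8], [3.0, 3.1]])
```

Values close to 1 correspond to the "high degree of accuracy" reported for the dual layer scanner.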

Keywords: CT, iodine density, spectral, dual-energy

Procedia PDF Downloads 101
235 Challenges Faced by the Parents of Mentally Challenged Children in India

Authors: Chamaraja Parulli

Abstract:

Family is an important social institution devoted to the growth of a child, and parents are the primary agents of socialization. Mentally challenged children are those affected by intellectual disability, which is manifested by limitations in intellectual functioning and adaptive behavior. Intellectual disability affects about 3-4 percent of the general population and is caused by genetic conditions, problems during pregnancy, problems during childbirth, or illness. Mental retardation is among the world's most complex and challenging issues, and the stigmatization of disability results in social and economic marginalization. Parents of mentally challenged children experience a very high level of parenting stress, significantly more than the stress perceived by parents of children without disability. The prevalence of the severe mental disorder schizophrenia is about 1.1 percent of the total population in India, while the overall lifetime occurrence rate of mental disorders is 11 to 12 percent. While the government has a separate program for mental health, the segment is marred by a lack of adequate doctors and infrastructure. Mentally retarded children have certain limitations in mental functioning and skills, which make them slow learners in speaking, walking, and taking care of personal needs such as dressing and eating. Accepting a child with a mental handicap is difficult for parents and for the whole family, as they face many problems, including those of management, finance, and deprivation of rest and leisure. The problems faced by the parents span different areas: educational, psychological, social, emotional, financial, and family-related issues. The study brought out various difficulties and problems faced by the parents as well as family members.
The findings revealed that mental retardation is not only a medico-psychological problem but also a socio-cultural problem. The results indicate that the quality of life of a family having children with mental retardation can be improved to a great extent by building up a child-friendly ambience at home. The main aim of the present study is to assess the problems faced by the parents of mentally challenged children, using personal interview data collected from parents residing in Shimoga District of Karnataka State, India, who were selected through stratified random sampling. Organizing effective intervention programs for parents, family, society, and educational institutions towards reducing family stress, augmenting the family's strengths, increasing the child's competence, and enhancing the positive attitudes and values of society will go a long way towards the peaceful existence of mentally challenged children.

Keywords: mentally challenged children, intellectual disability, special children, social infrastructure, differently abled, psychological stress, marginalization

Procedia PDF Downloads 93
234 Investigating the Indoor Air Quality of the Respiratory Care Wards

Authors: Yu-Wen Lin, Chin-Sheng Tang, Wan-Yi Chen

Abstract:

Various biological specimens, drugs, and chemicals exist in the hospital, and medical staff and hypersensitive inpatients might be exposed to multiple hazards while they work or stay there. Therefore, the indoor air quality (IAQ) of the hospital deserves particular attention. Respiratory care wards (RCW) care for patients who cannot breathe spontaneously without ventilators, and these patients are easily infected. Previous studies have reported higher bacteria concentrations in RCW than in other hospital units. This research monitored the IAQ of an RCW and checked compliance with the indoor air quality standards of the Taiwan Indoor Air Quality Act. Meanwhile, the influential factors of IAQ and the impacts of ventilator modules (with humidifier or with filter) were investigated. The IAQ of two five-bed wards and one nurse station of an RCW in a regional hospital was monitored. Monitoring proceeded for 16 or 24 hours during each sampling day, with a sampling frequency of 20 minutes per hour, and was performed for two days in a row; in total, the IAQ of the RCW was measured for eight days. The concentrations of carbon dioxide (CO₂), carbon monoxide (CO), particulate matter (PM), nitrogen oxide (NOₓ), and total volatile organic compounds (TVOCs), along with relative humidity (RH) and temperature, were measured by direct-reading instruments. Bioaerosol samples were taken hourly, and the hourly air change rate (ACH) was calculated by measuring the air ventilation volume. Human activities were recorded during the sampling period. A linear mixed model (LMM) was applied to identify the factors affecting IAQ. The concentrations of CO, CO₂, PM, bacteria and fungi exceeded the Taiwan IAQ standards. The major factors affecting the concentrations of CO, PM₁ and PM₂.₅ were location and the number of inpatients.
The significant factors altering the CO₂ and TVOC concentrations were location and the numbers of in-and-out staff and inpatients. The number of in-and-out staff and the level of activity statistically affected the PM₁₀ concentrations, while the level of activity and the numbers of in-and-out staff and inpatients were the significant factors changing the bacteria and fungi concentrations. Different models of the patients' ventilators did not affect the IAQ significantly. The results of the LMM can be utilized to predict pollutant concentrations under various environmental conditions, and this study would be a valuable reference for the air quality management of RCW.
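The hourly air change rate computed in the study follows a simple relation between measured ventilation volume and room volume, which can be sketched as follows; the ward volume and flow rate below are hypothetical examples, not values from the study:

```python
def air_changes_per_hour(ventilation_m3_per_h, room_volume_m3):
    """ACH: how many times the room's air volume is replaced each hour."""
    return ventilation_m3_per_h / room_volume_m3

# e.g. a ward of 150 m^3 supplied with 900 m^3/h of outdoor air
ach = air_changes_per_hour(900.0, 150.0)  # -> 6.0 air changes per hour
```

Higher ACH generally dilutes indoor-generated pollutants such as CO₂ and bioaerosols, which is why ACH appears as a candidate explanatory variable alongside occupancy and activity.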

Keywords: respiratory care ward, indoor air quality, linear mixed model, bioaerosol

Procedia PDF Downloads 88
233 Interaction between Cognitive Control and Language Processing in Non-Fluent Aphasia

Authors: Izabella Szollosi, Klara Marton

Abstract:

Aphasia can be defined as a weakness in accessing linguistic information. Accessing linguistic information is strongly related to information processing, which in turn is associated with the cognitive control system. According to the literature, a deficit in the cognitive control system interferes with language processing and contributes to non-fluent speech performance. The aim of our study was to explore this hypothesis by investigating how cognitive control interacts with language performance in participants with non-fluent aphasia. Cognitive control is a complex construct that includes working memory (WM) and the ability to resist proactive interference (PI). Based on previous research, we hypothesized that impairments in domain-general (DG) cognitive control abilities have negative effects on language processing, whereas better DG cognitive control functioning supports goal-directed behavior in language-related processes as well. Since stroke itself might slow down information processing, it is important to examine its negative effects on both cognitive control and language processing. Participants (N=52) in our study were individuals with non-fluent Broca's aphasia (N=13), individuals with transcortical motor aphasia (N=13), individuals with stroke damage without aphasia (N=13), and unimpaired speakers (N=13). All participants performed various computer-based tasks targeting cognitive control functions such as WM and resistance to PI in both linguistic and non-linguistic domains. Non-linguistic tasks targeted primarily DG functions, while linguistic tasks targeted more domain-specific (DS) processes. The results showed that participants with Broca's aphasia differed from the other three groups in the non-linguistic tasks: they performed significantly worse even in the baseline conditions. In contrast, we found a different performance profile in the linguistic domain, where the control group differed from all three stroke-related groups.
The three impaired groups performed more poorly than the controls but similarly to each other in the verbal baseline condition. In the more complex verbal PI condition, however, participants with Broca's aphasia performed significantly worse than all the other groups. Participants with Broca's aphasia demonstrated the most severe language impairment and the highest vulnerability in tasks measuring DG cognitive control functions. The results support the notion that the more severe the cognitive control impairment, the more severe the aphasia, suggesting a strong interaction between cognitive control and language. Individuals with the most severe and most general cognitive control deficit, the participants with Broca's aphasia, showed the most severe language impairment, while individuals with better DG cognitive control functions demonstrated better language performance. Although all participants with stroke damage showed impaired cognitive control functions in the linguistic domain, participants with better language skills also performed better in tasks that measured non-linguistic cognitive control functions. The overall results indicate that the level of cognitive control deficit interacts with language functions in individuals along the language spectrum (from severe impairment to none). However, future research is needed to determine any directionality.

Keywords: cognitive control, information processing, language performance, non-fluent aphasia

Procedia PDF Downloads 101
232 A Hybrid LES-RANS Approach to Analyse Coupled Heat Transfer and Vortex Structures in Separated and Reattached Turbulent Flows

Authors: C. D. Ellis, H. Xia, X. Chen

Abstract:

Experimental and computational studies investigating heat transfer in separated flows have been of increasing importance over the last 60 years, as efforts are made to understand and improve the efficiency of components such as combustors, turbines, heat exchangers, nuclear reactors and cooling channels. Understanding not only the time-mean heat transfer properties but also the unsteady properties is vital for the design of these components. As computational power increases, more sophisticated methods of modelling these flows become available. A hybrid LES-RANS approach has been applied to a blunt-leading-edge flat plate, utilising a structured grid at a moderate Reynolds number of 20300 based on the plate thickness. In the region close to the wall, the RANS method is implemented for two turbulence models: the one-equation Spalart-Allmaras model and Menter's two-equation SST k-ω model. The LES region occupies the flow away from the wall and is formulated without any explicit subgrid-scale LES modelling. Hybridisation between the two methods is achieved by blending based on the nearest wall distance. The flow was validated by assessing the mean velocity profiles against similar studies. The vortex structures of the flow were identified by utilising the λ2 criterion to locate vortex cores. The qualitative structure of the flow compared well with experiments at similar Reynolds numbers: the shear layer rolls up in 2D and breaks down via the Kelvin-Helmholtz instability. Through this instability the flow progresses into hairpin-like structures, which elongate as they advance downstream. Proper Orthogonal Decomposition (POD) analysis has been performed on the full flow field and on the surface temperature of the plate. As expected, the breakdown of POD modes for the full field revealed a relatively slow decay compared to the surface temperature field.
Both POD fields identified that the most energetic fluctuations occurred in the separated and recirculating region of the flow. Later modes of the surface temperature field showed these fluctuations dominating the time-mean region of maximum heat transfer and flow reattachment. Future work will track the movement of the vortex cores and the location and magnitude of temperature hot spots upon the plate. This information will support the POD and statistical analysis performed, to further identify qualitative relationships between the vortex dynamics and the response of the surface heat transfer.
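The abstract does not write out the POD formulation; assuming the standard method of snapshots, the decomposition applied to a fluctuating field reads:

```latex
% Decompose the fluctuating field into spatial modes with time coefficients
u'(x, t) \approx \sum_{k=1}^{N} a_k(t)\, \phi_k(x)

% Modes follow from the eigenproblem of the snapshot correlation matrix
C_{ij} = \frac{1}{N} \int_{\Omega} u'(x, t_i)\, u'(x, t_j)\, dx,
\qquad C\, \mathbf{a}_k = \lambda_k\, \mathbf{a}_k
```

The eigenvalues λ_k rank the modes by energy content, which is what the relatively slow decay of the full-field spectrum compared to the surface temperature field refers to.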

Keywords: heat transfer, hybrid LES-RANS, separated and reattached flow, vortex dynamics

Procedia PDF Downloads 212
231 Influence of Biochar Application on Growth, Dry Matter Yield and Nutrition of Corn (Zea mays L.) Grown on Sandy Loam Soils of Gujarat, India

Authors: Pravinchandra Patel

Abstract:

Sustainable agriculture in sandy loam soil generally faces large constraints due to low water holding and nutrient retention capacity and accelerated mineralization of soil organic matter, so there is a need to increase soil organic carbon for higher crop productivity and soil sustainability. Recently, biochar has been considered a "sixth element" that works as a catalyst for increasing crop yield, soil fertility, soil sustainability and the mitigation of climate change. Biochar was generated at the Sansoli Farm of Anand Agricultural University, Gujarat, India by pyrolysis at 250-400°C in the absence of oxygen, using a slow chemical process (two kilns), from corn stover (Zea mays L.), cluster bean stover (Cyamopsis tetragonoloba) and Prosopis juliflora wood. There were 16 treatments: four organic sources (three biochars, from corn stover (MS), cluster bean stover (CB) and Prosopis juliflora wood (PJ), plus farmyard manure (FYM)), each at two application rates (5 and 10 metric tons/ha), giving eight organic-source treatments. These eight were applied either with the recommended dose of fertilizers (RDF) (80-40-0 kg/ha N-P-K) or without RDF. Application of corn stover biochar at 10 metric tons/ha along with RDF (RDF+MS) increased dry matter (DM) yield, crude protein (CP) yield, chlorophyll content and plant height (at 30 and 60 days after sowing) more than the CB and PJ biochars and FYM. Nutrient uptake of P, K, Ca, Mg, S and Cu was significantly increased with the application of RDF + corn stover biochar at 10 metric tons/ha, while uptake of N and Mn was significantly increased with RDF + corn stover biochar at 5 metric tons/ha.
It was found that soil application of corn stover biochar at 10 metric tons/ha along with the recommended dose of chemical fertilizers (RDF+MS) exhibited the greatest impact, giving significantly higher dry matter and crude protein yields and larger removal of nutrients from the soil, and it was also beneficial for the build-up of nutrients in the soil. It also showed significantly higher organic carbon content and cation exchange capacity in sandy loam soil. The lower dose of corn stover biochar at 5 metric tons/ha (RDF+MS) remained second best for increasing the dry matter and crude protein yields of the forage corn crop, which ultimately resulted in larger removals of nutrients from the soil. This study highlights the synergistic effect of mixing biochar with the recommended dose of fertilizers on nutrient retention, organic carbon content and water holding capacity in sandy loam soil, and hence the amendment value of biochar in such soils.
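The 16-treatment factorial layout described above can be enumerated directly; a small sketch follows, with labels taken from the abstract's abbreviations (the string formatting itself is illustrative):

```python
from itertools import product

# Four organic sources x two application rates x with/without RDF = 16
sources = ["MS", "CB", "PJ", "FYM"]   # corn stover, cluster bean, Prosopis, farmyard manure
rates = [5, 10]                       # metric tons/ha
fertilizer = ["RDF", "no RDF"]

treatments = [f"{s} @ {r} t/ha ({f})"
              for s, r, f in product(sources, rates, fertilizer)]
# len(treatments) == 16, matching the design reported in the abstract
```

Enumerating the design this way makes the structure explicit: the eight organic-source treatments are the source-by-rate combinations, each crossed with the RDF factor.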

Keywords: biochar, corn yield, plant nutrient, fertility status

Procedia PDF Downloads 121
230 Enhancing Social Well-Being in Older Adults Through Tailored Technology Interventions: A Future Systematic Review

Authors: Rui Lin, Jimmy Xiangji Huang, Gary Spraakman

Abstract:

This forthcoming systematic review will underscore the imperative of leveraging technology to mitigate social isolation in older adults, particularly in the context of unprecedented global challenges such as the COVID-19 pandemic. With the continual evolution of technology, it becomes crucial to scrutinize the efficacy of interventions and discern how they can alleviate social isolation and augment social well-being among the elderly. This review will strive to clarify the best methods for older adults to utilize cost-effective and user-friendly technology and will investigate how the adaptation and execution of such interventions can be fine-tuned to maximize their positive outcomes. The study will follow the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines to filter pertinent studies. We foresee conducting an analysis of articles and executing a narrative analysis to discover themes and indicators related to quality of life, technology use, and well-being. The review will examine how involving older adults at the community level, applying best practices from community-based participatory research, can establish efficient strategies for implementing technology-based interventions designed to diminish social isolation and boost digital-use self-efficacy. Applications based on mobile technology and virtual platforms are set to assume a crucial role not only in enhancing connections within families but also in connecting older adults to vital healthcare resources, fostering both physical and mental well-being. The review will investigate how technological devices and platforms can address the cognitive, visual, and auditory requirements of older adults, thus strengthening their confidence and proficiency in digital use, a crucial factor during enforced social distancing or self-isolation periods during pandemics.
This review will endeavor to provide insights into the multifaceted benefits of technology for older adults, focusing on how tailored technological interventions can be a beacon of social and mental wellness in times of social restrictions. It will contribute to the growing body of knowledge on the intersection of technology and elderly well-being, offering nuanced understandings and practical implications for developing user-centric, effective, and inclusive technological solutions for older populations.

Keywords: older adults, health service delivery, digital health, social isolation, social well-being

Procedia PDF Downloads 36
229 The Implementation of Human Resource Information System in the Public Sector: An Exploratory Study of Perceived Benefits and Challenges

Authors: Aneeqa Suhail, Shabana Naveed

Abstract:

The public sector (in both developed and developing countries) has gone through various waves of radical reforms in recent decades. In Pakistan, under the influence of New Public Management (NPM) reforms, best practices of the private sector are introduced into the public sector to modernize public organizations. The Human Resource Information System (HRIS) has been popular in the private sector and has proven to be a successful system, and therefore it is being adopted in the public sector too. However, the implementation of private business practices in public organizations is very challenging due to differences in context, and it becomes even more critical in Pakistan due to a centralizing tendency and lack of autonomy in public organizations. The adoption of HRIS by public organizations in Pakistan raises several questions: What challenges are faced by public organizations in the implementation of HRIS? Are benefits of HRIS such as efficiency, process integration and cost reduction achieved? How is the previous system improved by this change, and what are the impacts? Yet, it is an under-researched topic, especially in public enterprises. This study contributes to the existing body of knowledge by empirically exploring the benefits and challenges of HRIS implementation in public organizations. The research adopts a case study approach and uses qualitative data based on in-depth interviews conducted at various levels of the hierarchy, including top management, departmental heads and employees. The unit of analysis is LESCO, the Lahore Electric Supply Company, a state-owned entity that generates, transmits and distributes electricity to four big cities in Punjab, Pakistan. The findings of the study show that LESCO has not achieved the benefits of HRIS as established in the literature. The implementation process remained quite slow and costly, various functions of HR are still in isolation, and integration is a big challenge for the organization.
Although the data is automated, the previous system of manual record maintenance and paperwork is still in use, resulting in parallel practices. The findings also identified resistance to change from top management and the labor workforce, lack of commitment and technical knowledge, and costly vendors as major barriers affecting the effective implementation of HRIS. The paper suggests some potential actions to overcome these barriers and enhance the effective implementation of HR technology. The findings are explained in light of an institutional logics perspective: HRIS' new logic of an automated and integrated HR system is in sharp contrast with the prevailing logic of process-oriented manual data maintenance, leading to resistance to change and deadlock.

Keywords: human resource information system, technological changes, state-owned enterprise, implementation challenges

Procedia PDF Downloads 130