Search results for: loss estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5146

1156 Effect of Cooking Time, Seed-To-Water Ratio and Soaking Time on the Proximate Composition and Functional Properties of Tetracarpidium conophorum (Nigerian Walnut) Seeds

Authors: J. O. Idoko, C. N. Michael, T. O. Fasuan

Abstract:

This study investigated the effects of cooking time, seed-to-water ratio and soaking time on the proximate and functional properties of African walnut seed using a Box-Behnken design with Response Surface Methodology (BBD-RSM), with a view to increasing its utilization in the food industry. African walnut seeds were sorted, washed, soaked, cooked, dehulled, sliced, dried and milled. Proximate analysis and functional properties of the samples were evaluated using standard procedures. Data obtained were analyzed using descriptive and inferential statistics. Quadratic models were obtained to predict the proximate and functional qualities as a function of cooking time, seed-to-water ratio and soaking time. The results showed that crude protein ranged between 11.80% and 23.50%, moisture content between 1.00% and 4.66%, ash content between 3.35% and 5.25%, crude fibre from 0.10% to 7.25% and carbohydrate from 1.22% to 29.35%. For the functional properties, soluble protein ranged from 16.26% to 42.96%, viscosity from 23.43 mPa·s to 57 mPa·s, emulsifying capacity from 17.14% to 39.43% and water absorption capacity from 232% to 297%. An increase in the volume of water used during cooking resulted in a loss of water-soluble protein through leaching; the length of soaking time and the moisture content of the dried product were inversely related; ash content was inversely related to the cooking time and amount of water used; fat extraction was enhanced by an increase in soaking time; and increases in cooking and soaking times resulted in a decrease in fibre content. These results indicate that African walnut could be used in several food formulations as a protein supplement and binder.
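For readers unfamiliar with the BBD-RSM workflow, the quadratic models mentioned above amount to an ordinary least-squares fit of a second-order polynomial in the three coded factors. The sketch below is a minimal illustration only; the design points follow the standard three-factor Box-Behnken layout, but the response values are hypothetical, not the study's data.

```python
import numpy as np

def quadratic_design_matrix(X):
    """Second-order RSM model terms for 3 factors: intercept,
    linear, two-factor interaction, and squared terms."""
    x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([
        np.ones(len(X)), x1, x2, x3,
        x1 * x2, x1 * x3, x2 * x3,
        x1**2, x2**2, x3**2,
    ])

# Three-factor Box-Behnken design in coded levels (-1, 0, +1):
# 12 edge midpoints plus 3 center replicates.  Factors stand in for
# cooking time, seed-to-water ratio and soaking time.
X = np.array([[-1, -1, 0], [1, -1, 0], [-1, 1, 0], [1, 1, 0],
              [-1, 0, -1], [1, 0, -1], [-1, 0, 1], [1, 0, 1],
              [0, -1, -1], [0, 1, -1], [0, -1, 1], [0, 1, 1],
              [0, 0, 0], [0, 0, 0], [0, 0, 0]], dtype=float)
# Hypothetical measured response, e.g. crude protein (%).
y = np.array([12.1, 15.3, 14.0, 18.2, 11.8, 16.5, 13.9, 17.4,
              12.5, 16.0, 14.8, 19.1, 21.0, 20.6, 21.3])

coef, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)

def predict(X_new):
    """Evaluate the fitted quadratic response surface."""
    return quadratic_design_matrix(X_new) @ coef
```

The fitted `coef` vector plays the role of the quadratic-model coefficients reported in such studies; optimal factor settings are then found by examining the surface `predict` describes.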

Keywords: African walnut, functional properties, proximate analysis, response surface methodology

Procedia PDF Downloads 376
1155 High Altitude Glacier Surface Mapping in Dhauliganga Basin of Himalayan Environment Using Remote Sensing Technique

Authors: Aayushi Pandey, Manoj Kumar Pandey, Ashutosh Tiwari, Kireet Kumar

Abstract:

Glaciers play an important role in, and are sensitive indicators of, global climate change. Glaciers in the Himalayas are unique as they are predominantly valley-type and are located in tropical, high-altitude regions. These glaciers are often covered with debris, which greatly affects their ablation rate and serves as a sensitive indicator of glacier health. The aim of this study is to map the high-altitude glacier surface, with a focus on glacial lake and debris estimation, using different techniques in Nagling glacier of the Dhauliganga basin in the Himalayan region. Different image classification techniques, i.e., thresholding on different band ratios and supervised classification using the maximum likelihood classifier (MLC), were applied to high-resolution Sentinel-2A Level-1C satellite imagery of 14 October 2017. The Near Infrared (NIR)/Shortwave Infrared (SWIR) ratio image was used to extract the glaciated classes (snow, ice, ice-mixed debris) from the non-glaciated terrain classes. The SWIR/Blue ratio image was used to map valley rock and debris, while the Green/NIR ratio image was found most suitable for mapping glacial lakes. Accuracy assessment was performed against high-resolution (3 m) PlanetScope imagery using 60 stratified random points. The overall accuracy of the MLC was 85%, while that of the band ratios was 96.66%. According to the band ratio technique, the total areal extent of the glaciated classes (snow, ice, IMD) in Nagling glacier was 10.70 km², nearly 38.07% of the study area, comprising 30.87% snow-covered area, 3.93% ice and 3.27% IMD-covered area. Non-glaciated classes (vegetation, glacial lake, debris and valley rock) covered 61.93% of the total area, of which valley rock is dominant with 33.83% coverage, followed by debris covering 27.7% of the area in Nagling glacier. Glacial lakes and debris were accurately mapped using the band ratio technique; hence, the band ratio approach appears to be useful for mapping debris-covered glaciers in the Himalayan region.
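The band-ratio approach described above can be sketched as simple per-pixel ratio thresholding. The snippet below is a toy illustration of the idea; the threshold values and the three synthetic pixels are placeholders, not the values tuned in the study.

```python
import numpy as np

def classify_glacier(nir, swir, blue, green,
                     t_glacier=2.0, t_rock=1.5, t_lake=1.2):
    """Toy band-ratio classifier.  NIR/SWIR separates glaciated
    terrain (snow, ice, ice-mixed debris), SWIR/Blue highlights
    valley rock and debris, Green/NIR highlights glacial lakes.
    Thresholds here are illustrative placeholders."""
    eps = 1e-9                                 # guard against /0
    glaciated = nir / (swir + eps) > t_glacier
    rock = (swir / (blue + eps) > t_rock) & ~glaciated
    lake = (green / (nir + eps) > t_lake) & ~glaciated & ~rock
    cls = np.zeros(nir.shape, dtype=np.uint8)  # 0 = other terrain
    cls[glaciated] = 1
    cls[rock] = 2
    cls[lake] = 3
    return cls

# Three synthetic pixels: glacier-like, rock-like, water-like.
nir   = np.array([3.0, 1.0, 1.0])
swir  = np.array([1.0, 3.0, 1.0])
blue  = np.array([1.0, 1.0, 2.0])
green = np.array([1.0, 1.0, 2.0])
labels = classify_glacier(nir, swir, blue, green)
```

Areal extents such as those reported in the abstract then follow from counting pixels per class and multiplying by the pixel ground area.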

Keywords: band ratio, Dhauliganga basin, glacier mapping, Himalayan region, maximum likelihood classifier (MLC), Sentinel-2 satellite image

Procedia PDF Downloads 212
1154 War Heritage: Different Perceptions of the Dominant Discourse among Visitors to the “Adem Jashari” Memorial Complex in Prekaz

Authors: Zana Llonçari Osmani, Nita Llonçari

Abstract:

In Kosovo, public rhetoric and popular sentiment position the War of 1998-99 as central to the formation of contemporary Kosovo's national identity. This period was marked by the forced mass displacement of Kosovo Albanians, the destruction of entire settlements, the loss of family members, and profound emotional trauma, experienced both by civilians and by those who actively participated in the war as members of the Kosovo Liberation Army (KLA). Amidst these profound experiences, the Prekaz Massacre is widely regarded as the defining event that preceded the final struggles of 1999 and the long-awaited attainment of independence. This study aims to explore how different visitors perceive the dominant discourse at the "Adem Jashari" Memorial Complex, a site dedicated to commemorating the Prekaz Massacre, and to identify the factors that influence their perceptions. The research employs a comprehensive mixed-methods approach, combining online surveys, critical discourse analysis of visitor impressions, and content analysis of media representations. The findings highlight the significant role played by original material remains in shaping visitor perceptions of the memorial, in comparison to the curated symbols and figurative representations interspersed throughout the landscape. While the design elements and physical layout of the memorial undeniably hold significance in conveying the memoryscape, there are notable shortcomings in enhancing the overall visitor experience. Visitors are still primarily influenced by the tangible remnants of the war, suggesting that there is room for improvement in how design elements can contribute to the memorial's narrative and the collective memory of the Prekaz Massacre.

Keywords: critical discourse analysis, memorialisation, national discourse, public rhetoric, war tourism

Procedia PDF Downloads 67
1153 Design and Implementation of Smart Watch Textile Antenna for Wi-Fi Bio-Medical Applications in Millimetric Wave Band

Authors: M. G. Ghanem, A. M. M. A. Allam, Diaa E. Fawzy, Mehmet Faruk Cengiz

Abstract:

This paper is devoted to the design and implementation of a smartwatch textile antenna for Wi-Fi bio-medical applications in the millimetric wave band. The antenna is implemented on a leather textile-based substrate to be embedded in a smartwatch. It enables the watch to pick up Wi-Fi signals without needing to be connected to a mobile phone through Bluetooth. It operates in the 60 GHz or WiGig (Wireless Gigabit Alliance) band, with a wide bandwidth for higher-rate applications. It could also be placed over stratified layers of body tissue for use in the diagnosis of diseases such as diabetes and cancer. The structure is designed and simulated using the CST Studio Suite program. The wearable patch antenna has an octagonal shape and is implemented on leather, which acts as a flexible substrate with a size of 5.632 x 6.4 x 2 mm³, a relative permittivity of 2.95, and a loss tangent of 0.006. The feeding is carried out using a differential feed (a discrete port in CST). The work provides five antenna implementations: an antenna without a ground; an antenna with a ground added at the back to increase the gain; an antenna with substrate dimensions increased to 15 x 30 mm² to resemble a real watch size; an antenna with layers of skin and fat added under the ground to study the effect of human body tissues on antenna performance; and, finally, the whole structure bent. It is found that the antenna achieves a simulated peak realized gain of 5.68, 7.28, 6.15, 3.03, and 4.37 dB for the antenna without ground, with ground, with larger substrate dimensions, with skin and fat, and the bent structure, respectively. The antenna with ground exhibits high gain, while adding the human tissue layers degrades the gain because of tissue absorption. The bent structure contributes to a higher gain.

Keywords: bio medical engineering, millimetric wave, smart watch, textile antennas, Wi-Fi

Procedia PDF Downloads 99
1152 How Message Framing and Temporal Distance Affect Word of Mouth

Authors: Camille Lacan, Pierre Desmet

Abstract:

In the crowdfunding model, a campaign succeeds by collecting the required funds over a predefined duration. The success of a crowdfunding campaign depends both on its capacity to attract members of the online communities concerned and on the community members’ involvement in online word-of-mouth recommendations. To maximize the campaign's probability of success, project creators (i.e., organizations appealing for financial resources) send messages asking contributors to issue word of mouth. Internet users relay information about projects through word of mouth, defined as “a critical tool for facilitating information diffusion throughout online communities”. The effectiveness of these messages depends on the message framing and on the time at which they are sent to contributors (i.e., at the start of the campaign or close to the deadline). This article addresses the following question: what are the effects of message framing and temporal distance on the willingness to share word of mouth? Drawing on Prospect Theory and Construal Level Theory, this study examines the interplay between message framing (gains vs. losses) and temporal distance (message sent when the deadline is near vs. far) on the intention to share word of mouth. A between-subjects experimental design was conducted to test the research model. Results show significant differences between a loss-framed message (lack of benefits if the campaign fails) associated with a short deadline (ending tomorrow) and a gain-framed message (benefits if the campaign succeeds) associated with a distant deadline (ending in three months). However, this effect is moderated by the anticipated regret of a campaign failure and by temporal orientation. These moderating effects help specify the boundary conditions of the framing effect. Handling the message framing and the temporal distance are thus key decisions for influencing the willingness to share word of mouth.

Keywords: construal levels, crowdfunding, message framing, word of mouth

Procedia PDF Downloads 237
1151 Optimal Pricing Based on Real Estate Demand Data

Authors: Vanessa Kummer, Maik Meusel

Abstract:

Real estate demand estimates are typically derived from transaction data. However, in regions with excess demand, transactions are driven by supply and therefore do not indicate what people are actually looking for. To estimate the demand for housing in Switzerland, search subscriptions from all important Swiss real estate platforms are used. These data do, however, suffer from missing information; for example, many users do not specify how many rooms they would like or what price they would be willing to pay. Economic analyses often use only complete records. Usually, however, the proportion of complete data is rather small, which means most of the available information is neglected; moreover, the complete records may themselves be a strongly distorted sample. In addition, the reason that data is missing might itself contain information, which that approach ignores. An interesting question is, therefore, whether for economic analyses such as this one there is added value in using the whole data set with imputed missing values compared to using the usually small share of complete data (the baseline), and how different algorithms affect that result. The imputation of the missing data is done using unsupervised learning. Of the numerous unsupervised learning approaches, the most common ones, such as clustering, principal component analysis, and neural network techniques, are applied. By training the model iteratively on the imputed data and thereby incorporating the information of all data into the model, the distortion of the first training set, the complete data, vanishes. In a next step, the performance of the algorithms is measured. This is done by randomly creating missing values in subsets of the data, estimating those values with the relevant algorithms and several parameter combinations, and comparing the estimates to the actual data. After having found the optimal parameter set for each algorithm, the missing values are imputed. Using the resulting data sets, the next step is to estimate the willingness to pay for real estate. This is done by fitting price distributions for real estate properties with certain characteristics, such as the region or the number of rooms. Based on these distributions, survival functions are computed to obtain the functional relationship between characteristics and selling probabilities. Comparing the survival functions shows that estimates based on imputed data sets do not differ significantly from each other; however, the demand estimate derived from the baseline data does. This indicates that the baseline data set does not include all available information and is therefore not representative of the entire sample. Demand estimates derived from the whole data set are also much more accurate than the baseline estimation. Thus, in order to obtain optimal results, it is important to make use of all available data, even though this involves additional procedures such as data imputation.
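The mask-and-compare evaluation protocol described above can be sketched in a few lines: hide known entries at random, impute them, and score the imputations against the hidden truth. This is a minimal illustration with a baseline column-mean imputer and synthetic data; the feature names and all numbers are hypothetical, and the study's actual imputers (clustering, PCA, neural networks) would slot in where `mean_impute` is used.

```python
import numpy as np

def mask_at_random(X, frac, rng):
    """Hide a random fraction of entries to create an evaluation set."""
    M = rng.random(X.shape) < frac
    X_masked = X.copy()
    X_masked[M] = np.nan
    return X_masked, M

def mean_impute(X_masked):
    """Baseline imputer: replace each NaN with its column mean."""
    col_means = np.nanmean(X_masked, axis=0)
    return np.where(np.isnan(X_masked), col_means, X_masked)

rng = np.random.default_rng(0)
# Hypothetical search-subscription features: rooms, price, area.
X = rng.normal(loc=[3.0, 2000.0, 80.0], scale=[1.0, 500.0, 20.0],
               size=(200, 3))

X_masked, M = mask_at_random(X, frac=0.2, rng=rng)
X_hat = mean_impute(X_masked)
# Score only the artificially hidden entries against the truth.
rmse = np.sqrt(np.mean((X_hat[M] - X[M]) ** 2))
```

Running this over several imputers and parameter combinations, and keeping the one with the lowest error, mirrors the parameter-selection step the abstract describes.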

Keywords: demand estimate, missing-data imputation, real estate, unsupervised learning

Procedia PDF Downloads 271
1150 A Comparative Study of Sampling-Based Uncertainty Propagation with First Order Error Analysis and Percentile-Based Optimization

Authors: M. Gulam Kibria, Shourav Ahmed, Kais Zaman

Abstract:

In system analysis, uncertainty in the input variables causes uncertainty in the system responses. Different probabilistic approaches for uncertainty representation and propagation in such cases exist in the literature, and different representation approaches result in different outputs; some may estimate the system response better than others. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge (MUQC) has posed challenges about uncertainty quantification. Subproblem A of the challenge, the uncertainty characterization subproblem, is addressed in this study. In this subproblem, the challenge is to gather knowledge about unknown model inputs, which carry inherent aleatory and epistemic uncertainties, from the responses (outputs) of a given computational model. We approach the problem with two methodologies: in the first, we use sampling-based uncertainty propagation with first-order error analysis; in the other, we place emphasis on Percentile-Based Optimization (PBO). The NASA Langley MUQC's subproblem A is developed in such a way that both aleatory and epistemic uncertainties need to be managed. The challenge problem classifies each uncertain parameter as belonging to one of the following three types: (i) an aleatory uncertainty modeled as a random variable with a fixed functional form and known coefficients; this uncertainty cannot be reduced; (ii) an epistemic uncertainty modeled as a fixed but poorly known physical quantity that lies within a given interval; this uncertainty is reducible; (iii) a parameter that may be aleatory but for which sufficient data are not available to model it adequately as a single random variable; for example, the parameters of a normal variable, e.g., the mean and standard deviation, might not be precisely known but could be assumed to lie within some intervals. This results in a distributional p-box: the physical parameter carries an aleatory uncertainty, but the parameters prescribing its mathematical model are subject to epistemic uncertainties, each being an unknown element of a known interval; this uncertainty is reducible. The study observes that, due to practical limitations or computational expense, sampling is not exhaustive in the sampling-based methodology, which therefore has a high probability of underestimating the output bounds. An optimization-based strategy to convert uncertainty described by interval data into a probabilistic framework is thus necessary; this is achieved in this study by using PBO.
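A distributional p-box is commonly propagated by nested (double-loop) sampling: an outer loop samples the epistemic parameters from their intervals, an inner loop samples the aleatory variable, and the spread of a response percentile across outer samples gives its bounds. The sketch below illustrates this under wholly hypothetical assumptions (the model function, intervals, and distribution are invented, not the challenge problem's); it also shows why finite outer sampling tends to underestimate the true envelope, the limitation the abstract attributes to the sampling-based approach.

```python
import numpy as np

def response(a, e):
    """Hypothetical model: a is the aleatory input, e an epistemic
    parameter.  Stands in for the challenge's black-box model."""
    return e * a + a**2

rng = np.random.default_rng(1)

# Aleatory input: a ~ Normal(mu, 1) with poorly known mu in [0, 1]
# (a distributional p-box); separate epistemic parameter e in [0.5, 1.5].
percentiles = []
for _ in range(200):                        # outer loop: epistemic draws
    mu = rng.uniform(0.0, 1.0)
    e = rng.uniform(0.5, 1.5)
    a = rng.normal(mu, 1.0, size=1000)      # inner loop: aleatory draws
    percentiles.append(np.percentile(response(a, e), 95))

# Sampling-based bounds on the 95th percentile of the response.
lower, upper = min(percentiles), max(percentiles)
```

Because only 200 epistemic points are visited, the extremes of the interval box are likely missed, so `[lower, upper]` under-covers the true bounds; PBO instead searches the epistemic box directly for the percentile's extrema.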

Keywords: aleatory uncertainty, epistemic uncertainty, first order error analysis, uncertainty quantification, percentile-based optimization

Procedia PDF Downloads 224
1149 The Role of Businesses in Peacebuilding in Nigeria: A Stakeholder Approach

Authors: Jamila Mohammed Makarfi, Yontem Sonmez

Abstract:

Developing countries like Nigeria have recently been affected by conflicts characterized by violence and high levels of risk and insecurity, resulting in loss of lives and livelihoods, displacement of communities, degradation of health, educational and social infrastructure, and economic underdevelopment. The Nigerian government’s response to most of these conflicts has mainly been reactionary, in the form of military deployments, rather than precautionary measures to prevent or address the root causes of the conflicts. Several studies have shown that at various points in a conflict, conflict regions can benefit from resources and expertise available outside the government, mainly from the private sector through mechanisms such as corporate social responsibility (CSR) by businesses. The main aim of this study is to examine the role of businesses in peacebuilding in Northern Nigeria through CSR over the last decade. Its expected contributions will answer research questions such as the key motivations for businesses to engage in peacebuilding, as well as the degree of influence that various stakeholder groups exert on the decision to engage. The study adopts a multiple case study methodology covering over 120 businesses of various sizes, from small and medium to large-scale. A mixed-methods approach enabled the collection of quantitative and qualitative primary data to augment the secondary data. The results indicate that the most important motivations for businesses to engage in peacebuilding were the negative effects of the conflict on economic stability, as well as stakeholder-driven motives. On the other hand, of the 12 identified stakeholders, micro, small and medium-scale enterprises (MSMEs) considered the chief executive officer’s interest the most important factor, while large companies rated government and community pressure highest. Overall, foreign stakeholders scored low on the influence chart for all business types.

Keywords: conflict, corporate social responsibility, peacebuilding, stakeholder

Procedia PDF Downloads 198
1148 Potential Use of Thymus mastichina L. Extract as a Natural Agent against Cheese Spoilage Microorganisms

Authors: Susana P. Dias, Andrea Gomes, Fernanda M. Ferreira, Marta F. Henriques

Abstract:

Thymus mastichina L. is an endogenous medicinal and aromatic plant of the Mediterranean flora. It has been used empirically over the years as a natural preservative in food, and the antimicrobial activity of its bioactive compounds, such as essential oils and extracts, is now well recognized. The main purpose of this study was to evaluate the antimicrobial effect of Thymus mastichina ethanolic and aqueous extracts on pathogens and spoilage microorganisms present in cheese during ripening. The effect of extract type and concentration on the development of Staphylococcus aureus, Escherichia coli, and Yarrowia lipolytica populations over 24 hours was studied in vitro using appropriate culture media. The results evidenced the antimicrobial activity of T. mastichina extracts against the studied strains, and a concentration of 2 mg/mL (w/v) was selected and applied directly to the cheese surface during ripening. In addition to the microbiological evaluation in terms of total aerobic bacteria, Enterobacteriaceae, yeasts (particularly Y. lipolytica) and molds, a physicochemical evaluation (moisture, water activity (aw), pH, colour, and texture) of the treated cheeses was also performed. The results were compared with cheeses treated with natamycin (positive control) and without any treatment (negative control). The physicochemical evaluation showed that cheeses treated with the ethanolic extract of Thymus mastichina did not present considerable differences compared to the controls, apart from a faster water loss during ripening. The study revealed an evident antimicrobial power of the extracts, although less effective than natamycin. For this reason, improving the extraction methods and adjusting the extract concentrations will contribute to the use of T. mastichina as a healthier, eco-friendly alternative to natamycin that is also more attractive from an economic point of view.

Keywords: antimicrobial activity, cheese, ethanolic extract, Thymus mastichina

Procedia PDF Downloads 164
1147 Detailed Analysis of Multi-Mode Optical Fiber Infrastructures for Data Centers

Authors: Matej Komanec, Jan Bohata, Stanislav Zvanovec, Tomas Nemecek, Jan Broucek, Josef Beran

Abstract:

With the exponential growth of social networks, video streaming, and increasing demands on data rates, the number of newly built data centers rises proportionately. The data centers, however, have to adjust to the rapidly increasing amount of data that has to be processed. For this purpose, multi-mode (MM) fiber-based infrastructures are often employed. This stems from the fact that connections in data centers are typically realized over short distances, and the use of MM fibers and components considerably reduces costs. On the other hand, the use of MM components brings specific requirements for installation and service conditions. Moreover, it has to be taken into account that MM fiber components have higher production tolerances for parameters like core and cladding diameters, eccentricity, etc. Due to the high demands on the reliability of data center components, determining a properly excited optical field inside the MM fiber core is one of the key steps when designing such an MM optical system architecture. An appropriately excited mode field of the MM fiber provides an optimal power budget in connections, decreases insertion loss (IL), and achieves effective modal bandwidth (EMB). The main parameter in this case is the encircled flux (EF), which should be properly defined for variable optical sources and the consequent differences in mode-field distribution. In this paper, we present a detailed investigation and measurements of the mode-field distribution for short MM links intended particularly for data centers, with emphasis on reliability and safety. These measurements are essential for the design of large MM networks. Various scenarios, containing different fibers and connectors, were tested in terms of IL and mode-field distribution to reveal potential challenges. Furthermore, particular defects and errors that can realistically occur, such as eccentricity, connector shift, or dust, were simulated and measured, and their effect on EF statistics and on the functionality of the data center infrastructure was evaluated. The experimental tests were performed at the two wavelengths commonly used in MM networks, 850 nm and 1310 nm, to verify the EF statistics. Finally, we provide recommendations for data center systems and networks using OM3 and OM4 MM fiber connections.
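For orientation, encircled flux at a given radius is the fraction of total optical power falling within that radius of the centroid of the near-field intensity image at the fiber end face. The sketch below computes EF from a synthetic Gaussian-like spot; the image, spot width, and pixel scale are hypothetical stand-ins for a measured near-field image, not the paper's measurement setup.

```python
import numpy as np

def encircled_flux(intensity, radius_um, px_um):
    """Fraction of total power within radius_um of the
    intensity-weighted centroid of a near-field image.
    px_um is the pixel pitch in micrometers."""
    total = intensity.sum()
    ys, xs = np.indices(intensity.shape)
    cy = (ys * intensity).sum() / total      # centroid row
    cx = (xs * intensity).sum() / total      # centroid column
    r = np.hypot(ys - cy, xs - cx) * px_um   # radial distance map
    return intensity[r <= radius_um].sum() / total

# Hypothetical Gaussian-like near-field spot, 1 um pixels,
# roughly matching a 50 um MM core scale.
ys, xs = np.indices((101, 101))
I = np.exp(-((ys - 50) ** 2 + (xs - 50) ** 2) / (2 * 10.0 ** 2))
ef_at_15um = encircled_flux(I, radius_um=15.0, px_um=1.0)
```

Launch-condition templates specify EF bounds at several radii; a measured source is compliant when its EF curve stays inside those bounds, which is why defects like connector shift or dust show up directly in the EF statistics.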

Keywords: optical fiber, multi-mode, data centers, encircled flux

Procedia PDF Downloads 363
1146 Translation of the Verbal Nouns (Masadars) Originating from Three-Letter Verbs in the Holy Quran: Verbal Noun with More than One Pattern (Wazn) As a Model

Authors: Montasser Mohamed Abdelwahab Mahmoud, Abdelwahab Saber Esawi

Abstract:

The language of the Qur’an admits a wide range of understanding, reflection, and meanings. Therefore, a translation of the Qur’an is inevitably a translation of an interpretation of its meanings. It requires special competencies and skills of translators so that they can approach the intended meaning of a Qur’anic verse and convey it with precision. In the Arabic language, the verbal noun (al-masdar) is a very important derivative that expresses the verbal idea in the form of a noun. It sounds the same as the base form of the verb, with minor changes in the vowel pattern, and is one of the important topics in morphology. Morphologists divide verbal nouns into auditory and analogical, and state that the verbal nouns (masadars) originating from three-letter verbs are auditory, although they set controls for some of them in order to preserve them. Lexicographers, for their part, mention verbal nouns when discussing lexical material, and in some cases their explanations exceed those of the morphologists, especially in their discussion of structures that the morphologists did not treat in their books. The verb kafara (to disbelieve), for example, has three patterns, namely al-kufr, al-kufrān, and al-kufūr, and is mentioned in the Holy Qur’an with different connotations. The verb ṣāma (to fast), with its two patterns (al-ṣaūm and al-ṣīām), is mentioned in the Holy Qur’an with different semantic meanings. The problem discussed in this research paper lies in the “linguistic loss” committed by translators when dealing with Islamic religious texts, especially the Qur’an. The study identifies the strategies adopted by translators of the Holy Qur’an in rendering words classified as verbal nouns, by analyzing five translations of the Qur’an into English: those of Yusuf Ali, Pickthall, Mohsin Khan, Muhammad Sarwar, and Shakir. The study is limited to verbal nouns in the Qur’an that originate from three-letter verbs and have different semantic meanings.

Keywords: pattern, three-letter verbs, translation of the Quran, verbal nouns

Procedia PDF Downloads 142
1145 Strategy and Mechanism for Intercepting Unpredictable Moving Targets in the Blue-Tailed Damselfly (Ischnura elegans)

Authors: Ziv Kassner, Gal Ribak

Abstract:

Members of the order Odonata (dragonflies and damselflies) stand out for their maneuverability and superb flight control, which allow them to catch flying prey in the air. These outstanding aerial abilities were fine-tuned during millions of years of an evolutionary arms race between Odonata and their prey, providing an attractive research model for studying the relationship between sensory input and aerodynamic output in a flying insect. The ability to catch a maneuvering target in the air is interesting not just for insect behavioral ecology and neuroethology but also for designing small and efficient robotic air vehicles. While aerial prey interception by dragonflies (suborder Anisoptera) has been studied before, little is known about how damselflies (suborder Zygoptera) intercept prey. Here, high-speed cameras (filming at 1000 frames per second) were used to explore how damselflies catch unpredictable targets moving through the air. Blue-tailed damselflies, Ischnura elegans (family Coenagrionidae), were introduced to a flight arena and filmed while landing on targets that were oscillated harmonically. The insects succeeded in capturing targets moved with an amplitude of 6 cm at frequencies of 0-2.5 Hz (fastest mean target speed of 0.3 m s⁻¹), as well as targets moved at 1 Hz (an average speed of 0.3 m s⁻¹) but with an amplitude of 15 cm. To land on stationary or slow targets, damselflies either flew directly to the target or flew sideways up to a point at which the target was fixed in the center of the field of view, followed by a direct flight path towards the target. As the target frequency increased, damselflies demonstrated an ability to track the targets while flying sideways and minimizing changes of their body direction about the yaw axis. This was likely an attempt to keep the targets at the center of the visual field while minimizing rotational optic flow of the surrounding visual panorama; stabilizing rotational optic flow helps in estimating the velocity and distance of the target. These results illustrate how dynamic visual information is used by damselflies to guide them towards a maneuvering target, enabling the superb aerial hunting abilities of these insects. They also exemplify the plasticity of the damselfly flight apparatus, which enables flight in any direction, irrespective of the direction of the body.

Keywords: bio-mechanics, insect flight, target fixation, tracking and interception

Procedia PDF Downloads 138
1144 Impact of Interface Soil Layer on Groundwater Aquifer Behaviour

Authors: Hayder H. Kareem, Shunqi Pan

Abstract:

The geological environment from which groundwater is collected is the most important element affecting the behaviour of a groundwater aquifer. As groundwater is a vital resource worldwide, the parameters affecting this source must be known accurately so that conceptualized mathematical models are acceptable over the broadest ranges. Groundwater models have therefore recently become an effective and efficient tool for investigating groundwater aquifer behaviour. A groundwater aquifer may contain aquitards, aquicludes, or interfaces within its geological formations. Aquitards and aquicludes have geological formations that force modellers to include them within conceptualized groundwater models, while interfaces are commonly neglected from the conceptualization process because modellers believe an interface has no effect on aquifer behaviour. The current research highlights the impact of an interface existing in a real unconfined groundwater aquifer called Dibdibba, located in Al-Najaf City, Iraq, through whose eastern part the Euphrates River passes. The Dibdibba groundwater aquifer consists of two types of soil layers separated by an interface soil layer. A groundwater model was built for Al-Najaf City to explore the impact of this interface, and calibration was performed using the PEST (Parameter ESTimation) approach to obtain the best Dibdibba groundwater model. When the soil interface is conceptualized, the results show that the groundwater tables are significantly affected by the interface: dry areas of 56.24 km² and 6.16 km² appear in the upper and lower layers of the aquifer, respectively, and the Euphrates River leaks 7359 m³/day of water into the groundwater aquifer. When the soil interface is neglected, these results change: the dry area becomes 0.16 km² and the Euphrates River leakage becomes 6334 m³/day. In addition, the conceptualized models (with and without the interface) respond differently to changes in the recharge rates applied to the aquifer in the uncertainty analysis test. The Dibdibba aquifer in Al-Najaf City shows a slight deficit in the amount of water supplied under the current pumping scheme, and the Euphrates River suffers from the stresses applied to the aquifer. Ultimately, this study shows a crucial need to represent the interface soil layer in model conceptualization so that current and predicted aquifer behaviours are more reliable for planning purposes.

Keywords: Al-Najaf City, groundwater aquifer behaviour, groundwater modelling, interface soil layer, Visual MODFLOW

Procedia PDF Downloads 174
1143 Evaluate Effects of Different Curing Methods on Compressive Strength, Modulus of Elasticity and Durability of Concrete

Authors: Dhara Shah, Chandrakant Shah

Abstract:

The construction industry uses a great deal of water in the name of curing. Looking at the present scenario, the day is not far off when the industry will have to switch to alternative self-curing systems, not only to save water for sustainable development of the environment but also to enable indoor and outdoor construction activities even in water-scarce areas. At the same time, curing is essential for the development of proper strength and durability. IS 456-2000 recommends a curing period of 7 days for ordinary Portland cement concrete, and 10 to 14 days for concrete prepared using mineral admixtures or blended cements. But, being the last act in the concreting operations, curing is often neglected or not fully done. Consequently, the quality of the hardened concrete suffers, all the more so if the freshly laid concrete is exposed to environmental conditions of low humidity, high wind velocity, and high ambient temperature. To avoid the adverse effects of neglected or insufficient curing, which is considered a universal phenomenon, concrete technologists and research scientists have developed curing compounds. Concrete is said to be self-cured if it is able to retain its water content to carry out the chemical reactions needed for strength development. Curing compounds are liquids that are either incorporated into concrete or sprayed directly onto concrete surfaces, where they dry to form a relatively impermeable membrane that retards the loss of moisture from the concrete. They are an efficient and cost-effective means of curing concrete and may be applied to freshly placed concrete or to concrete that has been partially cured by some other means. However, they may affect the bond between the concrete and subsequent surface treatments. Special care in the choice of a suitable compound needs to be exercised in such circumstances.
Curing compounds are generally formulated from wax emulsions, chlorinated rubbers, synthetic and natural resins, and from PVA emulsions. Their effectiveness varies quite widely, depending on the material and strength of the emulsion.

Keywords: curing methods, self-curing compound, compressive strength, modulus of elasticity, durability

Procedia PDF Downloads 314
1142 Supportive Group Therapy: Its Effects on Depression, Self-Esteem and Quality of Life Among Institutionalized Elderly

Authors: Hannah Patricia S., Louise Margarrette R., Josking Oliver L., Denisse Katrina C., Justine Kali O.

Abstract:

Aims: In the Philippines, there has been an astronomical increase in the population of elderly people sent to nursing home facilities, which has been shown to induce despair and loss of self-worth. Nurses in institutionalized facilities generally care for the elderly. Although supportive group therapy has been explored to mend this psychological disparity, nursing research has published little about it in the institutionalized setting. Hence, the study determined the effectiveness of supportive group therapy on depression, self-esteem, and quality of life among institutionalized elderly. Methodology: A one-group pre-test-post-test design was conducted among 20 purposively selected institutionalized elderly after Ethics Research Board approval. All eligible participants underwent the supportive group therapy after being subdivided into session groups. The Geriatric Depression Scale (Cronbach's alpha coefficient of 0.90), the Rosenberg Self-Esteem Scale (Cronbach's alpha coefficient of 0.84), and the Older People Quality of Life questionnaire (Cronbach's alpha coefficient of 0.88) were utilized to measure depression, self-esteem, and quality of life, respectively. Descriptive statistics and Repeated Measures Multivariate Analysis of Variance (RM-MANOVA) were used to analyze the gathered data. Results: The supportive group therapy significantly decreased post-test depression scores (F(1,19) = 78.69, p = 0.0001, partial η² = 0.805), significantly improved post-test self-esteem scores (F(1,19) = 28.07, p = 0.0001, partial η² = 0.596), and significantly increased post-test quality of life (F(1,19) = 79.73, p = 0.0001, partial η² = 0.808) after the intervention was rendered. Conclusion: Supportive group therapy is effective in alleviating depression and in improving self-esteem and quality of life among institutionalized elderly and can be utilized by nursing homes as an intervention to improve the overall psychosocial status of elderly patients.
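The reported effect sizes can be recovered from the F statistics and their degrees of freedom via the standard identity partial η² = F·df_effect / (F·df_effect + df_error). A short check against the values in the abstract:

```python
def partial_eta_squared(f, df_effect, df_error):
    """Partial eta-squared recovered from an F statistic and its
    degrees of freedom: F*df1 / (F*df1 + df2)."""
    return (f * df_effect) / (f * df_effect + df_error)

# Values reported in the abstract, all with df = (1, 19):
depression = partial_eta_squared(78.69, 1, 19)       # ~0.805
self_esteem = partial_eta_squared(28.07, 1, 19)      # ~0.596
quality_of_life = partial_eta_squared(79.73, 1, 19)  # ~0.808
```

All three recomputed values agree with the reported partial η² figures to three decimal places.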

Keywords: supportive group therapy, institutionalized elderly, depression, self-esteem, quality of life

Procedia PDF Downloads 413
1141 Towards End-To-End Disease Prediction from Raw Metagenomic Data

Authors: Maxence Queyrel, Edi Prifti, Alexandre Templier, Jean-Daniel Zucker

Abstract:

Analysis of the human microbiome using metagenomic sequencing data has demonstrated a high ability to discriminate various human diseases. Raw metagenomic sequencing data require multiple complex and computationally heavy bioinformatics steps prior to data analysis. Such data contain millions of short sequence reads from the fragmented DNA, stored as fastq files. Conventional processing pipelines consist of multiple steps, including quality control, filtering, and alignment of sequences against genomic catalogs (genes, species, taxonomic levels, functional pathways, etc.). These pipelines are complex to use, time-consuming, and rely on a large number of parameters that often introduce variability and impact the estimation of the microbiome elements. Training deep neural networks directly on raw sequencing data is a promising approach to bypass some of the challenges associated with mainstream bioinformatics pipelines. Most of these methods use the concept of word and sentence embeddings to create a meaningful numerical representation of DNA sequences while extracting features and reducing the dimensionality of the data. In this paper, we present an end-to-end approach, metagenome2vec, that classifies patients into disease groups directly from raw metagenomic reads. This approach is composed of four steps: (i) generating a vocabulary of k-mers and learning their numerical embeddings; (ii) learning DNA sequence (read) embeddings; (iii) identifying the genome from which a sequence is most likely to come; and (iv) training a multiple-instance-learning classifier that predicts the phenotype based on the vector representation of the raw data. An attention mechanism is applied in the network so that the model can be interpreted, assigning a weight to each genome's influence on the prediction.
Using two public real-life datasets as well as a simulated one, we demonstrate that this original approach reaches high performance, comparable with state-of-the-art methods applied directly to data processed through mainstream bioinformatics workflows. These results are encouraging for this proof-of-concept work. We believe that, with further dedication, DNN models have the potential to surpass mainstream bioinformatics workflows in disease classification tasks.
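Step (i) of the pipeline, turning reads into a k-mer vocabulary, can be sketched as follows. This is a simplified illustration of the idea; the function names are ours, not from the metagenome2vec code:

```python
def kmerize(read, k=4):
    """Split a DNA read into overlapping k-mers, the 'words'
    whose embeddings are learned in step (i)."""
    return [read[i:i + k] for i in range(len(read) - k + 1)]

def build_vocab(reads, k=4):
    """Assign an integer id to every distinct k-mer seen in the reads."""
    vocab = {}
    for read in reads:
        for kmer in kmerize(read, k):
            vocab.setdefault(kmer, len(vocab))
    return vocab

print(kmerize("ACGTAC", k=4))  # ['ACGT', 'CGTA', 'GTAC']
vocab = build_vocab(["ACGTAC", "TTACGT"], k=4)
```

In step (ii), a read embedding would then be computed from the embeddings of its k-mer ids, e.g. by averaging or by a sequence model.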

Keywords: deep learning, disease prediction, end-to-end machine learning, metagenomics, multiple instance learning, precision medicine

Procedia PDF Downloads 107
1140 Flood Mapping Using Height above the Nearest Drainage Model: A Case Study in Fredericton, NB, Canada

Authors: Morteza Esfandiari, Shabnam Jabari, Heather MacGrath, David Coleman

Abstract:

Flooding is a severe issue in many places around the world, including the city of Fredericton, New Brunswick, Canada. The downtown area of Fredericton is close to the Saint John River and is susceptible to flooding around May every year. Recently, the frequency of flooding seems to have increased, especially after the downtown area and the surrounding urban and agricultural lands were flooded in two consecutive years, 2018 and 2019. In order to have an explicit vision of flood extent and damage in affected areas, it is necessary to use either flood inundation modelling or satellite data. Because of the contingent availability and weather dependency of optical satellites, and the limited data available given the high cost of hydrodynamic models, it is not always feasible to rely on these sources to generate quality flood maps during or after a catastrophe. Height Above the Nearest Drainage (HAND), a state-of-the-art topo-hydrological index, normalizes the height of a basin relative to the elevation along the stream network and specifies the gravitational, or relative, drainage potential of an area. HAND is the relative height difference between the stream network and each cell of a Digital Terrain Model (DTM). The stream layer is produced through a multi-step, time-consuming process that does not always yield an optimal representation of the river centerline, depending on the topographic complexity of the region. HAND has been used in numerous case studies with quite acceptable, but sometimes unexpected, results caused by natural and human-made features on the surface of the earth. Some of these features may disturb the generated model, and consequently the model may not predict the flow simulation accurately. We propose to include a previously existing stream layer generated by the province of New Brunswick and to benefit from culvert maps to improve the water flow simulation and, accordingly, the accuracy of the HAND model.
By considering these parameters in our processing, we were able to increase the accuracy of the model from nearly 74% to almost 92%. The improved model can be used for generating highly accurate flood maps, which is necessary for future urban planning and flood damage estimation without any need for satellite imagery or hydrodynamic computations.
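As a toy illustration of the index itself (not the authors' implementation, which traces flow directions across a full DTM): HAND assigns each cell its elevation minus that of its draining stream cell, approximated here by the nearest stream cell on a tiny grid.

```python
def hand(dtm, stream_cells):
    """Toy HAND index: each cell's elevation minus the elevation of its
    nearest stream cell (Euclidean nearest here; the real index follows
    flow directions down the drainage network)."""
    result = {}
    for (r, c), z in dtm.items():
        nearest = min(stream_cells, key=lambda s: (s[0] - r) ** 2 + (s[1] - c) ** 2)
        result[(r, c)] = z - dtm[nearest]
    return result

# A 2x2 toy terrain with one stream cell at (1, 1); low-HAND cells flood first.
dtm = {(0, 0): 12.0, (0, 1): 10.0, (1, 0): 11.0, (1, 1): 9.0}
h = hand(dtm, stream_cells=[(1, 1)])
print(h[(0, 0)])  # 3.0
```

A flood map then follows by thresholding: cells whose HAND value is below the water rise above the drainage are marked as inundated, which is why the quality of the stream layer directly controls the map's accuracy.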

Keywords: HAND, DTM, rapid floodplain, simplified conceptual models

Procedia PDF Downloads 134
1139 Biogas Production from Lake Bottom Biomass from Forest Management Areas

Authors: Dessie Tegegne Tibebu, Kirsi Mononen, Ari Pappinen

Abstract:

In areas with forest management, agricultural, and industrial activity, sediments and biomass accumulate in lakes through the drainage system, which can cause biodiversity loss and health problems. One possible solution is the utilization of lake bottom biomass and sediments for biogas production. The main objective of this study was to investigate the potential of lake bottom materials for biogas production by anaerobic digestion and to study the effect of pretreatment of the feed materials on biogas yield. Lake bottom materials were collected from two sites, Likokanta and Kutunjärvi lake, and mixed with straw-horse manure to produce biogas in a laboratory-scale reactor. The results indicated that the highest biogas yields were observed when the feed was composed of 50% lake bottom materials and 50% straw-horse manure; with more than 50% lake bottom materials in the feed, biogas production decreased. The CH4 contents from Likokanta and Kutunjärvi lake materials with straw-horse manure were similar when the feed consisted of the 50:50 mixture. However, with feeds above 50% lake bottom materials, the CH4 concentration started to decrease, impairing the gas process. The pretreatment applied to Kutunjärvi lake materials showed a slight negative effect on biogas production and the lowest CH4 concentration throughout the experiment. The average CH4 production from pretreated Kutunjärvi lake materials with straw-horse manure (208.9 ml g⁻¹ VS) and untreated Kutunjärvi lake materials with straw-horse manure (182.2 ml g⁻¹ VS) was markedly higher than that from Likokanta lake materials with straw-horse manure (157.8 ml g⁻¹ VS). According to the experimental results, using 100% lake bottom materials is likely to impair biogas production.
In the future, further analyses to improve biogas yields and an assessment of costs and benefits are needed before lake bottom materials can be utilized for biogas production.

Keywords: anaerobic digestion, biogas, lake bottom materials, sediments, pretreatment

Procedia PDF Downloads 307
1138 Design and Development of a Mechanical Force Gauge for the Square Watermelon Mold

Authors: Morteza Malek Yarand, Hadi Saebi Monfared

Abstract:

This study aimed to design and develop a mechanical force gauge for the square watermelon mold for the first time. It also introduces the characteristics of the square watermelon and its production limitations, and describes the performance of the mechanical force gauge and the product itself. There are three main gauge designs: (a) hydraulic, (b) strain gauge, and (c) mechanical. The advantage of the hydraulic model is that it instantly displays the pressure, and thus the force, exerted by the melon. However, considering its inability to measure forces in all directions, complicated development, high cost, the possibility of hydraulic fluid leaking into the fruit chamber, and the possible influence of increased ambient temperature on the fluid pressure, this design was ruled out. The second choice was to calculate pressure from the force measured directly by a strain gauge. The main advantage of strain gauges over spring types is their high measurement precision, but because the working range of the strain gauge did not conform to watermelon growth, the calculations ran into problems. Finally, the mechanical pressure gauge has several advantages: the ability to measure forces and pressures on the mold surface during melon growth; the ability to display peak forces; the ability to produce a melon growth graph thanks to continuous force measurement; manufacturing materials that conform to the physical conditions required for melon growth; high ventilation capability; allowing sunlight to reach the melon rind (no yellowish skin or quality loss); fast and straightforward calibration; no damage to the product during assembly and disassembly; visual inspection of the product within the mold; applicability to all growth environments (field, greenhouse, etc.); a simple process; and low cost.

Keywords: mechanical force gauge, mold, reshaped fruit, square watermelon

Procedia PDF Downloads 262
1137 Iranian Processed Cheese under Effect of Emulsifier Salts and Cooking Time in Process

Authors: M. Dezyani, R. Ezzati bbelvirdi, M. Shakerian, H. Mirzaei

Abstract:

Sodium hexametaphosphate (SHMP) is commonly used as an emulsifying salt (ES) in process cheese, although rarely as the sole ES. It appears that no published studies exist on the effect of SHMP concentration on the properties of process cheese when pH is kept constant; pH is well known to affect process cheese functionality. The detailed interactions between the added phosphate, casein (CN), and indigenous Ca phosphate are poorly understood. We studied the effect of SHMP concentration (0.25-2.75%) and holding time (0-20 min) on the textural and rheological properties of pasteurized process Cheddar cheese using a central composite rotatable design. All cheeses were adjusted to pH 5.6. The meltability of the process cheese (as indicated by the decrease in the loss tangent parameter from small-amplitude oscillatory rheology, the degree of flow, and the melt area from the Schreiber test) decreased with increasing SHMP concentration. Holding time also led to a slight reduction in meltability. The hardness of the process cheese increased as the SHMP concentration increased. Acid-base titration curves indicated that the buffering peak at pH 4.8, which is attributable to residual colloidal Ca phosphate, shifted to lower pH values with increasing SHMP concentration. The insoluble Ca and total and insoluble P contents increased as the SHMP concentration increased. The proportion of insoluble P as a percentage of total (indigenous and added) P decreased with increasing ES concentration because some of the (added) SHMP formed soluble salts. The results of this study suggest that SHMP chelated the residual colloidal Ca phosphate and dispersed CN; the newly formed Ca-phosphate complex remained trapped within the process cheese matrix, probably by cross-linking CN. Increasing the SHMP concentration helped to improve fat emulsification and CN dispersion during cooking, both of which probably helped to reinforce the structure of the process cheese.
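The central composite design analysis above fits full second-order (quadratic) response surfaces to the measured properties. A minimal sketch of such a fit with two factors and synthetic data (not the study's measurements):

```python
import numpy as np

def fit_quadratic_rsm(x1, x2, y):
    """Least-squares fit of the full second-order model used in RSM:
    y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2."""
    X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

# Synthetic demo data generated from y = 2 + x1 - 0.5*x2^2 (noiseless):
rng = np.random.default_rng(0)
x1 = rng.uniform(-1, 1, 30)
x2 = rng.uniform(-1, 1, 30)
y = 2 + x1 - 0.5 * x2**2
b = fit_quadratic_rsm(x1, x2, y)  # recovers b0=2, b1=1, b22=-0.5
```

With noiseless synthetic data the fit recovers the generating coefficients exactly; with real measurements, the same design matrix feeds the ANOVA-style significance tests used in central composite and Box-Behnken analyses.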

Keywords: Iranian processed cheese, emulsifying salt, rheology, texture

Procedia PDF Downloads 420
1136 Flexible PVC Based Nanocomposites With the Incorporation of Electric and Magnetic Nanofillers for the Shielding Against EMI and Thermal Imaging Signals

Authors: H. M. Fayzan Shakir, Khadija Zubair, Tingkai Zhao

Abstract:

Electromagnetic (EM) waves are in wide use nowadays: cell phone signals, WiFi, wireless telecommunications, and so on all use EM waves, which in turn create EM pollution. EM pollution can seriously affect both human health and nearby electronic devices. EM waves have electric and magnetic components that disturb the flow of charged particles in both the human nervous system and electronic devices, so the shielding of both humans and electronic devices is a prime concern today. EM waves can cause headaches, anxiety, suicide and depression, nausea, fatigue, and loss of libido in humans, and malfunctioning in electronic devices. Polyaniline (PANI) and polypyrrole (PPY) were successfully synthesized by chemical polymerization using ammonium persulfate and DBSNa as oxidants, respectively. Barium ferrite (BaFe) was also prepared by the co-precipitation method and calcined at 1050 °C for 8 h. Nanocomposite thin films with various combinations and compositions of polyvinyl chloride (PVC), PANI, PPY, and BaFe were prepared. X-ray diffraction was used to confirm the successful fabrication of all nanofillers, a particle size analyzer to measure their exact size, and scanning electron microscopy to examine their shape. According to electromagnetic interference (EMI) theory, electrical conductivity is the prime property required for EMI shielding, so the four-probe technique was used to evaluate the DC conductivity of all samples. Samples with high concentrations of PPY and PANI exhibit remarkably increased electrical conductivity due to the formation of an interconnected network structure inside the PVC matrix, which was also confirmed by SEM analysis. Less than 1% transmission was observed over the whole NIR region (700 nm - 2500 nm), and an EMI shielding effectiveness below -80 dB was observed in the microwave region (0.1 GHz to 20 GHz).
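To relate the reported shielding effectiveness to transmitted power: on the decibel scale, 80 dB of shielding corresponds to a transmitted power fraction of 10⁻⁸. A small conversion sketch:

```python
import math

def transmitted_fraction(se_db):
    """Fraction of incident power passing a shield with shielding
    effectiveness SE (dB), where SE = 10*log10(P_incident / P_transmitted)."""
    return 10 ** (-se_db / 10)

def se_from_transmission(fraction):
    """Inverse: shielding effectiveness in dB for a transmitted fraction."""
    return -10 * math.log10(fraction)

# 80 dB of shielding -> a transmitted power fraction of about 1e-8;
# by comparison, the <1% NIR transmission corresponds to only ~20 dB.
t = transmitted_fraction(80)
```

This is why the microwave-region result (|SE| above 80 dB) is the much stronger of the two shielding claims in the abstract.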

Keywords: nanocomposites, polymers, EMI shielding, thermal imaging

Procedia PDF Downloads 85
1135 Wildfire-Related Debris-Flow and Flooding Using 2-D Hydrologic Model

Authors: Cheong Hyeon Oh, Dongho Nam, Byungsik Kim

Abstract:

Due to recent climate change, flood damage caused by local floods and typhoons has occurred frequently, and the incidence and intensity of wildfires have greatly increased owing to higher temperatures and changes in precipitation patterns. Wildfires cause primary damage, such as loss of forest resources, as well as secondary disasters, such as landslides, floods, and debris flow. In many countries around the world, damage and economic losses arise from these secondary effects as well as from the direct effects of forest fires. Therefore, in this study, the rainfall-runoff model S-RAT was applied to the areas affected by the wildfires of April 2019 in Gangneung and Goseong, where the stability of vegetation and soil was destroyed. Rainfall data from Typhoon Rusa were used in the S-RAT model, and flood discharge was calculated according to changes in land cover before and after wildfire damage. The calculations showed that flood discharge increased significantly due to the land cover changes. Because increased flood discharge raises the probability of debris flow and the extent of damage, the debris flow height and range were calculated before and after the forest fire using RAMMS. The analysis showed that the height and extent of damage increased after the wildfire, but the result was underestimated because of the RAMMS model's reliance on the DEM and the maximum flood discharge. This research was supported by a grant (2017-MOIS31-004) from the Fundamental Technology Development Program for Extreme Disaster Response funded by the Korean Ministry of Interior and Safety (MOIS). This work was also financially supported by the Ministry of the Interior and Safety as part of the 'Human Resource Development Project in Disaster Management'.

Keywords: wildfire, debris flow, land cover, rainfall-runoff model S-RAT, RAMMS, height

Procedia PDF Downloads 104
1134 Biochar and Food Security in Central Uganda

Authors: Nataliya Apanovich, Mark Wright

Abstract:

Uganda has one of the poorest but fastest-growing populations in the world. Its annual population growth of 3% puts additional stress, through land fragmentation, agricultural intensification, and deforestation, on already highly weathered tropical (Ferralsol) soils. All of these factors lead to decreased agricultural yields and consequently diminished food security. The central region of Uganda, the Buganda Kingdom, is especially vulnerable in terms of food security, as its high population density coupled with mismanagement of natural resources has led to gradual loss of its soil and even changes in microclimate. These changes are negatively affecting the livelihoods of smallholder farmers, who comprise 80% of the population of Uganda. This research focuses on biochar for soil remediation in Masaka District, Uganda. If produced on a small scale from locally sourced materials, biochar can increase soil quality in a cost- and time-effective manner. To assess biochar's potential, 151 smallholder farmers were interviewed on the types of crops grown, the agricultural residues produced and their uses, and attitudes towards biochar use and small-scale production. The interviews were conducted in 7 sub-counties, 32 parishes, and 92 villages. The total farmland covered by the study was 606.2 square kilometers. Additional information on the state of agricultural development and environmental degradation in the district was solicited from four local government officials via informal interviews. This project was conducted in collaboration with an international agricultural research institution, Makerere University in Kampala, Uganda. The results of this research can have implications for the way farmers perceive the value of their agricultural residues and what they decide to do with them.
The underlying objective is to help smallholders on degraded soils increase their agricultural yields through the use of biochar without diverting the already established uses of agricultural residues to a new soil management practice.

Keywords: agricultural residues, biochar, central Uganda, food security, soil erosion, soil remediation

Procedia PDF Downloads 269
1133 Characterization and Correlation of Neurodegeneration and Biological Markers of Model Mice with Traumatic Brain Injury and Alzheimer's Disease

Authors: J. DeBoard, R. Dietrich, J. Hughes, K. Yurko, G. Harms

Abstract:

Alzheimer's disease (AD) is a predominant type of dementia and is likely a major cause of neural network impairment. The pathogenesis of this neurodegenerative disorder has yet to be fully elucidated. There is currently no known cure for the disease, and the best hope is to detect it early enough to impede its progress. Beyond age and genetics, another prevalent risk factor for AD may be traumatic brain injury (TBI), which has similar neurodegenerative hallmarks. Our research focuses on obtaining information and methods to predict when neurodegenerative effects might occur at the clinical level by observing events at the cellular and molecular level in model mice. First, we present our evidence that brain damage can be observed via brain imaging prior to the noticeable loss of neuromuscular control in AD model mice. We then show evidence that some blood biomarkers might serve as early predictors of AD in the same model mice. Thus, we were interested in whether we might be able to predict which mice would show long-term neurodegenerative effects from differing degrees of TBI, and what level of TBI causes further damage and earlier death in the AD model mice. Upon application of TBIs via an apparatus designed to induce extremely mild to mild TBIs, wild-type (WT) mice and AD model mice were tested for cognition, neuromuscular control, olfactory ability, blood biomarkers, and brain imaging. Experiments are still in progress, and more results are forthcoming. Preliminary data suggest that neuromotor control and olfactory function diminish for both AD and WT mice after the administration of five consecutive mild TBIs. Seizure activity also increases significantly for both AD and WT mice after the five-TBI treatment. If future data support these findings, important implications about the effect of TBI on those at risk for AD may emerge.

Keywords: Alzheimer's disease, blood biomarker, neurodegeneration, neuromuscular control, olfaction, traumatic brain injury

Procedia PDF Downloads 133
1132 Artificial Membrane Comparison for Skin Permeation in Skin PAMPA

Authors: Aurea C. L. Lacerda, Paulo R. H. Moreno, Bruna M. P. Vianna, Cristina H. R. Serra, Airton Martin, André R. Baby, Vladi O. Consiglieri, Telma M. Kaneko

Abstract:

The modified Franz cell is the most widely used model for in vitro permeation studies; however, it still presents some disadvantages. Thus, alternative methods have been developed, such as Skin PAMPA, a bio-artificial membrane that has been applied to estimate skin penetration of xenobiotics based on a high-throughput (HT) permeability model. Skin PAMPA's greatest advantage is that more tests can be carried out quickly and inexpensively. The membrane system mimics the characteristics of the stratum corneum, the primary skin barrier, whose barrier properties are given by corneocytes embedded in a multilamellar lipid matrix. This layer is the main penetration route through the paracellular permeation pathway and consists of a mixture of cholesterol, ceramides, and fatty acids as the dominant components. However, there is no consensus on the membrane composition. The objective of this work was to compare the performance of different bio-artificial membranes for permeation studies in the skin PAMPA system. Material and methods: To mimic the lipid composition present in the human stratum corneum, six membranes were developed. The membrane composition was an equimolar mixture of cholesterol; ceramides 1-O-C18:1, C22, and C20; and fatty acids C20 and C24. The membrane integrity assay was based on the transport of Brilliant Cresyl Blue, which has low permeability, and Lucifer Yellow, which has very poor permeability and should effectively be completely rejected. The membranes were characterized by confocal laser Raman spectroscopy, using a laser stabilized at 785 nm with a 10-second integration time and 2 accumulations. The membrane behaviour in the PAMPA system was statistically evaluated, and all of the compositions showed integrity and permeability.
The confocal Raman spectra, obtained in the 800-1200 cm⁻¹ region associated with the C-C stretches of the carbon scaffold of the stratum corneum lipids, showed a similar pattern for all the membranes. The equimolar ratio of ceramides, long-chain fatty acids, and cholesterol yielded lipid mixtures with a self-organization capability similar to that occurring in the stratum corneum. Conclusion: The artificial biological membranes studied for Skin PAMPA proved to be similar to one another, with properties comparable to those of the stratum corneum.

Keywords: bio-artificial membranes, comparison, confocal Raman, skin PAMPA

Procedia PDF Downloads 493
1131 Formulation and Characterization of NaCS-PDMDAAC Capsules with Immobilized Chlorella vulgaris for Phycoremediation of Palm Oil Mill Effluent

Authors: Quin Emparan, Razif Harun, Dayang R. A. Biak, Rozita Omar, Michael K. Danquah

Abstract:

Cultivation of immobilized microalgae cells is on the rise for biotechnological applications. In this study, Chlorella vulgaris was cultivated both as a suspended free-cell system and as an immobilized-cell system. NaCS-PDMDAAC capsules were used to immobilize C. vulgaris. Initially, the synthesized NaCS with the C. vulgaris culture was prepared at concentrations of 5-20% (w/v) using a 6% hardening solution (PDMDAAC) to investigate the capsules' gel stability and suitability for microalgae cell growth. The capsules produced from 15% NaCS with the C. vulgaris culture were then further investigated using 5%, 10%, and 15% (w/v) PDMDAAC solutions. Gel stability was evaluated through the dissolution time and the loss of the capsules' uniform spherical shape, while suitability for microalgae cell growth was evaluated through the optical density of the microalgae. In this study, the 15% NaCS-10% PDMDAAC capsules were found to be the most suitable for sustaining gel stability and microalgae cell growth in MLA. For that reason, the C. vulgaris immobilized in the 15% NaCS-10% PDMDAAC capsules was further characterized physicochemically in terms of morphology; carbon (C), hydrogen (H), and nitrogen (N) content; Fourier transform-infrared (FT-IR); scanning electron microscopy-energy dispersive X-ray (SEM-EDX); zeta potential; and Brunauer-Emmett-Teller (BET) analyses. The results revealed that the presence of sulfonates in the synthesized NaCS and in the NaCS-PDMDAAC capsules, with and without C. vulgaris, proves that the cellulose alcohol group was successfully bonded to the sulfo group. In addition, the immobilized microalgae cells had a smaller cell size (6.29 ± 1.09 µm) and zeta potential (-11.93 ± 0.91 mV) than the suspended free-cell microalgae culture. In summary, C. vulgaris immobilized in the 15% NaCS-10% PDMDAAC capsules is relevant as a bioremediator for wastewater treatment purposes due to the suitable pore and capsule size as well as the structural and compositional properties.

Keywords: biological capsules, immobilized cultivation, microalgae, physico-chemical analysis

Procedia PDF Downloads 153
1130 Finite Element Modelling for the Development of a Planar Ultrasonic Dental Scaler for Prophylactic and Periodontal Care

Authors: Martin Hofmann, Diego Stutzer, Thomas Niederhauser, Juergen Burger

Abstract:

Dental biofilm is the main etiologic factor for caries and for periodontal and peri-implant infections. In addition to the risk of tooth loss, periodontitis is also associated with an increased risk of systemic diseases such as atherosclerotic cardiovascular disease and diabetes. For this reason, dental hygienists use ultrasonic scalers for prophylactic and periodontal care of the teeth. However, current instruments are limited in their dimensions and operating frequencies. The innovative design of a planar ultrasonic transducer introduces a new type of dental scaler. The flat, titanium-based design allows the mass to be reduced significantly compared to a conventional screw-mounted Langevin transducer, resulting in a more efficient and controllable scaler. For the development of the novel device, multi-physics finite element analysis was used to simulate and optimise various design concepts. This process was supported by prototyping and electromechanical characterisation. The feasibility and potential of a planar ultrasonic transducer have already been confirmed by our current prototypes, which achieve higher performance than commercial devices. Operating at the desired resonance frequency of 28 kHz with a driving voltage of 40 Vrms results in an in-plane tip oscillation with a displacement amplitude of up to 75 μm, with less than 8% out-of-plane movement and an energy transformation factor of 1.07 μm/mA. In a further step, we will adapt the design to two additional resonance frequencies (20 and 40 kHz) to determine the most suitable mode of operation. In addition to the characterization methods already integrated, we will evaluate the clinical efficiency of the different devices in an in vitro setup with an artificial biofilm pocket model.

Keywords: ultrasonic instrumentation, ultrasonic scaling, piezoelectric transducer, finite element simulation, dental biofilm, dental calculus

Procedia PDF Downloads 103
1129 Impact of Ventilation Systems on Indoor Air Quality in Swedish Primary School Classrooms

Authors: Sarka Langer, Despoina Teli, Blanka Cabovska, Jan-Olof Dalenbäck, Lars Ekberg, Gabriel Bekö, Pawel Wargocki, Natalia Giraldo Vasquez

Abstract:

The aim of the study was to investigate the impact of various ventilation systems on indoor climate, air pollution, chemistry, and perception. Measurements of the thermal environment and indoor air quality were performed in 45 primary school classrooms in Gothenburg, Sweden. The classrooms were grouped into three categories according to their ventilation system: category A) natural or exhaust ventilation or automated window opening; category B) balanced mechanical ventilation with constant air volume (CAV); and category C) balanced mechanical ventilation with variable air volume (VAV). A questionnaire survey about indoor air quality; perception of temperature, odour, noise, and light; and sensation of well-being, alertness, focus, etc., was distributed among the 10- to 12-year-old children attending the classrooms. The results (medians) showed statistically significant differences between ventilation category A and categories B and C, but not between categories B and C, in air change rates, median concentrations of carbon dioxide, the individual volatile organic compounds formaldehyde and isoprene, indoor-to-outdoor ozone ratios, and the products of ozonolysis of squalene (a constituent of human skin oils), 6-methyl-5-hepten-2-one and decanal. Median ozone concentration and ozone loss (the difference between outdoor and indoor ozone concentrations) differed only between categories A and C. The median concentration of total VOCs and a perception index based on survey responses on perceptions and sensations indoors were not significantly different. In conclusion, the ventilation system has an impact on air change rates, indoor air quality, and chemistry, but the Swedish primary school children’s perception did not differ with the ventilation system of the classroom.
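The abstract compares measured air change rates across ventilation categories. One common way to estimate an air change rate (a sketch of the standard CO₂ tracer-gas decay method, not necessarily the procedure used in this study; the classroom values are hypothetical) is:

```python
import math

def ach_from_co2_decay(c_start_ppm: float, c_end_ppm: float,
                       c_outdoor_ppm: float, hours: float) -> float:
    """Air change rate (1/h) from tracer-gas decay:
    ACH = ln((C0 - Cout) / (Ct - Cout)) / t, assuming well-mixed air
    and no CO2 sources (e.g. an unoccupied classroom after school)."""
    return math.log((c_start_ppm - c_outdoor_ppm) /
                    (c_end_ppm - c_outdoor_ppm)) / hours

# Hypothetical example: CO2 falls from 1400 to 700 ppm in one hour,
# with 400 ppm outdoors -> about 1.2 air changes per hour.
rate = ach_from_co2_decay(1400, 700, 400, 1.0)
```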

Keywords: indoor air pollutants, indoor climate, indoor chemistry, air change rate, perception

Procedia PDF Downloads 46
1128 The Effect of Traffic Load on the Maximum Response of a Cable-Stayed Bridge under Blast Loads

Authors: S. K. Hashemi, M. A. Bradford, H. R. Valipour

Abstract:

Recent collapses of bridges have raised awareness about the safety and robustness of bridges subjected to extreme loading scenarios such as intentional or unintentional blast loads. The air blast generated by the explosion of bombs or fuel tankers leads to high-magnitude, short-duration loading scenarios that can cause severe structural damage and the loss of critical structural members. Hence, more attention needs to be paid to bridge structures in order to develop guidelines that increase their resistance against probable blasts. Recent advancements in numerical methods have provided viable and cost-effective tools to simulate complicated blast scenarios and subsequently serve as a useful reference for the safeguarding design of critical infrastructure. In previous studies of common bridge responses to blast loads, the traffic load is sometimes not included in the analysis. Including traffic load increases the axial compression in bridge piers, especially when the axial load is relatively small. Traffic load can also reduce the uplift of girders and the deck when the bridge experiences an under-deck explosion. For more complicated structures like cable-stayed or suspension bridges, however, the effect of traffic loads can be completely different: the tension in the cables increases, and progressive collapse is more likely to happen while traffic loads are present. Accordingly, this study is an attempt to simulate the effect of traffic load cases on the maximum local and global response of an entire cable-stayed bridge subjected to blast loading, using the LS-DYNA explicit finite element code. The blast loads ranged from small to large explosions placed at different positions above the deck. Furthermore, the variation of the traffic load factor in the load combination and its effect on the dynamic response of the bridge under blast load is investigated.
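The role of the traffic load factor in a load combination can be illustrated with a minimal sketch. The D + γ·L form and all numeric loads below are illustrative assumptions for a single pier, not values or combinations taken from the study:

```python
def pier_axial_load_kn(dead_load_kn: float, traffic_load_kn: float,
                       traffic_factor: float) -> float:
    """Axial pier load for a simple D + gamma*L combination (illustrative)."""
    return dead_load_kn + traffic_factor * traffic_load_kn

# Sweeping the traffic load factor, in the spirit of the study's
# parametric variation (hypothetical loads: 20 MN dead, 4 MN traffic):
loads = {gamma: pier_axial_load_kn(20_000, 4_000, gamma)
         for gamma in (0.0, 0.5, 1.0)}
```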

Keywords: blast, cable-stayed bridge, LS-DYNA, numerical, traffic load

Procedia PDF Downloads 319
1127 Approaches for Minimizing Radioactive Tritium and ¹⁴C in Advanced High Temperature Gas-Cooled Reactors

Authors: Longkui Zhu, Zhengcao Li

Abstract:

High temperature gas-cooled reactors (HTGRs) are considered one of the next-generation advanced nuclear reactors, in which porous nuclear graphite is used for neutron moderators, reflectors, and structural materials, with inert helium as the coolant. Radioactive tritium and ¹⁴C are generated by reactions of thermal neutrons with ⁶Li, ¹⁴N, and ¹⁰B impurities within the nuclear graphite and the coolant during HTGR operation. In this work, hydrogen and nitrogen diffusion behaviour together with nuclear graphite microstructure evolution were investigated to minimise the radioactive waste release, using thermogravimetric analysis, X-ray computed tomography, the BET method, and mercury standard porosimetry. It was found that the peak of graphite weight loss emerged at 573-673 K, owing to nitrogen diffusing out of the graphite pores when the system was subjected to vacuum. Macropore volume became larger while mesopore porosity became smaller as the temperature rose from ambient to 1073 K, which was primarily induced by coalescence of the subscale pores. It is suggested that the porous nuclear graphite should first be subjected to vacuum at 573-673 K to minimise the nitrogen and the radioactive ¹⁴C before operation in HTGRs. Results on hydrogen diffusion then show that diffusible hydrogen and tritium can permeate into the coolant with diffusion coefficients of > 0.5 × 10⁻⁴ cm²·s⁻¹ at 50 bar. As a consequence, freshly generated diffusible tritium can be released quickly once formed, and an effective approach for minimising the amount of radioactive tritium is to keep the impurity contents extremely low in the nuclear graphite and the coolant. In addition, both two- and three-dimensional observations indicate that macropore and mesopore volume, along with total porosity, decreased with temperature at 50 bar, on account of the synergistic effects of applied compressive strain, sharpened pore morphology, and non-uniform temperature distribution.
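The quoted diffusion coefficient can be put in perspective with the standard characteristic diffusion length L = √(Dt). Only the lower-bound coefficient (0.5 × 10⁻⁴ cm²·s⁻¹) comes from the abstract; the time scales below are illustrative:

```python
import math

def diffusion_length_cm(d_cm2_per_s: float, t_s: float) -> float:
    """Characteristic diffusion length L = sqrt(D * t)."""
    return math.sqrt(d_cm2_per_s * t_s)

D_MIN = 0.5e-4  # cm^2/s, the abstract's lower bound at 50 bar

# Hydrogen/tritium spreads on the order of a millimetre within a minute
# and several millimetres within an hour, consistent with the claim that
# diffusible tritium releases quickly once formed.
length_1min = diffusion_length_cm(D_MIN, 60)    # ~0.05 cm
length_1h = diffusion_length_cm(D_MIN, 3600)    # ~0.42 cm
```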

Keywords: advanced high temperature gas-cooled reactor, hydrogen and nitrogen diffusion, microstructure evolution, nuclear graphite, radioactive waste management

Procedia PDF Downloads 298