Search results for: computational techniques
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8406

1356 Cryopreservation of Ring-Necked Pheasant (Phasianus colchicus) Semen for Establishing Cryobank

Authors: Rida Pervaiz, Bushra Allah Rakha, Muhammad Sajjad Ansari, Shamim Akhter, Kainat Waseem, Sumiyyah Zuha, Tooba Javed

Abstract:

Ring-necked pheasant (Phasianus colchicus) belongs to the order Galliformes and family Phasianidae. It has been recognized as one of the most hunted birds due to its attractive colorful appearance and its meat. Loss of habitat and hunting pressure have caused population fluctuations in its native range. Under these circumstances, the species can be conserved by employing ex-situ in vitro conservation techniques. Captive breeding, in combination with semen cryobanking, is the most appropriate option to conserve and propagate this species without deteriorating its genetic diversity. Cryopreservation protocols of adequate efficiency are necessary to establish semen cryobanking for a species. Therefore, the present study was designed to devise an efficient extender for cryopreservation of ring-necked pheasant semen. For this purpose, a range of extenders (Beltsville Poultry, Red Fowl, Lake, EK, Tselutin Poultry and Chicken Semen extenders) was evaluated. Semen was collected from 10 cocks, diluted in the Beltsville Poultry (BPSE), Red Fowl (RFE), Lake (LE), EK (EKE), Tselutin Poultry (TPE) and Chicken Semen (CSE) extenders, and cryopreserved. Glycerol (10%) was added to the semen at 4°C; the semen was equilibrated for 10 min, filled into 0.5 mL French straws, kept over liquid nitrogen vapors for 10 min, cryopreserved in LN2 and stored. Sperm motility (%), viability (%), live/dead ratio (%), plasma membrane integrity (%) and DNA integrity (%) were evaluated at the post-dilution, post-cooling, post-equilibration and post-thawing stages of cryopreservation. Sperm motility (83.8 ± 3.1; 81.3 ± 3.8; 73.8 ± 2.4; 62.5 ± 1.4), viability (79.0 ± 1.7; 75.5 ± 1.6; 69.5 ± 2.3; 65.5 ± 2.4), live/dead ratio (80.5 ± 5.7; 77.3 ± 4.9; 76.0 ± 2.7; 68.3 ± 2.3), plasma membrane integrity (74.5 ± 2.9; 73.8 ± 3.4; 71.3 ± 2.3; 75.0 ± 3.4) and DNA integrity (78.3 ± 1.7; 73.0 ± 1.2; 68.0 ± 2.0; 63.0 ± 2.5) at all four stages of cryopreservation were higher (P < 0.05) in the Red Fowl extender than in the other experimental extenders. It is concluded that the Red Fowl extender is the best extender for cryopreservation of ring-necked pheasant semen and can be used in establishing a cryobank for ex situ conservation.

Keywords: ring-necked pheasant, extenders, cryopreservation, semen quality, DNA integrity

Procedia PDF Downloads 137
1355 Synthesis and Characterization of Polycaprolactone for the Delivery of Rifampicin

Authors: Evelyn Osehontue Uroro, Richard Bright, Jing Yang Quek, Krasimir Vasilev

Abstract:

Bacterial infections remain a challenge in both the public and private healthcare sectors. Bacterial colonization often occurs on medical devices such as catheters, heart valves, respirators, and orthopaedic implants. When biomedical devices are inserted into patients, the deposition of macromolecules such as fibrinogen and immunoglobulin on their surfaces predisposes them to bacterial colonization, leading to the formation of biofilms. Biofilm formation on medical devices has led to a series of device-related infections that are usually difficult to eradicate and sometimes cause the death of patients. These infections require surgical replacements along with prolonged antibiotic therapy, which incurs additional health costs. It is, therefore, necessary to prevent device-related infections by inhibiting the formation of biofilms using intelligent technology. Bacterial antibiotic resistance, driven by overuse, is also a major threat. Different antimicrobial agents have been applied to microbial infections, including conventional antibiotics such as rifampicin, whose overuse has raised concerns because of hepatic and nephrotoxic effects. Hence, there is also a need for the proper delivery of these antibiotics. Different techniques have been developed to encapsulate and slowly release antimicrobial agents, thus reducing host cytotoxicity. Examples of delivery systems are solid lipid nanoparticles, hydrogels, micelles, and polymeric nanoparticles. The ways by which drugs are released from polymeric nanoparticles include diffusion-based release, elution-based release, and chemical/stimuli-responsive release. Polymeric nanoparticles have gained considerable research interest as they can be made from biodegradable polymers. An example of such a biodegradable polymer is polycaprolactone (PCL). PCL degrades slowly by hydrolysis but is responsive to stimuli such as enzymes, which can release encapsulants for antimicrobial therapy. This study presents the synthesis of PCL nanoparticles loaded with rifampicin and the on-demand release of rifampicin for treating Staphylococcus aureus infections.
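To make the contrast between the release modes named above concrete, here is a minimal sketch comparing diffusion-controlled (Higuchi-type) release with stimulus-triggered first-order release after an enzyme "switch". The rate constants and trigger time are illustrative assumptions, not measured values for PCL/rifampicin.

```python
import numpy as np

t = np.linspace(0.0, 48.0, 200)              # hours
higuchi = np.clip(0.12 * np.sqrt(t), 0, 1)   # diffusion-controlled: Mt ~ k * sqrt(t)

trigger_h = 6.0                              # enzyme arrives at t = 6 h (assumed)
triggered = np.where(t < trigger_h, 0.0, 1.0 - np.exp(-0.35 * (t - trigger_h)))

i = t.searchsorted(24.0)                     # fraction released after 24 h
print(f"diffusion: {higuchi[i]:.0%}, enzyme-triggered: {triggered[i]:.0%}")
```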

Keywords: enzyme, Staphylococcus aureus, PCL, rifampicin

Procedia PDF Downloads 120
1354 Study of the Montmorillonite Effect on PET/Clay and PEN/Clay Nanocomposites

Authors: F. Zouai, F. Z. Benabid, S. Bouhelal, D. Benachour

Abstract:

Polymer/clay nanocomposites are a relatively important area of research. These reinforced plastics have attracted considerable attention in scientific and industrial fields because a very small amount of clay can significantly improve the properties of the polymer. The polymeric matrices used in this work are two saturated polyesters, i.e., poly(ethylene terephthalate) (PET) and poly(ethylene naphthalate) (PEN). The successful processing of compatible blends, based on PET/PEN/clay nanocomposites, in one step by reactive melt extrusion is described. Untreated clay was first purified and functionalized 'in situ' with a compound based on an organic peroxide/sulfur mixture and tetramethylthiuram disulfide as the activator for sulfur. The PET and PEN materials were first separately mixed in the molten state with the functionalized clay. The PET/4 wt% clay and PEN/7.5 wt% clay compositions showed total exfoliation. These compositions, denoted nPET and nPEN, respectively, were used to prepare new n(PET/PEN) nanoblends in the same mixing batch. The n(PET/PEN) nanoblends were compared to neat PET/PEN blends. The blends and nanocomposites were characterized using various techniques, and their microstructural and nanostructural properties were investigated. Fourier transform infrared spectroscopy (FTIR) results showed that the exfoliation of the tetrahedral clay nanolayers is complete and the octahedral structure totally disappears. Total exfoliation, confirmed by wide-angle X-ray scattering (WAXS) measurements, contributes to the enhancement of impact strength and tensile modulus. In addition, WAXS results indicated that all samples are amorphous. The differential scanning calorimetry (DSC) study indicated the occurrence of one glass transition temperature Tg, one crystallization temperature Tc and one melting temperature Tm for every composition. This is evidence that both PET/PEN and nPET/nPEN blends are compatible over the entire range of compositions. In addition, the nPET/nPEN blends showed lower Tc and higher Tm values than the corresponding neat PET/PEN blends. In conclusion, the results indicate that n(PET/PEN) blends differ from the neat ones in nanostructure and physical behavior.

Keywords: blends, exfoliation, XRD, DSC, montmorillonite, nanocomposites, PEN, PET, plastograph, reactive melt-mixing

Procedia PDF Downloads 293
1353 Coupling Random Demand and Route Selection in the Transportation Network Design Problem

Authors: Shabnam Najafi, Metin Turkay

Abstract:

The network design problem (NDP) is used to determine optimal values for pre-specified decision variables, such as capacity expansion of nodes and links, by optimizing various system performance measures including safety, congestion, and accessibility. The designed transportation network should improve the objective functions defined for the system while accounting for the route choice behavior of network users. NDP studies have mostly investigated random demand and route selection constraints separately due to computational challenges; in this work, we consider both simultaneously. We present a nonlinear stochastic model for the land use and road network design problem that addresses the development of different functional zones in urban areas by considering both a cost function and air pollution. The model minimizes cost and air pollution simultaneously, under random demand and a stochastic route selection constraint, and aims to optimize network performance via road capacity expansion. The Bureau of Public Roads (BPR) link impedance function is used as the travel time function on each link. We consider a city with origin and destination nodes, which can be residential, employment, or both, and a set of existing paths between origin-destination (O-D) pairs. The case of an increasing employed population is analyzed to determine road capacities and origin zones simultaneously, minimizing the travel and expansion costs of routes and origin zones on one side and CO emissions on the other at the same time. Demand between O-D pairs is random, and the network flow pattern is subject to stochastic user equilibrium, specifically a logit route choice model; treating both demand and route choice as random is more realistic for urban network design. The epsilon-constraint method, which can handle both linear and nonlinear multi-objective problems, is used to solve the problem: the first objective (the cost function) is kept as the objective function, and the second objective is converted into a constraint bounded above by an epsilon, an upper bound on the emission function. The value of epsilon is varied from the worst to the best value of the emission function to generate the family of solutions representing the Pareto set. A numerical example with 2 origin zones, 2 destination zones, and 7 links is solved in GAMS, and the set of Pareto points is obtained. There are 15 efficient solutions; across these solutions, as the cost function value increases, the emission function value decreases, and vice versa.
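As a concrete illustration, the sketch below shows the BPR link travel-time formula and a bare-bones epsilon-constraint sweep. The BPR defaults (alpha = 0.15, beta = 4) are the standard published values; `solve_cost_min` is a hypothetical stand-in for the GAMS model described above, not the authors' implementation.

```python
def bpr_travel_time(t0, volume, capacity, alpha=0.15, beta=4.0):
    """BPR link impedance: t = t0 * (1 + alpha * (volume / capacity)^beta)."""
    return t0 * (1.0 + alpha * (volume / capacity) ** beta)

print(bpr_travel_time(10.0, 1200.0, 1000.0))  # 10-min free-flow link -> ~13.1 min congested

def epsilon_constraint(solve_cost_min, emission_of, eps_values):
    """Minimize cost subject to emission <= eps, sweeping eps from the worst
    to the best emission value to trace out the Pareto set."""
    pareto = []
    for eps in eps_values:
        design = solve_cost_min(emission_cap=eps)  # stand-in for the GAMS model
        pareto.append((design, emission_of(design)))
    return pareto
```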

Keywords: epsilon-constraint, multi-objective, network design, stochastic

Procedia PDF Downloads 640
1352 Assessing the Impact of Low Carbon Technology Integration on Electricity Distribution Networks: Advancing towards Local Area Energy Planning

Authors: Javier Sandoval Bustamante, Pardis Sheikhzadeh, Vijayanarasimha Hindupur Pakka

Abstract:

In the pursuit of achieving net-zero carbon emissions, the integration of low carbon technologies (LCTs) into electricity distribution networks is paramount. This paper delves into the critical assessment of how the integration of LCTs, such as heat pumps, electric vehicle chargers, and photovoltaic systems, impacts the infrastructure and operation of electricity distribution networks. The study employs rigorous methodologies, including power flow analysis and headroom analysis, to evaluate the feasibility and implications of integrating these technologies into existing distribution systems. Furthermore, the research utilizes Local Area Energy Planning (LAEP) methodologies to guide local authorities and distribution network operators in formulating effective plans to meet regional and national decarbonization objectives. Geospatial analysis techniques, coupled with building physics and electric energy systems modeling, are employed to develop geographic datasets aimed at informing the deployment of low carbon technologies at the local level. Drawing upon insights from the Local Energy Net Zero Accelerator (LENZA) project, a comprehensive case study illustrates the practical application of these methodologies in assessing the rollout potential of LCTs. The findings not only shed light on the technical feasibility of integrating low carbon technologies but also provide valuable insights into the broader transition towards a sustainable and electrified energy future. This paper contributes to the advancement of knowledge in electrical power engineering by providing empirical evidence and methodologies to support the integration of low carbon technologies into electricity distribution networks. The insights gained are instrumental for policymakers, utility companies, and stakeholders involved in navigating the complex challenges of the energy transition and achieving long-term sustainability goals.
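To make the headroom idea concrete, here is a minimal sketch of a power flow and headroom check on a toy two-bus feeder using the open-source pandapower package. The network parameters and the LCT uptake figure are illustrative assumptions, not values from the LENZA project.

```python
import pandapower as pp

net = pp.create_empty_network()
b0 = pp.create_bus(net, vn_kv=11.0, name="primary substation")
b1 = pp.create_bus(net, vn_kv=11.0, name="feeder end")
pp.create_ext_grid(net, bus=b0)
pp.create_line(net, from_bus=b0, to_bus=b1, length_km=2.0, std_type="NAYY 4x150 SE")
pp.create_load(net, bus=b1, p_mw=0.40, q_mvar=0.10, name="baseline demand")
# represent LCT uptake (EV chargers + heat pumps) as additional demand
pp.create_load(net, bus=b1, p_mw=0.25, q_mvar=0.05, name="LCT uptake scenario")

pp.runpp(net)                                    # AC power flow
loading = net.res_line.loading_percent.iloc[0]
print(f"line loading {loading:.1f}% -> headroom {100 - loading:.1f}%")
print(f"voltage at feeder end: {net.res_bus.vm_pu.iloc[1]:.3f} p.u.")
```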

Keywords: energy planning, energy systems, digital twins, power flow analysis, headroom analysis

Procedia PDF Downloads 50
1351 A Controlled-Release Nanofertilizer Improves Tomato Growth and Minimizes Nitrogen Consumption

Authors: Mohamed I. D. Helal, Mohamed M. El-Mogy, Hassan A. Khater, Muhammad A. Fathy, Fatma E. Ibrahim, Yuncong C. Li, Zhaohui Tong, Karima F. Abdelgawad

Abstract:

Minimizing the consumption of agrochemicals, particularly nitrogen, is the ultimate goal for achieving sustainable agricultural production with low cost and high economic and environmental returns. The use of biopolymers instead of petroleum-based synthetic polymers for controlled-release fertilizers (CRFs) can significantly improve the sustainability of crop production, since biopolymers are biodegradable and not harmful to soil quality. Lignin is one of the most abundant naturally occurring biopolymers. In this study, controlled-release fertilizers were developed using a biobased nanocomposite of lignin and bentonite clay mineral as a coating material for urea, to increase nitrogen use efficiency. Five types of controlled-release urea (CRU) were prepared using two ratios of modified bentonite and different preparation techniques. The efficiency of the five CRU fertilizers in improving the growth of tomato plants was studied under field conditions. The CRU was applied to the tomato plants at three N levels representing 100, 50, and 25% of the recommended dose of conventional urea. The results showed that all CRU treatments at the three N levels significantly enhanced plant growth parameters, including plant height, number of leaves, fresh weight, and dry weight, compared to the control. Most CRU fertilizers also increased total yield and fruit characteristics (weight, length, and diameter) compared to the control, and marketable yield was improved as well. Fruit firmness and acidity of CRU treatments at the 25 and 50% N levels were much higher than in both the 100% CRU treatment and the control. The vitamin C values of all CRU treatments were lower than the control. Nitrogen uptake efficiencies (NUpE) of CRU treatments were 47-88%, significantly higher than that of the control (33%). In conclusion, all CRU treatments at an N level of 25% of the recommended dose showed better plant growth, yield, and fruit quality of tomatoes than the conventional fertilizer.
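The NUpE figures above follow from simple arithmetic; the sketch below restates it, with illustrative uptake and application rates (not the study's raw data) chosen to reproduce the reported percentages.

```python
def nupe_percent(n_uptake_kg_ha, n_applied_kg_ha):
    """Nitrogen uptake efficiency: NUpE (%) = plant N uptake / N applied * 100."""
    return 100.0 * n_uptake_kg_ha / n_applied_kg_ha

print(nupe_percent(33.0, 100.0))  # conventional urea at the full dose -> 33%
print(nupe_percent(22.0, 25.0))   # a CRU at 25% of the dose -> 88%
```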

Keywords: nitrogen use efficiency, quality, urea, nanoparticles, eco-friendly

Procedia PDF Downloads 74
1350 Neural Networks Models for Measuring Hotel Users Satisfaction

Authors: Asma Ameur, Dhafer Malouche

Abstract:

Nowadays, user comments on the Internet have an important impact on hotel bookings, confirming that e-reputation can influence the likelihood of customer loyalty to a hotel. In this way, e-reputation has become a real differentiator between hotels, and the field of opinion mining offers a unique opportunity to analyze such comments: it makes it possible to extract information related to the polarity of user reviews. This kind of sentiment study (opinion mining) represents a new line of research for analyzing unstructured textual data. Knowing the e-reputation score helps hoteliers better manage their marketing strategy, and the score obtained is translated into an image of the hotel that differentiates it from others. This research therefore highlights the importance of hotel satisfaction scoring. To calculate a satisfaction score, sentiment analysis can be carried out with several machine learning techniques; this study treats the extracted textual data using an artificial neural network (ANN) approach. In this context, we apply the aforementioned technique to extract information from the comments available on the TripAdvisor website. This paper details the description and modeling of the ANN approach for scoring online hotel reviews. The validation of this method yields a significant model for hotel sentiment analysis, making it possible to determine the polarity of hotel users' reviews precisely. The empirical results show that ANNs are an accurate approach for sentiment analysis. The results also show that the proposed approach supports dimensionality reduction for clustering textual data. Thus, this study provides researchers with a useful exploration of this technique. Finally, we outline guidelines for future research in the hotel e-reputation field, such as comparing ANNs with other techniques.
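As a toy illustration of polarity scoring with a neural network, the sketch below feeds TF-IDF features into a small feed-forward classifier. The corpus, labels, and architecture are illustrative assumptions; the abstract does not specify the authors' ANN design or their TripAdvisor dataset.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

reviews = ["Great location and friendly staff", "Dirty room, terrible service",
           "Lovely breakfast, would stay again", "Noisy at night and rude reception"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000))
model.fit(reviews, labels)

# a hotel's satisfaction score = mean predicted polarity over its reviews
score = model.predict_proba(["Quiet, clean and well located"])[:, 1].mean()
print(f"satisfaction score: {score:.2f}")
```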

Keywords: clustering, consumer behavior, data mining, e-reputation, machine learning, neural network, online hotel reviews, opinion mining, scoring

Procedia PDF Downloads 132
1349 Identifying the Factors that Influence Water-Use Efficiency in Agriculture: Case Study in a Spanish Semi-Arid Region

Authors: Laura Piedra-Muñoz, Ángeles Godoy-Durán, Emilio Galdeano-Gómez, Juan C. Pérez-Mesa

Abstract:

The current agricultural system in some arid and semi-arid areas is not sustainable in the long term. In southeast Spain, groundwater is the main water source and is overexploited, while alternatives like desalination are still limited. The Water Plan for the Mediterranean Basins 2015-2020 indicates a global deficit of 73.42 hm³ and an overexploitation of the aquifers of 205.58 hm³. To address this serious problem, two major actions can be taken: increasing the available water and/or improving the efficiency of its use. This study focuses on the latter. The main aim of this study is to identify the major factors related to water-use efficiency in farming. It focuses on Almería province, southeast Spain, one of the most arid areas of the country, and in particular on family farms as the main direct managers of water use in this zone. Many of these farms are among the most water efficient in Spanish agriculture, but this efficiency is not generalized throughout the sector. This work conducts a comprehensive assessment of water performance in this area, using on-farm water-use, structural, socio-economic and environmental information. Two statistical techniques are used: descriptive analysis and cluster analysis. Two groups are thus identified: the least and the most efficient farms regarding water usage. By analyzing both the common characteristics within each group and the differences between the groups with a one-way ANOVA, several conclusions can be reached. The main differences between the two clusters center on the extent to which innovation and new technologies are used in irrigation. The most water-efficient farms are characterized by more educated farmers, a greater degree of innovation, new irrigation technology, specialized production, and awareness of water issues and environmental sustainability. The research shows that better practices and policies can have a substantial impact on achieving a more sustainable and efficient use of water. The findings of this study can be extended to farms in similar arid and semi-arid areas and contribute to fostering appropriate policies to improve the efficiency of water usage in the agricultural sector.
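A minimal sketch of the two-step analysis described above follows: k-means clustering of farms into two water-efficiency groups, then a one-way ANOVA checking whether a variable differs between groups. The file and column names are hypothetical placeholders, not the study's actual survey items.

```python
import pandas as pd
from scipy.stats import f_oneway
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

farms = pd.read_csv("farm_survey.csv")  # hypothetical survey file
X = StandardScaler().fit_transform(farms[["water_use_m3_ha", "yield_kg_ha"]])
farms["cluster"] = KMeans(n_clusters=2, n_init=10).fit_predict(X)

# one-way ANOVA: does an innovation indicator differ between the two clusters?
g0 = farms.loc[farms.cluster == 0, "irrigation_tech_score"]
g1 = farms.loc[farms.cluster == 1, "irrigation_tech_score"]
stat, p = f_oneway(g0, g1)
print(f"F = {stat:.2f}, p = {p:.4f}")  # small p: the groups differ on this factor
```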

Keywords: cluster analysis, family farms, Spain, water-use efficiency

Procedia PDF Downloads 284
1348 MIMO Radar-Based System for Structural Health Monitoring and Geophysical Applications

Authors: Davide D’Aria, Paolo Falcone, Luigi Maggi, Aldo Cero, Giovanni Amoroso

Abstract:

The paper presents a methodology for real-time structural health monitoring and geophysical applications. The key elements of the system are a high-performance MIMO radar sensor, an optical camera, and a dedicated set of software algorithms encompassing interferometry, tomography, and photogrammetry. The MIMO radar sensor proposed in this work provides extremely high sensitivity to displacements, making the system able to react to tiny deformations (down to tens of microns) on time scales spanning from milliseconds to hours. The MIMO feature makes the system capable of providing a set of two-dimensional images of the observed scene, each mapped on the azimuth-range directions with notable resolution in both dimensions and an outstanding repetition rate. The back-scattered energy, which is distributed in 3D space, is projected onto a 2D plane, where each pixel has as coordinates the line-of-sight distance and the cross-range azimuthal angle. At the same time, the high-performing processing unit allows the observed scene to be sensed with remarkable refresh periods (down to milliseconds), thus opening the way for combined static and dynamic structural health monitoring. Thanks to the smart TX/RX antenna array layout, the MIMO data can be processed through a tomographic approach to reconstruct the three-dimensional map of the observed scene. This 3D point cloud is then accurately mapped onto a 2D digital optical image through photogrammetric techniques, allowing easy and straightforward interpretation of the measurements. Once the three-dimensional image is reconstructed, a 'repeat-pass' interferometric approach is exploited to provide the user with high-frequency three-dimensional motion/vibration estimates for each point of the reconstructed image. At this stage, the methodology leverages consolidated atmospheric correction algorithms to provide reliable displacement and vibration measurements.
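The core arithmetic of the repeat-pass interferometric step is the conversion of the phase difference between two acquisitions into a line-of-sight displacement, d = -(λ / 4π) · Δφ. The sketch below implements this standard relation; the wavelength is an illustrative assumption, not the sensor's specification.

```python
import numpy as np

WAVELENGTH_M = 0.0175  # e.g. a Ku-band radar near 17 GHz (assumed)

def los_displacement_m(phase_pass1_rad, phase_pass2_rad):
    """Repeat-pass interferometry: d = -(lambda / (4*pi)) * delta_phi."""
    delta_phi = np.angle(np.exp(1j * (phase_pass2_rad - phase_pass1_rad)))  # wrap to (-pi, pi]
    return -WAVELENGTH_M / (4.0 * np.pi) * delta_phi

print(los_displacement_m(0.0, 0.5) * 1e3, "mm")  # ~ -0.70 mm along the line of sight
```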

Keywords: interferometry, MIMO RADAR, SAR, tomography

Procedia PDF Downloads 189
1347 A Methodology Based on Image Processing and Deep Learning for Automatic Characterization of Graphene Oxide

Authors: Rafael do Amaral Teodoro, Leandro Augusto da Silva

Abstract:

Originating from graphite, graphene is a two-dimensional (2D) material that promises to revolutionize technology in many different areas, such as energy, telecommunications, civil construction, aviation, textiles, and medicine. This is possible because its structure, formed by carbon bonds, provides desirable optical, thermal, and mechanical characteristics that interest multiple areas of the market. Thus, several research and development centers are studying different manufacturing methods and applications of graphene, efforts that are often hampered by the scarcity of agile and accurate methodologies to characterize the material, that is, to determine its composition, shape, size, and the number of layers and crystals. To address this need, this study proposes a computational methodology that applies deep learning to identify graphene oxide crystals in order to characterize samples by crystal size. To achieve this, a fully convolutional neural network called U-net was trained to segment SEM images of graphene oxide. The segmentation generated by the U-net is fine-tuned with a per-class standard deviation technique, which allows crystals to be distinguished with different labels through an object delimitation algorithm. Next, the position, area, perimeter, and lateral measures of each detected crystal are extracted from the images. This information generates a database with the dimensions of the crystals that compose the samples. Finally, graphs are automatically created showing the frequency distributions of crystal area and perimeter. This methodological process resulted in a high capacity for segmenting graphene oxide crystals, with accuracy and F-score equal to 95% and 94%, respectively, over the test set. Such performance demonstrates a high generalization capacity, since it holds up under significant changes in image acquisition quality. The measurement of non-overlapping crystals presented an average error of 6% across the different measurement metrics, suggesting that the model provides high-quality measurements for non-overlapping segmentations. For overlapping crystals, however, a limitation of the model was identified. To overcome this limitation, it is important to ensure that the samples to be analyzed are properly prepared; this will minimize crystal overlap during SEM image acquisition and guarantee lower measurement error without greater data-handling effort. All in all, the method is a substantial time saver, capable of measuring hundreds of graphene oxide crystals in seconds and saving weeks of manual work.
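A minimal sketch of the measurement stage follows: given a binary segmentation mask (such as a U-net output), label each crystal and extract its position, area, perimeter, and lateral measures with scikit-image. The mask file name is a hypothetical placeholder, and this is an illustration of the general technique rather than the authors' exact pipeline.

```python
import numpy as np
from skimage import io, measure

mask = io.imread("unet_mask.png") > 0      # binary segmentation mask (placeholder file)
labels = measure.label(mask)               # delimit individual crystals

for region in measure.regionprops(labels):
    min_r, min_c, max_r, max_c = region.bbox
    print(f"crystal {region.label}: area={region.area} px, "
          f"perimeter={region.perimeter:.1f} px, "
          f"lateral measures={max_c - min_c}x{max_r - min_r} px, "
          f"centroid={tuple(round(v) for v in region.centroid)}")
```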

Keywords: characterization, graphene oxide, nanomaterials, U-net, deep learning

Procedia PDF Downloads 157
1346 Cyber-Social Networks in Preventing Terrorism: Topological Scope

Authors: Alessandra Rossodivita, Alexei Tikhomirov, Andrey Trufanov, Nikolay Kinash, Olga Berestneva, Svetlana Nikitina, Fabio Casati, Alessandro Visconti, Tommaso Saporito

Abstract:

It is well known that world and national societies are exposed to diverse threats: anthropogenic, technological, and natural. Anthropogenic threats carry the greatest risks and thus attract special interest from researchers across a wide spectrum of disciplines seeking to lower those risks. Some researchers have shown, by means of multilayered complex network models, how media promote the prevention of disease spread. Going further, the scope suggested in this paper includes not only mass-media sources but also personified social bots (socbots) linked according to reflexive theory. This novel scope considers information spread over conscious and unconscious agents while counteracting both natural and man-made threats, i.e., infections and terrorist hazards. Contrary to numerous publications on misinformation disseminated by 'bad' bots within social networks, this study focuses on 'good' bots, which should be mobilized to counter the former. These social bots are deployed in a mixture with real social actors engaged in concerted actions of spreading, receiving, and analyzing information. All the contemporary complex network platforms (multiplexes, interdependent networks, combined stem networks, et al.) are employed to describe and test socbot activities within competing information sharing tools, namely mass-media hubs, social networks, messengers, and e-mail, at all phases of disasters. The scope and concomitant techniques present evidence that embedding such socbots into the information sharing process crucially changes the network topology of actor interactions. The change might improve or impair the robustness of the social network environment: it depends on who controls the socbots and how. It is demonstrated that the topological approach elucidates techno-social processes within the field and outlines a roadmap to a safer world.

Keywords: complex network platform, counterterrorism, information sharing topology, social bots

Procedia PDF Downloads 160
1345 The Cost of Healthcare among Malaysian Community-Dwelling Elderly with Dementia

Authors: Roshanim Koris, Norashidah Mohamed Nor, Sharifah Azizah Haron, Normaz Wana Ismail, Syed Mohamed Aljunid Syed Junid, Amrizal Muhammad Nur, Asrul Akmal Shafie, Suraya Yusuff, Namaitijiang Maimaiti

Abstract:

An ageing population has huge implications for virtually every aspect of Malaysian society. The elderly consume a greater volume of healthcare not because they are older, but because they are sick; chronic comorbidities and deteriorating cognitive ability cause elderly people's health to worsen. This study aims to provide a comprehensive estimate of the direct and indirect costs of health care in a nationally representative sample of community-dwelling elderly with dementia, as well as the determinants of healthcare cost. A survey using multi-stage random sampling techniques recruited a final sample of 2274 elderly people (60 years and above) in the states of Johor, Perak, Selangor, and Kelantan. The Mini Mental State Examination (MMSE) score was used to measure cognitive capability; only the elderly with a score of less than 19 were selected for further analysis and classified as having dementia. Using a two-part model, the findings indicate that household income and education level are variables that strongly influence healthcare cost among the elderly with dementia. The numbers of visits and admissions also significantly affect healthcare expenditure. The comorbidity that most strongly influences healthcare cost is cancer, and seeking treatment in private facilities also significantly affects healthcare cost among the demented elderly. The level of dementia severity is not significant in determining the cost. This study is expected to attract the government's attention and act as a wake-up call to be more concerned about the elderly, who are at high risk of chronic comorbidities and cognitive problems, by providing more appropriate health and social care facilities. Comorbidities are among the factors that can contribute to dementia in the elderly. It is hoped that this study will promote dementia as a priority issue in public health and social care in Malaysia.
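For readers unfamiliar with the two-part model, here is a minimal sketch of the standard form: a logit for incurring any cost at all, then a regression on log(cost) among spenders. The file and variable names are hypothetical placeholders for the survey fields described above, not the study's dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("elderly_survey.csv")  # hypothetical survey file
df["any_cost"] = (df["healthcare_cost"] > 0).astype(int)

# part 1: probability of incurring any healthcare cost (logit)
part1 = smf.logit("any_cost ~ income + education + visits + admissions", df).fit()

# part 2: OLS on log(cost) among those who spent anything
spenders = df[df["healthcare_cost"] > 0].copy()
spenders["log_cost"] = np.log(spenders["healthcare_cost"])
part2 = smf.ols("log_cost ~ income + education + visits + admissions", spenders).fit()

# expected cost = P(cost > 0) * E[cost | cost > 0]  (smearing factor omitted)
expected = part1.predict(df) * np.exp(part2.predict(df))
```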

Keywords: ageing population, dementia, elderly, healthcare cost, healthcare utilization

Procedia PDF Downloads 201
1344 Aerodynamic Design Optimization Technique for a Tube Capsule That Uses an Axial Flow Air Compressor and an Aerostatic Bearing

Authors: Ahmed E. Hodaib, Muhammed A. Hashem

Abstract:

High-speed transportation has become a growing concern. To increase high-speed efficiency and minimize the power consumption of a vehicle, we need to eliminate friction with the ground and minimize the aerodynamic drag acting on the vehicle. Because of the complexity and high power requirements of electromagnetic levitation, we instead make use of the air in front of the capsule, which produces the majority of the drag: it is compressed in two stages, and a proportion of it is injected through small nozzles to form a high-pressure air cushion that levitates the capsule. The tube is partially evacuated so that the air pressure is optimized for maximum compressor effectiveness, optimum tube size, and minimum vacuum pump power consumption. The total relative mass flow rate of the tube air is divided into two fractions. One is by-passed to flow over the capsule body, ensuring that no choked flow takes place. The other fraction is ingested by the compressor, where it is diffused to decrease the Mach number (to around 0.8) so that it is suitable for the compressor inlet. The air is then compressed and intercooled, then split: one fraction is expanded through a tail nozzle to contribute to generating thrust, and the other is compressed again. Bleed from the two compressors is used to maintain a constant pressure in an air tank, which supplies air for levitation. Dividing the total mass flow rate increases the achievable speed (the Kantrowitz limit), and compressing it decreases the blockage of the capsule; as a result, the aerodynamic drag on the capsule decreases. As the tube pressure decreases, the drag and the capsule power requirements decrease, but the vacuum pump consumes more power. That is why design optimization techniques are to be used to obtain optimum values for all the design variables given specific design inputs. Aerodynamic shape optimization, capsule and tube sizing, compressor design, diffuser and nozzle expander design, and the effect of the air bearing on the aerodynamics of the capsule are all considered, and the variation of these variables with capsule velocity and tube air pressure is studied.
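To give a feel for the compressor power at stake, here is a minimal sketch of the ideal-gas power balance for one compression stage, P = ṁ·cp·T_in·(PR^((γ-1)/γ) - 1)/η. The flow rate, inlet temperature, pressure ratio, and efficiency are illustrative assumptions, not the paper's design values.

```python
def compressor_power_w(mdot_kg_s, t_inlet_k, pressure_ratio,
                       cp=1005.0, gamma=1.4, eta_isentropic=0.85):
    """Ideal-gas stage power: P = mdot * cp * T_in * (PR^((g-1)/g) - 1) / eta."""
    ideal_temp_rise = t_inlet_k * (pressure_ratio ** ((gamma - 1.0) / gamma) - 1.0)
    return mdot_kg_s * cp * ideal_temp_rise / eta_isentropic

# e.g. 5 kg/s ingested from a partially evacuated tube at 290 K, stage PR = 10
print(compressor_power_w(5.0, 290.0, 10.0) / 1e3, "kW")  # ~1600 kW
```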

Keywords: tube-capsule, hyperloop, aerodynamic design optimization, air compressor, air bearing

Procedia PDF Downloads 328
1343 Ancient Egyptian Industry Technology of Canopic Jars, Analytical Study and Conservation Processes of Limestone Canopic Jar

Authors: Abd El Rahman Mohamed

Abstract:

Canopic jars, made by the ancient Egyptians from different materials, were used to preserve the viscera during the mummification process. The canopic jar studied here dates back to the Late Period (712-332 BC) and is held in the Grand Egyptian Museum (GEM), Giza, Egypt. The jar was carved from limestone and covered with a monkey-head lid whose eyes and ears are painted with red pigment outlined in black. The jar contains textile bandages holding mummy viscera with resin, as well as blocks of black resin. Canopic jars were made using the sculpting tools of the ancient Egyptians, such as copper chisels and hammers, while the mass of the jar was hollowed out from the inside using a tool invented by the ancient Egyptians called the emptying drill. This study also aims to use analytical techniques to identify the components of the jar, its contents, pigments, and previous restoration materials, and to understand its deterioration. Visual assessment, isolation and identification of fungi, optical microscopy (OM), scanning electron microscopy (SEM), X-ray fluorescence spectroscopy (XRF), X-ray diffraction (XRD), and Fourier transform infrared spectroscopy (FTIR) were used in our study. The jar showed different signs of deterioration, such as dust, dirt, stains, scratches, classifications, missing parts, and breaks; previous conservation interventions include an iron wire, completion mortar, and an adhesive used for assembly. The results revealed that the jar was carved from dolomitic limestone and that the materials include red hematite pigment, mastic resin, and linen textile bandages. The previous adhesive was animal glue, and gypsum had been used for the previous completion. The dominant microbial infections on the jar were the fungi Penicillium waksmanii and Nigrospora sphaerica, Actinomycetes sp., and spore-forming Gram-positive bacilli. Conservation procedures were applied with high accuracy to conserve the jar, including mechanical and chemical cleaning, re-assembly, completion, and consolidation.

Keywords: canopic jar, consolidation, mummification, resin, viscera

Procedia PDF Downloads 66
1342 Experimental Investigation of the Impact of Biosurfactants on Residual-Oil Recovery

Authors: S. V. Ukwungwu, A. J. Abbas, G. G. Nasr

Abstract:

The increasing prices of natural gas and oil, with the attendant increase in energy demand on world markets in recent years, have stimulated interest in recovering residual oil saturation across the globe. To help meet energy security needs, efforts have been made to develop new technologies for enhancing the recovery of oil and gas, utilizing techniques like CO2 flooding, water injection, hydraulic fracturing, and surfactant flooding. Surfactant flooding optimizes production but poses a risk to the environment due to the toxic nature of synthetic surfactants. Building on proven records of using various bacteria to produce biosurfactants for enhancing oil recovery, this research combines biosurfactants to achieve EOR by lowering interfacial tension and contact angle. In this study, three biosurfactants were produced from three Bacillus species from freeze-dried cultures using sucrose 3% (w/v) as the carbon source. Two of the produced biosurfactants were screened with TEMCO pendant drop image analysis for reduction in IFT and contact angle. Interfacial tension was greatly reduced, from 56.95 mN/m to 1.41 mN/m, when biosurfactants in the cell-free culture of Bacillus licheniformis were used, compared to 4.83 mN/m for the cell-free culture of Bacillus subtilis. The cell-free culture of Bacillus licheniformis also shifted wettability toward water-wet, with the contact angle decreasing from 130.75° to 65.17°. The influence of microbial treatment on crushed rock samples was also observed by qualitative wettability experiments: samples treated with biosurfactants remained in the aqueous phase, indicating a water-wet system. These results indicate that biosurfactants can effectively change the wetting conditions on diverse surfaces, providing a desirable condition for efficient oil transport and thereby serving as a mechanism for EOR. The environmentally friendly profile of biosurfactants gives their industrial application important advantages over chemically synthesized surfactants: diverse possible structures, low toxicity, and biodegradability.

Keywords: bacillus, biosurfactant, enhanced oil recovery, residual oil, wettability

Procedia PDF Downloads 277
1341 Chemical Characterization, Crystallography and Acute Toxicity Evaluation of Two Boronic-Carbohydrate Adducts

Authors: Héctor González Espinosa, Ricardo Ivan Cordova Chávez, Alejandra Contreras Ramos, Itzia Irene Padilla Martínez, José Guadalupe Trujillo Ferrara, Marvin Antonio Soriano Ursúa

Abstract:

Boronic acids are able to form diester bonds with carbohydrates through their hydroxyl groups; in nature, there are organoborates with these characteristics, such as calcium fructoborate, formed by the union of two fructose molecules and a boron atom and synthesized by plants. In addition, it has been observed that in animal cells only compounds with cis-diol functional groups are capable of linking to boric or boronic acids. The formation of these organoboron compounds can alter the physical and chemical properties of the precursors, including their acute toxicity. In this project, two carbohydrate-derived boron-containing compounds, prepared from D-fructose or D-arabinose and phenylboronic acid, are analyzed by spectroscopic techniques such as Raman, Fourier transform infrared (FT-IR), and nuclear magnetic resonance (NMR) spectroscopy, as well as X-ray diffraction crystallography, to describe their chemical characteristics. An acute toxicity test was also performed to determine their LD50 using Lorke's method. The formation of the adducts via diester bonds with the β-D-pyranose forms of fructose and arabinose was confirmed by multiple spectra. The most prominent findings were the presence of signals corresponding to the formation of new bonds, such as B-O stretching, and the absence of signals from functional groups such as the hydroxyls present in the reagents used to synthesize the adducts. The NMR spectra yielded information about the stereoselectivity of the synthesis reaction, observed through the interactions of the protons and their vicinal atoms at the anomeric and second-position carbons, and also showed the absence of a racemic mixture, as only one signal appeared in the anomeric carbon range of the 13C NMR spectra of both adducts. The acute toxicity tests by Lorke's method showed that the LD50 value for both compounds is 1265 mg/kg. These results allow us to propose these adducts as highly safe agents for further biological evaluation for medical purposes.

Keywords: acute toxicity, adduct, boron, carbohydrate, diester bond

Procedia PDF Downloads 58
1340 The Relationships between Carbon Dioxide (CO2) Emissions, Energy Consumption, and GDP for Turkey: Time Series Analysis, 1980-2010

Authors: Jinhoa Lee

Abstract:

The relationships between environmental quality, energy use, and economic output have attracted growing attention over the past decades among researchers and policy makers. Focusing on the empirical aspects of the role of CO2 emissions and energy use in affecting economic output, this paper is an effort to fill the gap with a comprehensive country-level case study using modern econometric techniques. To achieve this goal, this country-specific study examines the short-run and long-run relationships among energy consumption (using disaggregated energy sources: crude oil, coal, natural gas, electricity), carbon dioxide (CO2) emissions, and gross domestic product (GDP) for Turkey, using time series analysis over the period 1980-2010. To investigate the relationships between the variables, this paper employs the Phillips-Perron (PP) test for stationarity, the Johansen maximum likelihood method for cointegration, and a vector error correction model (VECM) for both short- and long-run causality among the research variables. All the variables in this study show very strong significant effects on GDP in the long term. The long-run equilibrium in the VECM suggests negative long-run causalities from consumption of petroleum products and the direct combustion of crude oil, coal, and natural gas to GDP. Conversely, positive impacts of CO2 emissions and electricity consumption on GDP are found to be significant in Turkey during the period. In the short run, there is a bidirectional relationship between electricity consumption and natural gas consumption: a positive causality runs from electricity consumption to natural gas consumption, while a negative causality runs from natural gas consumption to electricity consumption. Moreover, GDP has a negative effect on electricity consumption in Turkey in the short run. Overall, the results support the argument that there are relationships among environmental quality, energy use, and economic output, but that the associations differ by energy source in the case of Turkey over the period 1980-2010.
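For orientation, here is a minimal sketch of the econometric pipeline named above, using the statsmodels package for Johansen cointegration and the VECM, and the arch package for the Phillips-Perron test. The file and column names are hypothetical placeholders for the 1980-2010 annual series.

```python
import pandas as pd
from arch.unitroot import PhillipsPerron
from statsmodels.tsa.vector_ar.vecm import VECM, coint_johansen

cols = ["gdp", "co2", "oil", "coal", "gas", "electricity"]
data = pd.read_csv("turkey_1980_2010.csv", usecols=cols)  # hypothetical file

print(PhillipsPerron(data["gdp"]))   # PP unit-root test, run per series

johansen = coint_johansen(data, det_order=0, k_ar_diff=1)
print(johansen.lr1)                  # trace statistics -> cointegration rank

vecm = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="ci").fit()
print(vecm.alpha)                    # error-correction (long-run adjustment) terms
```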

Keywords: CO2 emissions, energy consumption, GDP, Turkey, time series analysis

Procedia PDF Downloads 506
1339 An ANOVA-based Sequential Forward Channel Selection Framework for Brain-Computer Interface Application based on EEG Signals Driven by Motor Imagery

Authors: Forouzan Salehi Fergeni

Abstract:

A brain-computer interface (BCI) system converts a person's movement intentions into commands for action using brain signals such as the electroencephalogram (EEG). When left- or right-hand movements are imagined, different patterns of brain activity appear, which can be employed as BCI signals for control. To improve BCI systems, effective and accurate techniques for increasing the classification precision of motor imagery (MI) based on EEG are greatly needed. Subject dependency and non-stationarity are two characteristic features of EEG signals, so EEG signals must be effectively processed before being used in BCI applications. In the present study, after applying an 8-30 Hz band-pass filter, a common average reference (CAR) spatial filter is applied for denoising, and then an analysis-of-variance method is used to select the more appropriate and informative channels from a large set of candidates. After ordering the channels by their efficiency, sequential forward channel selection is employed to choose just a few reliable ones. Features from the time and wavelet domains are extracted and shortlisted with the help of a statistical technique, namely the t-test. Finally, the selected features are classified with different machine learning and neural network classifiers, namely k-nearest neighbor, probabilistic neural network, support vector machine (SVM), extreme learning machine, decision tree, multi-layer perceptron, and linear discriminant analysis, in order to compare their performance in this application. Using a ten-fold cross-validation approach, tests are performed on a motor imagery dataset from BCI Competition III. The results demonstrate that the SVM classifier achieved the highest classification accuracy, 97%, compared to the other approaches. The overall findings confirm that the suggested framework is reliable and computationally efficient for the construction of BCI systems and surpasses existing methods.
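A minimal sketch of the pipeline follows: 8-30 Hz band-pass filtering, CAR spatial filtering, ANOVA-based channel ranking, and greedy forward selection scored with a 10-fold cross-validated SVM. The data shapes and random placeholders are illustrative assumptions standing in for the BCI Competition III recordings.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.stats import f_oneway
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X = np.random.randn(200, 22, 750)   # trials x channels x samples (placeholder data)
y = np.random.randint(0, 2, 200)    # left vs. right hand imagery labels (placeholder)

b, a = butter(4, [8, 30], btype="bandpass", fs=250)
X = filtfilt(b, a, X, axis=-1)               # 8-30 Hz band-pass
X -= X.mean(axis=1, keepdims=True)           # CAR: subtract the cross-channel mean

# rank channels by the ANOVA F-statistic of their band power between classes
power = np.log(np.var(X, axis=-1))
ranks = np.argsort([-f_oneway(power[y == 0, c], power[y == 1, c])[0]
                    for c in range(X.shape[1])])

# sequential forward selection over the ranked channels
selected, best = [], 0.0
for c in ranks:
    score = cross_val_score(SVC(), power[:, selected + [c]], y, cv=10).mean()
    if score > best:
        selected, best = selected + [c], score
print(f"selected channels: {selected}, CV accuracy: {best:.2f}")
```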

Keywords: brain-computer interface, channel selection, motor imagery, support-vector-machine

Procedia PDF Downloads 43
1338 Biological Hazards and Laboratory-Inflicted Infections in Sub-Saharan Africa

Authors: Godfrey Muiya Mukala

Abstract:

This research looks at an array of fields in Sub-Saharan Africa, comprising agriculture, food enterprises, medicine, genetically modified organisms, microbiology, and nanotechnology, that can benefit from biotechnological research and development. Research into dangerous organisms, mainly bacterial germs, rickettsia, fungi, parasites, and genetically engineered organisms, has raised serious questions about the biological dangers they pose to human beings and the environment because of their uncertainties. In addition, the recurrence of previously managed diseases and the emergence of new diseases are connected to biosafety challenges, especially in rural settings in low- and middle-income countries. Notably, biotechnology laboratories are required to adopt biosafety measures to protect their workforce, community, environment, and ecosystem from unforeseen materials and organisms. Sensitization and educational frameworks for laboratory workers are essential for acquiring solid knowledge of harmful biological agents, together with the human pathogenicity, susceptibility, and epidemiology of the biological material used in research and development. This article reviews and analyzes research intended to identify the proper implementation of universally accepted practices in laboratory safety and biological hazards. It identifies ideal microbiological methods, adequate containment equipment, sufficient resources, safety barriers, and specific training and education of the laboratory workforce as the means to reduce and contain biological hazards. Knowledge of standardized microbiological techniques and processes, together with the use of containment facilities, protective barriers, and equipment, goes a long way toward preventing occupational infections, and risk reduction and prevention can be attained by training, education, and research on biohazards, pathogenicity, and the epidemiology of the relevant microorganisms. In this way, medical professionals in rural settings can apply knowledge acquired in the past to anticipate possible concerns in the future.

Keywords: Sub-Saharan Africa, biotechnology, laboratory, infections, health

Procedia PDF Downloads 74
1337 The Teacher’s Role in Generating and Maintaining the Motivation of Adult Learners of English: A Mixed Methods Study in Hungarian Corporate Contexts

Authors: Csaba Kalman

Abstract:

In spite of the existence of numerous second language (L2) motivation theories, the teacher's role in motivating learners has remained an under-researched niche to this day. If we narrow our focus to the teacher's role in motivating adult learners of English in an English as a Foreign Language (EFL) context in corporate environments, empirical research is practically non-existent. This study fills the above research niche by exploring the most motivating aspects of the teacher's personality, behaviour, and teaching practices that affect adult learners' L2 motivation in corporate contexts in Hungary. The study was conducted across a wide range of industries in 18 organisations that employ over 250 people in Hungary. In order to triangulate the research, 21 human resources managers, 18 language teachers, and 466 adult learners of English were involved in the investigation by participating in interview studies and quantitative questionnaire studies that measured ten scales related to the teacher's role, as well as two criterion measure scales of intrinsic and extrinsic motivation. The qualitative data were analysed using a template organising style, while descriptive and inferential statistics, as well as multivariate statistical techniques such as correlation and regression analyses, were used for analysing the quantitative data. The results showed that certain aspects of the teacher's personality (thoroughness, enthusiasm, credibility, and flexibility), as well as preparedness, incorporating English for Specific Purposes (ESP) in the syllabus, and focusing on the present, proved to be the most salient aspects of the teacher's motivating influence. The regression analyses conducted with the criterion measure scales revealed that 22% of the variance in learners' intrinsic motivation could be explained by the teacher's preparedness and appearance, and 23% of the variance in learners' extrinsic motivation could be attributed to the teacher's personal branding and incorporation of ESP in the syllabus. The findings confirm the pivotal role teachers play in motivating L2 learners, independent of the context they teach in, and at the same time call for further research so that we can better conceptualise the motivating influence of L2 teachers.

Keywords: adult learners, corporate contexts, motivation, teacher’s role

Procedia PDF Downloads 100
1336 Polymer Flooding: Chemical Enhanced Oil Recovery Technique

Authors: Abhinav Bajpayee, Shubham Damke, Rupal Ranjan, Neha Bharti

Abstract:

Polymer flooding is a dramatic improvement on water flooding and is quickly becoming one of the most widely applied EOR technologies for improving oil recovery. With increasing energy demand and depleting oil reserves, EOR techniques are becoming increasingly significant. Since most oil fields have already begun water flooding, this chemical EOR technique can be implemented with fewer resources than any other EOR technique. The polymer increases the viscosity of the injected water, thus reducing water mobility and achieving a more stable displacement, as field experience has confirmed. While the injection of a polymer solution improves reservoir conformance, the beneficial effect ceases as soon as one attempts to push the polymer solution with water. Polymer flooding is the most commonly applied chemical technique because of its higher success rate. In polymer flooding, a water-soluble polymer such as polyacrylamide is added to the water in the water flood. This increases the viscosity of the water toward that of a gel, greatly improving the efficiency of the water flood. It also improves vertical and areal sweep efficiency as a consequence of improving the water/oil mobility ratio. Polymer flooding plays an important role in oil exploitation, but around 60 million tons of wastewater are produced per day alongside oil extraction. The treatment and reuse of this wastewater therefore become significant and can be carried out by electrodialysis technology; this treatment not only decreases environmental pollution but also achieves closed-circuit reuse of polymer flooding wastewater during crude oil extraction. There are three potential ways in which a polymer flood can make the oil recovery process more efficient: (1) through the effects of polymers on fractional flow, (2) by decreasing the water/oil mobility ratio, and (3) by diverting injected water from zones that have already been swept. It has also been suggested that the viscoelastic behavior of polymers can improve displacement efficiency. Polymer flooding may also have an economic impact, because less water is injected and produced compared with water flooding. Future work should focus on developing polymers that can be used in reservoirs of high temperature and high salinity, applying polymer flooding in different reservoir conditions, and combining polymer with other processes (e.g., surfactant/polymer flooding).
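The mobility-ratio argument in points (1)-(2) reduces to simple arithmetic, sketched below with illustrative viscosities and relative permeabilities (assumed values, not field data): thickening the water lowers M well below 1, the threshold for stable displacement.

```python
def mobility_ratio(k_rw, mu_w_cp, k_ro, mu_o_cp):
    """Water/oil mobility ratio M = (k_rw / mu_w) / (k_ro / mu_o); M <= 1 is stable."""
    return (k_rw / mu_w_cp) / (k_ro / mu_o_cp)

print(mobility_ratio(0.3, 0.5, 0.8, 10.0))   # plain water flood: M = 7.5 (unfavorable)
print(mobility_ratio(0.3, 20.0, 0.8, 10.0))  # polymer-thickened water: M ~ 0.19 (stable)
```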

Keywords: fractional flow, polymer, viscosity, water/oil mobility ratio

Procedia PDF Downloads 395
1335 Learning Fashion Construction and Manufacturing Methods from the Past: Cultural History and Genealogy at the Middle Tennessee State University Historic Clothing Collection

Authors: Teresa B. King

Abstract:

In the millennial age, with more students desiring a fashion major yet fewer having sewing and manufacturing knowledge, the demand on academicians to educate adequately has increased. While fashion museums have a prominent place in historical preservation, working collections of handmade or mass-manufactured apparel for teaching are lacking in most universities in the United States, especially in the Southern region. Created in 1988, Middle Tennessee State University's historic clothing collection provides opportunities to study apparel construction methods throughout history, to compare and apply them to today's construction and manufacturing methods, and to learn the cyclical nature of historic styles and their importance to current and upcoming fashion. In 2019, a class exercise experiment was implemented in which students researched their family genealogy using Ancestry.com, identified the oldest visual media (photographs, etc.) available, and analyzed the garment represented in that media. Each student then located a comparable garment in the historic collection and evaluated the construction methods of the ancestor's time period. A class 'fashion' genealogy tree was created and mounted for public viewing and education. Results of this exercise indicated that student learning increased due to the personal, familial connection, as it triggered more interest in historical garments related to the student's own culture. Students more accurately identified garments by historical time period, fiber content, fabric, and construction methods, thus increasing learning and retention. Students also developed increased recognition of custom construction methods versus current mass-manufacturing techniques, which impact today's fashion industry. This longitudinal effort will continue as the historic collection grows and as students continue to utilize it.

Keywords: ancestry, clothing history, fashion history, genealogy, historic fashion museum collection

Procedia PDF Downloads 132
1334 An Assessment of Impact of Financial Statement Fraud on Profit Performance of Manufacturing Firms in Nigeria: A Study of Food and Beverage Firms in Nigeria

Authors: Wale Agbaje

Abstract:

The aim of this research study is to assess the impact of financial statement fraud on the profitability of selected Nigerian manufacturing firms over the period 2002-2016. The specific objectives were to ascertain the effect of incorrect asset valuation on return on assets (ROA) and the relationship between improper expense recognition and ROA. To achieve these objectives, a descriptive research design was used, and secondary data were collected from the financial reports of the selected firms and the website of the Securities and Exchange Commission. Analysis of covariance (ANCOVA) was applied, and the STATA econometric software was used in the analysis of the data. The Altman model and the operating expense ratio were adopted in the analysis of the financial reports to create a dummy variable for the selected firms from 2002-2016, and the validity of the parameters was ascertained using statistical techniques such as the t-test, the coefficient of determination (R2), F-statistics, and the Wald chi-square. Two hypotheses were formulated and tested using t-statistics at the 5% level of significance. The findings revealed a significant relationship between financial statement fraud and profitability in the Nigerian manufacturing industry: incorrect asset valuation and improper expense recognition both have a significant positive relationship with return on assets (ROA), which serves as a proxy for profitability. The implication is that distortion of asset valuation and expense recognition leads to decreasing profit in the long run in the manufacturing industry. The study therefore recommends that pragmatic policy options be taken in the manufacturing industry to effectively manage incorrect asset valuation and improper expense recognition in order to enhance performance, and that controls against financial statement fraud be adequately built into the internal control systems of manufacturing firms for the effective running of the manufacturing industry in Nigeria.
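For reference, the sketch below computes the original Altman (1968) Z-score named in the methodology, Z = 1.2·X1 + 1.4·X2 + 3.3·X3 + 0.6·X4 + 1.0·X5, with the conventional distress/grey/safe cut-offs. The input ratios are illustrative values, not figures from the sampled firms.

```python
def altman_z(wc_ta, re_ta, ebit_ta, mve_tl, sales_ta):
    """Original Altman (1968) Z-score:
    X1 = working capital / total assets, X2 = retained earnings / total assets,
    X3 = EBIT / total assets, X4 = market value of equity / total liabilities,
    X5 = sales / total assets."""
    return 1.2 * wc_ta + 1.4 * re_ta + 3.3 * ebit_ta + 0.6 * mve_tl + 1.0 * sales_ta

z = altman_z(0.12, 0.20, 0.09, 1.1, 1.4)
zone = "distress" if z < 1.81 else "grey" if z < 2.99 else "safe"
print(f"Z = {z:.2f} ({zone} zone)")
```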

Keywords: Altman's model, improper expense recognition, incorrect asset valuation, return on assets

Procedia PDF Downloads 158
1333 Beyond the Tragedy of Absence: Vizenor's Comedy of Native Presence

Authors: Mahdi Sepehrmanesh

Abstract:

This essay explores Gerald Vizenor's innovative concepts of the tragedy of absence and the comedy of presence as frameworks for understanding and challenging dominant narratives about Native American identity and history. Vizenor's work critiques the notion of irrevocable cultural loss and rigid definitions of Indigenous identity based on blood quantum and stereotypical practices. Through subversive humor, trickster figures, and storytelling, Vizenor asserts the active presence and continuance of Native peoples, advocating for a dynamic, self-determined understanding of Native identity. The essay examines Vizenor's use of postmodern techniques, including his engagement with simulation and hyperreality, to disrupt colonial discourses and create new spaces for Indigenous expression. It explores the concept of "crossblood" identities as a means of resisting essentialist notions of Native authenticity and embracing the complexities of contemporary Indigenous experiences. Vizenor's ideas of survivance and transmotion are analyzed as strategies for cultural resilience and adaptation in the face of ongoing colonial pressures. The interplay between absence and presence in Vizenor's work is discussed, particularly through the lens of shadow survivance and the power of storytelling. The essay also delves into Vizenor's critique of terminal creed and his promotion of natural reason as an alternative epistemology to Western rationalism. While acknowledging the significant influence of Vizenor's work on Native American literature and theory, the essay also addresses critiques of his approach, including concerns about the accessibility of his writing and its political effectiveness. Despite these debates, the essay argues that Vizenor's concepts offer a powerful vision of Indigenous futurity that is rooted in tradition yet open to change, inspiring hope and agency in the face of oppression. By examining Vizenor's multifaceted approach to Native American identity and presence, this essay contributes to ongoing discussions about Indigenous representation, cultural continuity, and resistance to colonial narratives in literature and beyond.

Keywords: gerald vizenor, identity, native american literature, survivance, trickster discourse

Procedia PDF Downloads 28
1332 In situ Investigation of PbI₂ Precursor Film Formation and Its Subsequent Conversion to Mixed Cation Perovskite

Authors: Dounya Barrit, Ming-Chun Tang, Hoang Dang, Kai Wang, Detlef-M. Smilgies, Aram Amassian

Abstract:

Several deposition methods have been developed for perovskite film preparation. The one-step spin-coating process has emerged as the more popular option thanks to its ability to produce films of different compositions, including mixed cation and mixed halide perovskites, which can stabilize the perovskite phase and produce phases with desired band gaps. The two-step method, however, is not understood in great detail. There is a significant need and opportunity to adapt the two-step process to mixed cation and mixed halide perovskites, but this requires a deeper understanding of the two-step conversion process, for instance when using different cations and mixtures thereof, to produce high-quality perovskite films with uniform composition. In this work, we demonstrate using in situ investigations that the conversion of PbI₂ to perovskite is largely dictated by the solvated state of the PbI₂ precursor film. Using time-resolved grazing-incidence wide-angle X-ray scattering (GIWAXS) measurements during spin coating of PbI₂ from a DMF (dimethylformamide) solution, we show the film formation to be a sol-gel process involving three PbI₂-DMF solvate complexes: a disordered precursor (P₀) and ordered precursors (P₁, P₂), prior to PbI₂ formation at room temperature after 5 minutes. The ordered solvates are highly metastable and eventually disappear, but we show that performing the conversion from P₀, P₁, P₂ or PbI₂ can lead to very different conversion behaviors and outcomes. We compare conversion behaviors using MAI (methylammonium iodide), FAI (formamidinium iodide), and mixtures of these cations, and show that conversion can occur spontaneously and quite rapidly at room temperature without requiring further thermal annealing. We confirm this by demonstrating improvements in the morphology and microstructure of the resulting perovskite films, using techniques such as in situ quartz crystal microbalance with dissipation monitoring, SEM, and XRD.
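
For context on how solvate and PbI₂ phases are identified in such measurements, GIWAXS peak positions are conventionally indexed through the scattering vector and its corresponding lattice spacing; the standard relations (general to the technique, not specific to this study) are:

```latex
% Standard GIWAXS relations: scattering vector q from the Bragg angle
% \theta and X-ray wavelength \lambda, and the lattice d-spacing from q.
q = \frac{4\pi \sin\theta}{\lambda}, \qquad d = \frac{2\pi}{q}
```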

Keywords: in situ GIWAXS, lead iodide, mixed cation, perovskite solar cell, sol-gel process, solvate phase

Procedia PDF Downloads 146
1331 Role of von Willebrand Factor Antigen as Non-Invasive Biomarker for the Prediction of Portal Hypertensive Gastropathy in Patients with Liver Cirrhosis

Authors: Mohamed El Horri, Amine Mouden, Reda Messaoudi, Mohamed Chekkal, Driss Benlaldj, Malika Baghdadi, Lahcene Benmahdi, Fatima Seghier

Abstract:

Background/aim: Recently, the von Willebrand factor antigen (vWF-Ag) has been identified as a new marker of portal hypertension (PH) and its complications, and a few studies have examined its role in the prediction of esophageal varices. vWF-Ag is considered a non-invasive approach that spares patients the burden, cost, drawbacks, and unpleasant, repeated endoscopic examinations. In our study, we aimed to evaluate the ability of this marker to predict another complication of portal hypertension, portal hypertensive gastropathy (PHG), which is likewise diagnosed endoscopically. Patients and methods: This prospective study included 124 cirrhotic patients with no history of bleeding who underwent screening endoscopy for PH-related complications such as esophageal varices (EVs) and PHG. Routine biological tests were performed, as well as vWF-Ag testing by both ELFA and immunoturbidimetric techniques. The diagnostic performance of the marker was assessed using sensitivity, specificity, positive predictive value, negative predictive value, accuracy, and receiver operating characteristic curves. Results: 124 patients were enrolled in this study, with a mean age of 58 years [CI: 55–60 years] and a sex ratio of 1.17. Viral etiologies were found in 50% of patients. Screening endoscopy revealed the presence of PHG in 20.2% of cases, while EVs were found in 83.1% of cases. vWF-Ag levels were significantly increased in patients with PHG compared to those without: 441% [CI: 375–506] versus 279% [CI: 253–304], respectively (p < 0.0001). Using the area under the receiver operating characteristic curve (AUC), vWF-Ag was a good predictor of the presence of PHG: at a value higher than 320% and with an AUC of 0.824, vWF-Ag had 84% sensitivity, 74% specificity, a 44.7% positive predictive value, a 94.8% negative predictive value, and 75.8% diagnostic accuracy. Conclusion: vWF-Ag is a good non-invasive, low-cost marker for excluding the presence of PHG in patients with liver cirrhosis. Using this marker as part of a selective screening strategy might reduce the need for endoscopic screening and the cost of managing these patients.
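
As an illustration of how the reported indices follow from a 2x2 classification at the 320% cut-off, a minimal sketch follows; the patient-level values below are hypothetical, not the study's data.

```python
# Minimal sketch: diagnostic indices (sensitivity, specificity, PPV, NPV,
# accuracy) from a 2x2 confusion table at a chosen vWF-Ag cut-off.
# The arrays below are hypothetical examples, not the study's data.

def diagnostic_indices(values, has_phg, cutoff=320.0):
    tp = sum(1 for v, d in zip(values, has_phg) if v > cutoff and d)
    fp = sum(1 for v, d in zip(values, has_phg) if v > cutoff and not d)
    fn = sum(1 for v, d in zip(values, has_phg) if v <= cutoff and d)
    tn = sum(1 for v, d in zip(values, has_phg) if v <= cutoff and not d)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical vWF-Ag levels (%) and PHG status (True = PHG on endoscopy)
vwf = [450, 380, 290, 510, 260, 340, 300, 470, 250, 330]
phg = [True, True, False, True, False, False, False, True, False, True]
print(diagnostic_indices(vwf, phg))
```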

Keywords: von willebrand factor, portal hypertensive gastropathy, prediction, liver cirrhosis

Procedia PDF Downloads 198
1330 Numerical and Experimental Investigation of Air Distribution System of Larder Type Refrigerator

Authors: Funda Erdem Şahnali, Ş. Özgür Atayılmaz, Tolga N. Aynur

Abstract:

Almost all domestic refrigerators operate on the principle of the vapor compression refrigeration cycle, and the removal of heat from the refrigerator cabinet is done via one of two methods: natural convection or forced convection. In this study, airflow and temperature distributions inside a 375L no-frost type larder cabinet, in which cooling is provided by forced convection, are evaluated both experimentally and numerically. Airflow rate, compressor capacity, and temperature distribution in the cooling chamber are known to be some of the most important factors that affect the cooling performance and energy consumption of a refrigerator. The objective of this study is to evaluate the original temperature distribution in the larder cabinet and to investigate system optimizations that could provide a more uniform temperature distribution throughout the refrigerator domain. The flow visualization and airflow velocity measurements inside the original refrigerator are performed via Stereoscopic Particle Image Velocimetry (SPIV). In addition, airflow and temperature distributions are investigated numerically with Ansys Fluent. In order to study the heat transfer inside the aforementioned refrigerator, forced convection theory is applied to a closed rectangular cavity representing the refrigerating compartment. The cavity volume is discretized into finite volume elements and solved computationally with the appropriate momentum and energy equations (the Navier-Stokes equations). The 3D model is analyzed as transient, with the k-ε turbulence model and SIMPLE pressure-velocity coupling for the turbulent flow. The results obtained with the 3D numerical simulations are in quite good agreement with the experimental airflow measurements using the SPIV technique. After Computational Fluid Dynamics (CFD) analysis of the baseline case, the effects of three parameters (compressor capacity, fan rotational speed, and type of shelf, glass or wire) on energy consumption, pull-down time, and temperature distribution in the cabinet are studied. For each case, energy consumption is calculated based on experimental results. After the analysis, the main parameters affecting temperature distribution inside the cabinet and energy consumption are determined from the CFD simulations, and the simulation results are supplied to a Design of Experiments (DOE) study as input data for optimization. The best configuration, with minimum energy consumption and minimum temperature difference between the shelves inside the cabinet, is determined.
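
As an illustration of the DOE step, a minimal sketch follows that enumerates a full-factorial design over the three studied parameters and selects the minimum-energy configuration meeting a uniformity target; the level values and the response model are hypothetical stand-ins for the CFD-derived responses used in the actual optimization.

```python
# Illustrative full-factorial DOE over the three parameters the study
# varies (compressor capacity, fan speed, shelf type). Levels and the
# response model are hypothetical, standing in for CFD run results.

from itertools import product

compressor_capacity = [60, 80, 100]   # W, hypothetical levels
fan_speed = [1200, 1600, 2000]        # rpm, hypothetical levels
shelf_type = ["glass", "wire"]

def simulate(capacity, speed, shelf):
    """Hypothetical response model returning (energy use, max shelf-to-
    shelf temperature difference); in the study these come from CFD."""
    energy = 0.8 * capacity + 0.02 * speed + (5 if shelf == "glass" else 0)
    delta_t = 4.0 - 0.001 * speed + (0.5 if shelf == "glass" else 0.0)
    return energy, max(delta_t, 0.0)

# Enumerate the 3 x 3 x 2 design and pick the configuration that meets
# a uniformity target (delta_t <= 2.5 K) at minimum energy.
best = None
for cap, rpm, shelf in product(compressor_capacity, fan_speed, shelf_type):
    energy, delta_t = simulate(cap, rpm, shelf)
    if delta_t <= 2.5 and (best is None or energy < best[0]):
        best = (energy, delta_t, cap, rpm, shelf)
print(best)  # -> (80.0, 2.4, 60, 1600, 'wire') for these toy responses
```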

Keywords: air distribution, CFD, DOE, energy consumption, experimental, larder cabinet, refrigeration, uniform temperature

Procedia PDF Downloads 103
1329 Advancing Urban Sustainability through the Integration of Planning Evaluation Methodologies

Authors: Natalie Rosales

Abstract:

Based on an ethical vision that recognizes the vital role of human rights, shared values, social responsibility and justice, and environmental ethics, planning may be interpreted as a process aimed at reducing inequalities and overcoming marginality. Seen from this sustainability perspective, planning evaluation must utilize critical-evaluative and narrative-receptive models which assist different stakeholders in their understanding of the urban fabric while triggering reflexive processes that catalyze wider transformations. In this paper, this approach serves as a guide for the evaluation of Mexico’s urban planning systems, and a framework is postulated to better integrate sustainability notions into planning evaluation. The paper is introduced by an overview of the current debate on evaluation in urban planning. The state of the art presented includes the different perspectives and paradigms of planning evaluation and their fundamentals and scope, which have focused on three main aspects: goal attainment (did planning instruments do what they were supposed to?); performance and effectiveness of planning (retrospective analysis of the planning process and policy assessment); and the effects of the process, considering decision problems and contexts rather than techniques and methods. It also covers methodological innovations and improvements in planning evaluation. This comprehensive literature review provides the background for the authors’ proposal of a set of general principles for evaluating urban planning, grounded in a sustainability perspective. In the second part, a description of the shortcomings of the approaches used to evaluate urban planning in Mexico sets the basis for highlighting the need for regulatory and instrumental, but also explorative and collaborative, approaches, in response to the inability of these isolated methods to capture planning complexity and to strengthen the usefulness of the evaluation process in improving the coherence and internal consistency of planning practice itself. In the third section, the general proposal for evaluating planning is described in its main aspects. It presents an innovative methodology for a more holistic and integrated assessment which considers the interdependence between values, levels, roles and methods, and incorporates different stakeholders in the evaluation process. By doing so, this piece of work sheds light on how to advance urban sustainability through the integration of evaluation methodologies into planning.

Keywords: urban planning, evaluation methodologies, urban sustainability, innovative approaches

Procedia PDF Downloads 467
1328 The Effect of a Probiotic Diet on htauE14 in a Rodent Model of Alzheimer’s Disease

Authors: C. Flynn, Q. Yuan, C. Reinhardt

Abstract:

Alzheimer’s Disease (AD) is a progressive neurodegenerative disorder affecting broad areas of the cerebral cortex and hippocampus. More than 95% of AD cases are sporadic, with both genetic and environmental risk factors playing a role. The main pathological features of AD include the widespread deposition of amyloid-beta and neurofibrillary tau tangles in the brain. The earliest brain pathology related to AD, as characterized by Braak, is hyperphosphorylated soluble tau in the noradrenergic locus coeruleus (LC) neurons. However, the cause of this pathology and the ultimate progression of AD are not understood. Increasing research points to a connection between the gut microbiota and the brain, and mounting evidence has shown that there is a bidirectional interaction between the two, known as the gut-brain axis. This axis can allow for bidirectional movement of neuroinflammatory cytokines and pathogenic misfolded proteins, as seen in AD. Prebiotics and probiotics have been shown to have a beneficial effect on gut health and can strengthen the gut barrier as well as the blood-brain barrier, preventing the spread of these pathogens across the gut-brain axis. Our laboratory has recently established a pretangle tau rat model in which we selectively express pseudo-phosphorylated human tau (htauE14) in the LC neurons of TH-Cre rats. LC htauE14 produced pathological changes in rats resembling those of preclinical AD pathology (reduced olfactory discrimination and LC degeneration). In this work, we will investigate the effects of pre/probiotic ingestion on AD behavioral deficits, blood inflammation/cytokines, and various brain markers in our experimental rat model of AD. An adeno-associated viral vector containing a human tau gene pseudophosphorylated at 14 sites (common in LC pretangles) will be infused into 2-3-month-old TH-Cre rats. Fecal and blood samples will be taken before surgery and at various post-surgery time points. A collection of behavioral tests will be performed, and immunohistochemistry/western blotting techniques will be used to observe various biomarkers. This work aims to elucidate the relationship between gut health and AD progression by strengthening the gut-brain relationship, and to observe the overall effect on tau formation and tau pathology in AD brains.

Keywords: alzheimer’s disease, aging, gut microbiome, neurodegeneration

Procedia PDF Downloads 134
1327 The Use of Ultrasound as a Safe and Cost-Efficient Technique to Assess Visceral Fat in Children with Obesity

Authors: Bassma A. Abdel Haleem, Ehab K. Emam, George E. Yacoub, Ashraf M. Salem

Abstract:

Background: Obesity is an increasingly common problem in childhood, and childhood obesity is considered the main risk factor for the development of metabolic syndrome (MetS) (type 2 diabetes, dyslipidemia, and hypertension). Recent studies estimate that 30-60% of children with obesity will develop MetS. Visceral fat thickness is a valuable predictor of the development of MetS. Computed tomography and dual-energy X-ray absorptiometry are the main techniques used to assess visceral fat; however, they carry the risk of radiation exposure and are expensive procedures, so they are seldom used to assess visceral fat in children. Some studies have explored the potential of ultrasound as a substitute for assessing visceral fat in the elderly and found promising results. Given the vulnerability of children to radiation exposure, we sought to evaluate ultrasound as a safer and more cost-efficient alternative for measuring visceral fat in obese children. Additionally, we assessed the correlation between visceral fat and obesity indicators such as insulin resistance. Methods: A cross-sectional study was conducted on 46 children with obesity (aged 6–16 years), whose visceral fat was evaluated by ultrasound. Subcutaneous fat thickness (SFT), i.e., the measurement from the skin-fat interface to the linea alba, and visceral fat thickness (VFT), i.e., the thickness from the linea alba to the aorta, were measured and correlated with anthropometric measures, fasting lipid profile, the homeostatic model assessment for insulin resistance (HOMA-IR), and liver enzymes (ALT). Results: VFT assessed via ultrasound was found to correlate strongly with BMI and HOMA-IR, with an AUC for VFT as a predictor of insulin resistance of 0.858 and a cut-off point of >2.98. VFT also correlates positively with serum triglycerides and serum ALT, and negatively with HDL. Conclusions: Ultrasound, a safe and cost-efficient technique, could be a useful tool for measuring abdominal fat thickness in children with obesity. Ultrasound-measured VFT could be an appropriate prognostic factor for insulin resistance, hypertriglyceridemia, and elevated liver enzymes in obese children.
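
Since HOMA-IR anchors the insulin-resistance analysis, a minimal sketch follows using the common mass-unit formula (fasting glucose in mg/dL multiplied by fasting insulin in µU/mL, divided by 405) together with the reported VFT cut-off; the example values are hypothetical, and the cut-off's unit is not specified in the abstract.

```python
# Minimal sketch of the HOMA-IR index the study uses as its insulin-
# resistance reference, with the common mass-unit formula
# (glucose in mg/dL * insulin in uU/mL / 405). Example values are
# hypothetical, not taken from the study.

def homa_ir(glucose_mg_dl, insulin_uu_ml):
    """Homeostatic model assessment of insulin resistance."""
    return (glucose_mg_dl * insulin_uu_ml) / 405.0

def flags_insulin_resistance(vft, cutoff=2.98):
    """Apply the VFT cut-off (>2.98, unit as reported) the study found
    predictive of insulin resistance."""
    return vft > cutoff

print(round(homa_ir(95, 14), 2))      # hypothetical child: ~3.28
print(flags_insulin_resistance(3.4))  # True
```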

Keywords: metabolic syndrome, pediatric obesity, sonography, visceral fat

Procedia PDF Downloads 118