Search results for: Wind Energy Conversion Systems
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 16936


1426 Effect of Anionic Lipid on Zeta Potential Values and Physical Stability of Liposomal Amikacin

Authors: Yulistiani, Muhammad Amin, Fasich

Abstract:

The surface charge of a nanoparticle is a very important consideration in pulmonary drug delivery systems. The zeta potential (ZP) is related to the surface charge and can be used to predict the stability of nanoparticles such as nebules of liposomal amikacin. An anionic lipid such as 1,2-dipalmitoyl-sn-glycero-3-phosphatidylglycerol (DPPG) is expected to contribute to the physical stability of liposomal amikacin and to an optimal ZP value. A suitable ZP can improve the drug release profile at specific sites in the alveoli as well as the stability of the dosage form. This study aimed to analyze the effect of DPPG on the ZP values and physical stability of liposomal amikacin. Liposomes were prepared by the reverse-phase evaporation method. Liposomes consisting of DPPG, 1,2-dipalmitoyl-sn-glycero-3-phosphatidylcholine (DPPC), cholesterol, and amikacin were formulated in five compositions: 0/150/5/100, 10/150/5/100, 20/150/5/100, 30/150/5/100, and 40/150/5/100 (w/v). A chloroform/methanol mixture in a ratio of 1:1 (v/v) was used as the solvent to dissolve the lipids, and the systems were adjusted in phosphate buffer at pH 7.4. Nebules of liposomal amikacin were produced with a vibrating nebulizer and then characterized by X-ray diffraction, differential scanning calorimetry, particle size and zeta potential analysis, and scanning electron microscopy. The amikacin concentration resulting from liposome leakage was determined by immunoassay. The study revealed that the presence of DPPG increased the ZP value: the addition of 10 mg DPPG shifted the ZP to -3.70 mV (negatively charged). The optimum ZP value was -28.78 ± 0.70 mV, with a nebule particle size of 461.70 ± 21.79 nm. The nebulizing process altered parameters such as particle size, the conformation of the lipid components, and the amount of surface charge on the nanoparticles, all of which could influence the ZP value.
These parameters might have profound effects on the application of nebules in the alveoli; however, negatively charged nanoparticles with a very high ZP magnitude are undesirable in this system because they increase macrophage uptake and pulmonary clearance. Therefore, the liposome ratio of 20/150/5/100 (w/v) gave the most stable colloidal system and may be applicable to pulmonary drug delivery.
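As a rough illustration of how the reported zeta potential values map onto colloidal stability, the sketch below applies a common textbook heuristic (electrostatic stabilisation when |ZP| is roughly 25 mV or more). The threshold and the screening function are illustrative assumptions, not part of the study; the ZP values are the ones reported in the abstract.

```python
# Heuristic screen of the reported formulations for electrostatic
# colloidal stability. The |ZP| >= 25 mV threshold is a textbook rule
# of thumb, not a value taken from the study.

STABILITY_THRESHOLD_MV = 25.0  # assumed heuristic threshold

def is_colloidally_stable(zeta_mv, threshold=STABILITY_THRESHOLD_MV):
    """True when the zeta potential magnitude reaches the heuristic
    electrostatic-stabilisation threshold."""
    return abs(zeta_mv) >= threshold

# DPPG/DPPC/cholesterol/amikacin (w/v) -> reported zeta potential (mV)
reported_zp = {
    "10/150/5/100": -3.70,   # after adding 10 mg DPPG
    "20/150/5/100": -28.78,  # reported optimum
}

stable = {ratio: is_colloidally_stable(zp) for ratio, zp in reported_zp.items()}
```

By this heuristic, only the 20/150/5/100 ratio passes, which is consistent with the abstract's conclusion that it gave the most stable colloidal system.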

Keywords: anionic lipid, dipalmitoylphosphatidylglycerol, liposomal amikacin, stability, zeta potential

Procedia PDF Downloads 335
1425 Robotics Technology Supported Pedagogic Models in Science, Technology, Engineering, Arts and Mathematics Education

Authors: Sereen Itani

Abstract:

As the world aspires to technological innovation, innovative robotics technology-supported pedagogic models in STEAM education (Science, Technology, Engineering, Arts, and Mathematics) are critical to building the next generation's 21st-century skills in our global education system. Diverse international schools are therefore attempting to construct integrated robotics- and technology-enhanced curricula based on interdisciplinary subjects. Accordingly, it is vital that education systems remain resilient in STEAM fields by equipping future learners and educators with innovative technology experiences through robotics. A variety of advanced teaching methods is employed to study robotics technology-integrated pedagogic models, since transformational learning can occur only when STEAM and innovations in robotic technology become integrated with real-world applications. The implementation of robotics STEAM education faces major challenges globally, and STEAM skills and concepts are often taught in isolation from the real world. Instilling a passion for robotics and STEAM subjects, together with adequate teacher preparation, could lead students to major in these fields and acquire enough knowledge to make vital contributions to the global STEAM industries. This necessitates the establishment of pedagogic models, such as innovative robotics technologies, to enhance STEAM education and develop students' 21st-century skills. Moreover, an ICT-supported innovative robotics classroom will help educators empower and assess students academically. Globally, robotics design systems and platforms are being developed in school and university labs, creating a suitable environment for cross-disciplinary robotics STEAM learning.
Accordingly, this research aims to raise awareness of the importance of robotics design systems, and of methodologies for effectively employing robotics innovative technology-supported pedagogic models, in order to develop STEAM education globally and strengthen the next generation's 21st-century skills.

Keywords: education, robotics, STEAM (Science, Technology, Engineering, Arts and Mathematics Education), challenges

Procedia PDF Downloads 372
1424 Effect of Maturation on the Characteristics and Physicochemical Properties of Banana and Its Starch

Authors: Chien-Chun Huang, P. W. Yuan

Abstract:

Banana is an important fruit that constitutes a valuable source of energy, vitamins, and minerals and is a significant food component throughout the world. Fruit ripening and maturity standards vary from country to country depending on the expected shelf life in the market. During ripening there are changes in the appearance, texture, and chemical composition of banana; the changes in banana components during ethylene-induced ripening are categorized by nutritive value and commercial utilization. The objectives of this study were to investigate the changes in the chemical composition and physicochemical properties of banana during ethylene-induced ripening. Green bananas were harvested and ripened with ethylene gas at low temperature (15 °C) through seven stages. At each stage, bananas were sliced and freeze-dried to prepare banana flour. The changes in total starch, resistant starch, chemical composition, physicochemical properties, and the activities of amylase, polyphenol oxidase (PPO), and phenylalanine ammonia lyase (PAL) were analyzed at each stage of ripening. The banana starch was isolated and analyzed for gelatinization properties, pasting properties, and microscopic appearance at each stage of ripening. The results indicated that the total starch and resistant starch contents of green banana were highest at the harvest stage, at 76.2% and 34.6%, respectively. Both declined significantly, to 25.3% and 8.8%, respectively, by the seventh stage. The soluble sugars content of banana increased from 1.21% at the harvest stage to 37.72% at the seventh stage during ethylene-induced ripening. The swelling power of banana flour decreased as ripening progressed, while solubility increased; these results correlated strongly with the decrease in the starch content of banana flour during ethylene-induced ripening. Both the water-insoluble and alcohol-insoluble solids of banana flour decreased as ripening progressed.
The activities of both PPO and PAL increased, while the total free phenolics content decreased, as ripening progressed. As the ripening stage advanced, the gelatinization enthalpy of banana starch decreased significantly, from 15.31 J/g at the harvest stage to 10.55 J/g at the seventh stage. In the pasting properties of banana starch, the peak viscosity and setback increased with ripening stage; the highest final viscosity of the banana starch slurry, 5701 RVU, was found at the seventh stage. Scanning electron micrographs showed that banana starch granules appeared in round and elongated forms ranging from 10 to 50 μm at the harvest stage. As the banana approached ripeness, parallel striations were observed on the surface of the starch granules, which could be caused by enzymatic reactions during ripening. These results suggest that the high resistant starch content of green banana could support its application in healthy foods. The changes in the chemical composition and physicochemical properties of banana could be caused by enzymatic hydrolysis during the ethylene-induced ripening treatment.
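The magnitude of the starch loss reported above can be expressed as a relative decline; a minimal sketch using only the percentages quoted in the abstract:

```python
# Relative decline of starch fractions between the harvest stage and
# the seventh ripening stage, using the contents reported in the
# abstract (total starch 76.2% -> 25.3%, resistant starch 34.6% -> 8.8%).

def relative_decline(initial_pct, final_pct):
    """Fractional loss relative to the initial content."""
    return (initial_pct - final_pct) / initial_pct

total_starch_loss = relative_decline(76.2, 25.3)     # about 67% lost
resistant_starch_loss = relative_decline(34.6, 8.8)  # about 75% lost
```

So roughly two-thirds of the total starch and three-quarters of the resistant starch are hydrolysed over the seven ripening stages, which matches the strong correlation with rising soluble sugars noted in the abstract.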

Keywords: maturation of banana, appearance, texture, soluble sugars, resistant starch, enzyme activities, physicochemical properties of banana starch

Procedia PDF Downloads 305
1423 Development of the Food Market of the Republic of Kazakhstan in the Field of Milk Processing

Authors: Gulmira Zhakupova, Tamara Tultabayeva, Aknur Muldasheva, Assem Sagandyk

Abstract:

The development of technologies and products with increased biological value based on natural food raw materials is an important task in the food market policy of the Republic of Kazakhstan. For Kazakhstan, livestock farming, in particular sheep farming, is among the most ancient and developed industries and ways of life. The history of the Kazakh people is closely connected with this type of agricultural production and with established traditions of using dairy products made from sheep's milk; the development of new technologies for sheep's milk therefore remains relevant. In addition, sheep milk products are one of the most promising areas for the development of food technology for therapeutic and prophylactic purposes, as a source of protein, immunoglobulins, minerals, vitamins, and other biologically active compounds. This article presents the results of research on milk processing technology. The objective of the study is to examine the possibilities of processing sheep milk and its role in human nutrition, and to present the results of research aimed at improving the technology of sheep milk products. The studies were carried out on the basis of sanitary and hygienic requirements for dairy products, using the following test methods. For microbiological analysis, we used the horizontal method for detecting, enumerating, and serotyping Salmonella bacteria in a defined mass or volume of product. Nutritional value is a complex of properties of food products that meet human physiological needs for energy and basic nutrients. The protein mass fraction was determined by the Kjeldahl method, which is based on the mineralization of a milk sample with concentrated sulfuric acid in the presence of an oxidizing agent, an inert salt (potassium sulfate), and a catalyst (copper sulfate); the amino groups of the protein are thereby converted into ammonium sulfate dissolved in sulfuric acid.
The vitamin composition was determined by HPLC, and the mineral content of the studied samples was determined by atomic absorption spectrophotometry. The study identified the technological parameters of sheep milk products and determined the prospects for further research on them. Microbiological analysis was used to establish the safety of the study product; no deviations from the norm were identified, indicating the high safety of the products under study. In terms of nutritional value, the resulting products are high in protein, and favourable amino acid contents were also recorded. The results obtained will be used in the food industry and will serve as recommendations for manufacturers.
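The Kjeldahl determination described above ends with a nitrogen-to-protein conversion; the sketch below shows that final step, assuming the conventional conversion factor of 6.38 used for milk proteins and an illustrative nitrogen value that is not taken from the study.

```python
# Kjeldahl nitrogen-to-protein conversion for dairy samples.
# The factor 6.38 is the conventional value for milk proteins;
# the nitrogen percentage below is illustrative, not a study result.

MILK_CONVERSION_FACTOR = 6.38  # conventional for milk; 6.25 is the generic value

def protein_mass_fraction(nitrogen_pct, factor=MILK_CONVERSION_FACTOR):
    """Convert total Kjeldahl nitrogen (%) into crude protein (%)."""
    return nitrogen_pct * factor

protein_pct = protein_mass_fraction(0.85)  # hypothetical 0.85% N
```

With a hypothetical 0.85% total nitrogen, this yields roughly 5.4% crude protein, illustrating why the method reports a "protein mass fraction" rather than protein directly.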

Keywords: dairy, milk processing, nutrition, colostrum

Procedia PDF Downloads 44
1422 Literature Review on the Barriers to Access Credit for Small Agricultural Producers and Policies to Mitigate Them in Developing Countries

Authors: Margarita Gáfaro, Karelys Guzmán, Paola Poveda

Abstract:

This paper establishes the theoretical aspects that explain the barriers to accessing credit faced by small agricultural producers in developing countries and identifies successful policy experiences in mitigating them. We test two hypotheses. The first is that information asymmetries, high transaction costs, and high risk exposure limit the supply of credit to small agricultural producers in developing countries. The second is that low levels of financial education and productivity, together with high uncertainty about the returns of agricultural activity, limit the demand for credit. To test these hypotheses, a review of the theoretical and empirical literature on access to rural credit in developing countries will be carried out. The first part of this review focuses on theoretical models that incorporate information asymmetries in the credit market and analyzes the interaction between these asymmetries and the characteristics of the agricultural sector in developing countries. Some of the characteristics we focus on are the absence of collateral, the underdevelopment of judicial systems and insurance markets, and the high dependence of production technologies on climatic factors. The second part of this review focuses on the determinants of credit demand by small agricultural producers, including the profitability of productive projects, security conditions, risk and loss aversion, financial education, and cognitive biases, among others. Some policies focus on resolving these supply- and demand-side constraints and succeed in improving credit access; another objective of this paper is therefore to review effective policies that have promoted access to credit for smallholders around the world. For this, information available in policy documents will be collected and complemented by interviews with officials in charge of the design and execution of these policies in a subset of selected countries.
The information collected will be analyzed in light of the conceptual framework proposed in the first two parts of the paper, identifying the barriers to credit access that each policy attempts to resolve and the factors that could explain its effectiveness.

Keywords: agricultural economics, credit access, smallholder, developing countries

Procedia PDF Downloads 60
1421 Hermitical Landscapes: The Congregation of Saint Paul of Serra De Ossa

Authors: Rolando Volzone

Abstract:

The Congregation of Saint Paul of Serra de Ossa (Ossa Mountain) was founded in 1482, originating from the eremitic movement of the homens da pobre vida (poor life men), which has been documented since 1366. The community of hermits expanded up to the first half of the 15th century, mostly in southern Portugal in the Alentejo region. In 1578, following a process of institutionalization led by the Church, an autonomous congregation was set up, affiliated with the Hungarian Order of Saint Paul the First Hermit, until 1834, when the decree dissolving the religious orders disbanded all convents and monasteries in Portugal. The architectural evidence that has reached our days as a legacy of the hermitical movement in Serra de Ossa, although studied and analysed from a historical point of view, is still little known with respect to the architectural characteristics of its physical implantation and its relationship with natural systems. This research intends to expose the appropriation process of the locus eremus as a starting point for the interpretation of this landscape, evidencing the close relationship between religious experience and the physical space chosen to reach the perfection of the soul. The locus eremus is thus determined not only by practical aspects, such as absolute and relative location, orography, the existence of water resources, or the King's favouring of the hermits' religious and settlement activity, but also by spiritual aspects related to the symbolism of the physical elements present and the solitary walk of these men. These aspects, combined with the built architectural elements and other human interventions, may be fertile ground for the definition of a hypothetical hermitical landscape based on the sufficiently distinctive characteristics that sustain it. The landscape built by these hermits is established as a cultural and material heritage, and its preservation is of utmost importance.
The hermits deeply understood this place and took advantage of its natural resources, manipulating them in an ecologically and economically sustainable way and respecting the place, not overcoming its genius loci but becoming part of it.

Keywords: architecture, congregation of Saint Paul of Serra de Ossa, hermitical landscape, locus eremus

Procedia PDF Downloads 223
1420 The Influence of Ibuprofen, Diclofenac and Naproxen on Composition and Ultrastructural Characteristics of Atriplex patula and Spinacia oleracea

Authors: Ocsana Opris, Ildiko Lung, Maria L. Soran, Alexandra Ciorita, Lucian Copolovici

Abstract:

Assessing the effects of environmental stress factors on both crop plants and wild plants of nutritional value is a very important research topic. The continuously increasing worldwide consumption of drugs leads to significant environmental pollution and thus generates environmental stress. Understanding of the effects of important drugs on plant composition and ultrastructure is still limited, especially at environmentally relevant concentrations. The aim of the present work was to investigate the influence of three non-steroidal anti-inflammatory drugs (NSAIDs) on the chlorophyll content, carotenoid content, total polyphenol content, antioxidant capacity, and ultrastructure of orache (Atriplex patula L.) and spinach (Spinacia oleracea L.). The green leafy vegetables selected for this study were grown under controlled conditions and treated with solutions of different concentrations (0.1‒1 mg L⁻¹) of diclofenac, ibuprofen, and naproxen. After eight weeks of exposure of the plants to the NSAIDs, the chlorophyll and carotenoid contents were analyzed by high-performance liquid chromatography coupled with photodiode array and mass spectrometric detectors, and the total polyphenols and antioxidant capacity by ultraviolet-visible spectroscopy. The ultrastructure of the vegetables was also analyzed using transmission electron microscopy in order to assess the influence of the selected NSAIDs on cell organelles, mainly the photosynthetic organelles (chloroplasts), the energy-supplying organelles (mitochondria), and the nucleus as the coordinator of cellular metabolism. In comparison with the control plants, decreases in chlorophyll content were observed in Atriplex patula L. plants treated with ibuprofen (11-34%) and naproxen (25-52%). The chlorophyll content of Spinacia oleracea L. was also affected, the lowest decrease (34%) being obtained with the naproxen treatment (1 mg L⁻¹).
Diclofenac (1 mg L⁻¹) reduced the total polyphenol content of Atriplex patula L. by 45%, and ibuprofen (1 mg L⁻¹) reduced that of Spinacia oleracea L. by 20%. The results also indicate a moderate reduction of carotenoids and antioxidant capacity in the treated plants in comparison with the controls. Transmission electron microscopy demonstrated that the green leafy vegetables were affected by the selected NSAIDs. Thus, this research contributes to a better understanding of the adverse effects of these drugs on the studied plants. It is important to mention that dietary intake of these drug-contaminated plants, which have considerable nutritional value, may also pose a risk to human health, but currently little is known about the fate of drugs in plants and their effect on, or risk to, the ecosystem.

Keywords: abiotic stress, green leafy vegetables, pigments content, ultrastructure

Procedia PDF Downloads 114
1419 Dipeptide Functionalized Nanoporous Anodic Aluminium Oxide Membrane for Capturing Small Molecules

Authors: Abdul Mutalib Md Jani, Abdul Hadi Mahmud, Mohd Tajuddin Mohd Ali

Abstract:

Interest in the surface modification of nanostructured materials that exhibit improved structural and functional properties is growing rapidly. Owing to their unique properties, highly ordered nanoporous anodic aluminium oxide (NAAO) membranes have been proposed as a platform for biosensing applications: they exhibit excellent physical and chemical properties, with high porosity, high surface area, tunable pore sizes, and excellent chemical resistance. In this study, NAAO was functionalized with 3-aminopropyltriethoxysilane (APTES) to prepare silane-modified NAAO. The amine functional groups formed on the NAAO surface during silanization were characterized using Fourier transform infrared (FTIR) spectroscopy. The synthesis of multi-segment peptides on NAAO surfaces can be realized by changing the surface chemistry of the NAAO membrane via click chemistry. Through click reactions utilizing an alkyne terminated with an amino group, various peptides tagged onto NAAO can be envisioned from chiral natural or unnatural amino acids using standard coupling methods (HOBt, EDCI, and HBTU). This strategy seems versatile, since coupling the dipeptide with further amino acids, leading to tripeptides, tetrapeptides, or pentapeptides, can be carried out without purification. When an appropriate terminus is selected, multiple segments of amino acids can be successfully synthesized on the surface. The immobilized NAAO should be easily separated from the reaction medium by conventional filtration, thus avoiding complicated purification methods. Herein, we propose the synthesis of multi-fragment peptides as a model for capturing and attaching various small biomolecules on NAAO surfaces, with potential application in biosensing devices, drug delivery systems, and biocatalysis.

Keywords: nanoporous anodic aluminium oxide, silanization, peptide synthesis, click chemistry

Procedia PDF Downloads 270
1418 Bioremediation of Phenol in Wastewater Using Polymer-Supported Bacteria

Authors: Areej K. Al-Jwaid, Dmitiry Berllio, Andrew Cundy, Irina Savina, Jonathan L. Caplin

Abstract:

Phenol is a toxic compound that is widely distributed in the environment, including the atmosphere, water, and soil, due to the release of effluents from the petrochemical and pharmaceutical industries, coking plants, and oil refineries. Moreover, a range of everyday products that use phenol as a raw material may find their way into the environment without prior treatment. The toxicity of phenol affects both human and environmental health, and various physico-chemical methods have been used to remediate phenol contamination. While these techniques are effective, their complexity and high cost have led to a search for alternative strategies to reduce and eliminate high concentrations of phenolic compounds in the environment. Biological treatments are preferable because they are environmentally friendly and cheaper than physico-chemical approaches. Some microorganisms, such as Pseudomonas sp., Rhodococcus sp., Acinetobacter sp., and Bacillus sp., have shown a high ability to degrade phenolic compounds, using them as a sole source of energy. Immobilisation processes utilising various materials have been used to protect and enhance the viability of the cells and to provide structural support for them. The aim of this study is to develop a new approach to the bioremediation of phenol, based on an immobilisation strategy, that can be used in wastewater. Two bacterial species known to degrade phenol (Pseudomonas mendocina and Rhodococcus koreensis) were purchased from the National Collection of Industrial, Food and Marine Bacteria (NCIMB). The two species, and a mixture of them, were immobilised to produce macroporous crosslinked-cell cryogel samples using four types of cross-linker polymer solution in a cryogelation process. The samples were used in batch culture to degrade phenol at an initial concentration of 50 mg/L, pH 7.5 ± 0.3, and a temperature of 30 °C.
The four types of polymer solution, namely (i) glutaraldehyde (GA), (ii) polyvinyl alcohol with glutaraldehyde (PVA+GA), (iii) polyvinyl alcohol-aldehyde (PVA-al), and (iv) polyethyleneimine-aldehyde (PEI-al), were used at concentrations ranging from 0.5 to 1.5% to crosslink the cells. SEM and rheology analyses indicated that the cell-cryogel samples crosslinked with the four polymers formed monolithic macroporous cryogels. The samples were evaluated for their ability to degrade phenol. Macroporous cell-cryogels crosslinked with GA and PVA+GA were able to degrade phenol for only one week, while the samples crosslinked with a combination of PVA-al and PEI-al at two different concentrations showed higher stability and viability and could be reused to degrade phenol at 50 mg/L for five weeks. These initial results indicate that crosslinked-cell cryogels are a promising tool for bioremediation strategies, especially for removing high concentrations of phenol from wastewater.
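Batch biodegradation experiments like the one above are often summarised with a first-order decay model; the sketch below assumes such a model with a hypothetical rate constant, since the abstract reports no kinetic parameters. Only the 50 mg/L starting concentration comes from the study.

```python
import math

# Illustrative first-order decay model for batch phenol degradation:
# C(t) = C0 * exp(-k * t). The rate constant k is an assumed value
# for the sketch, not a measured result.

def phenol_concentration(c0_mg_l, k_per_day, t_days):
    """Remaining phenol concentration under first-order kinetics."""
    return c0_mg_l * math.exp(-k_per_day * t_days)

def time_to_fraction(k_per_day, fraction_remaining):
    """Time for the concentration to fall to a given fraction of C0."""
    return -math.log(fraction_remaining) / k_per_day

c_after_week = phenol_concentration(50.0, 0.5, 7.0)  # assumed k = 0.5 / day
half_life = time_to_fraction(0.5, 0.5)               # about 1.39 days
```

Fitting k from measured residual concentrations over the five-week reuse period would let the cryogel formulations be compared on a single kinetic parameter rather than on endpoint survival alone.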

Keywords: bioremediation, crosslinked cells, immobilisation, phenol degradation

Procedia PDF Downloads 220
1417 Ecological Evaluation and Conservation Strategies of Economically Important Plants in Indian Arid Zone

Authors: Sher Mohammed, Purushottam Lal, Pawan K. Kasera

Abstract:

The Thar Desert of Rajasthan covers a wide geographical area between 23.3° and 30.12° North latitude and 69.3° and 76° East longitude, with a unique spectrum of arid zone vegetation. This desert spreads over 12 districts and holds a rich store of economically important and threatened plant diversity interacting and growing with the adverse climatic conditions of the area. Owing to variable geological, physiographic, climatic, edaphic, and biotic factors, the arid zone medicinal flora exhibits a wide collection of angiosperm families. The herbal diversity of this arid region is medicinally important in household remedies among tribal communities as well as in traditional systems. The ongoing, increasing disturbances in natural ecosystems are due to climatic and biological factors, including anthropogenic ones. The unique flora, and the faunal diversity that depends on it, is losing its biotic potential. A large number of plants have no future unless immediate steps are taken to arrest the causes and enable their biological recovery. At present, the loss of ecological amplitude in various genera and species is placing several plant species on the red list of arid zone vegetation, such as Commiphora wightii, Tribulus rajasthanensis, Calligonum polygonoides, Ephedra foliata, Leptadenia reticulata, Tecomella undulata, Blepharis sindica, Peganum harmala, Sarcostemma viminale, etc. Most arid zone species are under serious pressure from prevailing ecosystem factors as they attempt to complete their life cycles. The floral diversity of the arid zone faces severe ecological influences at several levels: genetic, molecular, cytological, biochemical, metabolic, reproductive, and germination-related. So, there is an urgent need to conserve these plants.
There are several opportunities in the field to carry out remarkable work at these levels to protect the native plants in their natural habitat, rather than relying only on their in vitro multiplication.

Keywords: ecology, evaluation, xerophytes, economically important plants, threatened plants, conservation

Procedia PDF Downloads 257
1416 Data Management System for Environmental Remediation

Authors: Elizaveta Petelina, Anton Sizo

Abstract:

Environmental remediation projects deal with a wide spectrum of data, including data collected during site assessment, the execution of remediation activities, and environmental monitoring. Appropriate data management is therefore a key factor for well-grounded decision making. The Environmental Data Management System (EDMS) was developed to address all the necessary data management aspects, including efficient data handling and data interoperability, access to historical and current data, spatial and temporal analysis, 2D and 3D data visualization, mapping, and data sharing. The system focuses on supporting well-grounded decisions about required mitigation measures and on assessing remediation success. The EDMS is a combination of enterprise- and desktop-level data management and Geographic Information System (GIS) tools assembled to assist environmental remediation, project planning and evaluation, and the environmental monitoring of mine sites. The EDMS consists of seven main components: a geodatabase, a spatial database used to store and query spatially distributed data; a GIS and Web GIS component that combines desktop and server-based GIS solutions; a field data collection component that contains tools for field work; a quality assurance (QA)/quality control (QC) component that combines operational procedures for QA with measures for QC; a data import and export component that includes tools and templates to support project data flow; a lab data component that provides a connection between the EDMS and laboratory information management systems; and a reporting component that includes server-based services for real-time report generation. The EDMS has been successfully implemented for Project CLEANS (Clean-up of Abandoned Northern Mines), a multi-year, multimillion-dollar project aimed at assessing and reclaiming 37 uranium mine sites in northern Saskatchewan, Canada.
The EDMS has effectively facilitated integrated decision making for CLEANS project managers and improved transparency amongst stakeholders.

Keywords: data management, environmental remediation, geographic information system, GIS, decision making

Procedia PDF Downloads 148
1415 An ANOVA-based Sequential Forward Channel Selection Framework for Brain-Computer Interface Application based on EEG Signals Driven by Motor Imagery

Authors: Forouzan Salehi Fergeni

Abstract:

A brain-computer interface (BCI) system converts the movement intentions of a person into commands for action using brain signals such as the electroencephalogram (EEG). When left- or right-hand movements are imagined, different patterns of brain activity appear, which can be employed as BCI control signals. To improve BCI systems, effective and accurate techniques for increasing the classification precision of motor imagery (MI) based on EEG are greatly needed. EEG signals are subject-dependent and non-stationary, so they must be processed effectively before being used in BCI applications. In the present study, after applying an 8-30 Hz band-pass filter, a common average reference (CAR) spatial filter is applied for denoising, and an analysis-of-variance method is then used to select the more appropriate and informative channels from a large set of channels. After ordering the channels by their efficiency, sequential forward channel selection is employed to choose just a few reliable ones. Features from the time and wavelet domains are extracted and shortlisted with the help of a statistical technique, namely the t-test. Finally, the selected features are classified with different machine learning and neural network classifiers, namely k-nearest neighbour, probabilistic neural network, support vector machine (SVM), extreme learning machine, decision tree, multi-layer perceptron, and linear discriminant analysis, in order to compare their performance in this application. Using a ten-fold cross-validation approach, tests are performed on a motor imagery dataset from BCI Competition III. The outcomes demonstrate that the SVM classifier achieved the greatest classification precision, 97%, compared to the other available approaches.
These findings indicate that the suggested framework is reliable and computationally efficient for the construction of BCI systems and surpasses existing methods.
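The channel-selection pipeline described above can be sketched as follows. The EEG data here are synthetic stand-ins, log band power is used as a single simple feature per channel, and an RBF SVM scores each candidate channel set under 10-fold cross-validation; the paper's full time- and wavelet-domain feature set and t-test feature selection are omitted for brevity:

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.feature_selection import f_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for MI-EEG: 120 trials x 22 channels x 250 samples at 250 Hz.
n_trials, n_channels, n_samples, fs = 120, 22, 250, 250
X = rng.standard_normal((n_trials, n_channels, n_samples))
y = rng.integers(0, 2, n_trials)
# Make the first 3 channels informative (higher band power for class 1).
X[y == 1, :3, :] *= 1.5

# 1) 8-30 Hz band-pass (the mu/beta band used for motor imagery).
b, a = butter(4, [8 / (fs / 2), 30 / (fs / 2)], btype="band")
X = filtfilt(b, a, X, axis=-1)

# 2) Common average reference (CAR): subtract the mean across channels.
X = X - X.mean(axis=1, keepdims=True)

# 3) ANOVA: rank channels by the F-score of their log band power.
power = np.log(np.var(X, axis=-1))          # shape (trials, channels)
f_scores, _ = f_classif(power, y)
ranked = np.argsort(f_scores)[::-1]

# 4) Sequential forward selection over the ranked channels,
#    keeping a channel only if it improves 10-fold CV accuracy.
def cv_acc(channels):
    return cross_val_score(SVC(kernel="rbf"), power[:, channels], y, cv=10).mean()

selected, best = [], 0.0
for ch in ranked[:10]:
    acc = cv_acc(selected + [int(ch)])
    if acc > best:
        selected.append(int(ch))
        best = acc

print(f"selected channels: {selected}, 10-fold CV accuracy: {best:.2f}")
```

On real data the candidate pool, feature set, and classifier zoo would be larger, but the control flow (rank, then greedily grow the channel set) is the same.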

Keywords: brain-computer interface, channel selection, motor imagery, support-vector-machine

Procedia PDF Downloads 30
1414 A Systematic Review on Orphan Drugs Pricing, and Prices Challenges

Authors: Seyran Naghdi

Abstract:

Background: Orphan drug development is limited by the very high costs of research and development and by the small market size. How health policymakers address this challenge, considering both supply and demand sides, needs to be explored in order to direct policies and plans in the right way. The price is an important signal both for pharmaceutical companies' profitability and for patients' accessibility. Objective: This study aims to identify orphan drug price-setting patterns and approaches in health systems through a systematic review of the available evidence. Methods: The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) approach was used. MedLine, Embase, and Web of Science were searched via appropriate search strategies. The appropriate Medical Subject Headings (MeSH) terms were 'cost and cost analysis' for pricing, and 'orphan drug production' and 'orphan drug' for orphan drugs. Critical appraisal was performed with the Joanna Briggs tool, and a Cochrane data extraction form was used to record the studies' characteristics, results, and conclusions. Results: In total, 1,197 records were found: 640 from Embase, 327 from Web of Science, and 230 from MedLine. After removing duplicates, 1,056 studies remained, of which 924 were removed in the primary screening phase; 26 studies were ultimately included for data extraction. The majority of the studies (>75%) are from developed countries, and among these approximately 80% are from European countries. Approximately 85% of the evidence was produced in the most recent decade. Conclusions: There is wide variation in price-setting among countries, related to the specific structure of the pharmaceutical market and the thresholds at which governments choose to intervene in the pricing process.
On the other hand, there is some evidence that the very high costs of orphan drug development can be reduced through early agreements between pharmaceutical firms and governments. Further studies should focus on how governments could incentivize companies to provide these drugs at lower prices.

Keywords: orphan drugs, orphan drug production, pricing, costs, cost analysis

Procedia PDF Downloads 159
1413 Nyaya, Buddhist School Controversy regarding the Laksana of Pratyaksa: Causal versus Conceptual Analysis

Authors: Maitreyee Datta

Abstract:

The Buddhist lakṣaņa of pratyakṣa pramā is not the result of a causal analysis of its genesis. The Naiyāyikas, on the other hand, have provided the lakṣaņa of pratyakṣa in terms of a causal analysis of it. Thus, though philosophers in these two systems have discussed the nature of pratyakṣa pramā (perception) in detail, their treatments and understanding of it vary according to their respective understandings of pramā and pramāņa and their relationship. In the Nyāya school, the definition (lakṣaņa) of perception (pratyakṣa) has been given in terms of the process by which it is generated; the Naiyāyikas thus provide a causal account of perception (pratyakṣa) through their lakṣaņa of it. In Buddhist epistemology, by contrast, perception is defined by the nature of perceptual knowledge (pratyakṣa pramā) itself, which is devoid of any vikalpa (conceptual construction). The two schools differ because of their different metaphysical presuppositions, which determine their epistemological pursuits. The Naiyāyikas admitted pramā and pramāņa as separate events and took pramāņa to be the cause of pramā; these presuppositions enabled them to provide a lakṣaņa of pratyakṣa pramā in terms of the causes by which it is generated. Why did the Buddhist epistemologists define perception by the unique nature of perceptual knowledge instead of the process by which it is generated? This question will be addressed in the present paper. In doing so, the distinctive purpose of Buddhist philosophy will be identified, which will enable us to answer the above question. This enterprise will also reveal the close relationship between basic Buddhist presuppositions, such as pratītyasamutpādavāda and kṣaņikavāda, and Buddhist epistemological positions. In other words, their distinctive notion of pramā (knowledge) indicates a unique epistemological position that complies with their basic philosophical presuppositions.
The first section of the paper will present the Buddhist epistemologists’ lakṣaņa of pratyakṣa. The analysis of the lakṣaņa will be given in clear terms to reveal the nature of pratyakṣa as an instance of pramā. In the second section, an effort will be made to identify the uniqueness of such a definition. Here an articulation will be made in which the relationship among basic Buddhist presuppositions and their unique epistemological positions are determined. In the third section of the paper, an effort will be made to compare Nyāya epistemologist’s position regarding pratyakṣa with that of the Buddhist epistemologist.

Keywords: laksana, prama, pramana, pratyaksa

Procedia PDF Downloads 138
1412 Modeling of Alpha-Particles’ Epigenetic Effects in Short-Term Test on Drosophila melanogaster

Authors: Z. M. Biyasheva, M. Zh. Tleubergenova, Y. A. Zaripova, A. L. Shakirov, V. V. Dyachkov

Abstract:

In recent years, interest in ecogenetic and biomedical problems related to the effects of radon and its daughter decay products on the population has increased significantly. Of particular interest is assessing the consequences of irradiation in radon-hazardous areas, which include the Almaty region owing to its large number of tectonic faults that enhance radon emanation. The purpose of this work was therefore to study the genetic effects of exposure to above-normal radon doses in an alpha-radiation model. Irradiation does not affect the growth of the cell, but rather its ability to differentiate. In addition, irradiation can lead to somatic mutations, morphoses, and modifications. These damages most likely arise from changes in the composition of cellular substances. Such changes are epigenetic, since they affect the regulatory processes of ontogenesis. Variability in the expression of regulatory genes refers to conditional mutations that modify the formation of signs of intraspecific similarity. Characteristic features of these conditional mutations are their dominant type of manifestation, their phenotypic asymmetry, and their instability across generations. Currently, the terms "morphosis" and "modification" are used to describe epigenetic variability, which is maintained in Drosophila melanogaster cultures using attached X-chromosomes, with the mutant X-chromosome transmitted along the paternal line. In this paper, we investigated the epigenetic effects of alpha particles, whose source in nature is mainly radon and its daughter decay products. In the experiment, the plutonium isotope Pu-238, generating alpha radiation with an energy of about 5.5 MeV, was used as the source of alpha particles. In the first generation (F1), deformities or morphoses were found, which can be called "radiation syndromes" or mutations whose manifestation resembles the pleiotropic action of genes.
The proportion of morphoses was 1.8% in the experiment and 0.4% in the control. The morphoses in flies of the first and second generations appeared as black spots or melanomas on different parts of the imago body; "generalized" melanomas; curled or curved wings; a shortened wing; a bubble on one wing; absence of one wing; deformation of the thorax; interruption and disruption of tergite patterns; disrupted distribution of ocular facets and bristles; and absence of pigmentation on the second and third legs. Statistical analysis by the chi-square method showed that the difference between experiment and control was significant at P ≤ 0.01. On this basis, it can be concluded that alpha particles, which in the environment are generated mainly by radon and its isotopes, have a mutagenic effect manifested chiefly in the formation of morphoses or deformities.
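The reported chi-square comparison of morphosis frequencies can be reproduced along the following lines. The group sizes are hypothetical, chosen only to match the stated proportions of 1.8% and 0.4%, since the abstract does not give the underlying counts:

```python
from scipy.stats import chi2_contingency

# Hypothetical counts consistent with the reported proportions
# (1.8% morphoses in the exposed group vs 0.4% in the control);
# the actual sample sizes are not stated in the abstract.
exposed_morphoses, exposed_normal = 54, 2946    # 1.8% of 3000
control_morphoses, control_normal = 12, 2988    # 0.4% of 3000

table = [[exposed_morphoses, exposed_normal],
         [control_morphoses, control_normal]]
chi2, p, dof, _expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2g}, dof = {dof}")
```

With samples of this size the difference is highly significant, consistent with the P ≤ 0.01 threshold quoted above.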

Keywords: alpha-radiation, genotoxicity, morphoses, radioecology, radon

Procedia PDF Downloads 145
1411 Acceptability of ‘Fish Surimi Peptide’ in Under Five Children Suffering from Moderate Acute Malnutrition in Bangladesh

Authors: M. Iqbal Hossain, Azharul Islam Khan, S. M. Rafiqul Islam, Tahmeed Ahmed

Abstract:

Objective: Moderate acute malnutrition (MAM) is a major cause of morbidity and mortality in under-5 children of low-income countries. Approximately 14.6% of all under-5 mortality worldwide is attributed to MAM, with a more than threefold increased risk of death compared to well-nourished peers. The prevalence of MAM among under-5 children in Bangladesh is ~12% (~1.7 million children). Providing a diet containing adequate nutrients is the mainstay of treatment of children with MAM. It is now possible to process fish into fish peptides with a long shelf-life without refrigeration, known as 'Fish Surimi peptide', and this could be an attractive alternative source of fish protein in the diet of children in low-income countries like Bangladesh. We conducted this study to assess the acceptability of Fish Surimi peptide given with various foods/meals to 2-5-year-old children with MAM. Design/methods: Fish Surimi peptide is broken down from white fish meat using a plant-derived enzyme; the only ingredient is fish meat, consisting of 20 different amino acids, including the nine essential amino acids. We completed the study in a convenience sample of 34 children in the study ward of the Dhaka Hospital of icddr,b in Bangladesh from November 2014 through February 2015. For each child the study lasted two consecutive days, with direct observation of food intake over two lunches and two suppers. In a randomized, blinded, cross-over design, each child received on one day a meal mixed with Fish Surimi peptide (5 g at lunch and 5 g at supper) [e.g., 30 g rice and 30 g dahl (thick lentil soup), or 60 g of a vegetable-lentil-rice mixed local dish known as khichuri], and on the other day the same meal without any Fish Surimi peptide. We observed the completeness and eagerness of eating and any possible side effects (e.g., allergy, vomiting, diarrhea) over these two days. Results: The mean±SD age of the enrolled children was 38.4±9.4 months, weight 11.22±1.41 kg, height 91.0±6.3 cm, and WHZ -2.13±0.76.
Their mean±SD total feeding time (minutes) was 25.4±13.6 vs. 20.6±11.1 (p=0.130) for lunch and 22.3±9.7 vs. 19.7±11.2 (p=0.297) for supper, and the total amount (g) of food eaten at lunch and supper was similar, 116.1±7.0 vs. 117.7±8.0 (p=3.01), in group A (Fish Surimi) and group B respectively. Mothers' hedonic-scale scores for the taste of food given to children at lunch or supper were 3.9±0.2 vs. 4.0±0.2 (p=0.317), and for overall acceptance (including texture, smell, and appearance) 3.9±0.2 vs. 4.0±0.2 (p=0.317), for groups A and B respectively. No adverse event was observed in either food group during the study period. Conclusions: Fish Surimi peptide may be a cost-effective supplementary food, which should be tested in an appropriately designed randomized community-level intervention trial in both wasted and stunted children.

Keywords: protein-energy malnutrition, moderate acute malnutrition, weight-for-height z-score, mid upper arm circumference, acceptability, fish surimi peptide, under-5 children

Procedia PDF Downloads 394
1410 Alternative Approach to the Machine Vision System Operating for Solving Industrial Control Issue

Authors: M. S. Nikitenko, S. A. Kizilov, D. Y. Khudonogov

Abstract:

The paper considers an approach to a machine vision operating system combined with a grid of light markers. The approach is used to solve several scientific and technical problems, such as measuring the delivery capability of an apron feeder moving coal from a lining return port to a conveyor in high-seam coal mining, and prototyping an obstacle detection system for an autonomous vehicle. A method of calculating bulk material volume using three-dimensional modelling was first verified and then validated in laboratory conditions, with calculation of the relative errors. A method of calculating apron feeder capability based on a machine vision system is offered, together with a simplified technique for three-dimensional modelling of the examined measuring area. The proposed method allows the volume of rock mass moved by an apron feeder to be measured with machine vision, solving the problem of controlling the volume of coal delivered to the conveyor with accuracy sufficient for practical application. The developed mathematical apparatus for measuring feeder productivity in kg/s uses only basic mathematical operations: addition, subtraction, multiplication, and division. This simplifies software development and broadens the range of microcontrollers and microcomputers suitable for calculating feeder capability. A feature of the obstacle detection problem is that obstacles distort the laser grid, which simplifies their detection. The paper presents algorithms for processing the video camera images and for controlling an autonomous vehicle model based on the obstacle detection machine vision system. A sample fragment of obstacle detection at the moment of laser grid distortion is demonstrated.
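Because the productivity calculation uses only the four basic operations, it can be illustrated in a few lines. Every number below is hypothetical; in the deployed system the grid cell size, the measured coal depths, the bulk density, and the feeder speed would come from the light-marker reconstruction and the plant configuration:

```python
# Hypothetical numbers for illustration only; the actual grid geometry,
# bulk density, and feeder speed come from the deployed system.
cell_area_m2 = 0.05 * 0.05          # footprint of one laser-grid cell
heights_m = [0.12, 0.15, 0.10, 0.14, 0.11, 0.13]  # coal depth per cell
bulk_density_kg_m3 = 900.0          # loose coal (assumed)
feeder_speed_m_s = 0.2
profile_length_m = 0.3              # streamwise extent of the measured profile

# Volume of the measured section: sum of (depth x cell footprint).
volume_m3 = sum(h * cell_area_m2 for h in heights_m)
# Mass on the section, moved past the sensor in length/speed seconds.
mass_kg = volume_m3 * bulk_density_kg_m3
capacity_kg_s = mass_kg / (profile_length_m / feeder_speed_m_s)
print(f"{capacity_kg_s:.3f} kg/s")
```

The arithmetic (addition, multiplication, division only) is exactly the kind that fits on a modest microcontroller, which is the point the abstract makes.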

Keywords: machine vision, machine vision operating system, light markers, measuring capability, obstacle detection system, autonomous transport

Procedia PDF Downloads 103
1409 Ensemble Machine Learning Approach for Estimating Missing Data from CO₂ Time Series

Authors: Atbin Mahabbati, Jason Beringer, Matthias Leopold

Abstract:

To address the global challenges of climate and environmental change, there is a need to quantify and reduce uncertainties in environmental data, including observations of carbon, water, and energy. Global eddy covariance flux tower networks (FLUXNET) and their regional counterparts (e.g., OzFlux, AmeriFlux, ChinaFLUX) were established in the late 1990s and early 2000s to address this demand. Despite the capability of eddy covariance to validate process modelling analyses, field surveys, and remote sensing assessments, there are serious concerns about the challenges associated with the technique, e.g., data gaps and uncertainties. To address these concerns, this research developed an ensemble model to fill the data gaps in CO₂ flux, avoiding the limitations of a single algorithm and thereby reducing the error and uncertainty associated with the gap-filling process. In this study, data from five towers in the OzFlux network (Alice Springs Mulga, Calperum, Gingin, Howard Springs, and Tumbarumba) during 2013 were used to develop an ensemble machine learning model, combining five feedforward neural networks (FFNNs) with different structures and an eXtreme Gradient Boosting (XGB) algorithm. The FFNNs provided the primary estimations in the first layer, while the XGB used the outputs of the first layer as its input to provide the final estimations of CO₂ flux. The introduced model showed slight superiority over each single FFNN and the XGB used individually, with overall RMSE of 2.64, 2.91, and 3.54 g C m⁻² yr⁻¹ respectively (3.54 provided by the best FFNN). The most significant improvement was in the estimation of extreme diurnal values (during midday and sunrise) and of nocturnal fluxes, which are generally considered among the most challenging parts of CO₂ flux gap-filling.
The towers, as well as the seasons, showed different levels of sensitivity to the improvements provided by the ensemble model. For instance, Tumbarumba showed more sensitivity than Calperum, where the differences between the ensemble model on the one hand and the FFNNs and XGB on the other were the smallest of all five sites. Moreover, the performance difference between the ensemble model and its individual components was more significant during the warm season (Jan, Feb, Mar, Oct, Nov, and Dec) than the cold season (Apr, May, Jun, Jul, Aug, and Sep), owing to the higher photosynthetic activity of plants, which leads to a larger range of CO₂ exchange. In conclusion, the introduced ensemble model slightly improved the accuracy and robustness of CO₂ flux gap-filling. Ensemble machine learning models are therefore potentially capable of improving data estimation and regression outcomes when there seems to be no more room for improvement with a single algorithm.
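A minimal sketch of this two-layer stacking idea is given below. The data are synthetic stand-ins for the meteorological drivers of CO₂ flux, and scikit-learn's GradientBoostingRegressor is used in place of XGBoost only to keep the example self-contained (the paper's actual second layer was XGB):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, StackingRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)

# Synthetic stand-in for the drivers of CO2 flux (the real inputs would
# be radiation, temperature, humidity, soil moisture, etc.).
X = rng.standard_normal((600, 5))
y = 2.0 * X[:, 0] + np.sin(X[:, 1]) + 0.1 * rng.standard_normal(600)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# First layer: five feedforward nets with different hidden structures.
# Second layer: a gradient-boosted regressor fed by their outputs.
ffnns = [(f"ffnn{i}", MLPRegressor(hidden_layer_sizes=h, max_iter=2000,
                                   random_state=i))
         for i, h in enumerate([(8,), (16,), (32,), (16, 8), (32, 16)])]
ensemble = StackingRegressor(
    estimators=ffnns,
    final_estimator=GradientBoostingRegressor(random_state=0))
ensemble.fit(X_tr, y_tr)

rmse = mean_squared_error(y_te, ensemble.predict(X_te)) ** 0.5
print(f"ensemble RMSE: {rmse:.3f}")
```

StackingRegressor generates the first-layer outputs with internal cross-validation, which mirrors the paper's design of feeding held-out FFNN predictions into the boosting stage rather than in-sample ones.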

Keywords: carbon flux, Eddy covariance, extreme gradient boosting, gap-filling comparison, hybrid model, OzFlux network

Procedia PDF Downloads 126
1408 Fluorescence-Based Biosensor for Dopamine Detection Using Quantum Dots

Authors: Sylwia Krawiec, Joanna Cabaj, Karol Malecha

Abstract:

Nowadays, progress in the field of analytical methods is of great interest for reliable biological research and medical diagnostics. Classical techniques of chemical analysis, despite many advantages, permit neither immediate results nor automation of measurements. Chemical sensors have displaced conventional analytical methods: sensors combine precision, sensitivity, fast response, and the possibility of continuous monitoring. A biosensor is a chemical sensor that, in addition to a transducer, also possesses a biologically active material, which is the basis for the detection of specific chemicals in the sample. Each biosensor device consists mainly of two elements: a sensitive element, where receptor-analyte recognition takes place, and a transducer element, which receives the signal and converts it into a measurable one. On the basis of these two elements, biosensors can be divided into two categories: by the recognition element (e.g., immunosensors) and by the transducer (e.g., optical sensors). An optical sensor works by measuring quantitative changes in the parameters characterizing light radiation; the most often analyzed parameters include amplitude (intensity), frequency, and polarization. In a direct method, changes in the optical properties of a compound that reacts with the biological material coating the sensor are analyzed; in an indirect method, indicators are used that change their optical properties upon transformation of the test species. The most commonly used dyes in this method are small molecules with an aromatic ring, such as rhodamine; fluorescent proteins, for example green fluorescent protein (GFP); and nanoparticles such as quantum dots (QDs). Quantum dots have much better photoluminescent properties, better bioavailability, and greater chemical inertness than organic dyes. They are semiconductor nanocrystals 2-10 nm in size.
This very limited number of atoms and the 'nano' size give QDs their highly fluorescent properties. Rapid and sensitive detection of dopamine is extremely important in modern medicine. Dopamine is a very important neurotransmitter that occurs mainly in the brain and central nervous system of mammals. It is responsible for transmitting information about movement through the nervous system and plays an important role in learning and memory. Detection of dopamine is significant for diseases associated with the central nervous system, such as Parkinson's disease or schizophrenia. The developed optical biosensor for dopamine detection uses graphene quantum dots (GQDs). In this sensor, dopamine molecules coat the GQD surface, and as a result the fluorescence is quenched through Förster resonance energy transfer (FRET). The changes in fluorescence correspond to specific concentrations of the neurotransmitter in the tested sample, so the concentration of dopamine in the sample can be determined accurately.
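One common way to turn such fluorescence quenching into a concentration readout is the Stern-Volmer relation, F₀/F = 1 + K_SV·[Q]. The sketch below assumes a hypothetical quenching constant; the abstract does not state which calibration model the authors used, so this is an illustration of the general approach rather than their procedure:

```python
# Stern-Volmer treatment of fluorescence quenching: F0/F = 1 + Ksv*[Q].
# Ksv and the fluorescence readings below are hypothetical; a real
# sensor would take Ksv from a calibration curve of the GQD batch.
KSV = 2.5e4            # quenching constant, 1/M (assumed)
F0 = 1000.0            # fluorescence of the GQDs without dopamine
F = 800.0              # fluorescence after adding the sample

dopamine_M = (F0 / F - 1.0) / KSV
print(f"estimated dopamine concentration: {dopamine_M * 1e6:.1f} uM")
```

The inversion uses only the ratio F₀/F, so drift in absolute intensity cancels out as long as both readings are taken under the same excitation conditions.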

Keywords: biosensor, dopamine, fluorescence, quantum dots

Procedia PDF Downloads 357
1407 Dermatomyositis: It is Not Always an Allergic Reaction

Authors: Irfan Abdulrahman Sheth, Sohil Pothiawala

Abstract:

Dermatomyositis is an idiopathic inflammatory myopathy, traditionally characterized by progressive, symmetrical proximal muscle weakness and pathognomonic or characteristic cutaneous manifestations. We report the case of a 60-year-old Chinese female who was referred from a polyclinic for an allergic rash over the body after applying hair dye 3 weeks earlier. It was associated with puffiness of the face, shortness of breath, and a hoarse voice for the preceding 2 weeks, with decreased effort tolerance. She also complained of dysphagia and myalgia with progressive weakness of the proximal muscles, and of palpitations. She denied chest pain, loss of appetite, weight loss, orthopnea, or fever. She had stable vital signs and appeared cushingoid. She was noted to have a rash over the scalp and face, ecchymosis over the right arm, and puffiness of the face with periorbital oedema. There was symmetrical muscle weakness; the rest of the neurological examination was normal. The initial impression was of an allergic reaction with underlying nephrotic syndrome and Cushing's syndrome from TCM use. Diagnostic tests showed a high creatine kinase (CK) of 1463 U/L, CK-MB of 18.7 µg/L, and troponin-T of 0.09 µg/L. The full blood count and renal panel were normal. EMG showed inflammatory myositis. The patient was managed by a rheumatologist and discharged on oral prednisolone with methotrexate, ergocalciferol, calcium carbonate, and vitamin D tablets, with outpatient follow-up. In some patients, cutaneous disease exists in the absence of objective evidence of muscle inflammation. Management of dermatomyositis begins with careful investigation for the presence of muscle disease or of additional systemic involvement, particularly of the pulmonary, cardiac, or gastrointestinal systems, and for the possibility of an accompanying malignancy. Muscle disease and systemic involvement can be refractory and may require multiple sequential therapeutic interventions or, at times, combinations of therapies.
Thus, we want to highlight to the physicians that the cutaneous disease of dermatomyositis should not be confused with allergic reaction. It can be particularly challenging to diagnose. Early recognition aids appropriate management of this group of patients.

Keywords: dermatomyositis, myopathy, allergy, cutaneous disease

Procedia PDF Downloads 325
1406 Identifying, Reporting and Preventing Medical Errors Among Nurses Working in Critical Care Units At Kenyatta National Hospital, Kenya: Closing the Gap Between Attitude and Practice

Authors: Jared Abuga, Wesley Too

Abstract:

Medical error is the third leading cause of death in the US, with approximately 98,000 deaths occurring every year as a result of medical errors. The global financial burden of medication errors is roughly USD 42 billion. Medication errors may lead to at least one death daily and injure roughly 1.3 million people every year. Medical error reporting is essential to creating a culture of accountability in our healthcare system. Studies of healthcare workers' attitudes and practices in reporting medical errors have shown that the major factors in under-reporting include work stress and fear of the medico-legal consequences of disclosing an error; further, the majority believed that increased reporting of medical errors would contribute to a better system. Most hospitals depend on nurses to discover medication errors because nurses are considered sources of these errors, whether as contributors or as mere observers; consequently, nurses' perceptions of medication errors and of what needs to be done are vital to reducing the incidence of medication errors. We sought to explore nurses' knowledge of medical errors and the factors affecting or hindering the reporting of medical errors among nurses working at the emergency unit, KNH. Critical care nurses face many barriers to completing incident reports on medication errors; one barrier contributing to under-reporting is a lack of education and/or knowledge regarding medication errors and the reporting process. This study therefore sought to determine the availability and use of reporting systems for medical errors in critical care units, to establish nurses' perceptions regarding medical errors and their reporting, and to document factors facilitating the timely identification and reporting of medical errors in critical care settings.
Methods: The study used a cross-sectional design to collect data from 76 critical care nurses at Kenyatta Teaching & Research National Referral Hospital, Kenya. Data analysis is ongoing; by October 2022 we will have the analysis, results, discussion, and recommendations of the study for the 2023 conference.

Keywords: errors, medical, Kenya, nurses, safety

Procedia PDF Downloads 234
1405 Conjunctive Management of Surface and Groundwater Resources under Uncertainty: A Retrospective Optimization Approach

Authors: Julius M. Ndambuki, Gislar E. Kifanyi, Samuel N. Odai, Charles Gyamfi

Abstract:

Conjunctive management of surface and groundwater resources is a challenging task due to the spatial and temporal variability of the hydrology and hydrogeology of water storage systems. Surface water-groundwater hydrogeology is highly uncertain; it is therefore imperative that this uncertainty be explicitly accounted for when managing water resources. Various methodologies have been developed and applied in attempts to account for this uncertainty. For example, simulation-optimization models are often used for conjunctive water resources management. However, direct application of such an approach, in which all realizations are considered at each iteration of the optimization process, leads to a very expensive optimization in terms of computational time, particularly when the number of realizations is large. The aim of this paper, therefore, is to introduce and apply an efficient approach, referred to as Retrospective Optimization Approximation (ROA), for optimizing the conjunctive use of surface water and groundwater over multiple hydrogeological model simulations. The work is based on a stochastic simulation-optimization framework using the recently emerged sample average approximation (SAA) technique, a sampling-based method implemented within the ROA approach. The ROA approach solves and evaluates a sequence of generated optimization sub-problems with an increasing number of realizations (sample size). The response matrix technique was used to link the simulation model with the optimization procedure, and the k-means clustering sampling technique was used to select representative realizations. The methodology is demonstrated on a hypothetical example, in which the generated optimization sub-problems were solved and analysed using the "Active-Set" core optimizer implemented in the MATLAB 2014a environment.
Through the k-means clustering sampling technique, the ROA-Active-Set procedure arrived at a (nearly) converged maximum expected total optimal conjunctive water withdrawal rate within relatively few iterations (6 to 7). The results indicate that the ROA approach is a promising technique for optimizing conjunctive surface water and groundwater withdrawal rates under hydrogeological uncertainty.
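The ROA/SAA loop can be sketched as follows. This toy version is ours, not the authors': a hypothetical two-well problem with a random drawdown-response coefficient stands in for the groundwater model, scipy's SLSQP replaces the MATLAB Active-Set optimizer, and k-means cluster centres supply the growing samples, with each sub-problem warm-started from the previous solution:

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# 200 realizations of a (hypothetical) drawdown-response coefficient
# for two wells, standing in for the uncertain hydrogeology.
A = np.abs(rng.normal([0.8, 1.2], 0.2, size=(200, 2)))
drawdown_limit = 5.0

def solve_subproblem(sample, x0):
    """Maximize total withdrawal s.t. the drawdown limit in every sampled realization."""
    cons = [{"type": "ineq",
             "fun": lambda q, a=a: drawdown_limit - a @ q} for a in sample]
    res = minimize(lambda q: -q.sum(), x0, method="SLSQP",
                   bounds=[(0, None)] * 2, constraints=cons)
    return res.x

# ROA: a sequence of sub-problems over growing, k-means-clustered samples,
# warm-starting each iteration from the previous solution.
q = np.array([1.0, 1.0])
for k in [2, 5, 10, 25, 50]:
    centers = KMeans(n_clusters=k, n_init=10, random_state=0).fit(A).cluster_centers_
    q = solve_subproblem(centers, q)

print(f"converged withdrawal rates: {q.round(3)}, total: {q.sum():.3f}")
```

In the real application each constraint evaluation would involve the response matrix derived from the groundwater model, and the sample sizes would grow until the objective stabilizes, as in the 6-7 iterations reported above.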

Keywords: conjunctive water management, retrospective optimization approximation approach, sample average approximation, uncertainty

Procedia PDF Downloads 226
1404 Hindrances to Effective Delivery of Infrastructural Development Projects in Nigeria’s Built Environment

Authors: Salisu Gidado Dalibi, Sadiq Gumi Abubakar, JingChun Feng

Abstract:

Nigeria's population is about 190 million and increasing annually, making it the seventh most populous nation in the world and the first in Africa. This population growth comes with its own prospects, needs, and challenges, especially for existing and future infrastructure. Infrastructure refers to the structures, systems, and facilities serving the economy of a country, city, town, business, or industry, including roads, railway lines, bridges, tunnels, ports, stadiums, dams and water projects, power generation plants and distribution grids, information and communication technology (ICT), etc. The Nigerian government has embarked on several infrastructural development projects (IDPs) to address the deficit, as the present infrastructure can neither cater to the country's needs nor sustain it. However, delivery of these IDPs has not been smooth: it comes with challenges from within and outside the projects, and with frequent delays and abandonment, affecting all the stakeholders involved. Hence, the aim of this paper is to identify and assess the factors hindering the effective delivery of IDPs in Nigeria's built environment, with a view to offering more insight into such factors and ways to address them. The methodology involves secondary data: materials from several sources (official publications, journals, newspapers, the internet, etc.) within the IDP field were reviewed, with emphasis on Nigerian cases. The hindrance factors identified in this way form the backbone of the questionnaire. A pilot survey was used to test its suitability, after which it was randomly administered to various project professionals in Nigeria's construction industry using a 5-point Likert-scale format to ascertain the impact of these hindrances. Cronbach's alpha reliability testing, mean item score computations, relative importance indices, t-tests, and chi-square statistics were used for the data analyses.
The results outline the impact of the various internal, external, and project-related factors hindering IDPs within Nigeria's built environment.
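As an illustration of one of the measures named above, the relative importance index (RII) of a hindrance factor on a 5-point Likert scale is commonly computed as RII = ΣW / (A × N), where W are the response weights, A is the highest weight, and N is the number of respondents. The response counts below are hypothetical:

```python
# Relative Importance Index for one hindrance factor on a 5-point
# Likert scale: RII = sum(weights) / (A * N), with A = 5 here.
# The response counts below are hypothetical.
responses = {1: 2, 2: 5, 3: 10, 4: 20, 5: 13}   # rating -> respondents
N = sum(responses.values())                      # 50 respondents
total_weight = sum(rating * count for rating, count in responses.items())
rii = total_weight / (5 * N)
print(f"RII = {rii:.3f}")
```

Factors are then ranked by RII, with values nearer 1 indicating the hindrances respondents rate as most impactful.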

Keywords: built environment, development, factors, hindrances, infrastructure, Nigeria, project

Procedia PDF Downloads 160
1403 The Direct Deconvolution Model for the Large Eddy Simulation of Turbulence

Authors: Ning Chang, Zelong Yuan, Yunpeng Wang, Jianchun Wang

Abstract:

Large eddy simulation (LES) has been used extensively in the investigation of turbulence. LES calculates the grid-resolved large-scale motions and leaves the small scales to be modeled by subfilter-scale (SFS) models. Among the existing SFS models, the deconvolution model has been used successfully in LES of engineering and geophysical flows. Despite the wide application of deconvolution models, the effects of subfilter-scale dynamics and filter anisotropy on the accuracy of SFS modeling have not been investigated in depth. The results of LES are highly sensitive to the selection of filters and to the anisotropy of the grid, which has been overlooked in previous research. In the current study, two critical aspects of LES are investigated. First, we analyze the influence of subfilter-scale (SFS) dynamics on the accuracy of direct deconvolution models (DDM) at varying filter-to-grid ratios (FGR) in isotropic turbulence. An array of invertible filters is employed, encompassing Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The significance of the FGR becomes evident, as it acts as a pivotal factor in error control for precise SFS stress prediction. When the FGR is set to 1, the DDM models cannot accurately reconstruct the SFS stress, owing to insufficient resolution of the SFS dynamics. Prediction capabilities are notably enhanced at an FGR of 2, resulting in accurate SFS stress reconstruction, except for cases involving the Helmholtz I and II filters. A remarkable precision close to 100% is achieved at an FGR of 4 for all DDM models. The exploration then extends to filter anisotropy, to address its impact on the SFS dynamics and LES accuracy. Employing the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the direct deconvolution model (DDM) with anisotropic filters, aspect ratios (AR) ranging from 1 to 16 in the LES filters are evaluated.
The findings highlight the DDM's proficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions. High correlation coefficients exceeding 90% are observed in the a priori study for the DDM's reconstructed SFS stresses, surpassing those of the DSM and DMM models. However, these correlations tend to decrease as filter anisotropy increases. In the a posteriori studies, the DDM model consistently outperforms the DSM and DMM models across various turbulence statistics, encompassing velocity spectra, probability density functions of vorticity, SFS energy flux, velocity increments, strain-rate tensors, and SFS stress. It is observed that as filter anisotropy intensifies, the results of the DSM and DMM deteriorate, while the DDM continues to deliver satisfactory results across all filter-anisotropy scenarios. These findings emphasize the DDM framework's potential as a valuable tool for advancing the development of sophisticated SFS models for LES of turbulence.
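The deconvolution idea behind the DDM can be illustrated on a toy problem. The following is a minimal sketch, not the study's solver: it applies a spectral Gaussian filter to a synthetic 1D field and then approximates the inverse filter with a truncated van Cittert iteration, one common way to build an approximate deconvolution. The field, filter width, and iteration count are illustrative assumptions.

```python
import numpy as np

def gaussian_filter_spectral(u, delta):
    """Apply a Gaussian filter of width delta to a periodic 1D field
    in spectral space, using the standard transfer function exp(-k^2 delta^2 / 24)."""
    n = u.size
    k = np.fft.fftfreq(n, d=1.0 / n)           # integer wavenumbers on [0, 2*pi)
    G = np.exp(-(k * delta) ** 2 / 24.0)       # Gaussian filter transfer function
    return np.real(np.fft.ifft(np.fft.fft(u) * G))

def van_cittert(u_bar, delta, n_iter=5):
    """Approximate deconvolution by truncated van Cittert iteration:
    u_{m+1} = u_m + (u_bar - G * u_m), starting from the filtered field."""
    u = u_bar.copy()
    for _ in range(n_iter):
        u = u + (u_bar - gaussian_filter_spectral(u, delta))
    return u

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
u = np.sin(x) + 0.5 * np.sin(7 * x) + 0.1 * rng.standard_normal(256)

u_bar = gaussian_filter_spectral(u, delta=4.0)   # filtered (resolved) field
u_star = van_cittert(u_bar, delta=4.0)           # deconvolved approximation

err_bar = np.linalg.norm(u - u_bar)
err_star = np.linalg.norm(u - u_star)
print(err_star < err_bar)  # True: deconvolution recovers part of the filtered content
```

Because the Gaussian transfer function satisfies 0 < G <= 1, each van Cittert step strictly reduces the error of every damped mode, which is why the deconvolved field is closer to the unfiltered one; real DDM implementations apply the same principle to the 3D velocity field before forming the SFS stress.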

Keywords: deconvolution model, large eddy simulation, subfilter scale modeling, turbulence

Procedia PDF Downloads 66
1402 Formulating a Definition of Hate Speech: From Divergence to Convergence

Authors: Avitus A. Agbor

Abstract:

Numerous incidents, ranging from trivial to catastrophic, come to mind when one reflects on hate. The victims belong to specific, identifiable groups within communities. These experiences evoke discussions on Islamophobia, xenophobia, homophobia, anti-Semitism, racism, ethnic hatred, hostility toward atheists, and other brutal forms of bigotry. Common to all of these is an invisible but potent force that drives them: hatred. Such hatred is usually fueled by a profound degree of intolerance (of diversity) and the zeal to impose on others beliefs and practices considered to be the conventional norm. More importantly, the perpetuation of these hateful acts is the unfortunate outcome of an overplay of invectives and hate speech which, to a great extent, cannot be divorced from hate. From a legal perspective, acknowledging the existence of an undeniable link between hate speech and hate is quite easy. However, both within and outside legal scholarship, the notion of “hate speech” remains a conundrum: a phrase more easily explained through experience than by propounding a watertight definition that captures its entire essence and nature. The problem is further compounded by a few factors. First, within the international human rights framework, the notion of hate speech is not used: in limiting the right to freedom of expression, the ICCPR simply excludes specific kinds of speech (but does not refer to them as hate speech). Regional human rights instruments are not so different, except for subsequent developments in the European Union, in which the notion has been carefully delineated and a much clearer picture of what constitutes hate speech is now provided. The legal architecture of domestic legal systems clearly shows differences in approach and regulation, which makes definition even more difficult: in short, what may be hate speech in one legal system may very well be acceptable speech in another. 
Lastly, the cornucopia of academic voices on the issue of hate speech reflects this divergence. Yet, in the absence of a well-formulated and universally acceptable definition, it is important to consider how hate speech can be defined. Taking an evidence-based approach, this research looks into the issue of defining hate speech in legal scholarship, and examines how and why such a formulation is of critical importance in the prohibition and prosecution of hate speech.

Keywords: hate speech, international human rights law, international criminal law, freedom of expression

Procedia PDF Downloads 59
1401 Advancing Urban Sustainability through the Integration of Planning Evaluation Methodologies

Authors: Natalie Rosales

Abstract:

Based on an ethical vision that recognizes the vital role of human rights, shared values, social responsibility and justice, and environmental ethics, planning may be interpreted as a process aimed at reducing inequalities and overcoming marginality. Seen from this sustainability perspective, planning evaluation must utilize critical-evaluative and narrative-receptive models which assist different stakeholders in their understanding of the urban fabric while triggering reflexive processes that catalyze wider transformations. In this paper, this approach serves as a guide for the evaluation of Mexico's urban planning systems, and a framework is postulated to better integrate sustainability notions into planning evaluation. The paper opens with an overview of the current debate on evaluation in urban planning. The state of the art presented covers the different perspectives and paradigms of planning evaluation, with their fundamentals and scope, which have focused on three main aspects: goal attainment (did planning instruments do what they were supposed to?); performance and effectiveness of planning (retrospective analysis of the planning process and policy assessment); and the effects of process, considering decision problems and contexts rather than techniques and methods. Methodological innovations and improvements in planning evaluation are also reviewed. This comprehensive literature review provides the background for the authors' proposal of a set of general principles for evaluating urban planning, grounded in a sustainability perspective. In the second part, a description of the shortcomings of the approaches used to evaluate urban planning in Mexico sets the basis for highlighting the need for regulatory and instrumental, but also explorative and collaborative, approaches. 
Such combined approaches respond to the inability of these isolated methods to capture planning complexity, and aim to strengthen the usefulness of the evaluation process in improving the coherence and internal consistency of planning practice itself. In the third section, the general proposal for evaluating planning is described in its main aspects. It presents an innovative methodology for a more holistic and integrated assessment which considers the interdependence between values, levels, roles and methods, and incorporates different stakeholders in the evaluation process. By doing so, this piece of work sheds light on how to advance urban sustainability through the integration of evaluation methodologies into planning.

Keywords: urban planning, evaluation methodologies, urban sustainability, innovative approaches

Procedia PDF Downloads 463
1400 Nanopack: A Nanotechnology-Based Antimicrobial Packaging Solution for Extension of Shelf Life and Food Safety

Authors: Andy Sand, Naama Massad – Ivanir, Nadav Nitzan, Elisa Valderrama, Alfred Wegenberger, Koranit Shlosman, Rotem Shemesh, Ester Segal

Abstract:

Microbial spoilage of food products is of great concern in the food industry due to its direct impact on the shelf life of foods and the risk of foodborne illness. Food packaging may therefore make a crucial contribution to keeping food fresh and suitable for consumption. Active packaging solutions that can inhibit the development of microorganisms in food products attract a lot of interest, and many efforts have been made to engineer such solutions and apply them to various food products. NanoPack is an EU-funded international project aiming to develop state-of-the-art antimicrobial packaging systems for perishable foods. The project is based on natural essential oils, which possess significant antimicrobial activity against many bacteria, yeasts and molds. The essential oils are encapsulated in natural aluminosilicate clays, halloysite nanotubes (HNTs), which serve as carriers for the volatile essential oils and enable their incorporation into polymer films. During the course of the project, several polyethylene films with diverse essential-oil combinations were designed based on the characteristics of their target food products. The antimicrobial activity of the produced films was examined in vitro on a broad spectrum of microorganisms, including gram-positive and gram-negative bacteria, aerobic and anaerobic bacteria, yeasts and molds. The films that showed promising in vitro results were then successfully applied in vivo as active packaging for several food products, such as cheese, bread, fruits and raw meat. The in vivo analyses showed significant inhibition of microbial spoilage, indicating the strong contribution of the NanoPack packaging solutions to the extension of shelf life and the reduction of food waste caused by early spoilage throughout the supply chain.

Keywords: food safety, food packaging, essential oils, nanotechnology

Procedia PDF Downloads 126
1399 Role of Autophagic Lysosome Reformation for Cell Viability in an in vitro Infection Model

Authors: Muhammad Awais Afzal, Lorena Tuchscherr De Hauschopp, Christian Hübner

Abstract:

Introduction: Autophagy is an evolutionarily conserved, lysosome-dependent degradation pathway, which can be induced by extrinsic and intrinsic stressors in living systems to adapt to fluctuating environmental conditions. In the context of inflammatory stress, autophagy contributes to the elimination of invading pathogens, the regulation of innate and adaptive immune mechanisms, the regulation of inflammasome activity, and tissue damage repair. Lysosomes can be recycled from autolysosomes by the process of autophagic lysosome reformation (ALR), which depends on the presence of several proteins, including Spatacsin. ALR thus contributes to the replenishment of lysosomes available for fusion with autophagosomes in situations of increased autophagic turnover, e.g., during bacterial infections, inflammatory stress or sepsis. Objectives: We aimed to assess whether ALR plays a role in cell survival in an in vitro bacterial infection model. Methods: Mouse embryonic fibroblasts (MEFs) were isolated from wild-type mice and Spatacsin knockout (Spg11-/-) mice. Wild-type MEFs and Spg11-/- MEFs were infected with Staphylococcus aureus at a multiplicity of infection (MOI) of 10. After 8 and 16 hours of infection, cell viability was assessed on a BD flow cytometer through propidium iodide uptake. Bacterial uptake by the cells was also quantified by plating cell lysates on blood agar plates. Results: In vitro infection of MEFs with Staphylococcus aureus showed a marked decrease in cell viability in ALR-deficient Spatacsin knockout (Spg11-/-) MEFs after 16 hours of infection as compared to wild-type MEFs (n=3 independent experiments; p < 0.0001), although no difference in bacterial uptake was observed between the genotypes. Conclusion: The marked increase in cell death observed in cells with compromised ALR in this in vitro infection model suggests that ALR is important for the defense against invading pathogens such as S. aureus.
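The two quantities at the heart of this protocol are easy to make concrete. The following is a minimal sketch, assuming illustrative counts that are not the study's data: multiplicity of infection (MOI) is simply bacteria added per cell, and viability from propidium iodide (PI) staining is the fraction of PI-negative events.

```python
def moi(n_bacteria, n_cells):
    """Multiplicity of infection: bacteria added per target cell."""
    return n_bacteria / n_cells

def viability_percent(pi_negative, total_events):
    """PI only enters membrane-compromised cells, so PI-negative
    flow-cytometry events are counted as viable."""
    return 100.0 * pi_negative / total_events

n_cells = 1e5
n_bacteria = 10 * n_cells              # MOI of 10, as in the protocol above
print(moi(n_bacteria, n_cells))        # 10.0
print(viability_percent(7200, 10000))  # 72.0 (hypothetical event counts)
```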

Keywords: autophagy, autophagic lysosome reformation, bacterial infections, Staphylococcus aureus

Procedia PDF Downloads 133
1398 Price Prediction Line, Investment Signals and Limit Conditions Applied for the German Financial Market

Authors: Cristian Păuna

Abstract:

In the first decades of the 21st century, in the electronic trading environment, algorithmic capital investments became the primary tool for profiting from speculation in financial markets. A significant number of traders, private and institutional investors participate in the capital markets every day using automated algorithms. Autonomous trading software is today a considerable part of the business intelligence system of any modern financial activity. Trading decisions and orders are made automatically by computers using different mathematical models. This paper presents one of these models, called the Price Prediction Line. A mathematical algorithm is revealed for building a reliable trend line, which is the base for limit conditions and automated investment signals, the core of a computerized investment system. The paper shows how to apply these tools to generate entry and exit investment signals and limit conditions that build a mathematical filter for investment opportunities, and presents the methodology to integrate all of these into automated investment software. The paper also presents trading results obtained for the leading German financial market index with the presented methods, in order to analyze and compare different automated investment algorithms. It was found that a specific mathematical algorithm can be optimized and integrated into an automated trading system with good and sustained results for the leading German market. Investment results are compared in order to qualify the presented model. In conclusion, a risk-to-reward ratio of 1:6.12 was obtained by applying the trigonometric method to the DAX Deutscher Aktienindex over a 24-month investment period. These results are superior to those obtained with other similar models, as this paper reveals. 
The general idea sustained by this paper is that the Price Prediction Line model presented is a reliable capital investment methodology that can be successfully applied to build an automated investment system with excellent results.
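The paper's actual Price Prediction Line formula is not reproduced in this abstract, so the following is only a hedged sketch of the generic pattern it describes: fit a trend line through recent closing prices, then derive entry and exit signals from limit conditions around that line. The least-squares line, the band width, and the synthetic price series are all assumptions for demonstration.

```python
import numpy as np

def trend_line(prices):
    """Fit an ordinary least-squares line through the closing prices
    (a stand-in for the paper's Price Prediction Line)."""
    t = np.arange(prices.size)
    slope, intercept = np.polyfit(t, prices, deg=1)
    return slope * t + intercept

def signals(prices, band=1.0):
    """Limit-condition filter: long entry when price dips 'band' below the
    line while the line slopes upward; exit when price crosses back above."""
    line = trend_line(prices)
    uptrend = line[-1] > line[0]
    entries = (prices < line - band) & uptrend
    exits = prices > line
    return entries, exits

# Synthetic upward-drifting price series standing in for index closes.
rng = np.random.default_rng(1)
prices = 100 + 0.1 * np.arange(200) + rng.normal(0, 0.8, 200)

entries, exits = signals(prices, band=1.0)
print(int(entries.sum()), int(exits.sum()))
```

In a production system the boolean signal arrays would be consumed bar-by-bar by the order-management layer; the band width plays the role of the mathematical filter the abstract mentions, discarding marginal opportunities near the line.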

Keywords: algorithmic trading, automated trading systems, high-frequency trading, DAX Deutscher Aktienindex

Procedia PDF Downloads 124
1397 Risk-Sharing Financing of Islamic Banks: Better Shielded against Interest Rate Risk

Authors: Mirzet SeHo, Alaa Alaabed, Mansur Masih

Abstract:

In theory, risk-sharing-based financing (RSF) is considered a cornerstone of Islamic finance. It is argued to render Islamic banks more resilient to shocks. In practice, however, this feature of Islamic financial products is almost negligible. Instead, debt-based instruments with conventional-like features have overwhelmed the nascent industry. In addition, the framework of present-day economic, regulatory and financial reality inevitably exposes Islamic banks in dual banking systems to the problems of conventional banks. This includes, but is not limited to, interest rate risk. Empirical evidence has, thus far, confirmed such exposures, despite Islamic banks’ interest-free operations. This study applies system GMM in modeling the determinants of RSF and finds that RSF is insensitive to changes in interest rates. Hence, our results provide support to the “stability” view of risk-sharing-based financing. This suggests RSF as the way forward for risk management at Islamic banks, in the absence of widely acceptable Shariah-compliant hedging instruments. Further support for the stability view is given by evidence of counter-cyclicality: unlike debt-based lending, which inflates artificial asset bubbles through credit expansion during the upswing of business cycles, RSF is negatively related to GDP growth. Our results also imply a significantly strong relationship between risk-sharing deposits and RSF. However, the pass-through of these deposits to RSF is economically low: only about 40% of risk-sharing deposits are channeled to risk-sharing financing. This raises questions about the validity of the industry’s claim that depositors accustomed to conventional banking shy away from risk sharing, and signals potential for better balance sheet management at Islamic banks. Overall, our findings suggest that, on the one hand, Islamic banks can gain ‘independence’ from conventional banks and interest rates through risk-sharing products, the potential for which is enormous. 
On the other hand, RSF could enable policy makers to improve systemic stability and restrain excessive credit expansion through its countercyclical features.
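The "40% pass-through" figure has a simple econometric meaning: it is a slope coefficient of financing on deposits. The study estimates this with dynamic system GMM on bank panel data; the sketch below only illustrates the interpretation with a toy OLS regression on synthetic data, where the true pass-through is set to 0.4 by assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic cross-section: risk-sharing deposits and the financing they fund.
# The 0.4 coefficient encodes an assumed 40% pass-through, mimicking the
# economically low pass-through the abstract reports.
deposits = rng.uniform(50, 150, 500)
financing = 0.4 * deposits + rng.normal(0, 2, 500)

# OLS slope = cov(deposits, financing) / var(deposits): the estimated
# fraction of each deposit unit channeled into risk-sharing financing.
slope = np.cov(deposits, financing)[0, 1] / np.var(deposits, ddof=1)
print(round(slope, 2))  # close to the assumed 0.4 pass-through
```

A pass-through well below 1 in such a regression is what motivates the abstract's point: most risk-sharing deposits are not reaching risk-sharing financing, leaving room for better balance sheet management.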

Keywords: Islamic banks, risk-sharing, financing, interest rate, dynamic system GMM

Procedia PDF Downloads 313