Search results for: pore-throat distribution
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5091

921 Ultrasound-Mediated Separation of Ethanol, Methanol, and Butanol from Their Aqueous Solutions

Authors: Ozan Kahraman, Hao Feng

Abstract:

Ultrasonic atomization (UA) is a useful technique for producing a liquid spray for various processes, such as spray drying. Ultrasound generates small droplets (a few microns in diameter) by disintegration of the liquid via cavitation and/or capillary waves, with low velocity and a narrow droplet size distribution. In recent years, UA has been investigated as an alternative for enabling or enhancing ultrasound-mediated unit operations, such as evaporation, separation, and purification. Previous studies on the UA separation of a solvent from a bulk solution were limited to ethanol-water systems. More investigations into ultrasound-mediated separation for other liquid systems are needed to elucidate the separation mechanism. This study was undertaken to investigate the effects of the operational parameters on the ultrasound-mediated separation of three miscible liquid pairs: ethanol-, methanol-, and butanol-water. A 2.4 MHz ultrasonic mister with a diameter of 18 mm and a rated power of 24 W was installed at the bottom of a custom-designed cylindrical separation unit. Air was supplied to the unit (3 to 4 L/min) as a carrier gas to collect the mist. The effects of the initial alcohol concentration, viscosity, and temperature (10, 30 and 50°C) on the atomization rates were evaluated. The alcohol concentration in the collected mist was measured with high-performance liquid chromatography and a refractometer. The viscosity of the solutions was determined using a Brookfield digital viscometer. The alcohol concentration of the atomized mist was dependent on the feed concentration, feed rate, viscosity, and temperature. Increasing the temperature of the alcohol-water mixtures from 10 to 50°C increased the vapor pressure of both the alcohols and water, resulting in an increase in the atomization rates but a decrease in the separation efficiency. The alcohol concentration in the mist was higher than that of the alcohol-water equilibrium at all three temperatures. More importantly, for ethanol, the concentration in the mist went beyond the azeotropic point, which cannot be achieved by conventional distillation. Ultrasound-mediated separation is a promising non-equilibrium method for separating and purifying alcohols, which may result in significant energy reductions and process intensification.

Keywords: azeotropic mixtures, distillation, evaporation, purification, separation, ultrasonic atomization

Procedia PDF Downloads 180
920 Discriminating Between Energy Drinks and Sports Drinks Based on Their Chemical Properties Using Chemometric Methods

Authors: Robert Cazar, Nathaly Maza

Abstract:

Energy drinks and sports drinks are quite popular among young adults and teenagers worldwide. Some concerns regarding their health effects, particularly those of the energy drinks, have been raised based on scientific findings. Differentiating between these two types of drinks by means of their chemical properties is therefore an instructive task, and chemometrics provides the most appropriate strategy to do so. In this study, a discrimination analysis of energy and sports drinks has been carried out by applying chemometric methods. A set of eleven samples of available commercial brands of drinks, seven energy drinks and four sports drinks, was collected. Each sample was characterized by eight chemical variables (carbohydrates, energy, sugar, sodium, pH, degrees Brix, density, and citric acid). The data set was standardized and examined by exploratory chemometric techniques such as clustering and principal component analysis. As a preliminary step, a variable selection was carried out by inspecting the variable correlation matrix. It was detected that some variables are redundant, so they can be safely removed, leaving only five variables that are sufficient for this analysis: sugar, sodium, pH, density, and citric acid. Then, hierarchical clustering employing the average-linkage criterion and the Euclidean distance metric was performed. It perfectly separates the two types of drinks, since the resultant dendrogram, cut at the 25% similarity level, sorts the samples into two well-defined groups, one of them containing the energy drinks and the other the sports drinks. Further assurance of the complete discrimination is provided by the principal component analysis. The projection of the data set on the first two principal components, which retain 71% of the data information, makes it possible to visualize the distribution of the samples in the two groups identified in the clustering stage. Since the first principal component is the discriminating one, inspection of its loadings allows such groups to be characterized. The energy drinks group possesses medium to high values of density, citric acid, and sugar. The sports drinks group, on the other hand, exhibits low values of those variables. In conclusion, the application of chemometric methods to a data set that features some chemical properties of a number of energy and sports drinks provides an accurate, dependable way to discriminate between these two types of beverages.
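
For readers who want to reproduce this type of analysis, the sketch below outlines the workflow described above (autoscaling, average-linkage hierarchical clustering with Euclidean distances, and projection onto the first two principal components) using scikit-learn and SciPy; the sample values are hypothetical placeholders, not the study's measurements.

```python
# Minimal sketch of the chemometric workflow; drink values are hypothetical placeholders.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical samples x variables matrix: sugar, sodium, pH, density, citric acid
X = np.array([
    [11.0, 0.10, 3.2, 1.045, 0.30],   # energy drink (assumed values)
    [10.5, 0.12, 3.3, 1.043, 0.28],
    [ 6.0, 0.45, 3.6, 1.020, 0.10],   # sports drink (assumed values)
    [ 5.8, 0.50, 3.7, 1.019, 0.12],
])

Xs = StandardScaler().fit_transform(X)            # autoscale each variable

# Average-linkage hierarchical clustering with Euclidean distances
Z = linkage(Xs, method="average", metric="euclidean")
labels = fcluster(Z, t=2, criterion="maxclust")   # cut the dendrogram into 2 groups

# Projection onto the first two principal components
pca = PCA(n_components=2)
scores = pca.fit_transform(Xs)
print(labels, pca.explained_variance_ratio_)
```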

Keywords: chemometrics, clustering, energy drinks, principal component analysis, sports drinks

Procedia PDF Downloads 109
919 An International Comparison of Global Financial Centers: Major Competitive Strategies

Authors: I. Hakki Eraslan, Birol Ozturk, Istemi Comlekci

Abstract:

This paper begins by defining what is meant by "globalization" in finance and by identifying the sources of value added in the internationally competitive financial services sector: origination, trading and distribution of debt and equity capital market instruments and their derivatives, foreign exchange trading and securities brokerage, management of market risk and credit risk, loan syndication and structured bank financings, corporate finance and advisory services, and asset management. These activities are considered in terms of a "value chain", one that ultimately gives rise to the real economic gains attributable to financial-center operations. The research presents available evidence as to where the relevant value-added activities usually take place. It then examines the "centrifugal" and "centripetal" forces that determine the concentration or dispersal of value-added activity in financial intermediation, both interregionally and internationally. Next, the research assesses the factors which appear to underlie the locational pattern of international financial centers that has evolved. In preparing this paper, the current position and the main opportunities and challenges facing the world's major financial services sectors were also examined, and an attempt was made to lay out a potential vision and strategies. Extensive research was conducted, drawing on many internal research materials and publications. The authors also engaged closely with academia, industry practitioners and regulators, and consulted market experts from major world financial centers. More than 60 in-depth consultative sessions were conducted over the past two years, which provided insightful suggestions and innovative ideas on how to further the financial industry's position as an international financial centre. The paper concludes with the outlook for the future pattern of financial centers in the global competitive environment. The ideas and advice gathered are condensed into this paper, which recommends to strategic decision leaders a vision and a strategy for the financial services sector to move forward amid a highly competitive environment.

Keywords: financial centers, competitiveness, financial services industry, economics

Procedia PDF Downloads 404
918 Content and Language Integrated Instruction: An Investigation of Oral Corrective Feedback in the Chinese Immersion Classroom

Authors: Qin Yao

Abstract:

Content and language integrated instruction provides second language learners with instruction in subject matter and language, and is greatly valued, particularly in the language immersion classroom, where a language other than the students' first language is the vehicle for teaching the school curriculum. Corrective feedback is an essential instructional technique for teachers to integrate a focus on language into their content instruction. This study aims to fill a gap in the literature on immersion, namely the lack of studies examining corrective feedback in Chinese immersion classrooms, by studying the learning opportunities brought by oral corrective feedback in a Chinese immersion classroom. Specifically, it examines the distribution of different types of teacher corrective feedback and how students respond to each feedback type, as well as how the focus of the teacher-student interactional exchanges affects the effect of feedback. Two Chinese immersion teachers and their immersion classes were involved, and data were collected through classroom observations and interviews. Observations documented the teachers' provision of oral corrective feedback and the students' responses following the feedback in class, and interviews with the teachers collected their reflective thoughts about their teaching. A primary quantitative and qualitative analysis of the data revealed that, among the different types of corrective feedback, recasts occurred most frequently. Metalinguistic clues and repetition were the least frequently occurring feedback types. Clarification requests led to the highest percentage of learner uptake manifested by learners' oral production immediately following the feedback, while explicit correction came second and recasts third. In addition, the results also showed that the interactional context played a role in the effectiveness of the feedback: teachers were most likely to give feedback in conversational exchanges that focused on explicit language and content, while students were most likely to use feedback in exchanges that focused on explicit language. In conclusion, the results of this study indicate that recasts are preferred by Chinese immersion teachers, confirming results of previous studies on corrective feedback in non-Chinese immersion classrooms, and that clarification requests and explicit language instruction elicit more target language production from students and are facilitative of their target language development, and thus should not be overlooked in immersion and other content and language integrated classrooms.

Keywords: Chinese immersion, content and language integrated instruction, corrective feedback, interaction

Procedia PDF Downloads 411
917 Mesoporous Na2Ti3O7 Nanotube-Constructed Materials with Hierarchical Architecture: Synthesis and Properties

Authors: Neumoin Anton Ivanovich, Opra Denis Pavlovich

Abstract:

Materials based on titanium oxide compounds are widely used in areas such as solar energy, photocatalysis, the food industry and hygiene products, biomedical technologies, etc. Demand for them has also formed in the battery industry (an example of this is the commercialization of Li4Ti5O12), where much attention has recently been paid to the development of next-generation systems and technologies, such as sodium-ion batteries. This dictates the need to search for new materials with improved characteristics, as well as for ways to obtain them that meet the requirements of scalability. One of the ways to solve these problems can be the creation of nanomaterials, which often have a set of physicochemical properties that radically differ from the characteristics of their counterparts in the micro- or macroscopic state. At the same time, it is important to control the texture (specific surface area, porosity) of such materials. In view of the above, among other methods, the hydrothermal technique seems suitable, allowing a wide range of control over the conditions of synthesis. In the present study, a method was developed for the preparation of mesoporous nanostructured sodium trititanate (Na2Ti3O7) with a hierarchical architecture. The materials were synthesized by hydrothermal processing and exhibit a complex, hierarchically organized two-level architecture. At the first level of the hierarchy, the materials are represented by particles having a rough surface, and at the second level, by one-dimensional nanotubes. The products were found to have a high specific surface area and porosity with a narrow pore size distribution (about 6 nm). As is known, the specific surface area and porosity are important characteristics of functional materials, which largely determine the possibilities and directions of their practical application. Electrochemical impedance spectroscopy data show that the resulting sodium trititanate has a sufficiently high electrical conductivity. As expected, the synthesized complexly organized nanoarchitecture based on porous sodium trititanate may find practical application, for example, in the field of new-generation electrochemical storage and energy conversion devices.

Keywords: sodium trititanate, hierarchical materials, mesoporosity, nanotubes, hydrothermal synthesis

Procedia PDF Downloads 107
916 Balancing Electricity Demand and Supply to Protect a Company from Load Shedding: A Review

Authors: G. W. Greubel, A. Kalam

Abstract:

This paper provides a review of the technical problems facing the South African electricity system and discusses a hypothetical 'virtual grid' concept that may assist in solving the problems. The proposed solution has potential application across emerging markets with constrained power infrastructure or for companies who wish to be entirely powered by renewable energy. South Africa finds itself at a confluence of forces where the national electricity supply system is constrained, with under-supply primarily from old and failing coal-fired power stations and congested and inadequate transmission and distribution systems. Simultaneously, the country attempts to meet carbon reduction targets driven by both an alignment with international goals and a consumer-driven requirement. The constrained electricity system is an aspect of an economy characterized by very low economic growth, high unemployment, and frequent and significant load shedding. The fiscus does not have the funding to build new generation capacity or strengthen the grid. The under-supply is increasingly alleviated by the penetration of wind and solar generation capacity and embedded roof-top solar. However, this increased penetration results in less inertia, less synchronous generation, and less capability for fast frequency response, with resultant instability. The renewable energy facilities assist in solving the under-supply issues but merely 'kick the can down the road' by neither contributing to grid stability nor substituting for the lost inertia, thus creating an expanding issue for the grid to manage. By technically balancing its electricity demand and supply, a company with facilities located across the country can be protected from the effects of load shedding, and thus ensure financial and production performance, protect jobs, and contribute meaningfully to the economy. By treating the company's load (across the country) and its various distributed generation facilities as a 'virtual grid', which by design will provide ancillary services to the grid, one is able to create a win-win situation for both the company and the grid.

Keywords: load shedding, renewable energy integration, smart grid, virtual grid, virtual power plant

Procedia PDF Downloads 59
915 Leveraging Remote Sensing Information for Drought Disaster Risk Management

Authors: Israel Ropo Orimoloye, Johanes A. Belle, Olusola Adeyemi, Olusola O. Ololade

Abstract:

With more than 100,000 orbits during the past 20 years, Terra has significantly improved our knowledge of the Earth's climate and its implications for societies and ecosystems, including human activity and natural disasters such as drought events. With the Terra instruments' performance and the free distribution of their products, this study utilised Terra MOD13Q1 satellite data to assess drought disaster events and their spatiotemporal patterns over the Free State Province of South Africa between 2001 and 2019 for the summer, autumn, winter, and spring seasons. The study also used high-resolution downscaled climate change projections under three representative concentration pathways (RCPs). Three future periods comprising the short (2030s), medium (2040s), and long term (2050s), compared to the current period, are analysed to understand the potential magnitude of projected climate change-related drought. The study revealed that the years 2001 and 2016 witnessed extreme drought conditions, where the drought index is between 0 and 20% across the entire province during summer, while the years 2003, 2004, 2007, and 2015 saw severe drought conditions across the region, with variation from one part to another. The results show that from -24.5° to -25.5° latitude the area witnessed a decrease in precipitation (80 to 120 mm) across the time slice and an increase between latitudes -26° and -28° S for the summer seasons, which is more prominent in the years 2041 to 2050. This study emphasizes the strong spatio-environmental impacts within the province and highlights the associated factors that characterise high drought stress risk, especially on the environment and ecosystems. This study contributes to a disaster risk framework to identify areas for specific research and adaptation activities on drought disaster risk and for environmental planning in the study area, which is characterised by both rural and urban contexts, to address climate change-related drought impacts.
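
The abstract reports a drought index scaled in percent, with 0-20% marking extreme drought; a common NDVI-based choice for such a metric is the Vegetation Condition Index (VCI). The sketch below shows a VCI-style computation from a MOD13Q1 NDVI stack as one plausible reading of the method; the array shapes and data are synthetic assumptions, not the study's processing chain.

```python
# Hedged sketch of a Vegetation Condition Index (VCI)-style drought metric from an
# NDVI time series; data and shapes are synthetic, not the study's actual workflow.
import numpy as np

def vci(ndvi_stack: np.ndarray) -> np.ndarray:
    """ndvi_stack: (time, rows, cols) seasonal NDVI composites.
    Returns the VCI in percent for the last time step."""
    ndvi_min = ndvi_stack.min(axis=0)
    ndvi_max = ndvi_stack.max(axis=0)
    rng = np.where(ndvi_max > ndvi_min, ndvi_max - ndvi_min, np.nan)
    return 100.0 * (ndvi_stack[-1] - ndvi_min) / rng

# Example with synthetic data: pixels with VCI of 0-20% are flagged as extreme drought
ndvi = np.random.uniform(0.1, 0.8, size=(19, 50, 50))   # 2001-2019 summers (synthetic)
drought_pct = vci(ndvi)
extreme = drought_pct <= 20.0
print(f"extreme-drought pixels: {extreme.mean():.1%}")
```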

Keywords: remote sensing, drought disaster, climate scenario, assessment

Procedia PDF Downloads 187
914 Construction of Ovarian Cancer-on-Chip Model by 3D Bioprinting and Microfluidic Techniques

Authors: Zakaria Baka, Halima Alem

Abstract:

Cancer is a major worldwide health problem that caused around ten million deaths in 2020. In addition, efforts to develop new anti-cancer drugs still face a high failure rate. This is partly due to the lack of preclinical models that recapitulate in-vivo drug responses. Indeed, the conventional cell culture approach (known as 2D cell culture) is far from reproducing the complex, dynamic and three-dimensional environment of tumors. To set up more in-vivo-like cancer models, 3D bioprinting seems to be a promising technology due to its ability to achieve 3D scaffolds containing different cell types with controlled distribution and precise architecture. Moreover, the introduction of microfluidic technology makes it possible to simulate in-vivo dynamic conditions through the so-called "cancer-on-chip" platforms. Whereas several cancer types have been modeled through the cancer-on-chip approach, such as lung cancer and breast cancer, only a few works describing ovarian cancer models have been reported. The aim of this work is to combine 3D bioprinting and microfluidic techniques to set up a 3D dynamic model of ovarian cancer. In the first phase, an alginate-gelatin hydrogel containing SKOV3 cells was used to achieve tumor-like structures through an extrusion-based bioprinter. The desired form of the tumor-like mass was first designed in 3D CAD software. The hydrogel composition was then optimized to ensure good and reproducible printability. Cell viability in the bioprinted structures was assessed using a Live/Dead assay and the WST1 assay. In the second phase, these bioprinted structures will be included in a microfluidic device that allows simultaneous testing of different drug concentrations. This microfluidic device was first designed through computational fluid dynamics (CFD) simulations to fix its precise dimensions. It was then manufactured through a molding method based on a 3D-printed template. To confirm the results of the CFD simulations, doxorubicin (DOX) solutions were perfused through the device and the DOX concentration in each culture chamber was determined. Once completely characterized, this model will be used to assess the efficacy of anti-cancer nanoparticles developed in the Jean Lamour institute.

Keywords: 3D bioprinting, ovarian cancer, cancer-on-chip models, microfluidic techniques

Procedia PDF Downloads 196
913 Adaptive Process Monitoring for Time-Varying Situations Using Statistical Learning Algorithms

Authors: Seulki Lee, Seoung Bum Kim

Abstract:

Statistical process control (SPC) is a practical and effective method for quality control. The most important and widely used technique in SPC is the control chart. The main goal of a control chart is to detect any assignable changes that affect the quality output. Most conventional control charts, such as Hotelling's T2 chart, are commonly based on the assumption that the quality characteristics follow a multivariate normal distribution. However, in modern, complicated manufacturing systems, appropriate control chart techniques that can efficiently handle nonnormal processes are required. To overcome the shortcomings of conventional control charts for nonnormal processes, several methods have been proposed that combine statistical learning algorithms and multivariate control charts. Statistical learning-based control charts, such as support vector data description (SVDD)-based charts and k-nearest neighbors-based charts, have proven their improved performance in nonnormal situations compared to that of the T2 chart. Besides the nonnormal property, time-varying operations are also quite common in real manufacturing fields because of various factors such as product and set-point changes, seasonal variations, catalyst degradation, and sensor drifting. However, traditional control charts cannot accommodate future condition changes of the process because they are formulated based on the data information recorded in the early stage of the process. In the present paper, we propose an SVDD-based control chart, which is capable of adaptively monitoring time-varying and nonnormal processes. We reformulated the SVDD algorithm into a time-adaptive SVDD algorithm by adding a weighting factor that reflects time-varying situations. Moreover, we defined the updating region for the efficient model-updating structure of the control chart. The proposed control chart simultaneously allows efficient model updates and timely detection of out-of-control signals. The effectiveness and applicability of the proposed chart were demonstrated through experiments with simulated data and real data from the metal frame process in mobile device manufacturing.
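
As a rough illustration of the time-adaptive idea described above, the sketch below fits a one-class SVM (a close relative of SVDD available in scikit-learn) with exponentially decaying sample weights so that recent observations dominate the boundary; the weighting scheme and data are illustrative assumptions, not the authors' exact reformulation.

```python
# Hedged sketch: a time-weighted one-class SVM as a stand-in for a time-adaptive
# SVDD chart; the forgetting factor and the simulated drifting process are assumptions.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 3)) + 0.002 * np.arange(200)[:, None]  # slowly drifting process

# Exponential forgetting: recent observations receive larger weights
lam = 0.02
weights = np.exp(-lam * (len(X_train) - 1 - np.arange(len(X_train))))

model = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
model.fit(X_train, sample_weight=weights)

X_new = rng.normal(size=(5, 3)) + 0.002 * 205       # in-control continuation of the drift
scores = model.decision_function(X_new)             # negative score -> out-of-control signal
print(scores)
```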

Keywords: multivariate control chart, nonparametric method, support vector data description, time-varying process

Procedia PDF Downloads 299
912 Comparison of the Isolation Rates and Characteristics of Salmonella Isolated from Antibiotic-Free and Conventional Chicken Meat Samples

Authors: Jin-Hyeong Park, Hong-Seok Kim, Jin-Hyeok Yim, Young-Ji Kim, Dong-Hyeon Kim, Jung-Whan Chon, Kun-Ho Seo

Abstract:

Salmonella contamination in chicken samples can cause major health problems in humans. However, the effects of antibiotic treatment during growth, as well as the impact of the poultry slaughter line, on the prevalence of Salmonella in the final chicken meat sold to consumers are unknown. In this study, we compared the isolation rates and antimicrobial resistance of Salmonella between antibiotic-free, conventional, and conventional Korean native retail chicken meat samples, and the clonal divergence of Salmonella isolates by multilocus sequence typing. In addition, the distribution of extended-spectrum β-lactamase (ESBL) genes in ESBL-producing Salmonella isolates was analyzed. A total of 72 retail chicken meat samples (n = 24 antibiotic-free broiler [AFB] chickens, n = 24 conventional broiler [CB] chickens, and n = 24 conventional Korean native [CK] chickens) were collected from local retail markets in Seoul, South Korea. The isolation rates of Salmonella were 66.6% in AFB chickens, 45.8% in CB chickens, and 25% in CK chickens. By analyzing the minimum inhibitory concentrations of β-lactam antibiotics with the disc-diffusion test, we found that 81.2% of Salmonella isolates from AFB chickens, 63.6% of isolates from CB chickens, and 50% of isolates from CK chickens were ESBL producers; all ESBL-positive isolates had the CTX-M-15 genotype. Interestingly, all ESBL-producing Salmonella were revealed to be ST16 by multilocus sequence typing. In addition, all CTX-M-15-positive isolates had the genetic platform of the blaCTX-M gene (IS26-ISEcp1-blaCTX-M-15-IS903); to the best of our knowledge, this is the first such report in Salmonella worldwide. The Salmonella ST33 strain (S. Hadar) isolated in this study has never been reported in South Korea. In conclusion, our findings showed that antibiotic-free retail chicken meat products were also largely contaminated with ESBL-producing Salmonella and that their ESBL genes and genetic platforms were the same as those isolated from conventional retail chicken meat products.

Keywords: antibiotic-free poultry, conventional poultry, multilocus sequence typing, extended-spectrum β-lactamase, antimicrobial resistance

Procedia PDF Downloads 277
911 Cellular RNA-Binding Domains with Distant Homology in Viral Proteomes

Authors: German Hernandez-Alonso, Antonio Lazcano, Arturo Becerra

Abstract:

Until today, viruses remain controversial and poorly understood; about their origin, this problem represents an enigma and one of the great challenges for the contemporary biology. Three main theories have tried to explain the origin of viruses: regressive evolution, escaped host gene, and pre-cellular origin. Under the perspective of the escaped host gene theory, it can be assumed a cellular origin of viral components, like protein RNA-binding domains. These universal distributed RNA-binding domains are related to the RNA metabolism processes, including transcription, processing, and modification of transcripts, translation, RNA degradation and its regulation. In the case of viruses, these domains are present in important viral proteins like helicases, nucleases, polymerases, capsid proteins or regulation factors. Therefore, they are implicated in the replicative cycle and parasitic processes of viruses. That is why it is possible to think that those domains present low levels of divergence due to selective pressures. For these reasons, the main goal for this project is to create a catalogue of the RNA-binding domains found in all the available viral proteomes, using bioinformatics tools in order to analyze its evolutionary process, and thus shed light on the general virus evolution. ProDom database was used to obtain larger than six thousand RNA-binding domain families that belong to the three cellular domains of life and some viral groups. From the sequences of these families, protein profiles were created using HMMER 3.1 tools in order to find distant homologous within greater than four thousand viral proteomes available in GenBank. Once accomplished the analysis, almost three thousand hits were obtained in the viral proteomes. The homologous sequences were found in proteomes of the principal Baltimore viral groups, showing interesting distribution patterns that can contribute to understand the evolution of viruses and their host-virus interactions. Presence of cellular RNA-binding domains within virus proteomes seem to be explained by closed interactions between viruses and their hosts. Recruitment of these domains is advantageous for the viral fitness, allowing viruses to be adapted to the host cellular environment.
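
A minimal sketch of the profile-search step described above is given below: it builds a profile HMM from a (placeholder) ProDom family alignment with hmmbuild and scans a (placeholder) viral proteome with hmmsearch, as in HMMER 3.x; the file names and E-value threshold are assumptions, not the study's settings.

```python
# Hedged sketch of the profile-search step using HMMER's hmmbuild/hmmsearch.
# File names are placeholders; HMMER 3.x must be installed and on PATH.
import subprocess

family_alignment = "RNA_binding_family.sto"   # placeholder multiple alignment of a domain family
profile = "RNA_binding_family.hmm"
viral_proteome = "viral_proteome.fasta"       # placeholder proteome downloaded from GenBank

# Build the profile HMM from the family alignment
subprocess.run(["hmmbuild", profile, family_alignment], check=True)

# Search the viral proteome; candidate distant homologs go to a tabular output file
subprocess.run(
    ["hmmsearch", "--tblout", "hits.tbl", "-E", "1e-5", profile, viral_proteome],
    check=True,
)

# Count the non-comment lines of the tabular output (one per hit)
with open("hits.tbl") as fh:
    hits = [line.split() for line in fh if not line.startswith("#")]
print(f"{len(hits)} candidate homologous sequences found")
```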

Keywords: bioinformatics tools, distant homology, RNA-binding domains, viral evolution

Procedia PDF Downloads 387
910 Analyses of Copper Nanoparticles Impregnated Wood and Its Fungal Degradation Performance

Authors: María Graciela Aguayo, Laura Reyes, Claudia Oviedo, José Navarrete, Liset Gómez, Hugo Torres

Abstract:

Most wood species used in construction deteriorate when exposed to environmental conditions that favor the growth of wood-degrading organisms. Therefore, chemical protection by impregnation allows more efficient use of forest resources, extending the wood's useful life. A wood protection treatment which has attracted considerable interest in the scientific community during the last decade is wood impregnation with nanocompounds. Radiata pine is the main wood species used in the Chilean construction industry, with a total availability of 8 million m³ of sawn timber. According to the requirements of the American Wood Protection Association (AWPA) and the Chilean Standards (NCh), radiata pine timber used in construction must be protected due to its low natural durability. In this work, impregnation with copper nanoparticles (CuNP) was studied in terms of penetration and its protective effect against wood rot fungi. Two concentrations, 1 and 3 g/L of CuNP, were applied by impregnation to radiata pine sapwood. Penetration tests under the AWPA A3-91 standard were carried out, and wood decay tests were performed according to EN 113, with slight modifications. The penetration results for 1 g/L CuNP showed irregular total penetration, while the samples impregnated with 3 g/L showed total penetration with uniform concentration (blue color in all cross-sections). The mass losses of the impregnated wood due to fungal exposure were significantly reduced, regardless of the concentration of the solution or the fungus. In impregnated wood samples, exposure to G. trabeum resulted in mass loss (ML) values of 2.70% and 1.19% for 1 g/L and 3 g/L CuNP, respectively, and exposure to P. placenta resulted in ML values of 4.02% and 0.70% for 1 g/L and 3 g/L CuNP, respectively. In this study, the penetration analysis confirmed a uniform distribution inside the wood, and both concentrations were effective against the tested fungi, giving mass loss values lower than 5%. Therefore, future research on wood preservatives should focus on new nanomaterials that are more efficient and environmentally friendly. Acknowledgments: CONICYT FONDEF IDeA I+D 2019, grant number ID19I10122.

Keywords: copper nanoparticles, fungal degradation, radiata pine wood, wood preservation

Procedia PDF Downloads 199
909 Clinical and Molecular Characterization of Ichthyosis at King Abdulaziz Medical City, Riyadh KSA

Authors: Reema K. AlEssa, Sahar Alshomer, Abdullah Alfaleh, Sultan ALkhenaizan, Mohammed Albalwi

Abstract:

Ichthyosis is a disorder of abnormal keratinization, characterized by excessive scaling, and consists of more than twenty subtypes that vary in severity, mode of inheritance, and the genes involved. There is insufficient data in the literature about the epidemiology and characteristics of ichthyosis locally. Our aim is to identify the histopathological features and genetic profile of ichthyosis. Method: This is an observational retrospective case series study, conducted in March 2020, that included all patients who were diagnosed with ichthyosis and confirmed by histological and molecular findings over the last 20 years in King Abdulaziz Medical City (KAMC), Riyadh, Saudi Arabia. Molecular analysis was performed by testing genomic DNA and checking genetic variations using the AmpliSeq panel. All disease-causing variants were checked against the HGMD, ClinVar, Genome Aggregation Database (gnomAD), and Exome Aggregation Consortium (ExAC) databases. Result: A total of 60 cases of ichthyosis were identified, with a mean age of 13 ± 9.2 years. There was an almost equal distribution between female patients (29; 48%) and males (31; 52%). The majority of them were Saudis (94%). More than half of the patients presented with general scaling (33; 55%), followed by dryness and coarse skin (19; 31.6%) and hyperlinearity (5; 8.33%). Family history and history of consanguinity were seen in 26 (43.3%) and 13 (22%) patients, respectively. A history of collodion baby was found in 6 (10%) cases of ichthyosis. The most frequently involved genes were ALOX12B, ALOXE3, CERS3, CYP4F22, DOLK, FLG2, GJB2, PNPLA1, SLC27A4, SPINK5, STS, SUMF1, TGM1, TGM5, and VPS33B. The most frequent variations were detected in CYP4F22 in 16 cases (26.6%), followed by ALOXE3 (6; 10%) and STS (6; 10%), then TGM1 (5; 8.3%) and ALOX12B (5; 8.3%). The molecular genetic analysis identified 23 different genetic variations in the ichthyosis genes, of which 13 were novel mutations. Homozygous mutations were detected in the majority of ichthyosis cases (54; 90%), and only 1 case was heterozygous. A few cases, 4 (6.6%), had an unknown type of ichthyosis with a negative genetic result. Conclusion: 13 novel mutations were discovered. Also, about half of the ichthyosis patients had a positive history of consanguinity.

Keywords: ichthyosis, genetic profile, molecular characterization, congenital ichthyosis

Procedia PDF Downloads 197
908 Airborne Particulate Matter Passive Samplers for Indoor and Outdoor Exposure Monitoring: Development and Evaluation

Authors: Kholoud Abdulaziz, Kholoud Al-Najdi, Abdullah Kadri, Konstantinos E. Kakosimos

Abstract:

The Middle East area is highly affected by air pollution induced by anthropogenic and natural phenomena. There is evidence that air pollution, especially particulates, greatly affects population health. Many studies have raised warnings about the high concentration of particulates and their effect not just around industrial and construction areas but also in the immediate working and living environment. One of the methods to study air quality is continuous and periodic monitoring using active or passive samplers. Active monitoring and sampling are the default procedures per the European and US standards. However, in many cases they have been inefficient in accurately capturing the spatial variability of air pollution due to the small number of installations, which is ultimately attributed to the high cost of the equipment and the limited availability of users with expertise and scientific background. Passive sampling is an alternative that accounts for the limitations of the active methods. It is inexpensive, requires no continuous power supply, and is easy to assemble, which makes it a more flexible option, though less accurate. This study aims to investigate and evaluate the use of passive sampling for particulate matter pollution monitoring in dry tropical climates, like the Middle East. More specifically, a number of field measurements have been conducted, both indoors and outdoors, in Qatar, and the results have been compared with active sampling equipment and the reference methods. The samples have been analyzed to obtain the particle size distribution by applying existing laboratory techniques (optical microscopy) and by exploring new approaches such as white light interferometry. Then the new parameters of the well-established model have been calculated in order to estimate the atmospheric concentration of particulates. Additionally, an extended literature review will search for new and better models. The outcome of this project is expected to have an impact on the public as well, as it will raise awareness among people about the quality of life and about the importance of implementing a research culture in the community.
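
The abstract does not spell out the "well-established model", so the sketch below shows one generic possibility only: converting the mass deposited on a passive sampler into an airborne concentration through a deposition velocity approximated by Stokes settling. Every parameter value and the model choice itself are illustrative assumptions, not the study's method.

```python
# Hedged sketch of a generic deposition-velocity conversion for a passive sampler;
# all numbers are illustrative assumptions.

def stokes_settling_velocity(d_p, rho_p=2000.0, mu=1.81e-5, g=9.81):
    """Stokes terminal settling velocity (m/s) for a particle of diameter d_p (m)."""
    return rho_p * d_p ** 2 * g / (18.0 * mu)

def airborne_concentration(deposited_mass, area, duration, v_d):
    """C = M / (v_d * A * t); returns kg/m^3."""
    return deposited_mass / (v_d * area * duration)

d_p = 10e-6                                      # 10 micrometre particle (assumed)
v_d = stokes_settling_velocity(d_p)
C = airborne_concentration(deposited_mass=5e-9,  # 5 micrograms collected (assumed)
                           area=1e-3,            # 10 cm^2 sampler face (assumed)
                           duration=7 * 24 * 3600,  # one-week exposure (assumed)
                           v_d=v_d)
print(f"v_d = {v_d:.2e} m/s, C = {C * 1e9:.2f} ug/m^3")
```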

Keywords: air pollution, passive samplers, interferometry, indoor, outdoor

Procedia PDF Downloads 398
907 Evaluation of Some Trace Elements in Biological Samples of Egyptian Viral Hepatitis Patients under Nutrition Therapy

Authors: Tarek Elnimr, Reda Morsy, Assem El Fert, Aziza Ismail

Abstract:

Hepatitis is an inflammation of the liver. The condition can be self-limiting or can progress to fibrosis, cirrhosis or liver cancer. The disease is caused by the hepatitis virus, which can cause infection ranging in severity from a mild illness lasting a few weeks to a serious, lifelong illness. A growing body of evidence indicates that many trace elements play important roles in a number of carcinogenic processes that proceed by various mechanisms. To examine the status of trace elements during the development of hepatic carcinoma, we determined the iron, copper, zinc and selenium levels in some biological samples of patients at different stages of viral hepatic disease. We observed significant changes in the iron, copper, zinc and selenium levels in the biological samples of patients with hepatocellular carcinoma, relative to those of healthy controls. The mean hair, nail, RBC, serum and whole blood copper levels in patients with hepatitis virus were significantly higher than those of the control group. In contrast, the mean iron, zinc, and selenium levels in patients having hepatitis virus were significantly lower than those of the control group. On the basis of this study, we identified the potential of natural supplements to improve the treatment of viral liver damage, using the levels of some trace elements such as iron, copper, zinc and selenium, which might serve as biomarkers for increased survival and reduced disease progression. Most of the elements revealed diverse and random distributions in the samples of the donor groups. The correlation study pointed out significant disparities in the mutual relationships among the trace elements in the patients and controls. Principal component analysis and cluster analysis of the element data manifested diverse apportionment of the selected elements in the scalp hair, nail and blood components of the patients compared with the healthy counterparts.

Keywords: hepatitis, hair, nail, blood components, trace element, nutrition therapy, multivariate analysis, correlation, ICP-MS

Procedia PDF Downloads 408
906 Optimization of Process Parameters and Modeling of Mass Transport during Hybrid Solar Drying of Paddy

Authors: Aprajeeta Jha, Punyadarshini P. Tripathy

Abstract:

Drying is one of the most critical unit operations for prolonging the shelf-life of food grains in order to ensure global food security. Photovoltaic-integrated solar dryers can be a sustainable solution for replacing energy-intensive thermal dryers, as they are capable of drying during off-sunshine hours and provide better control over drying conditions. However, the performance and reliability of PV-based solar dryers depend heavily on climatic conditions, thereby drastically affecting process parameters. Therefore, to ensure the quality and prolonged shelf-life of paddy, optimization of process parameters for solar dryers is critical. Proper moisture distribution within the grains is the most decisive factor for enhancing the shelf-life of paddy; therefore, modeling of mass transport can help provide better insight into moisture migration. Hence, the present work aims to optimize the process parameters and to develop a 3D finite element model (FEM) for predicting the moisture profile in paddy during solar drying. Optimization of the process parameters (power level, air velocity and moisture content) was done using the Box-Behnken design in Design-Expert software. Furthermore, COMSOL Multiphysics was employed to develop a 3D finite element model for predicting the moisture profile. The optimized conditions for drying paddy were found to be 700 W, 2.75 m/s and 13% wb, with an optimum temperature, milling yield and drying time of 42°C, 62% and 86 min, respectively, having a desirability of 0.905. Furthermore, a 3D finite element model (FEM) for predicting moisture migration in a single kernel at every time step has been developed. The mean absolute error (MAE), mean relative error (MRE) and standard error (SE) were found to be 0.003, 0.0531 and 0.0007, respectively, indicating close agreement of the model with experimental results. The above optimized conditions can be successfully used to dry paddy in a PV-integrated solar dryer in order to attain maximum uniformity, quality and yield of product to achieve global food and energy security.
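
As a simple illustration of the mass-transport problem (not the authors' 3D COMSOL finite element model), the sketch below evaluates the analytical Fickian series solution for the moisture ratio of an assumed spherical kernel; the effective diffusivity and kernel radius are assumed values chosen only for demonstration.

```python
# Hedged sketch: analytical Fickian series solution for the moisture ratio of a
# spherical kernel; parameter values are assumptions, not the study's results.
import numpy as np

def moisture_ratio(t, d_eff, radius, n_terms=50):
    """MR(t) = (6/pi^2) * sum_{n>=1} (1/n^2) exp(-n^2 pi^2 D_eff t / r^2)."""
    n = np.arange(1, n_terms + 1)
    return (6.0 / np.pi ** 2) * np.sum(
        np.exp(-(n ** 2) * np.pi ** 2 * d_eff * t / radius ** 2) / n ** 2
    )

d_eff = 1e-11      # effective moisture diffusivity, m^2/s (assumed)
radius = 1.5e-3    # equivalent kernel radius, m (assumed)
for minutes in (0, 20, 40, 60, 86):
    mr = moisture_ratio(minutes * 60.0, d_eff, radius)
    print(f"t = {minutes:3d} min  MR = {mr:.3f}")
```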

Keywords: finite element modeling, hybrid solar drying, mass transport, paddy, process optimization

Procedia PDF Downloads 139
905 Quo Vadis, European Football: An Analysis of the Impact of Over-The-Top Services in the Sports Rights Market

Authors: Farangiz Davranbekova

Abstract:

Subject: The study explores the impact of over-the-top (OTT) services on the sports rights market, focusing on football games. This impact is analysed in the big five European football markets. The research examines how the pay-TV market is combating the disruptors' entry, how fans are adjusting to these changes and how leagues and football clubs are orienting themselves in this transitional period of greater choice. Aims and methods: The research aims to offer a general overview of the impact of OTT players on the football rights market. A theoretical framework of Jenkins' five layers of convergence is implemented to analyse, from various angles, the transition the sports rights market is witnessing. The empirical analysis consists of secondary research data and seven expert interviews from three different clusters. The findings are bound by the combination of the two methods, offering general statements. Findings: The combined secondary data and expert interviews, analysed across the five layers of convergence, found: 1. Technological convergence shows that football content is accessible through various devices with innovative digital features, unlike the traditional TV set-top box. 2. Social convergence demonstrates that football fans multitask across various devices and social media when watching games. These activities are complementary to traditional TV viewing. 3. Cultural convergence points out that football fans have a new layer of engagement with leagues, clubs and other fans using social media. Additionally, the lines between production and consumption are blurred. 4. Economic convergence finds that content distribution is diversifying and/or eroding. Consumers now have more choice, although this can be harmful to them. Entry barriers are lowered, and bigger clubs feel more powerful. 5. Global convergence shows that football fans are engaging not only with local fans but with fans around the world, as social media sites enable. Recommendation: A study of smaller markets such as Belgium or the Netherlands would complement the study of the impact of OTT. Additionally, examination of other sports would shed light on this matter. Lastly, once the direct-to-consumer model has fully taken off in Europe, it will be important to examine the impact of such a transformation on the market.

Keywords: sports rights, OTT, pay TV, football

Procedia PDF Downloads 157
904 The Effect of Three-Dimensional Morphology on Vulnerability Assessment of Atherosclerotic Plaque

Authors: M. Zareh, H. Mohammadi, B. Naser

Abstract:

Atherosclerotic plaque rupture is the main trigger of heart attack and brain stroke, which are the leading causes of death in developed countries. A better understanding of rupture-prone plaque can help clinicians detect vulnerable plaques (rupture-prone or unstable plaques) and apply immediate medical treatment to prevent these life-threatening cardiovascular events. Therefore, there are plenty of studies addressing the characterization of vulnerable plaque properties. Necrotic core and fibrous tissue are two major tissues constituting atherosclerotic plaque; using histopathological and numerical approaches, many studies have demonstrated that plaque rupture is strongly associated with a large necrotic core and a thin fibrous cap, two morphological characteristics which can be acquired by two-dimensional imaging of atherosclerotic plaque present in coronary and carotid arteries. Plaque rupture is widely considered a mechanical failure inside the plaque tissue; this failure occurs when the stress within the plaque exceeds the strength of the tissue material; hence, the finite element method, a powerful numerical approach, has been extensively applied to estimate the stress distribution within plaques with different compositions, which is then used for assessment of various vulnerability characteristics including plaque morphology, material properties and blood pressure. This study aims to evaluate the significance of three-dimensional morphology on the vulnerability degree of atherosclerotic plaque. To reach this end, different two-dimensional geometrical models of atherosclerotic plaques are considered based on available data and named Main 2D Models (M2Ms). Then, for each of these M2Ms, two three-dimensional idealistic models are created. These two 3D models represent two possible three-dimensional morphologies which might exist for a plaque with a 2D morphology similar to one of the M2Ms. The finite element method is employed to estimate the stress, the von Mises stress, within each 3D model. Results indicate that for each M2M, the stress can vary significantly due to possible 3D morphological changes in that plaque. Also, our results show that an atherosclerotic plaque with a thick cap may experience rupture if it has a critical 3D morphology. This study highlights the effect of the 3D geometry of plaque on its instability degree and suggests that the 3D morphology of plaque might be necessary to more effectively and accurately assess atherosclerotic plaque vulnerability.
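
The failure measure named above is the von Mises stress; the sketch below shows how it is computed from a Cauchy stress tensor, using an arbitrary illustrative stress state rather than values from the authors' finite element models.

```python
# Hedged sketch: von Mises stress from a symmetric 3x3 Cauchy stress tensor; the
# tensor values are arbitrary illustrations, not results from the study's FE models.
import numpy as np

def von_mises(sigma: np.ndarray) -> float:
    """sigma: symmetric 3x3 Cauchy stress tensor (e.g. in kPa)."""
    s11, s22, s33 = sigma[0, 0], sigma[1, 1], sigma[2, 2]
    s12, s23, s31 = sigma[0, 1], sigma[1, 2], sigma[2, 0]
    return np.sqrt(0.5 * ((s11 - s22) ** 2 + (s22 - s33) ** 2 + (s33 - s11) ** 2)
                   + 3.0 * (s12 ** 2 + s23 ** 2 + s31 ** 2))

# Example: a hypothetical stress state in the fibrous cap (kPa)
sigma = np.array([[120.0, 40.0, 0.0],
                  [ 40.0, 60.0, 0.0],
                  [  0.0,  0.0, 30.0]])
print(f"von Mises stress = {von_mises(sigma):.1f} kPa")
```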

Keywords: atherosclerotic plaque, plaque rupture, finite element method, 3D model

Procedia PDF Downloads 308
903 Virtual Approach to Simulating Geotechnical Problems under Both Static and Dynamic Conditions

Authors: Varvara Roubtsova, Mohamed Chekired

Abstract:

Recent studies on the numerical simulation of geotechnical problems show the importance of considering the soil micro-structure. At this scale, soil is a discrete particle medium where the particles can interact with each other and with water flow under external forces, structural loads or natural events. This paper presents research conducted in a virtual laboratory named SiGran, developed at IREQ (Institut de recherche d'Hydro-Quebec) for the purpose of investigating a broad range of problems encountered in geotechnics. Using the Discrete Element Method (DEM), SiGran simulates granular materials directly by applying Newton's laws to each particle. The water flow is simulated using the Marker and Cell (MAC) method to solve the full form of the Navier-Stokes equations for an incompressible viscous liquid. In this paper, examples of numerical simulations and their comparisons with real experiments have been selected to show the complexity of geotechnical research at the micro level. These examples describe transient flows into a porous medium, the interaction of particles in a viscous flow, the compaction of saturated and unsaturated soils, and the phenomenon of liquefaction under seismic load. They also provide an opportunity to present SiGran's capacity to compute the distribution and evolution of energy by type (particle kinetic energy, particle internal elastic energy, energy dissipated by friction or as a result of viscous interaction with the flow, and so on). This work also includes the first attempts to apply the micro-scale discrete results at a macro continuum level, where the Smoothed Particle Hydrodynamics (SPH) method was used to solve the system of governing equations. The material behavior equation is based on the results of simulations carried out at the micro level. The possibility of combining the three methods (DEM, MAC and SPH) is discussed.
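
To make the DEM idea concrete, the sketch below advances two spheres through one explicit time step with a linear spring-dashpot normal contact and Newton's second law; it only illustrates the principle and does not reproduce SiGran's contact laws or its coupling with the MAC flow solver, and all parameter values are assumptions.

```python
# Hedged sketch: one explicit 2D DEM time step with a linear spring-dashpot normal
# contact model; stiffness, damping and particle data are illustrative assumptions.
import numpy as np

def dem_step(pos, vel, radii, mass, dt, k_n=1e4, c_n=5.0, g=np.array([0.0, -9.81])):
    n = len(pos)
    forces = mass[:, None] * g                           # gravity on every particle
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[j] - pos[i]
            dist = np.linalg.norm(rij)
            overlap = radii[i] + radii[j] - dist
            if overlap > 0.0:                            # particles in contact
                normal = rij / dist
                v_rel = np.dot(vel[j] - vel[i], normal)
                f = (k_n * overlap - c_n * v_rel) * normal   # spring + dashpot force
                forces[i] -= f
                forces[j] += f
    vel = vel + forces / mass[:, None] * dt              # Newton's second law
    pos = pos + vel * dt
    return pos, vel

pos = np.array([[0.0, 0.0], [0.018, 0.0]])               # two 1 cm spheres, slightly overlapping
vel = np.zeros((2, 2))
radii = np.array([0.01, 0.01])
mass = np.array([0.01, 0.01])
pos, vel = dem_step(pos, vel, radii, mass, dt=1e-5)
print(pos, vel)
```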

Keywords: discrete element method, marker and cell method, numerical simulation, multi-scale simulations, smoothed particle hydrodynamics

Procedia PDF Downloads 302
902 Carrying Capacity Estimation for Small Hydro Plant Located in Torrential Rivers

Authors: Elena Carcano, James Ball, Betty Tiko

Abstract:

Carrying capacity refers to the maximum population that a given level of resources can sustain over a specific period. In undisturbed environments, the maximum population is determined by the availability and distribution of resources, as well as the competition for their utilization. This information is typically obtained through long-term data collection. In regulated environments, where resources are artificially modified, populations must adapt to changing conditions, which can lead to additional challenges due to fluctuations in resource availability over time and throughout development. An example of this is observed in hydropower plants, which alter water flow and impact fish migration patterns and behaviors. To assess how fish species can adapt to these changes, specialized surveys are conducted, which provide valuable information on fish populations, sample sizes, and density before and after flow modifications. In such situations, it is highly recommended to conduct hydrological and biological monitoring to gain insight into how flow reductions affect species adaptability and to prevent unfavorable exploitation conditions. This analysis involves several planned steps that help design appropriate hydropower production while simultaneously addressing environmental needs. Consequently, the study aims to strike a balance between technical assessment, biological requirements, and societal expectations. Beginning with a small hydro project that requires restoration, this analysis focuses on the lower tail of the Flow Duration Curve (FDC), where both hydrological and environmental goals can be met. The proposed approach involves determining the threshold condition that is tolerable for the most vulnerable species sampled (Telestes muticellus) by identifying a low-flow value from the long-term FDC. The results establish a practical connection between hydrological and environmental information and simplify the process by establishing a single reference flow value that represents the minimum environmental flow that should be maintained.
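
A minimal sketch of the low-flow step is given below: it builds a long-term flow duration curve from daily flows and reads off a low-flow quantile such as Q95 as a candidate minimum environmental flow; the streamflow series is synthetic, and the choice of Q95 is an assumption rather than the study's actual threshold.

```python
# Hedged sketch: long-term flow duration curve (FDC) and a low-flow quantile;
# the streamflow record is synthetic, not the study's data.
import numpy as np

rng = np.random.default_rng(1)
daily_flow = rng.lognormal(mean=1.0, sigma=0.8, size=30 * 365)   # m^3/s, synthetic 30-year record

flows_sorted = np.sort(daily_flow)[::-1]                         # descending flows
exceedance = np.arange(1, len(flows_sorted) + 1) / (len(flows_sorted) + 1)

def flow_at_exceedance(p):
    """Flow equalled or exceeded a fraction p of the time on the FDC."""
    return np.interp(p, exceedance, flows_sorted)

q95 = flow_at_exceedance(0.95)       # candidate minimum environmental flow (assumed choice)
print(f"Q95 = {q95:.2f} m^3/s")
```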

Keywords: carrying capacity, fish bypass ladder, long-term streamflow duration curve, eta-beta method, environmental flow

Procedia PDF Downloads 40
901 Compliance of Dialysis Patients with Nutrition Guidelines: Insights from a Questionnaire

Authors: Zeiler M., Stadler D., Schmaderer C.

Abstract:

Over the years of dialysis treatment, most patients experience significant weight loss. The primary emphasis in earlier research was the underlying mechanism of protein energy wasting and the subsequent malnutrition-inflammation syndrome. In the interest of providing an effective and rapid solution for the patients, the aim of this study is to identify individual influences on their assumed reduced dietary intake, such as nausea, appetite loss and taste changes, and to determine whether the patients adhere to their nutrition guidelines. A prospective, controlled study with 38 end-stage renal disease patients was performed using a questionnaire to reflect their diet within the last 12 months. Thereby, the daily intake of the most important macro- and micronutrients was calculated and compared with the individual KDOQI guideline value, as well as with controls matched in age and gender. The majority of the study population did not report symptoms commonly associated with dialysis, such as nausea or inappetence, and denied any change in dietary behavior since receiving renal replacement therapy. The patients' daily intake of energy (3080 kcal ± 1266) and protein (89.9 g [53.4-142.0]) did not differ significantly from the controls (energy intake: 3233 kcal ± 1046, p=0.597; protein intake: 103.7 g [90.1-125.5], p=0.120). The average difference from the individually calculated KDOQI guideline was +176.0 kcal ± 1156 (p=0.357) for energy intake and -1.75 g ± 45.9 (p=0.491) for protein intake. However, there was an observed imbalance in the distribution of macronutrients, with a preference for fats over proteins. The patients' daily intake of sodium (5.4 g [2.95-10.1]) was higher than in the controls (4.1 g [2.04-5.99], p=0.058), whereas the values for both potassium (3.7 g ± 1.84) and phosphorus (1.79 g ± 0.91) were significantly below the controls' values (potassium intake: 4.89 g ± 1.74, p=0.014; phosphorus intake: 2.04 g ± 0.64, p=0.038). Thus, the values exceeded the calculated KDOQI recommendations by +3.3 g [0.63-7.90] (p<0.001) for sodium, +1.49 g ± 1.84 (p<0.001) for potassium and +0.89 g ± 0.91 (p<0.001) for phosphorus. Contrary to the assumption, the patients did not under-eat. Nevertheless, their diets did not align with the recommended values. These findings highlight the need for intervention and education among patients and show that regular dietary monitoring could prevent unhealthy nutrition habits. The elaboration of individual references instead of standardized guidelines could increase compliance with the advised diet so that interdisciplinary comorbidities do not develop or worsen.

Keywords: compliance, dialysis, end-stage renal disease, KDOQI, malnutrition, nutrition guidelines, questionnaire, salt intake

Procedia PDF Downloads 68
900 3-D Strain Imaging of Nanostructures Synthesized via CVD

Authors: Sohini Manna, Jong Woo Kim, Oleg Shpyrko, Eric E. Fullerton

Abstract:

CVD techniques have emerged as a promising approach to the formation of a broad range of nanostructured materials. The realization of many practical applications will require efficient and economical synthesis techniques that preferably avoid the need for templates or costly single-crystal substrates and also afford process adaptability. Towards this end, we have developed a single-step route for the reduction-type synthesis of nanostructured Ni materials using a thermal CVD method. By tuning the CVD growth parameters, we can synthesize morphologically dissimilar nanostructures, including single-crystal cubes and Au nanostructures, which form atop untreated amorphous SiO2||Si substrates. An understanding of the new properties that emerge in these nanostructured materials and their relationship to function will lead to a broad range of magnetostrictive devices as well as other catalysis, fuel cell, sensor, and battery applications based on high-surface-area transition-metal nanostructures. We use the coherent X-ray diffraction imaging technique to obtain 3-D images and strain maps of individual nanocrystals. Coherent x-ray diffractive imaging (CXDI) is a technique that provides the overall shape of a nanostructure and the lattice distortion based on the combination of highly brilliant coherent x-ray sources and a phase retrieval algorithm. We observe a fine interplay between the reduction of surface energy and internal stress, which plays an important role in the morphology of nanocrystals. The strain distribution is influenced by the metal-substrate and metal-air interfaces, which arise due to differences in their thermal expansion. We find that the lattice strain at the surface of the octahedral gold nanocrystal agrees quantitatively with the predictions of the Young-Laplace equation, but exhibits a discrepancy near the nanocrystal-substrate interface resulting from the interface. The strain in the bottom side of the Ni nanocube, which is in contact with the substrate surface, is compressive. This is caused by the dissimilar thermal expansion coefficients of the Ni nanocube and the Si substrate. Research at UCSD was supported by NSF DMR Award #1411335.
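
As a toy illustration of CXDI-style phase retrieval (not the reconstruction pipeline used here, which involves Bragg geometry and more robust algorithms such as HIO), the sketch below runs a bare error-reduction loop that alternates between the measured Fourier modulus and a support constraint on a synthetic object.

```python
# Hedged sketch: a bare-bones error-reduction phase-retrieval loop on a synthetic
# 2D object; real CXDI/Bragg reconstructions are considerably more involved.
import numpy as np

rng = np.random.default_rng(0)
n = 64
support = np.zeros((n, n), dtype=bool)
support[24:40, 24:40] = True                         # known object support
obj_true = support * (1.0 + 0.1 * rng.normal(size=(n, n)))
measured_magnitude = np.abs(np.fft.fft2(obj_true))   # "measured" diffraction amplitudes

obj = rng.random((n, n)) * support                   # random starting guess
for _ in range(500):
    F = np.fft.fft2(obj)
    F = measured_magnitude * np.exp(1j * np.angle(F))    # enforce the measured modulus
    obj = np.real(np.fft.ifft2(F))
    obj = np.where(support & (obj > 0), obj, 0.0)        # support + positivity constraint

print("relative reconstruction error:",
      np.linalg.norm(obj - obj_true) / np.linalg.norm(obj_true))
```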

Keywords: CVD, nanostructures, strain, CXRD

Procedia PDF Downloads 392
899 Analysis of the Production Time in a Pharmaceutical Company

Authors: Hanen Khanchel, Karim Ben Kahla

Abstract:

Pharmaceutical companies are facing competition. Indeed, the price differences between competing products can be such that it becomes difficult to compensate for them by differences in value added. The conditions of competition are no longer homogeneous for the players involved. The price of a product is a given that puts a company and its customer face to face. However, setting the price obliges the company to consider internal factors relating to production costs and external factors such as customer attitudes, the existence of regulations and the structure of the market in which the firm operates. In setting the selling price, the company must first take into account internal factors relating to its costs: production costs fall into two categories, fixed costs and variable costs that depend on the quantities produced. The company cannot consider selling below what the product costs it. It therefore calculates the unit cost of production, to which it adds the unit cost of distribution, enabling it to know the full unit cost of the product. The company adds its margin and thus determines its selling price. The margin is used to remunerate the capital providers and to finance the activity of the company and its investments. Production costs are related to the quantities produced: large-scale production generally reduces the unit cost of production, which is an asset for companies with mass-production markets. This shows that small and medium-sized companies with limited market segments need to make greater efforts to ensure their profit margins. As a result, faced with fluctuating market prices for raw materials and increasing staff costs, the company must seek to optimize its production time in order to reduce expenses and eliminate waste. In this way, the customer pays only for the value added. Thus, based on this principle, we decided to create a project that deals with the problem of waste in our company, with the objectives of reducing production costs and improving performance indicators. This paper presents the implementation of a Value Stream Mapping (VSM) project in a pharmaceutical company. It is structured as follows: 1) determination of the family of products, 2) drawing of the current state, 3) drawing of the future state, 4) action plan and implementation.
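
As a small numerical companion to the VSM logic above, the sketch below splits hypothetical process steps into value-added and non-value-added time and computes the total lead time and process cycle efficiency that a future-state map would aim to improve; the step names and durations are invented for illustration only.

```python
# Hedged sketch of the basic value stream arithmetic; step names and durations
# are invented, not data from the company studied here.
steps = [
    # (step, value-added minutes, non-value-added minutes such as waiting or rework)
    ("weighing",     15, 120),
    ("granulation",  90, 240),
    ("compression",  60, 480),
    ("coating",      45, 360),
    ("packaging",    30, 200),
]

value_added = sum(va for _, va, _ in steps)
lead_time = sum(va + nva for _, va, nva in steps)
efficiency = value_added / lead_time                 # process cycle efficiency

print(f"value-added time : {value_added} min")
print(f"total lead time  : {lead_time} min")
print(f"process cycle efficiency: {efficiency:.1%}") # the target of kaizen actions
```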

Keywords: VSM, waste, production time, kaizen, cartography, improvement

Procedia PDF Downloads 150
898 Post-harvest Handling Practices and Technologies Harnessed by Smallholder Fruit Crop Farmers in Vhembe District, Limpopo Province, South Africa

Authors: Vhahangwele Belemu, Isaac Busayo Oluwatayo

Abstract:

Post-harvest losses pose a serious challenge to smallholder fruit crop farmers, especially in the rural communities of South Africa, affecting their economic livelihoods and food security. This study investigated the post-harvest handling practices and technologies harnessed by smallholder fruit crop farmers in the Vhembe district of Limpopo province, South Africa. Data were collected from a random sample of 224 smallholder fruit crop farmers selected from the four municipalities of the district using a multistage sampling technique. The analytical tools employed include descriptive statistics and the Tobit regression model. A descriptive analysis of the farmers' socioeconomic characteristics showed that a sizeable number are still of active working age (mean = 52 years), with more males (63.8%) than females (36.2%). The respondents' distribution by educational status revealed that only a few had no formal education (2.2%), while the majority had secondary education (48.7%). The analysis further revealed that the prominent post-harvest technologies and handling practices harnessed by these farmers include using appropriate harvesting techniques (20.5%), selling at a reduced price (19.6%), transportation considerations (18.3%), cleaning and disinfecting (17.9%), sorting and grading (16.5%), manual cleaning (15.6%), and packaging techniques (11.6%), among others. The Tobit regression analysis conducted to examine the determinants of the post-harvest technologies and handling practices harnessed showed that age, educational status, awareness of the technology/handling practice, farm size, access to credit, extension contact, and membership of an association were the significant factors. The study suggests enhanced awareness creation, better access to credit facilities, and improved market access as important factors for relevant stakeholders to consider in assisting smallholder fruit crop farmers in the study area.
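
For readers unfamiliar with the Tobit model mentioned above, the following is a minimal sketch of a left-censored (at zero) Tobit regression fitted by maximum likelihood with SciPy. The covariate names and the simulated data are hypothetical; this is not the authors' exact specification.

    # Minimal sketch of a left-censored (at zero) Tobit regression fitted by
    # maximum likelihood. Variable names and simulated data are hypothetical.

    import numpy as np
    from scipy import optimize, stats

    rng = np.random.default_rng(0)
    n = 224
    # Hypothetical covariates: intercept, age, years of schooling, credit access (0/1)
    X = np.column_stack([np.ones(n),
                         rng.normal(52, 10, n),
                         rng.integers(0, 13, n),
                         rng.integers(0, 2, n)])
    beta_true = np.array([-2.0, 0.03, 0.15, 0.8])
    y_star = X @ beta_true + rng.normal(0, 1.0, n)
    y = np.maximum(y_star, 0.0)          # observed outcome, censored at 0

    def neg_loglik(params, X, y):
        beta, log_sigma = params[:-1], params[-1]
        sigma = np.exp(log_sigma)
        xb = X @ beta
        censored = y <= 0
        ll = np.where(censored,
                      stats.norm.logcdf(-xb / sigma),                 # P(y* <= 0)
                      stats.norm.logpdf((y - xb) / sigma) - log_sigma)  # density of y
        return -ll.sum()

    start = np.zeros(X.shape[1] + 1)
    res = optimize.minimize(neg_loglik, start, args=(X, y), method="BFGS")
    print("Estimated coefficients:", np.round(res.x[:-1], 3))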

Keywords: fruit crop farmers, handling practices, post-harvest losses, smallholder, Vhembe District, South Africa

Procedia PDF Downloads 56
897 Hardness Map of Human Tarsals, Metatarsals and Phalanges of Toes

Authors: Irfan Anjum Manarvi, Zahid Ali Kaimkhani

Abstract:

Predicting the location of fractures in human bones has been a keen area of research for the past few decades. A variety of tests for hardness, deformation, and strain field measurement have been conducted in the past but are considered insufficient due to various limitations. Researchers have therefore proposed further studies to address inaccuracies in measurement methods, testing machines, and experimental errors. Advancements in and the availability of hardware, measuring instrumentation, and testing machines can now provide remedies to these limitations. The human foot is a critical part of the body, exposed to various forces throughout its life. A number of products are developed for its protection and care, yet these often do not provide sufficient protection and may themselves become a source of stress because the delicacy of the bones in the feet is not taken into account. Continuous strain or overloading of the feet may occur, resulting in discomfort and even fracture. The mechanical properties of the tarsals, metatarsals, and phalanges are therefore the primary consideration for all such design applications. Hardness is one of the mechanical properties considered most important in establishing the resistance of a material to applied loads. Past researchers have investigated the mechanical properties of these bones; however, their results were based on a limited number of experiments and on averaged hardness values, owing to limitations of either the samples or the testing instruments, and they proposed further studies in this area. The present research was carried out to develop a hardness map of the human foot by measuring microhardness at various locations on these bones. Results are compiled as the distance from a reference point on a bone together with the hardness value for each surface. The number of test results is far greater than in previous studies, and the measurements are spread over a typical bone to give a complete hardness map of these bones. These results could also be used to establish other properties, such as the stress and strain distribution in the bones, and industrial engineers could use them for the design and development of accessories for foot health care and comfort, as well as for further research in the same areas.
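
As a simple illustration of how such a hardness map can be compiled, the sketch below aggregates micro-hardness readings by bone, surface, and distance from a reference point. Column names and readings are hypothetical, not the study's data.

    # Sketch of compiling micro-hardness readings into a per-bone hardness map,
    # organised by distance from a reference point, as described above.
    # Column names and readings are hypothetical.

    import pandas as pd

    readings = pd.DataFrame({
        "bone":        ["calcaneus", "calcaneus", "talus", "talus", "metatarsal_1"],
        "surface":     ["dorsal", "plantar", "dorsal", "dorsal", "dorsal"],
        "distance_mm": [5.0, 5.0, 2.5, 7.5, 10.0],    # from a chosen reference point
        "hardness_HV": [38.2, 41.5, 35.9, 37.1, 44.0]  # Vickers micro-hardness
    })

    # Mean hardness per bone, surface, and measurement location: a simple "hardness map".
    hardness_map = (readings
                    .groupby(["bone", "surface", "distance_mm"])["hardness_HV"]
                    .mean()
                    .reset_index())
    print(hardness_map)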

Keywords: tarsals, metatarsals, phalanges, hardness testing, biomechanics of human foot

Procedia PDF Downloads 421
896 Assessment and Optimisation of Building Services Electrical Loads for Off-Grid or Hybrid Operation

Authors: Desmond Young

Abstract:

In building services electrical design, a key element of any project is assessing the electrical load requirements. This needs to be done early in the design process to allow the selection of the infrastructure required to meet the electrical needs of the type of building. The building type defines the type of assessment made, the values applied in defining the maximum demand, and ultimately the size of supply or infrastructure required and the application that must be made to the distribution network operator or, alternatively, to an independent network operator. The fact that this assessment must be undertaken early in the design process limits the type of assessment that can be used, as different methods require different types of information, and sometimes this information is not available until the later stages of a project. A common method applied in the earlier design stages, typically during stages 1, 2 and 3, is the use of benchmarks. Some of the benchmarks applied may be excessive in relation to the loads that exist in a modern installation. This lack of accuracy stems from information that does not correspond to the actual equipment loads in use, including lighting and small power loads, where more efficient equipment and lighting have reduced the maximum demand required. The electrical load is also used to assess the heat generated by the equipment; together with the heat gains from other sources, this feeds into the sizing of the infrastructure required to cool the building, so any overestimation of the loads contributes to an increased design load for the heating and ventilation systems. Finally, with new policies driving the industry to decarbonise buildings, a prime example being the recently introduced London Plan, loads are potentially going to increase. In addition, with the advent of the pandemic, changes to working practices, and the adoption of electric heating and vehicles, a better understanding of the loads that should be applied will help ensure that infrastructure is neither oversized, at a cost to the client, nor undersized, to the detriment of the building. More accurate benchmarks and methods will also allow assessments to be made for the incorporation of energy storage and renewable technologies as these become more common in new or refurbished buildings.
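
The benchmark method described above is, in essence, a per-service W/m2 allowance summed over the floor area with a diversity factor applied. The sketch below shows that calculation with illustrative benchmark values; the figures, diversity factor, and power factor are assumptions, not values endorsed by the paper.

    # Illustrative benchmark-based maximum demand estimate of the kind described
    # above. The W/m^2 benchmarks, diversity factor, and power factor are
    # assumed figures for illustration only.

    def maximum_demand_kva(floor_area_m2: float, benchmarks_w_m2: dict,
                           diversity: float = 0.8, power_factor: float = 0.95) -> float:
        """Sum per-service W/m^2 benchmarks over the floor area, apply a
        diversity factor, and convert to kVA using an assumed power factor."""
        connected_load_w = floor_area_m2 * sum(benchmarks_w_m2.values())
        demand_w = connected_load_w * diversity
        return demand_w / power_factor / 1000.0

    benchmarks = {           # W/m^2, illustrative only
        "lighting": 8.0,     # efficient LED lighting, well below older allowances
        "small_power": 15.0,
        "mechanical": 20.0,
    }
    print(round(maximum_demand_kva(5000.0, benchmarks), 1), "kVA")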

Keywords: energy, ADMD, electrical load assessment, energy benchmarks

Procedia PDF Downloads 112
895 The Classification Performance in Parametric and Nonparametric Discriminant Analysis for Class-Unbalanced Data of Diabetes Risk Groups

Authors: Lily Ingsrisawang, Tasanee Nacharoen

Abstract:

Introduction: The problem of unbalanced data sets generally appears in real-world applications. Due to unequal class distributions, many research papers have found that the performance of existing classifiers tends to be biased towards the majority class. The k-nearest neighbors nonparametric discriminant analysis is one method that has been proposed for classifying unbalanced classes with good performance. Hence, the methods of discriminant analysis are of interest in investigating misclassification error rates for class-imbalanced data of three diabetes risk groups. Objective: The purpose of this study was to compare the classification performance of parametric and nonparametric discriminant analysis in a three-class classification application on class-imbalanced data of diabetes risk groups. Methods: Data from a health project for 599 staff members in a government hospital in Bangkok were obtained for the classification problem. The staff were diagnosed into one of three diabetes risk groups: non-risk (90%), risk (5%), and diabetic (5%). The original data, with the variables diabetes risk group, age, gender, cholesterol, and BMI, were analyzed and bootstrapped into 50 and 100 samples of 599 observations each for additional estimation of the misclassification error rate. Each data set was explored for departure from multivariate normality and for equality of the covariance matrices of the three risk groups; both the original data and the bootstrap samples show non-normality and unequal covariance matrices. The parametric linear discriminant function, the quadratic discriminant function, and the nonparametric k-nearest neighbors discriminant function were performed over the 50 and 100 bootstrap samples and applied to the original data. In finding the optimal classification rule, the prior probabilities were set to either equal proportions (0.33:0.33:0.33) or unequal proportions, with three choices of (0.90:0.05:0.05), (0.80:0.10:0.10), or (0.70:0.15:0.15). Results: The results from the 50 and 100 bootstrap samples indicated that the k-nearest neighbors approach with k = 3 or k = 4 and prior probabilities of {non-risk:risk:diabetic} set to {0.90:0.05:0.05} or {0.80:0.10:0.10} gave the smallest misclassification error rate. Conclusion: The k-nearest neighbors approach is suggested for classifying three-class-imbalanced data of diabetes risk groups.
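
A minimal sketch of this kind of comparison is given below, using scikit-learn on simulated data (the hospital data set is not public). Priors are passed to the parametric classifiers; plain k-NN in scikit-learn has no prior argument, so class imbalance enters only through the data, which is a simplification of the authors' procedure.

    # Sketch of an LDA / QDA / k-NN comparison over bootstrap samples, in the
    # spirit of the study above. The data here are simulated and hypothetical.

    import numpy as np
    from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                               QuadraticDiscriminantAnalysis)
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.utils import resample

    rng = np.random.default_rng(1)
    n = 599
    y = rng.choice([0, 1, 2], size=n, p=[0.90, 0.05, 0.05])       # non-risk / risk / diabetic
    X = rng.normal(loc=y[:, None] * 0.8, scale=1.0, size=(n, 4))  # age, cholesterol, BMI, ...

    priors = [0.90, 0.05, 0.05]
    models = {
        "LDA": LinearDiscriminantAnalysis(priors=priors),
        "QDA": QuadraticDiscriminantAnalysis(priors=priors),
        "3-NN": KNeighborsClassifier(n_neighbors=3),
    }

    n_boot = 50
    for name, model in models.items():
        errors = []
        for b in range(n_boot):
            Xb, yb = resample(X, y, n_samples=n, random_state=b)  # bootstrap sample
            model.fit(Xb, yb)
            errors.append(1.0 - model.score(X, y))                # error on original data
        print(name, "mean misclassification rate:", round(float(np.mean(errors)), 3))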

Keywords: error rate, bootstrap, diabetes risk groups, k-nearest neighbors

Procedia PDF Downloads 435
894 Development of Novel Amphiphilic Block Copolymer of Renewable ε-Decalactone for Drug Delivery Application

Authors: Deepak Kakde, Steve Howdle, Derek Irvine, Cameron Alexander

Abstract:

Poor aqueous solubility is one of the major obstacles in the formulation development of many drugs; around 70% of drugs are poorly soluble in aqueous media. In the last few decades, micelles have emerged as one of the major tools for the solubilization of hydrophobic drugs. Micelles are nanosized structures (10-100 nm) obtained by the self-assembly of amphiphilic molecules in water. The hydrophobic part of the micelle forms the core, which is surrounded by a hydrophilic outer shell called the corona. These core-shell structures have been used as drug delivery vehicles for many years, although their utility has been limited by the lack of sustainable materials. In the present study, a novel methoxy poly(ethylene glycol)-b-poly(ε-decalactone) (mPEG-b-PεDL) copolymer was synthesized by ring-opening polymerization (ROP) of renewable ε-decalactone (ε-DL) monomers on a methoxy poly(ethylene glycol) (mPEG) initiator using 1,5,7-triazabicyclo[4.4.0]dec-5-ene (TBD) as an organocatalyst. All reactions were conducted in bulk to avoid the use of toxic organic solvents. The copolymer was characterized by nuclear magnetic resonance spectroscopy (NMR), gel permeation chromatography (GPC), and differential scanning calorimetry (DSC). The mPEG-b-PεDL block copolymer micelles containing indomethacin (IND) were prepared by the nanoprecipitation method and evaluated as a drug delivery vehicle. The size of the micelles was less than 40 nm, with a narrow polydispersity. TEM images showed a uniform distribution of spherical micelles defined by a clear surface boundary. The indomethacin loading was 7.4% for the copolymer with a molecular weight of 13,000 and a drug/polymer weight ratio of 4/50; a higher drug/polymer ratio decreased the drug loading. A drug release study in PBS (pH 7.4) showed sustained release over a period of 24 h. In conclusion, we have developed a new sustainable polymeric material for IND delivery by combining a green synthetic approach with the use of a renewable monomer for the sustainable development of polymeric nanomedicines.
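
The standard drug-loading and encapsulation-efficiency formulas behind figures like the 7.4% reported above are shown in the short sketch below. Only the 4/50 feed ratio comes from the abstract; the encapsulated mass used in the efficiency example is hypothetical.

    # Standard drug-loading and encapsulation-efficiency formulas, evaluated for
    # the 4/50 drug/polymer feed ratio mentioned above. The encapsulated mass in
    # the EE example is hypothetical.

    def drug_loading_pct(drug_mass_mg: float, polymer_mass_mg: float) -> float:
        """DL% = encapsulated drug / (drug + polymer) * 100."""
        return 100.0 * drug_mass_mg / (drug_mass_mg + polymer_mass_mg)

    def encapsulation_efficiency_pct(encapsulated_mg: float, fed_mg: float) -> float:
        """EE% = encapsulated drug / drug initially added * 100."""
        return 100.0 * encapsulated_mg / fed_mg

    # Theoretical loading for a 4/50 feed ratio (assuming full encapsulation):
    print(round(drug_loading_pct(4.0, 50.0), 1), "%")   # 7.4 %, consistent with the text
    # Hypothetical EE if only 3.2 mg of the 4 mg fed were encapsulated:
    print(round(encapsulation_efficiency_pct(3.2, 4.0), 1), "%")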

Keywords: copolymer, ε-decalactone, indomethacin, micelles

Procedia PDF Downloads 295
893 An Agile, Intelligent and Scalable Framework for Global Software Development

Authors: Raja Asad Zaheer, Aisha Tanveer, Hafza Mehreen Fatima

Abstract:

Global Software Development (GSD) is becoming a common norm in the software industry, despite the fact that the global distribution of teams presents special issues for effective communication and coordination. Trends are now changing, and project management for distributed teams is no longer in limbo. GSD can be effectively established using agile methods, and project managers can use different agile techniques and tools to solve the problems associated with distributed teams. Agile methodologies such as Scrum and XP have been used successfully with distributed teams. We employed an exploratory research method to analyze recent studies on the challenges of GSD and their proposed solutions. In our study, we gained deep insight into six commonly faced challenges: communication and coordination, temporal differences, cultural differences, knowledge sharing/group awareness, speed, and communication tools. We established that none of these challenges can be neglected for distributed teams of any kind; they are interlinked and, as an aggregated whole, can cause projects to fail. In this paper, we focus on creating a scalable framework for detecting and overcoming these commonly faced challenges. In the proposed solution, our objective is to suggest agile techniques and tools relevant to the particular problem an organization faces in managing distributed teams. We focus mainly on Scrum and XP techniques and tools because they are widely accepted and used in the industry. Our solution identifies the problem and suggests an appropriate technique or tool to help solve it based on a globally shared knowledge base. A cause-and-effect relationship can be established using a fishbone diagram based on the inputs provided for issues commonly faced by organizations; based on the identified cause, the framework suggests a suitable tool. Hence, the proposed scalable, extensible, self-learning, intelligent framework will help organizations implement and assess GSD and get the maximum out of it. The globally shared knowledge base will also help new organizations easily adopt the best practices set forth by practicing organizations.
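
A toy illustration of the knowledge-base lookup idea described above is sketched below: a challenge reported by a distributed team is mapped to candidate agile techniques or tools. The mappings are illustrative placeholders, not the paper's actual rule set.

    # Toy illustration of mapping a reported GSD challenge to candidate agile
    # techniques/tools via a shared knowledge base. Mappings are illustrative.

    from typing import List

    KNOWLEDGE_BASE = {
        "communication and coordination": ["daily scrum over video", "shared backlog tool"],
        "temporal differences": ["overlapping core hours", "asynchronous stand-up notes"],
        "cultural differences": ["team charter", "rotating facilitation"],
        "knowledge sharing": ["pair programming across sites", "shared wiki"],
        "speed": ["short sprints", "continuous integration"],
        "communication tools": ["instant messaging", "video conferencing"],
    }

    def suggest_tools(challenge: str) -> List[str]:
        """Return candidate techniques/tools for a reported challenge, if known."""
        return KNOWLEDGE_BASE.get(challenge.lower().strip(), ["no suggestion recorded yet"])

    print(suggest_tools("Temporal differences"))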

Keywords: agile project management, agile tools/techniques, distributed teams, global software development

Procedia PDF Downloads 314
892 3-Dimensional Contamination Conceptual Site Model: A Case Study Illustrating the Multiple Applications of Developing and Maintaining a 3D Contamination Model during an Active Remediation Project on a Former Urban Gasworks Site

Authors: Duncan Fraser

Abstract:

A 3-dimensional (3D) conceptual site model was developed using the Leapfrog Works® platform, utilising a comprehensive historical dataset for a large former gasworks site in Fitzroy, Melbourne. The gasworks had been constructed across two fractured geological units with differing hydraulic conductivities: a Newer Volcanics (basaltic) outcrop covered approximately half of the site and overlay a fractured Melbourne Formation (siltstone) bedrock that outcrops over the remaining portion. During the investigative phase of works, a dense non-aqueous phase liquid (DNAPL) plume (coal tar) was identified within both geological units in the subsurface, originating from multiple sources including gasholders, tar wells, condensers, and leaking pipework. The first stage of model development was undertaken to determine the horizontal and vertical extents of the coal tar in the subsurface and to assess potential causal links between the sources, plume location, and site geology. Concentrations of key contaminants of interest (COIs) were also interpolated within Leapfrog to refine the distribution of contaminated soils. The model was subsequently used to develop a robust soil remediation strategy and achieve endorsement from an Environmental Auditor. A change in project scope, following the removal and validation of the three former gasholders, necessitated the additional excavation of a significant volume of residual contaminated rock to allow for the future construction of two-storey underground basements. To assess the financial liabilities associated with the offsite disposal or thermal treatment of material, the 3D model was updated with three years of additional analytical data from the active remediation phase of works. Chemical concentrations and the residual tar plume within the rock fractures were modelled to pre-classify the in-situ material and enhance separation strategies, preventing the unnecessary treatment of material and reducing costs.
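
As a simple illustration of the pre-classification step described above, the sketch below assigns modelled material blocks to disposal or treatment categories from an interpolated COI concentration. The thresholds, category names, and concentrations are hypothetical, not project values.

    # Illustrative pre-classification of modelled material blocks by interpolated
    # contaminant concentration. Thresholds, categories, and values are hypothetical.

    def classify_block(coi_mg_kg: float) -> str:
        """Assign a hypothetical disposal/treatment category from a COI value."""
        if coi_mg_kg < 3.0:
            return "reuse on site"
        if coi_mg_kg < 40.0:
            return "offsite landfill"
        return "thermal treatment"

    blocks = {"B001": 1.2, "B002": 18.5, "B003": 120.0}   # block id -> modelled COI (mg/kg)
    counts = {cat: 0 for cat in ("reuse on site", "offsite landfill", "thermal treatment")}
    for block_id, conc in blocks.items():
        counts[classify_block(conc)] += 1                 # tally blocks per category
    print(counts)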

Keywords: 3D model, contaminated land, Leapfrog, remediation

Procedia PDF Downloads 133