Search results for: and additional trim required
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6815

1055 Application of the Building Information Modeling Planning Approach to the Factory Planning

Authors: Peggy Näser

Abstract:

Factory planning is a systematic, objective-oriented process for planning a factory, structured into a sequence of phases, each of which depends on the preceding phase and makes use of particular methods and tools, extending from the setting of objectives to the start of production. The digital factory, on the other hand, is the generic term for a comprehensive network of digital models, methods, and tools – including simulation and 3D visualisation – integrated by a continuous data management system. Its aim is the holistic planning, evaluation and ongoing improvement of all the main structures, processes and resources of the real factory in conjunction with the product. The digital factory approach has already become established in factory planning. Building Information Modeling (BIM), by contrast, is not yet established in factory planning; it has been used predominantly in the planning of public buildings. Furthermore, that use is limited to the planning of the buildings themselves and does not include the planning of the factory's equipment (machines, technical equipment) and its interfaces to the building. BIM is a cooperative method of working, in which the information and data relevant to a building's lifecycle are consistently recorded, managed and exchanged in transparent communication between the involved parties on the basis of digital models of the building. Both approaches, the planning approach of Building Information Modeling and the methodical approach of the digital factory, are based on the use of a comprehensive data model. Therefore, it is necessary to examine how the BIM approach can be extended in the context of factory planning in such a way that equipment planning, as well as building planning, can be integrated in a common digital model. For this, a number of different perspectives have to be investigated: the equipment perspective, including the tools used to implement a comprehensive digital planning process; the communication perspective between the planners of different fields; the legal perspective, concerning legal certainty in each country; and the quality perspective, defining the quality criteria against which the planning is evaluated. The individual perspectives are examined and illustrated in the article. An approach model for the integration of factory planning into the BIM approach is developed, in particular for the integrated planning of equipment and buildings and for continuous digital planning. For this purpose, the individual factory planning phases are detailed with respect to the integration of the BIM approach, and a comprehensive software concept is presented at the tool level. In addition, the prerequisites required for this integrated planning are presented. With the help of the newly developed approach, better coordination between equipment and buildings is to be achieved, the continuity of digital factory planning is improved, data quality is raised, and expensive errors are avoided during implementation.
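
Both approaches rest on a comprehensive common data model; the following minimal sketch (Python; all class and attribute names are hypothetical illustrations, not a schema proposed in the article) indicates how equipment objects and their interfaces to the building could coexist in one model:

```python
from dataclasses import dataclass, field

@dataclass
class BuildingElement:
    # e.g., a wall, slab, or foundation from the building model
    element_id: str
    element_type: str

@dataclass
class Interface:
    # connection between a piece of equipment and the building,
    # e.g., a foundation load point or a media supply connection
    building_element: BuildingElement
    kind: str           # "foundation", "power", "exhaust", ...
    requirement: str    # structured or free-text requirement

@dataclass
class Equipment:
    # a machine or technical installation from the equipment model
    equipment_id: str
    name: str
    interfaces: list[Interface] = field(default_factory=list)

@dataclass
class FactoryModel:
    # the common digital model: building and equipment side by side
    building: list[BuildingElement] = field(default_factory=list)
    equipment: list[Equipment] = field(default_factory=list)

# usage: a press anchored to a foundation slab
slab = BuildingElement("S-01", "foundation slab")
press = Equipment("EQ-17", "forming press",
                  [Interface(slab, "foundation", "dynamic load 120 kN")])
model = FactoryModel(building=[slab], equipment=[press])
```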

Keywords: building information modeling, digital factory, digital planning, factory planning

Procedia PDF Downloads 266
1054 Effectiveness of a Healthy Lifestyle Combined with Abdominal Massage on Treating Infertility Due to Endometriosis and Adhesions in the Fallopian Tubes

Authors: Flora Tajiki

Abstract:

Undoubtedly, the desire to experience the beauty of motherhood is a dream for every woman, and delays in achieving it can have significant psychological consequences. Endometriosis, the presence of endometrial tissue in organs other than the uterus, can cause infertility through adhesion and inflammation. The fallopian tubes play a crucial role in transferring the egg to the uterus; if adhesions are present, the chances of natural pregnancy decrease, while the likelihood of ectopic pregnancy and miscarriage increases. In cases of mild adhesions observed during hysterosalpingography or laparoscopy, the tubes may open, but in severe adhesions this is usually not possible. The aim of this study is to assess the effectiveness of a healthy lifestyle combined with massage of the uterine and ovarian areas in relieving adhesions in the fallopian tubes and treating the complications of endometriosis. This case study focuses on a 33-year-old woman who married at 20 and experienced a miscarriage five years ago that required curettage. Following this, a hysterosalpingography revealed blockages in both fallopian tubes, a laparoscopic examination indicated endometriosis, and specialists in infertility ruled out the possibility of natural pregnancy. Three years ago, she underwent an unsuccessful IVF procedure. Two years ago, she began a lifestyle modification program that included improving sleep patterns, eliminating sugar and preservatives, avoiding red meat and gluten, eating a balanced diet, walking, exercising, and incorporating beneficial foods such as olive oil, almonds, and nutritious vegetables, along with abdominal massage using chamomile oil. She also took vitamin C and vitamin D supplements. After approximately twenty weeks of these methods, and given that infertility centers had indicated that surgery and repeated IVF were her only options for conceiving, she became pregnant naturally and had a successful pregnancy and delivery. Endometriosis is one of the significant factors contributing to infertility and adhesions in the fallopian tubes and uterus; unfortunately, it has no definitive cure and can recur even after surgery. The treatment of similar cases emphasizes lifestyle modifications, an approach that has proven to be both cost-effective and harmless. Therefore, it seems essential to focus on this treatment strategy.

Keywords: infertility, endometriosis, adhesions, fallopian tubes, healthy lifestyle, lifestyle modifications, abdominal massage, case study, natural pregnancy, ivf, psychological consequences, uterine health, complementary treatments, nutrition, women's health

Procedia PDF Downloads 19
1053 On the Other Side of Shining Mercury: In Silico Prediction of Cold Stabilizing Mutations in Serine Endopeptidase from Bacillus lentus

Authors: Debamitra Chakravorty, Pratap K. Parida

Abstract:

Cold-adapted proteases enhance wash performance in low-temperature laundry, reducing energy consumption and textile wear, and are also used in the dehairing process in the leather industry. Their drawback is instability at higher temperatures: wild-type cold-adapted proteases are unstable at higher temperatures and thus have low shelf lives, so proteases with broad temperature stability are required. Previous attempts to engineer cold-adapted proteases relied on directed evolution and random mutagenesis, but the time, capital, and labour involved in obtaining such variants are very demanding. Rational engineering for cold stability, without compromising an enzyme's optimum pH and temperature for activity, is therefore the current requirement. In this work, mutations were rationally designed for Savinase from Bacillus lentus with the aid of a high-throughput computational methodology combining network analysis, evolutionary conservation scores, and molecular dynamics simulations, with the intention of rendering the mutants cold stable without affecting their temperature and pH optima for activity. Further, an attempt was made to rationally incorporate a mutation into the most stable mutant obtained by this method to introduce oxidative stability; such enzymes are desired in detergents with bleaching agents. In silico analysis, performing 300 ns molecular dynamics simulations at five different temperatures, revealed that three mutants had better cold stability than wild-type Savinase from Bacillus lentus. Conclusively, this work shows that cold adaptation without losing optimum temperature and pH stability, and additionally stability against oxidative damage, can be rationally designed by in silico enzyme engineering. The key findings of this work were: first, the in silico data for H5 (a cold-stable Savinase used as a control in this work) corroborated its reported wet-lab temperature stability; second, three cold-stable mutants of Savinase from Bacillus lentus were rationally identified; and last, a mutation that stabilizes Savinase against oxidative damage was additionally identified.

Keywords: cold stability, molecular dynamics simulations, protein engineering, rational design

Procedia PDF Downloads 140
1052 The Impact of the Method of Extraction on 'Chemchali' Olive Oil Composition in Terms of Oxidation Index and Chemical Quality

Authors: Om Kalthoum Sallem, Saida Kilani, Kamiliya Ounaissa, Abdelmajid Abid

Abstract:

Introduction and purposes: Olive oil is the main oil used in the Mediterranean diet. Virgin olive oil is valued for its organoleptic and nutritional characteristics and is resistant to oxidation due to its high monounsaturated fatty acid (MUFA) content, its low polyunsaturated fatty acid (PUFA) content, and the presence of natural antioxidants such as phenols, tocopherols and carotenoids. The fatty acid composition, especially the MUFA content, and the natural antioxidants provide advantages for health. The aim of the present study was to examine the impact of the method of extraction on the chemical profile of the 'Chemchali' olive oil variety, which is cultivated in the city of Gafsa, and to compare it with the 'Chemlali' and 'Chetoui' varieties. Methods: Our study is a qualitative prospective study that deals with the 'Chemchali' olive oil variety. Analyses were conducted during three months (from December to February) in different oil mills in the city of Gafsa. We compared 'Chemchali' olive oil obtained by the continuous method with that obtained by the superpress method. We then analyzed quality index parameters, including free fatty acid content (FFA), acidity, and UV spectrophotometric characteristics, as well as other physico-chemical data (oxidative stability, ß-carotene, and chlorophyll pigment composition). Results: Olive oil from the superpress method, compared with the continuous method, is less acidic (0.6120 vs. 0.9760), less oxidizable (K232: 2.478 vs. 2.592; K270: 0.216 vs. 0.228), richer in oleic acid (61.61% vs. 66.99%), less rich in linoleic acid (13.38% vs. 13.98%), and richer in total chlorophyll pigments (6.22 ppm vs. 3.18 ppm) and ß-carotene (3.128 mg/kg vs. 1.73 mg/kg). 'Chemchali' olive oil showed a more balanced total fatty acid content than the 'Chemlali' and 'Chetoui' varieties: Gafsa's 'Chemchali' variety has significantly less saturated and polyunsaturated fatty acid, whereas it has a higher content of the monounsaturated fatty acid C18:1, compared with the two other varieties. Conclusion: The use of the superpress method had beneficial effects on the general chemical characteristics of 'Chemchali' olive oil, maintaining the highest quality according to the Ecocert legal standards. In light of the results obtained in this study, a more detailed study is required to establish whether the differences in the chemical properties of the oils are mainly due to agronomic and climate variables or to the processing employed in the oil mills.

Keywords: olive oil, extraction method, fatty acids, chemchali olive oil

Procedia PDF Downloads 383
1051 Environmental Effect of Empty Nest Households in Germany: An Empirical Approach

Authors: Dominik Kowitzke

Abstract:

Housing construction has direct and indirect environmental impacts, especially those caused by soil sealing and by the gray energy consumption related to the use of construction materials. Accordingly, the German government introduced regulations limiting additional annual soil sealing. At the same time, in many regions, such as metropolitan areas, the demand for further housing is high and of current concern in the media and politics. It is argued that meeting this demand by making better use of the existing housing supply is more sustainable than the construction of new housing units. In this context, the phenomenon of so-called over-housing of empty nest households seems worthwhile to investigate for its potential to free living space and thus reduce the need for new housing construction and the related environmental harm. Over-housing occurs if no space adjustment takes place in household lifecycle stages when children move out from home and the space formerly created for the offspring is from then on under-utilized. Although in some cases the housing space consumption might actually meet households' equilibrium preferences, frequently space-wise adjustments to the living situation do not take place due to transaction or information costs, habit formation, or government interventions that increase the costs of relocation, such as real estate transfer taxes or tenant protection laws keeping tenure rents below the market price. Moreover, many detached houses are not designed in a way that would allow freed-up space to be rented out in the long term. Findings of this research, based on socio-economic survey data, indeed show a significant difference between the living space of empty nest households and a comparison group of households which never had children. The approach used to estimate the average difference in living space is a linear regression model regressing the response variable living space on a two-dimensional categorical variable distinguishing the two groups of household types, plus further controls. This difference is assumed to be the under-utilized space and is extrapolated to the total number of empty nests in the population. Supporting this result, it is found that households that move after the children have left home, despite the market frictions impairing relocation, tend to decrease their living space. In the next step, the total under-utilized space in empty nests is estimated only for areas in Germany with tight housing markets and high construction activity. Under the assumption of full substitutability between housing space in empty nests and space in new dwellings in these locations, it is argued that in a perfect market, with empty nest households consuming their equilibrium demand for housing space, dwelling constructions in the amount of the excess consumption of living space could be saved. This, in turn, would prevent environmental harm, quantified in carbon dioxide equivalence units related to average constructions of detached or multi-family houses. This study thus provides information on the amount of under-utilized space inside dwellings, which is missing in public data, and further estimates the external effect of over-housing in environmental terms.
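
The estimation strategy described above can be sketched as follows (a minimal illustration using statsmodels; the variable names and the set of controls are placeholders, not the author's actual specification):

```python
import pandas as pd
import statsmodels.formula.api as smf

# df: survey data, one row per household (hypothetical column names)
# living_space : dwelling size in square metres (response variable)
# group        : "empty_nest" or "never_children" (categorical of interest)
# income, hh_size, tenure : stand-ins for the paper's further controls
df = pd.read_csv("household_survey.csv")

model = smf.ols(
    "living_space ~ C(group, Treatment('never_children')) + income + hh_size + C(tenure)",
    data=df,
).fit()

# the coefficient on the empty-nest category estimates the average
# difference in living space, interpreted as under-utilized space
print(model.summary())
```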

Keywords: empty nests, environment, Germany, households, over housing

Procedia PDF Downloads 171
1050 Assessment of Routine Health Information System (RHIS) Quality Assurance Practices in Tarkwa Sub-Municipal Health Directorate, Ghana

Authors: Richard Okyere Boadu, Judith Obiri-Yeboah, Kwame Adu Okyere Boadu, Nathan Kumasenu Mensah, Grace Amoh-Agyei

Abstract:

Routine health information system (RHIS) quality assurance has become an important issue, not only because of its significance in promoting a high standard of patient care but also because of its impact on government budgets for the maintenance of health services. A routine health information system comprises healthcare data collection, compilation, storage, analysis, report generation, and dissemination on a routine basis in various healthcare settings. The data from RHIS give a representation of health status, health services, and health resources. The sources of RHIS data are normally individual health records, records of services delivered, and records of health resources. Using reliable information from routine health information systems is fundamental to the healthcare delivery system. Quality assurance practices are measures put in place to ensure that the health data collected meet required quality standards, so that the data generated from the system are fit for use. This study considered quality assurance practices in the RHIS processes. Methods: A cross-sectional study was conducted in eight health facilities in the Tarkwa Sub-Municipal Health Service in the Western Region of Ghana. The study examined routine quality assurance practices among 90 health staff and managers, selected from facilities in the Tarkwa Sub-Municipality, who collected or used data routinely between 24th December 2019 and 20th January 2020. Results: Generally, the Tarkwa Sub-Municipal health service appears to practice quality assurance during data collection, compilation, storage, analysis and dissemination. The results show some achievement in quality control performance in report dissemination (77.6%), data analysis (68.0%), data compilation (67.4%), report compilation (66.3%), data storage (66.3%) and data collection (61.1%). Conclusions: Even though the Tarkwa Sub-Municipal Health Directorate engages in some control measures to ensure data quality, the process needs to be strengthened to achieve the targeted performance level (90.0%). There was a significant shortfall in quality assurance performance relative to the expected level, especially during data collection.

Keywords: quality assurance practices, assessment of routine health information system quality, routine health information system, data quality

Procedia PDF Downloads 79
1049 Understanding Stock-Out of Pharmaceuticals in Timor-Leste: A Case Study in Identifying Factors Impacting on Pharmaceutical Quantification in Timor-Leste

Authors: Lourenco Camnahas, Eileen Willis, Greg Fisher, Jessie Gunson, Pascale Dettwiller, Charlene Thornton

Abstract:

Stock-out of pharmaceuticals is a common issue at all levels of health services in Timor-Leste, a small post-conflict country. This led to the research questions: what are the current methods used to quantify pharmaceutical supplies, and what factors contribute to the ongoing pharmaceutical stock-outs? The study examined factors that influence the pharmaceutical supply chain system. Methodology: The Privett and Goncalvez dependency model was adopted for the design of the qualitative interviews. The model examines pharmaceutical supply chain management at three management levels: management of individual pharmaceutical items, health facilities, and health systems. The interviews were conducted in order to collect information on inventory management, the logistics management information system (LMIS) and the provision of pharmaceuticals. Andersen's behavioural model of healthcare utilization also informed the interview schedule, specifically factors linked to the environment (healthcare system and external environment) and the population (enabling factors). Forty health professionals (bureaucrats, clinicians) and six senior officers from a United Nations agency, a global multilateral agency and a local non-governmental organization were interviewed on their perceptions of the factors (healthcare system/supply chain and wider environment) impacting on stock-out. Additionally, policy documents for the entire healthcare system, along with population data, were collected. Findings: An analysis using Pozzebon's critical interpretation identified a range of difficulties within the system, from poor coordination to failure to adhere to policy guidelines, along with major difficulties in inventory management, quantification, forecasting, and budgetary constraints. A weak logistics management information system and a lack of capacity in inventory management, monitoring and supervision are additional organizational factors that also contributed to the issue. Various methods of pharmaceutical quantification were applied in the government sector and by non-governmental organizations, and a lack of reliable data is one of the major problems in pharmaceutical provision. The Global Fund has the best quantification methods, fed by consumption data and malaria case numbers. Other issues worsen stock-out: political intervention, work ethic, and basic infrastructure such as unreliable internet connectivity. Major issues impacting on pharmaceutical quantification have thus been identified. However, the current data collection identified limitations within the Andersen model, specifically a failure to take account of predictors in the healthcare system and the environment (culture/politics/society). The next step is to (a) compare the models used by three non-governmental agencies with the government model; (b) run the Andersen explanatory model for pharmaceutical expenditure for 2 to 5 drug items used by these three development partners in order to see how it correlates with the present model in terms of quantification and forecasting of needs; (c) repeat objectives (a) and (b) using the government model; and (d) draw conclusions about the strengths of each model.
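
As background to the quantification methods discussed above, a minimal sketch of generic consumption-based quantification follows (the formula is the standard average-monthly-consumption approach; the parameters and figures are illustrative assumptions, not Timor-Leste policy):

```python
def consumption_based_quantity(monthly_issues, lead_time_months,
                               buffer_months, stock_on_hand):
    """Forecast the order quantity for one pharmaceutical item.

    monthly_issues   : recent monthly consumption figures, ideally
                       adjusted for stock-out days where data allow
    lead_time_months : supplier lead time in months
    buffer_months    : safety stock expressed in months of consumption
    stock_on_hand    : current usable stock
    """
    amc = sum(monthly_issues) / len(monthly_issues)  # average monthly consumption
    required = amc * (lead_time_months + buffer_months)
    return max(0.0, required - stock_on_hand)

# example: 6 months of issues, 3-month lead time, 2-month buffer
print(consumption_based_quantity([1200, 900, 1100, 1000, 950, 1050], 3, 2, 1500))
```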

Keywords: inventory management, pharmaceutical forecasting and quantification, pharmaceutical stock-out, pharmaceutical supply chain management

Procedia PDF Downloads 244
1048 Biogas Potential of Deinking Sludge from Wastepaper Recycling Industry: Influence of Dewatering Degree and High Calcium Carbonate Content

Authors: Moses Kolade Ogun, Ina Korner

Abstract:

To improve sustainable resource management in the wastepaper recycling industry, studies into the valorization of the wastes generated by the industry are necessary. The industry produces different residues, among which is deinking sludge (DS). DS is generated from the deinking process and constitutes a major fraction of the residues generated by the European pulp and paper industry. The traditional treatment of DS by incineration is capital intensive due to the energy required for dewatering and the need for a complementary fuel source owing to the low calorific value of DS. This could be replaced by a biotechnological approach. This study therefore investigated the biogas potential of different DS streams (different dewatering degrees) and the influence of the high calcium carbonate content of DS on its biogas potential. Dewatered DS (solid fraction) from a filter press and the filtrate (liquid fraction) were collected from a partner wastepaper recycling company in Germany. The solid fraction and the liquid fraction were mixed in proportions chosen to realize DS with different water contents (55–91% fresh mass). Spiked DS samples using deionized water, cellulose and calcium carbonate were prepared to simulate DS with varying calcium carbonate content (0–40% dry matter). Seeding sludge was collected from an existing biogas plant treating sewage sludge in Germany. Biogas potential was studied using a 1-liter batch test system under mesophilic conditions and ran for 21 days. Specific biogas potentials in the range of 133–230 NL/kg organic dry matter were observed for the DS samples investigated. It was found that an increase in the liquid fraction leads to an increase in the specific biogas potential and a reduction in the absolute biogas potential (NL biogas/kg fresh mass). By comparing the absolute and specific biogas potential curves, an optimal dewatering degree corresponding to a water content of about 70% fresh mass was identified. This degree of dewatering is a compromise when factors such as biogas yield, reactor size, energy required for dewatering and operating cost are considered. No inhibitory influence on the biogas potential of DS was observed due to its reportedly high calcium carbonate content. This study confirms that DS is a potential bioresource for biogas production. Further optimization, such as nitrogen supplementation to offset the high C/N ratio of DS, can increase biogas yield.
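
The reported trade-off between specific potential (per kg organic dry matter) and absolute potential (per kg fresh mass) can be made concrete with a small sketch (illustrative values only; the organic fraction of dry matter is an assumed parameter, not a measured one):

```python
def absolute_potential(specific_nl_per_kg_odm, water_content, organic_fraction_dm):
    """Convert specific biogas potential (NL/kg organic dry matter)
    to absolute potential per kg fresh mass.

    water_content       : fraction of fresh mass that is water (e.g. 0.70)
    organic_fraction_dm : organic share of the dry matter (assumed; DS is
                          rich in calcium carbonate, so well below 1)
    """
    dry_matter = 1.0 - water_content
    odm_per_kg_fm = dry_matter * organic_fraction_dm
    return specific_nl_per_kg_odm * odm_per_kg_fm  # NL biogas per kg fresh mass

# illustrative: specific potential rises with water content, absolute falls
for wc, spec in [(0.55, 150.0), (0.70, 190.0), (0.91, 230.0)]:
    print(wc, spec, round(absolute_potential(spec, wc, 0.5), 1))
```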

Keywords: biogas, calcium carbonate, deinking sludge, dewatering, water content

Procedia PDF Downloads 183
1047 Men’s Attendance in Labour and Birth Room: A Choice and Coercion in Childbirth

Authors: A/Prof Marjan Khajehei

Abstract:

In the last century, the role of fathers in birth has changed dramatically. Before the 1970s, the principal view was that birth was a female business and not a man's place. Changing cultural and professional attitudes around the emotional bond between a man and a woman, family structure and the more proactive, involved role of men in the family have encouraged fathers' attendance at birth. There is evidence that fathers' support can make birthing less traumatic for some women and can make couples closer. This has led some clinicians to believe that fathers should be more involved throughout the birth process. Some clinicians even go further and ask fathers to watch medical procedures, such as the insertion of a vaginal speculum, forceps or vacuum, episiotomy and stitching. Although birth can unfold like a beautiful picture captured by birth photographers, with fathers massaging women's backs by candlelight and the miraculous moment of birth, it can be overshadowed by less attractive images of cervical mucous, emptying bowels and invasive medical procedures. What happens in the birth room, and the father's reaction to the graphic experience of birthing, can be unpredictable. Despite the fact that most men are absolutely thrilled to be in the delivery room, for some men a very intimate body part can become completely desexualised, and they can experience psychological and sexual scarring. They see someone they cherish dramatically sliced open and can then associate their partner with a disturbing scene, which can dramatically affect their relationship. While most women want the expectant father by their side for this life-changing event, not all of them may be happy for their partners to watch the perineum being cut or stitched, or the large blades of forceps being inserted inside the vagina. Anecdotal reports have shown that consent is not sought from labouring women as to whether they want their partners to watch these procedures. The majority of research [1, 2, 3] focuses on men's and women's retrospective attitudes towards their birth experience. However, the effect of witnessing invasive procedures during childbirth on a man's attraction to his partner while she is most vulnerable, as well as the increased risk of post-traumatic stress disorder in fathers, has not been widely investigated. There is a lack of sufficient research investigating whether women need to be asked for their consent before their partners are invited to closely watch medical procedures during childbirth. Future research is required to provide a basis for better awareness and to involve consumers in understanding men's and women's experiences and their expectations for labour and birth.

Keywords: birth, childbirth, father, labour, men, women

Procedia PDF Downloads 127
1046 Close-Reading Works of Art and the Ideal of Naïveté: Elements of an Anti-Cartesian Approach to Humanistic Liberal Education

Authors: Peter Hajnal

Abstract:

The need to combine serious training in disciplinary/scholarly approaches to problems of general significance with an educational experience that engages students with these very same problems on a personal level is one of the key challenges facing modern liberal education in the West. The typical approach to synthesizing these two goals, one highly abstract, the other elusively practical, proceeds by invoking ideals traditionally associated with the Enlightenment and 19th-century "humanism". These ideals are in turn rooted in an approach to reality codified by Cartesianism and the rise of modern science. Articulating this connection between the modern humanist tradition and Cartesianism allows us to demonstrate how the central problem of modern liberal education is rooted in the strict separation of knowledge and personal experience inherent in the dualism of Descartes. The question about the shape of contemporary liberal education is, therefore, the same as asking whether an anti-Cartesian version of liberal education is possible at all. Although the formulation of a general answer to this question is a tall order (whether in abstract or practical terms) and might take different forms (nota bene in Eastern and Western contexts), a key inspiration may be provided by a certain shift of attitude towards the Cartesian conception of the relationship of knowledge and experience, a shift required by discussion-based close reading of works of visual art. Taking the work of Stanley Cavell as its central inspiration, my paper argues that the shift of attitude in question is best described as a form of "second naïveté", and that it provides a useful model for conceptualizing in more concrete terms the appeal for such a "second naïveté" expressed in recent writings on the role of various disciplines in organizing learning by philosophers of such diverse backgrounds and interests as Hilary Putnam and Bruno Latour. The adoption of naïveté so identified as an educational ideal may be seen as a key instrument in thinking of the educational context as itself a medium of synthesis of the contemplative and the practical. Moreover, it is helpful in overcoming the false dilemma of ideological vs. conservative approaches to liberal education, as well as in correcting a commonly held false view of the historical roots of liberal education in the Renaissance, which turns out to offer much more of a sui generis approach to practice than a mere precursor to the Cartesian conception.

Keywords: liberal arts, philosophy, education, Descartes, naivete

Procedia PDF Downloads 191
1045 Assessing Brain Targeting Efficiency of Ionisable Lipid Nanoparticles Encapsulating Cas9 mRNA/sgGFP Following Different Routes of Administration in Mice

Authors: Meiling Yu, Nadia Rouatbi, Khuloud T. Al-Jamal

Abstract:

Background: Treatment of neurological disorders with modern medical and surgical approaches remains difficult. Gene therapy, allowing the delivery of genetic material that encodes potential therapeutic molecules, represents an attractive option. The treatment of brain diseases with gene therapy requires the gene-editing tool to be delivered efficiently to the central nervous system. In this study, we explored the efficiency of different delivery routes, namely intravenous (i.v.), intracranial (i.c.), and intranasal (i.n.), to deliver stable nucleic acid-lipid particles (SNALPs) containing the gene-editing tools, namely Cas9 mRNA and an sgRNA targeting GFP as a reporter protein. We hypothesise that SNALPs can reach the brain and perform gene-editing to different extents depending on the administration route. Intranasal administration offers an attractive and non-invasive way to access the brain, circumventing the blood-brain barrier. Successful delivery of gene-editing tools to the brain offers a great opportunity for therapeutic target validation and nucleic acid therapeutics delivery to improve treatment options for a range of neurodegenerative diseases. In this study, we utilised Rosa26-Cas9 knock-in mice, expressing GFP, to study the brain distribution and gene-editing efficiency of SNALPs after i.v., i.c., and i.n. routes of administration. Methods: A single guide RNA (sgRNA) against GFP was designed and validated by in vitro nuclease assay. SNALPs were formulated and characterised using dynamic light scattering. The encapsulation efficiency of nucleic acids (NA) was measured by the RiboGreen™ assay. SNALPs were incubated in serum to assess their ability to protect NA from degradation. Rosa26-Cas9 knock-in mice were administered SNALPs i.v., i.n., or i.c. to test in vivo gene-editing (GFP knockout) efficiency. SNALPs were given as three doses of 0.64 mg/kg sgGFP following i.v. and i.n. administration, or a single dose of 0.25 mg/kg sgGFP following i.c. administration. Knockout efficiency was assessed after seven days using Sanger sequencing and Inference of CRISPR Edits (ICE) analysis. The in vivo biodistribution of DiR-labelled SNALPs (SNALPs-DiR) was assessed at 24 h post-administration using an IVIS Lumina Series III. Results: The serum-stable SNALPs produced were 130-140 nm in diameter with ~90% nucleic acid loading efficiency. SNALPs could reach and remain in the brain for up to 24 h following i.v., i.n., and i.c. administration. Decreasing GFP expression (around 50% after i.v. and i.c. administration and 20% following i.n.) was confirmed by optical imaging. Despite the small number of mice used, ICE analysis confirmed GFP knockout in mouse brains. Additional studies are currently taking place to increase mouse numbers. Conclusion: The results confirmed efficient gene knockout achieved by SNALPs in Rosa26-Cas9 knock-in mice expressing GFP following the different routes of administration, in the order i.v. = i.c. > i.n. Each of the administration routes has its pros and cons. The next stages of the project involve assessing gene-editing efficiency in wild-type mice and replacing GFP as a model target with therapeutic target genes implicated in motor neuron disease pathology.

Keywords: CRISPR, nanoparticles, brain diseases, administration routes

Procedia PDF Downloads 102
1044 Study of Polychlorinated Dibenzo-P-Dioxins and Dibenzofurans Dispersion in the Environment of a Municipal Solid Waste Incinerator

Authors: Gómez R. Marta, Martín M. Jesús María

Abstract:

The general aim of this paper is to identify the areas of highest concentration of polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) around an incinerator through the use of dispersion models. Atmospheric dispersion models are useful tools for estimating and preventing the impact of emissions from a particular source on air quality. These models take into account the different factors that influence air pollution: source characteristics, the topography of the receiving environment and the weather conditions, in order to predict pollutant concentrations. PCDD/Fs, after their emission into the atmosphere, are deposited on water or land, near to or far from the emission source, depending on the size of the associated particles and the climatology. In this way, they are transferred and mobilized through environmental compartments. The modelling of PCDD/Fs was carried out with the following tools: the Atmospheric Dispersion Modelling System (ADMS) and Surfer. ADMS is a Gaussian plume dispersion model used to model the air quality impact of industrial facilities, and Surfer is a surface-mapping program used to represent the dispersion of pollutants on a map. For the modelling of emissions, the ADMS software mainly requires the following input parameters: characterization of the emission sources (source type, height, diameter, temperature of the release, flow rate, etc.) and meteorological and topographical data (coordinate system). The study area was set at 5 km around the incinerator, and the nearest population center to the PCDD/Fs emission focus is about 2.5 km away. Data were collected during one year (2013), covering both the PCDD/Fs emissions of the incinerator and the meteorology of the study area. The study was carried out over the averaging periods that legislation establishes; that is to say, the output parameters take into account the current legislation. Once all the data required by the ADMS software, described previously, were entered, the modelling was carried out in order to represent the spatial distribution of PCDD/Fs concentrations and the areas they affect. In general, the dispersion plume follows the direction of the predominant winds (southwest and northeast). Total levels of PCDD/Fs usually found in air samples are <2 pg/m3 for remote rural areas, 2-15 pg/m3 in urban areas and 15-200 pg/m3 for areas near important sources, such as an incinerator. The dispersion maps show that the maximum concentrations are of the order of 10^-8 ng/m3 (0.01 pg/m3), well below the values considered typical for areas close to an incinerator, as in this case.
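
For reference, ADMS belongs to the family of Gaussian-type plume models; in its standard textbook form (given here as background, not as ADMS's exact proprietary formulation), the ground-level concentration downwind of an elevated point source is

```latex
C(x, y, 0) = \frac{Q}{2\pi u \,\sigma_y \sigma_z}
             \exp\!\left(-\frac{y^2}{2\sigma_y^2}\right)
             \exp\!\left(-\frac{H^2}{2\sigma_z^2}\right)
```

where Q is the emission rate, u the wind speed at release height, H the effective release height, and σy, σz the crosswind and vertical dispersion parameters, which grow with downwind distance x according to atmospheric stability.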

Keywords: atmospheric dispersion, dioxin, furan, incinerator

Procedia PDF Downloads 217
1043 Efficient Field-Oriented Motor Control on Resource-Constrained Microcontrollers for Optimal Performance without Specialized Hardware

Authors: Nishita Jaiswal, Apoorv Mohan Satpute

Abstract:

The increasing demand for efficient, cost-effective motor control systems in the automotive industry has driven the need for advanced, highly optimized control algorithms. Field-Oriented Control (FOC) has established itself as the leading approach for motor control, offering precise and dynamic regulation of torque, speed, and position. However, as energy efficiency becomes more critical in modern applications, implementing FOC on low-power, cost-sensitive microcontrollers poses significant challenges due to the limited availability of computational and hardware resources. Currently, most solutions rely on high-performance 32-bit microcontrollers or Application-Specific Integrated Circuits (ASICs) equipped with Floating Point Units (FPUs) and Hardware Accelerated Units (HAUs). These advanced platforms enable rapid computation and simplify the execution of complex control algorithms like FOC. However, these benefits come at the expense of higher cost, increased power consumption, and added system complexity, drawbacks that limit their suitability for embedded systems with strict power and budget constraints, where achieving energy and execution efficiency without compromising performance is essential. In this paper, we present an alternative approach that utilizes optimized data representation and computation techniques on a 16-bit microcontroller without FPUs or HAUs. By carefully optimizing data formats and employing fixed-point arithmetic, we demonstrate how the precision and computational efficiency required for FOC can be maintained in resource-constrained environments. This approach eliminates the performance overhead associated with floating-point operations and hardware acceleration, providing a more practical solution in terms of cost, scalability and execution-time efficiency, and allowing faster response in motor control applications. Furthermore, it enhances system design flexibility, making it particularly well-suited for applications that demand stringent control over power consumption and cost.
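
To illustrate the fixed-point approach described above (a minimal sketch; the Q15 scaling and the use of a trigonometric lookup table are assumptions, not the authors' exact implementation), the Park transform at the heart of FOC can be computed with integer arithmetic only:

```python
import math

# Q15 fixed-point sketch of the Park transform (d-q projection) used in FOC.
# All values are 16-bit-style integers scaled by 2**15; no FPU is needed.
Q15 = 1 << 15

def to_q15(x: float) -> int:
    return int(round(x * Q15))

def q15_mul(a: int, b: int) -> int:
    # multiply two Q15 numbers; the product is Q30, shift back to Q15
    return (a * b) >> 15

def park(i_alpha: int, i_beta: int, cos_th: int, sin_th: int):
    """Project stator currents (alpha, beta) onto the rotor frame (d, q).
    Inputs and outputs are Q15; on a real microcontroller cos_th/sin_th
    would come from a lookup table rather than math.cos/math.sin."""
    i_d = q15_mul(i_alpha, cos_th) + q15_mul(i_beta, sin_th)
    i_q = q15_mul(i_beta, cos_th) - q15_mul(i_alpha, sin_th)
    return i_d, i_q

# usage: per-unit currents 0.5 and -0.25 at a rotor angle of 30 degrees
th = math.radians(30)
i_d, i_q = park(to_q15(0.5), to_q15(-0.25),
                to_q15(math.cos(th)), to_q15(math.sin(th)))
print(i_d / Q15, i_q / Q15)  # back to floats only for display
```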

Keywords: field-oriented control, fixed-point arithmetic, floating point unit, hardware accelerator unit, motor control systems

Procedia PDF Downloads 15
1042 Comparison of On-Site Stormwater Detention Policies in Australian and Brazilian Cities

Authors: Pedro P. Drumond, James E. Ball, Priscilla M. Moura, Márcia M. L. P. Coelho

Abstract:

In recent decades, On-site Stormwater Detention (OSD) systems have been implemented in many cities around the world. In Brazil, urban drainage source control policies were created in the 1990s and were mainly based on OSD. The concept of this technique is to detain the additional stormwater runoff caused by impervious areas in order to maintain pre-urbanization peak flow levels. In Australia, OSD was first adopted in the early 1980s by the Ku-ring-gai Council in Sydney's northern suburbs and by Wollongong City Council, and many papers on the topic were published at that time. However, source control techniques related to stormwater quality have since come to the forefront, and OSD has been relegated to the background. In order to evaluate the effectiveness of the current regulations regarding OSD, existing policies were compared between Australian cities, in a country considered experienced in the use of this technique, and Brazilian cities, where OSD adoption has been increasing. The cities selected for analysis were Wollongong and Belo Horizonte, the first municipalities to adopt OSD in their respective countries, and Sydney and Porto Alegre, cities whose policies are local references. The Australian and Brazilian cities are all located in the Southern Hemisphere, and similar rainfall intensities can be observed, especially in storm bursts longer than 15 minutes. Regarding technical criteria, the Brazilian cities have a site-based approach, analyzing only on-site system drainage. This approach is criticized for not evaluating impacts on urban drainage systems and, in rare cases, may even increase peak flows downstream. The city of Wollongong and most of the Sydney councils adopted a catchment-based approach, requiring the use of Permissible Site Discharge (PSD) and Site Storage Requirement (SSR) values based on the analysis of entire catchments via hydrograph-producing computer models. Based on the premise that OSD should be designed to dampen a storm of 100-year Average Recurrence Interval (ARI), the values of PSD and SSR in these four municipalities were compared. In general, the Brazilian cities presented low values of PSD and high values of SSR. This can be explained by the site-based approach and the low runoff coefficient adopted for pre-development conditions. The results clearly show the differences between the approaches and methodologies adopted in OSD design in the Brazilian and Australian municipalities, especially with regard to PSD values, which sit at opposite ends of the scale. However, the lack of research regarding the real performance of constructed OSD does not allow a determination of which is best. It is necessary to investigate OSD performance in real situations, assessing the damping provided throughout its useful life, maintenance issues, debris blockage problems and the parameters related to rain-flow methods. Acknowledgments: The authors wish to thank CNPq - Conselho Nacional de Desenvolvimento Científico e Tecnológico (Chamada Universal – MCTI/CNPq Nº 14/2014), FAPEMIG - Fundação de Amparo à Pesquisa do Estado de Minas Gerais, and CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior for their financial support.
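
The catchment-based PSD/SSR pairing described above can be illustrated with a simple storage-routing sketch (an illustration under simplifying assumptions, not the hydrograph-producing models the councils actually use): given a post-development inflow hydrograph and a fixed permissible site discharge, the required site storage is the maximum cumulative excess volume.

```python
def site_storage_requirement(inflow_m3s, psd_m3s, dt_s):
    """Estimate SSR (m3) for a post-development inflow hydrograph.

    inflow_m3s : inflow ordinates (m3/s) at a fixed time step
    psd_m3s    : permissible site discharge (m3/s)
    dt_s       : time step in seconds
    """
    storage = peak = 0.0
    for q_in in inflow_m3s:
        # volume in excess of what may be released during this step
        storage = max(0.0, storage + (q_in - psd_m3s) * dt_s)
        peak = max(peak, storage)
    return peak

# illustrative burst: triangular hydrograph peaking at 0.08 m3/s,
# PSD capped at 0.02 m3/s, 5-minute ordinates (placeholder values)
hydrograph = [0.00, 0.02, 0.05, 0.08, 0.06, 0.03, 0.01, 0.00]
print(site_storage_requirement(hydrograph, 0.02, 300))  # SSR in m3
```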

Keywords: on-site stormwater detention, source control, stormwater, urban drainage

Procedia PDF Downloads 180
1041 Asparagus racemosus Willd for Enhanced Medicinal Properties

Authors: Ashok Kumar, Parveen Parveen

Abstract:

India is bestowed with an extremely rich flora of plant species with medicinal value and has two biodiversity hotspots. Indian systems of medicine, including Ayurveda, Siddha and Unani, have historically served humankind across the world since time immemorial. About 1500 plant species are well documented in the Ayurvedic Nighantus as official medicinal plants. Additionally, several hundred plant species are routinely used as medicines by local people, especially tribes living in and around forests. The natural resources of medicinal plants have been unscientifically over-exploited, forcing rapid depletion of their genetic diversity. Moreover, renewed global interest in herbal medicines may lead to additional depletion of the medicinal plant wealth of the country, as about 95% of the collection of medicinal plants for pharmaceutical preparations is carried out from natural forests. On the other hand, the huge export market for medicinal and aromatic plants needs to be seriously tapped to enhance the inflow of foreign currency. Asparagus racemosus Willd., a member of the family Liliaceae, is one of thirty-two plant species identified as priority species for cultivation and conservation by the National Medicinal Plant Board (NMPB), Government of India. Though attention is being focused on the standardization of agro-techniques and extraction methods, little has been done on genetic improvement and the selection of desired types with higher root production and saponin content, the basic ingredient of medicinal value. Saponin not only improves defense mechanisms and helps control diabetes; the roots of this species also promote the secretion of breast milk, restore lost body weight and are considered an aphrodisiac. There is ample scope for genetic improvement of this species to enhance productivity substantially, both qualitatively and quantitatively. It is emphasized to select desired genotypes with sufficient genetic diversity for important economic traits. Hybridization between two genetically divergent genotypes could result in new F1 hybrids combining useful traits of both parents. The evaluation of twenty seed sources of Asparagus racemosus assembled from different geographical locations of India revealed a high degree of variability for traits of economic importance. The maximum genotypic and phenotypic variance was observed for shoot height among shoot-related traits and for root length among root-related traits. For shoot height, the genotypic variance, phenotypic variance, genotypic coefficient of variance and phenotypic coefficient of variance were recorded as 231.80, 3924.80, 61.26 and 1037.32, respectively, whereas for root length they were 9.55, 16.80, 23.46 and 41.27, respectively. The maximum genetic advance and genetic gain were obtained for shoot height among shoot-related traits and for root length among root-related traits. Index values were developed for all seed sources based on the four most important traits, and Pantnagar (Uttarakhand), Jodhpur (Rajasthan), Dehradun (Uttarakhand), Chandigarh (Punjab), Jammu (Jammu & Kashmir) and Solan (Himachal Pradesh) were found to be promising seed sources.
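
The variability statistics reported above follow standard quantitative-genetics definitions; a minimal sketch of the usual formulas is given below (the input values are illustrative, not the study's data):

```python
import math

def variability_stats(var_g, var_p, trait_mean, selection_intensity=2.06):
    """Standard estimates: GCV/PCV (%), broad-sense heritability,
    genetic advance (GA) and genetic gain (% of mean).
    A selection intensity of 2.06 corresponds to 5% selection."""
    gcv = math.sqrt(var_g) / trait_mean * 100
    pcv = math.sqrt(var_p) / trait_mean * 100
    h2 = var_g / var_p                          # broad-sense heritability
    ga = selection_intensity * h2 * math.sqrt(var_p)
    gain = ga / trait_mean * 100
    return gcv, pcv, h2, ga, gain

# illustrative values only (not the reported trait data)
print(variability_stats(var_g=9.55, var_p=16.80, trait_mean=13.2))
```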

Keywords: asparagus, genetic, genotypes, variance

Procedia PDF Downloads 134
1040 Sorting Maize Haploids from Hybrids Using Single-Kernel Near-Infrared Spectroscopy

Authors: Paul R Armstrong

Abstract:

Doubled haploids (DHs) have become an important breeding tool for creating maize inbred lines, although several bottlenecks in the DH production process limit wider development, application, and adoption of the technique. DH kernels are typically sorted manually and represent about 10% of the seeds in a much larger pool in which the remaining 90% are hybrid siblings. This places time constraints on DH production, and manual sorting is often not accurate. Automated sorting based on the chemical composition of the kernel can be effective, but suitable devices, namely NMR, have not achieved the sorting speed needed to be a cost-effective replacement for manual sorting. This study evaluated a single-kernel near-infrared reflectance spectroscopy (skNIR) platform to accurately identify DH kernels based on oil content. The skNIR platform is a higher-throughput device, approximately 3 seeds/s, that uses spectra to predict the oil content of each kernel from maize crosses intentionally developed to create larger-than-normal oil differences, 1.5%-2%, between DH and hybrid kernels. Spectra from the skNIR were used to construct a partial least squares (PLS) regression model for oil content and a categorical reference model of 1 (DH kernel) or 2 (hybrid kernel), which were then used to sort several crosses to evaluate performance. Two approaches were used for sorting. The first used a general PLS model developed from all crosses to predict oil content, which was then used for sorting each induction cross; the second was the development of a specific model for a single induction cross in which approximately fifty DH and one hundred hybrid kernels were used. This second approach used a categorical reference value of 1 or 2, instead of oil content, for the PLS model, and the kernels selected for the calibration set were manually referenced based on traditional commercial methods using the coloration of the tip cap and germ areas. The generalized PLS oil model statistics were R2 = 0.94 and RMSE = 0.93% for kernels spanning an oil content of 2.7% to 19.3%. Sorting by this model extracted 55% to 85% of the haploid kernels from the four induction crosses. The second method of generating a model for each cross yielded model statistics ranging from R2 = 0.96 to 0.98 and RMSE from 0.08 to 0.10. Sorting in this case resulted in 100% correct classification but required models that were cross-specific. In summary, the first, generalized oil model method could be used to sort a significant number of kernels from a kernel pool but did not approach the accuracy of a sorting model developed from a single cross. The penalty of the second method is that a PLS model must be developed for each individual cross. In conclusion, both methods could find useful application in the sorting of DH from hybrid kernels.
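
A minimal sketch of the second, cross-specific sorting approach might look as follows (scikit-learn; the file names, the component count, and the 1.5 score threshold are assumptions for illustration):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# X: kernel spectra (n_kernels x n_wavelengths); y: 1 = haploid, 2 = hybrid,
# assigned manually from tip-cap/germ coloration for the calibration set
X_cal = np.load("spectra_calibration.npy")
y_cal = np.load("labels_calibration.npy").astype(float)

pls = PLSRegression(n_components=10)   # component count is a tuning choice
pls.fit(X_cal, y_cal)

# sort new kernels: a predicted score nearer 1 -> haploid, nearer 2 -> hybrid
X_new = np.load("spectra_new.npy")
scores = pls.predict(X_new).ravel()
is_haploid = scores < 1.5              # midpoint threshold, an assumption
print(f"{is_haploid.sum()} of {len(scores)} kernels sorted as haploid")
```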

Keywords: NIR, haploids, maize, sorting

Procedia PDF Downloads 302
1039 Oncological and Antiresorptive Treatment of Breast Cancer: Dental Assessment and Risk of MRONJ Development

Authors: Magdalena Korytowska, Gunnar Lengstrand, Cecilia Larsson Wexell

Abstract:

Background: Breast cancer (BC) is the most common cancer among women worldwide, and cases are continuing to increase in Sweden. Bone is the most common metastatic site in breast cancer patients: 65-75% of women with advanced breast cancer develop bone metastases during their disease. To prevent the skeletal-related events of metastases (e.g., pathological fractures, bone loss, cancer-induced bone pain, and hypercalcemia), two different classes of antiresorptive (AR) medications, bisphosphonates and denosumab, are typically administered every 3 to 4 weeks. Since 2015, adjuvant bisphosphonate treatment has been used every six months for three to five years in postmenopausal women for the prevention of skeletal metastases and improved survival. Methods: A case-control study was conducted to test the hypothesis that patients treated with high-dose AR are at higher risk of developing medication-related osteonecrosis of the jaw (MRONJ) than breast cancer patients receiving adjuvant bisphosphonate treatment at a lower dose. Medical and odontological data were collected between 2015 and 2020. Assessment of oral health and dental care before and during oncological treatment took place at the specialist clinic for Orofacial Medicine linked to the hospital. Results: In total, 220 patients were included, 101 in the high-dose group and 119 in the adjuvant bisphosphonate treatment group. MRONJ was diagnosed in 13 patients (14%) in the high-dose group. The mandible was affected in most of the cases (84.6%), with a mean duration of high-dose treatment of 19.7 months. In 46.2% of cases, no dental cause of MRONJ could be identified. Overall, estrogen receptor-positive (ER+) BC was the most representative type, in 172 patients (78.2%); in the high-dose group, the proportion was 83.9%. The most used drug was denosumab. Twenty-five patients (26.9%) switched their medication from zoledronic acid (ZOL) to denosumab during their oncological treatment. ER+ breast cancer was reported in 88 patients (87.8%) of the adjuvant group treated with ZOL. Conclusions: MRONJ was diagnosed only in the high-dose AR group. Dental assessment and care of patients in the adjuvant group should be considered, with a recommendation to potentially prolong ZOL treatment from 3 to 5 years with concomitant use of hormonal therapy in patients diagnosed with ER+ breast cancer, to prevent bone loss induced by oncological treatment. A new referral for dental assessment is very important when bone metastases appear and treatment with high-dose AR is required, since it is associated with a higher risk of MRONJ.

Keywords: antiresorptive therapy, breast cancer, dental care, MRONJ

Procedia PDF Downloads 87
1038 Subcontractor Development Practices and Processes: A Conceptual Model for LEED Projects

Authors: Andrea N. Ofori-Boadu

Abstract:

The purpose is to develop a conceptual model of subcontractor development practices and processes that strengthen the integration of subcontractors into construction supply chain systems for improved subcontractor performance on Leadership in Energy and Environmental Design (LEED) certified building projects. The construction management of a LEED project has the important objective of meeting sustainability certification requirements, in addition to the typical project management objectives of cost, time, quality, and safety for traditional projects, which increases the complexity of LEED projects. Considering that construction management organizations rely heavily on subcontractors, poor performance on complex projects such as LEED projects has been largely attributed to the unsatisfactory preparation of subcontractors. Furthermore, the extensive use of unique and non-repetitive short-term contracts limits the full integration of subcontractors into construction supply chains and hinders the long-term cooperation and benefits that could enhance performance on construction projects. Improved subcontractor development practices are needed to better prepare and manage subcontractors so that complex objectives can be met or exceeded. While supplier development and supply chain theories and practices in the manufacturing sector have been extensively investigated to address similar challenges, comparable investigations in the construction sector are scarce. Consequently, the objective of this research is to investigate effective subcontractor development practices and processes to guide construction management organizations in developing a strong network of high-performing subcontractors. Drawing from foundational supply chain and supplier development theories in the manufacturing sector, a mixed interpretivist and empirical methodology is utilized to assess the body of knowledge in the literature for conceptual model development. A self-reporting survey with five-point Likert-scale items and open-ended questions was administered to 30 construction professionals to estimate their perceptions of the effectiveness of 37 practices, classified into five subcontractor development categories. Data analysis includes descriptive statistics, weighted means, and t-tests that guide the effectiveness ranking of practices and categories. The results inform the proposed three-phase LEED subcontractor development program model, which focuses on preparation, development and implementation, and monitoring. Highly ranked LEED subcontractor pre-qualification, commitment, incentive, evaluation, and feedback practices are perceived as more effective than practices requiring more direct involvement and linkages between subcontractors and construction management organizations. This is attributed to unfamiliarity, conflicting interests, lack of trust, and resource-sharing challenges. With strategic modifications, the recommended practices can be extended to other complex, non-LEED projects. Additional research is needed to guide the development of subcontractor development programs that strengthen direct involvement between construction management organizations and their networks of high-performing subcontractors. Insights from this research strengthen the theoretical foundations for future research towards more integrated construction supply chains. In the long term, this would lead to increased performance, profits and client satisfaction.
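
The ranking analysis described above can be sketched as follows (pandas/SciPy; the column layout and the comparison against the scale midpoint are illustrative assumptions, not the study's actual analysis script):

```python
import pandas as pd
from scipy import stats

# responses: one row per respondent, one column per practice,
# values 1-5 on the Likert scale (hypothetical CSV layout)
responses = pd.read_csv("survey_responses.csv")

summary = pd.DataFrame({
    "mean": responses.mean(),
    "std": responses.std(),
})
# one-sample t-test against the scale midpoint (3 = neutral, an assumption)
summary["t"], summary["p"] = stats.ttest_1samp(responses, popmean=3.0)

# effectiveness ranking: highest mean first
print(summary.sort_values("mean", ascending=False))
```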

Keywords: construction management, general contractor, supply chain, sustainable construction

Procedia PDF Downloads 110
1037 The Descending Genicular Artery Perforator Free Flap as a Reliable Flap: Literature Review

Authors: Doran C. Kalmin

Abstract:

The descending genicular artery (DGA) perforator free flap provides an alternative for free flap reconstruction, based on a review of the literature detailing both anatomical and clinical studies. The DGA supplies skin, muscle, tendon, and bone around the medial aspect of the knee, tissues that have been used in several pioneering reports to reconstruct defects in various areas throughout the body. After the success of the medial femoral condyle flap in early studies, a small number of studies have been published detailing the use of the DGA in free flap reconstruction. Despite this early success, acceptance of the DGA flap within the plastic and reconstructive surgical community has been limited, due primarily to anatomical variations of the pedicle. This literature review details the progression of the DGA perforator free flap and its variations as an alternative and reliable free flap for the reconstruction of composite defects, with an exploration of both anatomical and clinical studies. A literature review was undertaken, tracing the progression of the DGA flap from the early work of Acland et al., who pioneered the saphenous free flap, to modern studies of the anatomy of the DGA. The review covers the anatomy and its variations, approaches to harvesting the flap, the advantages and disadvantages of the DGA perforator free flap, and flap outcomes. There are 15 published clinical series of DGA perforator free flaps, incorporating cutaneous, osteoperiosteal, cartilage, osteocutaneous, osteoperiosteal and muscle, osteoperiosteal and subcutaneous, and tendocutaneous flaps. The commonest indication for using a DGA free flap was non-union of bone, particularly of the scaphoid, for which the medial femoral condyle could be used. Across the case series, a success rate of over 90% was established, showing that these early studies achieved good results with a wide range of tissue transfers. The greatest limitation is the anatomical variation of the DGA and, therefore, the challenges associated with raising the flap. Despite the variation in anatomy and the absence of the DGA in around 10-15% of cases, the saphenous artery can be used instead, as can the superior medial genicular artery if vascularised bone is required as part of the flap. Although only a handful of anatomical and clinical studies describe the DGA perforator free flap, it ultimately provides a reliable flap that can include a variety of composite structures for reconstruction in almost any area of the body. Despite its limitations, it offers a reliable option for free flap reconstruction that can routinely be performed as a single-stage procedure.

Keywords: anatomical study, clinical study, descending genicular artery, literature review, perforator free flap reconstruction

Procedia PDF Downloads 144
1036 Microstructure and Mechanical Properties Evaluation of Graphene-Reinforced AlSi10Mg Matrix Composite Produced by Powder Bed Fusion Process

Authors: Jitendar Kumar Tiwari, Ajay Mandal, N. Sathish, A. K. Srivastava

Abstract:

Over the last decade, graphene has attracted great attention for the development of multifunctional metal matrix composites, which are in high demand in industry for building energy-efficient systems. This study covers two advanced aspects of current scientific endeavor: graphene as reinforcement in metallic materials and additive manufacturing (AM) as a processing technology. Herein, high-quality graphene and AlSi10Mg powder were mechanically mixed by very low energy ball milling at 0.1 wt.% and 0.2 wt.% graphene. The mixed powders were directly subjected to the powder bed fusion process, an AM technique, to produce composite samples along with a bare counterpart. The effects of graphene on porosity, microstructure, and mechanical properties were examined. The volumetric distribution of pores was observed under X-ray computed tomography (CT). Relative density measurements by X-ray CT showed that porosity increases after graphene addition, and that the pore morphology transforms from spherical pores to enlarged flaky pores due to improper melting of the composite powder. Furthermore, the microstructure suggests grain refinement after graphene addition. The columnar grains were able to cross the melt pool boundaries in the bare sample, unlike in the composite samples, where smaller columnar grains formed due to heterogeneous nucleation by graphene platelets during solidification. The tensile properties were affected by the induced porosity irrespective of graphene reinforcement. The optimized tensile properties were achieved at 0.1 wt.% graphene: increments in yield strength and ultimate tensile strength of 22% and 10%, respectively, compared to the bare counterpart, while elongation decreased by 20% for the same sample. The hardness indentations were taken mostly on solid regions in order to avoid the collapse of pores. The hardness of the composite increased progressively with graphene content, with an increment of around 30% after the addition of 0.2 wt.% graphene. Therefore, it can be concluded that powder bed fusion is a suitable technique for developing graphene-reinforced AlSi10Mg composites, though further process modification is required to avoid the porosity induced by graphene addition, which can be addressed in future work.

Keywords: graphene, hardness, porosity, powder bed fusion, tensile properties

Procedia PDF Downloads 128
1035 In-Flight Radiometric Performance Analysis of an Airborne Optical Payload

Authors: Caixia Gao, Chuanrong Li, Lingli Tang, Lingling Ma, Yaokai Liu, Xinhong Wang, Yongsheng Zhou

Abstract:

Performance analysis of a remote sensing sensor is required to pursue a range of scientific research and application objectives. Laboratory analysis of any remote sensing instrument is essential but not sufficient to establish valid in-flight performance. In this study, with the aid of in situ measurements and the corresponding image of a three-grayscale permanent artificial target, the in-flight radiometric performance analyses (in-flight radiometric calibration, dynamic range and response linearity, signal-to-noise ratio (SNR), and radiometric resolution) of a self-developed short-wave infrared (SWIR) camera are performed. To acquire the in-flight calibration coefficients of the SWIR camera, the at-sensor radiances (Li) for the artificial targets are first simulated with in situ measurements (atmospheric parameters and spectral reflectance of the target) and viewing geometries using the MODTRAN model. With these radiances and the corresponding digital numbers (DN) in the image, a straight line of the form L = G × DN + B is fitted by least-squares regression, and the fitted coefficients G and B are the in-flight calibration coefficients. The high point (LH) and low point (LL) of the dynamic range are then LH = G × DNH + B and LL = B, respectively, where DNH is equal to 2ⁿ − 1 (n being the quantization bit depth of the payload). Meanwhile, the sensor's response linearity (δ) is described by the correlation coefficient of the regressed line. The results show that the calibration coefficients G and B are 0.0083 W·sr⁻¹·m⁻²·µm⁻¹ per DN and −3.5 W·sr⁻¹·m⁻²·µm⁻¹; the low point of the dynamic range is −3.5 W·sr⁻¹·m⁻²·µm⁻¹ and the high point is 30.5 W·sr⁻¹·m⁻²·µm⁻¹; and the response linearity is approximately 99%. Furthermore, an SNR normalization method is used to assess the sensor's SNR; the normalized SNR is about 59.6 when the mean radiance is 11.0 W·sr⁻¹·m⁻²·µm⁻¹, and the radiometric resolution is calculated to be about 0.1845 W·sr⁻¹·m⁻²·µm⁻¹. Moreover, to validate the result, the measured radiance is compared with radiative-transfer-code predictions over four portable artificial targets with reflectances of 20%, 30%, 40%, and 50%, respectively. The relative error of the calibration is within 6.6%.
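The calibration chain above can be made concrete with a minimal sketch: fit L = G × DN + B by least squares, then derive the dynamic range and response linearity. The three DN/radiance pairs below are placeholders constructed to be consistent with the reported coefficients, and n = 12 bits is an inference from the reported high point (0.0083 × (2¹² − 1) − 3.5 ≈ 30.5), not a value stated in the abstract:

import numpy as np

dn = np.array([500.0, 1500.0, 3000.0])       # digital numbers of the grayscale targets
radiance = np.array([0.65, 8.95, 21.4])      # simulated at-sensor radiances (W·sr⁻¹·m⁻²·µm⁻¹)

G, B = np.polyfit(dn, radiance, 1)           # gain and offset of the fitted line
r = np.corrcoef(dn, radiance)[0, 1]          # response linearity (correlation coefficient)

n = 12                                       # quantization bits (inferred, see above)
L_high = G * (2**n - 1) + B                  # high point of the dynamic range
L_low = B                                    # low point (DN = 0)
print(f"G={G:.4f}, B={B:.2f}, dynamic range=[{L_low:.2f}, {L_high:.2f}], linearity={r:.4f}")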

Keywords: calibration and validation site, SWIR camera, in-flight radiometric calibration, dynamic range, response linearity

Procedia PDF Downloads 270
1034 Global News Coverage of the Pandemic: Towards an Ethical Framework for Media Professionalism

Authors: Anantha S. Babbili

Abstract:

This paper analyzes current practices dominant in global journalism within the framework of the world press theories of Libertarian, Authoritarian, Communist, and Social Responsibility, to evaluate their efficacy in the coverage of the coronavirus, also known as COVID-19. Global media flows, determinants of news coverage, international awareness, and the Western view of the world are critically analyzed within the context of the prevalent news values that underpin free press and media coverage of the world. While evaluating the global discourse paramount to a sustained and dispassionate understanding of world events, this paper proposes an ethical framework that brings clarity, devoid of sensationalism, partisanship, and right-wing or left-wing interpretations, to the breaking and dangerous development of a pandemic. As the world struggles to contain the coronavirus pandemic, with deaths climbing close to 6,000 from late January to mid-March 2020, the populations of developed and developing nations alike are beset with news media renditions of the crisis that are contradictory, confusing, and evoking anxiety, fear, and hysteria. How are we to understand differing news standards and news values? What lessons do we, as journalism and mass media educators, researchers, and academics, learn in order to construct a better news model and structure of media practice that addresses science, health, and media literacy among media practitioners, journalists, and news consumers? As traditional media struggle to cover the pandemic for their audiences, social media, from which an increasing number of consumers get their news, have exerted their influence in both positive and negative ways. Even as the world struggles to grasp the full significance of the pandemic, the World Health Organization (WHO) has been feverishly battling an additional challenge related to the pandemic in what it termed an 'infodemic': 'an overabundance of information, some accurate and some not, that makes it hard for people to find trustworthy sources and reliable guidance when they need it.' There is, indeed, a need for journalism and news coverage in times of pandemics that reflect social responsibility and the ethos of public service journalism. Social media and high-tech information corporations, collectively termed GAMAF (Google, Apple, Microsoft, Amazon, and Facebook), can team up with reliable traditional media (newspapers, magazines, book publishers, and radio and television corporations) to ease public emotions and be helpful in times of a pandemic outbreak. GAMAF can, conceivably, weed out sensational and non-credible sources of coronavirus information and exotic cures offered for sale as a quick fix, and demonetize videos that exploit people's vulnerabilities at their lowest ebb. Credible news of utility, delivered in a sustained, calm, and reliable manner, serves people in a meaningful and helpful way. The world's consumers of news and information deserve a healthy and trustworthy news media, at least in the time of the COVID-19 pandemic. Towards this end, the paper proposes a practical model for news media and journalistic coverage during times of a pandemic.

Keywords: COVID-19, international news flow, social media, social responsibility

Procedia PDF Downloads 112
1033 A Community Solution to Address Extensive Nitrate Contamination in the Lower Yakima Valley Aquifer

Authors: Melanie Redding

Abstract:

Historic widespread nitrate contamination of the Lower Yakima Valley aquifer in Washington State initiated a community-based effort to reduce nitrate concentrations to below drinking water standards. This group commissioned studies on characterizing local nitrogen sources, deep soil assessments, drinking water, and nitrate concentrations at the water table. Nitrate is the most prevalent groundwater contaminant, with common sources including animal and human waste, fertilizers, plants, and precipitation. It is challenging to address groundwater contamination when common sources, such as agriculture, on-site sewage systems, and animal production, are widespread. Remediation is not possible, so mitigation is essential. The Lower Yakima Valley covers over 175,000 acres, with a population of 56,000 residents. Approximately 25% of the population does not have access to safe, clean drinking water, and 20% of the population is at or below the poverty level. Agriculture is the primary economic land-use activity. Irrigated agriculture and livestock production make up the largest percentage of acreage and nitrogen load. Commodities include apples, grapes, hops, dairy, silage corn, triticale, alfalfa, and cherries. These commodities are important to the economic viability of the residents of the Lower Yakima Valley, as well as Washington State. Mitigation of nitrate in groundwater is challenging, and the goal is to ensure everyone has safe drinking water. There are no easy remedies due to the extent and pervasiveness of the contamination. Monitoring at the water table indicates that 45% of the 30 spatially distributed monitoring wells exceeded the drinking water standard, indicating that multiple sources are impacting water quality. Washington State has several areas with extensive groundwater nitrate contamination, and the groundwater in these areas continues to degrade over time. However, the Lower Yakima Valley is succeeding in addressing this health issue for the following reasons: the community is engaged and committed; there is one common goal; there has been extensive public education and outreach to citizens; and credible data are being generated using sound scientific methods. Work in this area is continuing as an ambient groundwater monitoring network is established to assess the condition of the aquifer over time. Nitrate samples are being collected from 170 wells spatially distributed across the aquifer. This research entails quarterly sampling for two years to characterize seasonal variability, continuing annually thereafter. The assessment will provide the data to statistically determine trends in nitrate concentrations across the aquifer over time. Thirty-three of these wells are monitoring wells that are screened across the aquifer, so the water quality from these wells is indicative of activities at the land surface. Additional work is being conducted to identify land-use management practices that are effective in limiting nitrate migration through the soil column. Tracking nitrate in the soil column every season is an important component of bridging land-use practices with the fate and transport of nitrate through the subsurface. Patience, tenacity, and the ability to think outside the box are essential for dealing with widespread nitrate contamination of groundwater.
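As a rough illustration of what a per-well trend test could look like once the quarterly data accumulate, the sketch below regresses nitrate concentration on time for a single hypothetical well. The sampling dates and concentrations are invented, and the program's actual statistical design (for example, seasonal adjustment or a seasonal Kendall test) may well differ:

import numpy as np
from scipy import stats

years = np.array([0.0, 0.25, 0.5, 0.75, 1.0, 1.25, 1.5, 1.75])  # quarterly samples over 2 years
nitrate_mg_l = np.array([11.2, 9.8, 10.5, 12.1, 12.8, 11.9, 13.0, 13.6])

res = stats.linregress(years, nitrate_mg_l)
print(f"slope = {res.slope:.2f} mg/L per year, p = {res.pvalue:.3f}")
if res.pvalue < 0.05 and res.slope > 0:
    print("Statistically significant upward trend at this well.")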

Keywords: community, groundwater, monitoring, nitrate

Procedia PDF Downloads 177
1032 Serum Concentration of the CCL7 Chemokine in Diabetic Pregnant Women during Pregnancy until the Postpartum Period

Authors: Fernanda Piculo, Giovana Vesentini, Gabriela Marini, Debora Cristina Damasceno, Angelica Mercia Pascon Barbosa, Marilza Vieira Cunha Rudge

Abstract:

Introduction: Women with previous gestational diabetes mellitus (GDM) were significantly more likely to have urinary incontinence (UI) and pelvic floor muscle dysfunction than non-diabetic women two years after a cesarean section. Additional results demonstrated that induced diabetes causes detrimental effects on pregnant rat urethral muscle. These results indicate the need to explore the mechanistic role of a recovery factor in female UI. Chemokine ligand 7 (CCL7) was significantly overexpressed in rat serum, urethral, and vaginal tissues immediately following induction of stress UI in a rat model simulating birth trauma. CCL7 overexpression has shown potency for stimulating targeted stem cell migration and provides a translational link (clinical measurement) that opens opportunities for treatment. The aim of this study was to investigate the CCL7 level profile in diabetic pregnant women with urinary incontinence during pregnancy and over the first year postpartum. Methods: This study was conducted in the Perinatal Diabetes Research Center of the Botucatu Medical School/UNESP and was approved by the Research Ethics Committee of the institution (CAAE: 20639813.0.0000.5411). The diagnosis of GDM was established between the 24th and 28th gestational weeks by the 75 g OGTT according to the ADA's criteria. Urinary incontinence was defined according to the International Continence Society, and CCL7 levels were measured by ELISA (R&D Systems, Catalog Number DCC700). Two hundred twelve women were classified into four study groups: normoglycemic continent (NC), normoglycemic incontinent (NI), diabetic continent (DC), and diabetic incontinent (DI). They were evaluated at six time points: 12-18, 24-28, and 34-38 gestational weeks, and 24-48 hours, 6 weeks, and 6-12 months postpartum. Results: At 12-18 weeks, only two groups, continent and incontinent, could be considered, because GDM had not yet been diagnosed at this early gestational period. The group with GDM and UI (DI group) showed lower levels of CCL7 at all time points during pregnancy and postpartum compared to the normoglycemic groups (NC and NI), indicating that these women had not recovered from childbirth-induced UI during the 6-12 months postpartum compared to their controls, and that the progression of UI and/or lack of recovery throughout the first postpartum year may be related to lower levels of CCL7. In contrast, serum CCL7 was significantly increased in the NC group. Taken together, the overexpression of CCL7 in the NC group and the decreased levels in the DI group suggest that diabetes delays recovery from childbirth-induced UI and that CCL7 could potentially be used as a serum marker of injury. Conclusion: This study demonstrates lower levels of CCL7 in the DI group during pregnancy and postpartum and suggests that the progression of UI in diabetic women and/or lack of recovery throughout the first postpartum year may be related to low levels of CCL7. This provides translational potential, as CCL7 measurement could be used as a surrogate for injury after delivery. Successful controlled CCL7-mediated stem cell homing to the lower urinary tract could one day introduce the potential for non-operative treatment or prevention of stress urinary incontinence.

Keywords: CCL7, gestational diabetes, pregnancy, urinary incontinence

Procedia PDF Downloads 337
1031 Assessment of Incidence and Predictors of Mortality Among HIV-Positive Children on ART in Public Hospitals of Harer Town Who Were Enrolled From 2011 to 2021

Authors: Getahun Nigusie Demise

Abstract:

Background: Antiretroviral treatment reduces HIV-related morbidity and prolongs the survival of patients; however, there is a lack of up-to-date information concerning the treatment's long-term effect on the survival of HIV-positive children, especially in the study area. Objective: The aim of this study is to assess the incidence and predictors of mortality among HIV-positive children on antiretroviral therapy (ART) in public hospitals of Harer town who were enrolled from 2011 to 2021. Methodology: An institution-based retrospective cohort study was conducted among 429 HIV-positive children enrolled in ART clinics from January 1, 2011 to December 30, 2021. Data were collected from medical cards using a data extraction form. Descriptive analyses were used to summarize the results, and a life table was used to estimate the survival probability at specific points in time after the introduction of ART. Kaplan-Meier survival curves together with the log-rank test were used to compare survival between different categories of covariates, and a multivariable Cox proportional hazards regression model was used to estimate adjusted hazard ratios. Variables with p-values ≤ 0.25 in bivariable analysis were candidates for the multivariable analysis; finally, variables with p-values < 0.05 were considered significant. Results: The study participants were followed for a total of 2,549.6 child-years (30,596 child-months), with an overall mortality rate of 1.5 (95% CI: 1.1, 2.04) per 100 child-years. Their median survival time was 112 months (95% CI: 101-117). There were 38 children with unknown outcomes, 39 deaths, and 55 children transferred out to other facilities. The overall survival at 6, 12, 24, and 48 months was 98%, 96%, 95%, and 94%, respectively. Being in WHO clinical stage four (AHR=4.55, 95% CI: 1.36, 15.24), having anemia (AHR=2.56, 95% CI: 1.11, 5.93), low baseline absolute CD4 count (AHR=2.95, 95% CI: 1.22, 7.12), stunting (AHR=4.1, 95% CI: 1.11, 15.42), wasting (AHR=4.93, 95% CI: 1.31, 18.76), poor adherence to treatment (AHR=3.37, 95% CI: 1.25, 9.11), TB infection at enrollment (AHR=3.26, 95% CI: 1.25, 8.49), and no history of regimen change (AHR=7.1, 95% CI: 2.74, 18.24) were independent predictors of death. Conclusion: More than half of the deaths occurred within 2 years. Prevalent tuberculosis, anemia, wasting and stunting nutritional status, socioeconomic factors, and baseline opportunistic infection were independent predictors of death. Increased early screening and management of these predictors are required.
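The survival workflow described above (life table or Kaplan-Meier estimation plus a multivariable Cox proportional hazards model) can be sketched with the third-party lifelines library. The toy records and column names below are fabricated stand-ins for the study's chart-extraction data, not its actual variables:

import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.DataFrame({
    "months":     [112, 24, 60, 6, 96, 48, 12, 84, 36, 72],  # follow-up time
    "died":       [0, 1, 0, 1, 0, 1, 0, 0, 1, 0],            # 1 = death, 0 = censored
    "anemia":     [0, 1, 1, 1, 0, 0, 0, 1, 1, 0],
    "who_stage4": [0, 1, 0, 1, 0, 1, 1, 0, 0, 0],
})

# Kaplan-Meier estimate of the survival function.
kmf = KaplanMeierFitter()
kmf.fit(df["months"], event_observed=df["died"])
print(kmf.median_survival_time_)

# Cox model; exponentiated coefficients are the adjusted hazard ratios (AHR).
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="died")
cph.print_summary()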

Keywords: human immunodeficiency virus-positive children, anti-retroviral therapy, survival, treatment, Ethiopia

Procedia PDF Downloads 49
1030 CO₂ Capture by Membrane Applied to Steel Production Process

Authors: Alexandra-Veronica Luca, Letitia Petrescu

Abstract:

Steel production is a major contributor to global warming potential. An average of 1.83 tons of CO₂ is emitted for every ton of steel produced, resulting in over 3.3 Mt of CO₂ emissions each year. The present paper investigates and compares two O₂ separation methods and two CO₂ capture technologies applicable to the iron and steel industry. The O₂ used in steel production comes from an Air Separation Unit (ASU) using distillation or from air separation using membranes. The CO₂ capture technologies are a two-stage membrane separation process and gas-liquid absorption using methyl diethanolamine (MDEA). Process modelling and simulation tools, as well as environmental tools, are used in the present study. The production capacity of the steel mill is 4,000,000 tonnes/year. In order to compare the two CO₂ capture technologies in terms of efficiency, performance, and sustainability, the following cases were investigated: Case 1: steel production using O₂ from the ASU and no CO₂ capture; Case 2: steel production using O₂ from the ASU and gas-liquid absorption for CO₂ capture; Case 3: steel production using O₂ from the ASU and membranes for CO₂ capture; Case 4: steel production using O₂ from membrane-based air separation and gas-liquid absorption for CO₂ capture; and Case 5: steel production using membranes for both air separation and CO₂ capture. The O₂ separation rate obtained with the distillation technology was about 96%, versus about 33% with the membrane technology. Similarly, the O₂ purity resulting from the conventional process (i.e., distillation) is higher than that obtained in the membrane unit (99.50% vs. 73.66%). The air flow rate required for membrane separation is about three times higher than that for cryogenic distillation (549,096.93 kg/h vs. 189,743.82 kg/h). A CO₂ capture rate of 93.97% was obtained in the membrane case, while the CO₂ capture rate for gas-liquid absorption was 89.97%. In the membrane process, 6,626.49 kg/h of CO₂ with a purity of 95.45% is separated from the total 23,352.83 kg/h of flue gas, while with absorption, 6,173.94 kg/h of CO₂ with a purity of 98.79% is obtained from 21,902.04 kg/h of flue gas and 156,041.80 kg/h of MDEA is recycled. The simulation results, obtained using the ChemCAD process simulator, lead to the conclusion that membrane-based technology can be a suitable alternative for CO₂ removal in steel production. An environmental evaluation using the Life Cycle Assessment (LCA) methodology was also performed. Considering the electricity consumption and the performance and environmental indicators, Case 3 can be considered the most effective. The environmental evaluation, performed using GaBi software, shows that membrane technology can lead to lower environmental emissions if membrane production is based on benzene derived from toluene hydrodealkylation and chlorine and sodium hydroxide are produced using mixed technologies.
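A small back-of-the-envelope sketch ties the quoted membrane-case stream figures together: taking the reported product CO₂ flow, purity, and capture rate as given, the total permeate flow and the implied CO₂ content of the flue gas follow directly. The computed mass fraction is an inference from the reported numbers, not a value stated in the abstract:

co2_product_kg_h = 6626.49   # CO₂ in the membrane product stream (reported)
purity = 0.9545              # CO₂ purity of the product stream (reported)
flue_gas_kg_h = 23352.83     # flue gas fed to the membrane unit (reported)
capture_rate = 0.9397        # reported CO₂ capture rate

permeate_total_kg_h = co2_product_kg_h / purity      # total product stream flow
co2_in_flue_kg_h = co2_product_kg_h / capture_rate   # implied CO₂ entering with the flue gas
co2_mass_fraction = co2_in_flue_kg_h / flue_gas_kg_h

print(f"product stream: {permeate_total_kg_h:.1f} kg/h")
print(f"implied flue-gas CO₂: {co2_in_flue_kg_h:.1f} kg/h ({co2_mass_fraction:.1%} by mass)")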

Keywords: CO₂ capture, gas-liquid absorption, Life Cycle Assessment, membrane separation, steel production

Procedia PDF Downloads 291
1029 Machine Translation Analysis of Chinese Dish Names

Authors: Xinyu Zhang, Olga Torres-Hostench

Abstract:

This article presents a comparative study evaluating the quality of machine translation (MT) output for Chinese gastronomy nomenclature. Chinese gastronomic culture is experiencing increasing international recognition. The nomenclature of Chinese gastronomy not only reflects a specific aspect of culture but is also related to other areas of society such as philosophy, traditional medicine, etc. Chinese dish names are composed of several types of cultural references, such as ingredients, colors, flavors, culinary techniques, cooking utensils, toponyms, anthroponyms, metaphors, and historical tales, among others. These cultural references are among the biggest difficulties in translation, usually requiring the use of translation techniques. Given the lack of Chinese food-related translation studies, especially in Chinese-Spanish translation, and the current massive use of MT, the quality of MT output for Chinese dish names is questioned. Fifty Chinese dish names with different types of cultural components were selected for this study. First, all of these dish names were translated by three different MT tools (Google Translate, Baidu Translate, and Bing Translator). Second, a questionnaire was designed and completed by 12 Chinese online users (Chinese graduates of a Hispanic Philology major) in order to find out user preferences regarding the collected MT output. Finally, human translation techniques were observed and analyzed to identify which techniques appear more often in the preferred MT proposals. The results reveal that the MT output for Chinese gastronomy nomenclature is not of high quality; it would be advisable not to trust MT in settings such as restaurant menus, TV culinary shows, etc. However, the MT output could be used as an aid for tourists to get a general idea of a dish (the main ingredients, for example). Literal translation turned out to be the most observed technique, followed by borrowing, generalization, and adaptation, while amplification, particularization, and transposition were infrequently observed, possibly because current MT engines are limited to relating equivalent terms and offering literal translations without taking into account the whole contextual meaning of the dish name, which is essential to the application of those less observed techniques. This could give insight into the post-editing of Chinese dish name translations. By observing and analyzing the translation techniques in the machine translators' proposals, post-editors could better decide which techniques to apply in each case so as to correct mistakes and improve the quality of the translation.
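As a toy illustration of the technique-frequency analysis described above, the sketch below tallies technique annotations over a set of preferred MT proposals. The dish names and technique labels are invented placeholders, not items from the study's corpus:

from collections import Counter

# (dish name, technique observed in the preferred MT proposal)
annotations = [
    ("宫保鸡丁", "borrowing"),
    ("红烧肉", "literal"),
    ("蚂蚁上树", "adaptation"),
    ("鱼香茄子", "literal"),
    ("佛跳墙", "borrowing"),
    ("麻婆豆腐", "generalization"),
]

freq = Counter(technique for _, technique in annotations)
for technique, n in freq.most_common():
    print(f"{technique}: {n}")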

Keywords: Chinese dish names, cultural references, machine translation, translation techniques

Procedia PDF Downloads 137
1028 Causes, Consequences, and Alternative Strategies of Illegal Migration in Ethiopia: The Case of Tigray Region

Authors: Muuz Abraha Meshesha

Abstract:

Illegal migration, and specifically trafficking in persons, is one of the primary issues of the day, affecting all states of the world, with variation in the extent of the root causes that lead people to migrate irregularly and in the costs it imposes on humanity. This paper investigates the root causes and consequences of illegal migration in Ethiopia's Tigray regional state and proposes an alternative intervention strategy. To produce pertinent and robust findings, this study employed a mixed research approach involving qualitative and quantitative data, with purposive and snowball sampling techniques. The study revealed that, although poverty is the most commonly cited push factor for illegal migration, the psycho-social orientation of the local community and its attitudinal immersion in illegal migration, both in thinking and in action, is the most pressing problem and urges serious intervention. Trafficking in persons, and illegal migration in general, is becoming the norm of the day in the study area, which overtly reveals that illegal migration is, in practice, an issue beyond the demand for a secure livelihood. Parties engaged in illegal migration and complicit with human traffickers in the study area were found to be driven by more than the urgency of food security and the need to escape livelihood impoverishment. This study therefore offers a new insight: illegal migration is regarded by local community members as an alternative, if illegal, way of doing business, while the community and the officials authorized to regulate it are either part of the channel or, at the least, tolerant of this grave global danger. The study also found that the effects of illegal migration are manifested significantly more in the long run than in the short term. Attitude-based interventions and a youth-oriented, enforceable legal and policy accountability framework are therefore required for international, national, and local stakeholders to confront and control illegal migration. Also needed are economy-based development interventions that can engage and reorient the youth, the primary victims of trafficking, and the expansion of large-scale projects that can employ large numbers of youths at a time.

Keywords: human trafficking, illegal migration, migration, Tigray region

Procedia PDF Downloads 66
1027 Towards Sustainable Concrete: Maturity Method to Evaluate the Effect of Curing Conditions on the Strength Development in Concrete Structures under Kuwait Environmental Conditions

Authors: F. Al-Fahad, J. Chakkamalayath, A. Al-Aibani

Abstract:

Conventional determination of concrete strength under controlled laboratory conditions does not accurately represent the actual strength of concrete developed under site curing conditions. This difference in strength is greater in the extreme environment of Kuwait, which is characterized by a hot marine climate with summer temperatures normally exceeding 50°C, accompanied by dry wind in desert areas and salt-laden wind in marine and onshore areas. Test methods that measure the in-place properties of concrete are therefore required for quality assurance and for the development of durable concrete structures. The maturity method, which defines the strength of a given concrete mix as a function of its age and temperature history, is one approach to quality control for the production of sustainable and durable concrete structures. The uniquely harsh environmental conditions in Kuwait make it impractical to adopt the experience and empirical equations developed from maturity methods in other countries. Concrete curing, especially at early ages, plays an important role in developing and improving the strength of the structure. This paper investigates the use of the maturity method to assess the effectiveness of three different curing methods on the compressive and flexural strength development of one high-strength concrete mix of 60 MPa produced with silica fume. The maturity approach was used to accurately predict concrete compressive and flexural strength at later ages under different curing conditions. Maturity curves were developed for compressive and flexural strength for a commonly used concrete mix in Kuwait, cured under three different conditions: water curing, external spray coating, and the use of an internal curing compound during concrete mixing. It was observed that the maturity curve developed for the same mix depends on the curing conditions and can be used to predict concrete strength under different exposure and curing conditions. The study showed that the external spray curing method cannot be recommended, as it failed to help the concrete reach accepted strength values, especially for flexural strength, while the internal curing compound led to acceptable strength levels compared with water curing. Utilization of the developed maturity curves will help contractors and engineers determine the in-place concrete strength at any time and under different curing conditions. This will help in deciding the appropriate time to remove formwork, and the resulting reduction in construction time and cost has positive impacts on sustainable construction.
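The classical Nurse-Saul calculation that underlies the maturity method (standardized in ASTM C1074) can be sketched briefly: maturity accumulates as the product of time and temperature above a datum, and strength is then read from a maturity-strength curve fitted per mix and curing method. The temperature history, datum temperature, and curve coefficients below are illustrative assumptions, not the study's calibration:

import numpy as np

T0 = -10.0                                     # datum temperature (°C), a common default
hours = np.arange(0, 72, 1.0)                  # hourly records over 3 days
temps = 35 + 10 * np.sin(hours * np.pi / 12)   # assumed hot-climate temperature history (°C)

# Nurse-Saul maturity index: M(t) = sum of (T - T0) * dt, in °C·hours.
maturity = np.cumsum(np.maximum(temps - T0, 0.0) * 1.0)

# Strength-maturity relationship, typically fitted per mix and curing condition:
# S = a + b * ln(M). The coefficients here are placeholders, not the study's fit.
a, b = -40.0, 12.0
strength = a + b * np.log(maturity[-1])
print(f"maturity after 72 h: {maturity[-1]:.0f} °C·h, predicted strength about {strength:.1f} MPa")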

Keywords: curing, durability, maturity, strength

Procedia PDF Downloads 304
1026 Inverse Problem Method for Microwave Intrabody Medical Imaging

Authors: J. Chamorro-Servent, S. Tassani, M. A. Gonzalez-Ballester, L. J. Roca, J. Romeu, O. Camara

Abstract:

Electromagnetic and microwave imaging (MWI) have been used in medical imaging in recent years, the most common applications being breast cancer and stroke detection or monitoring. In those applications, the subject or zone to observe is surrounded by a number of antennas, and the Nyquist criterion can be satisfied. Additionally, the space between the antennas (transmitting and receiving the electromagnetic fields) and the zone to study can be treated as a homogeneous scenario. However, this may differ in other cases, such as intracardiac catheters, stomach monitoring devices, pelvic organ systems, liver ablation monitoring devices, or uterine fibroid ablation systems. In this work, we analyzed different MWI algorithms to find the most suitable method for an intrabody scenario. Due to the space limitations usually confronted in those applications, the device would have a cylindrical configuration of at most eight transmitting and eight receiving antennas. This, together with the positioning of the device inside a body tract, imposes additional constraints on the choice of a reconstruction method; for instance, it prevents the use of well-known algorithms such as filtered backpropagation for diffraction tomography (due to the unusual configuration with probes enclosed by the imaging region). Furthermore, the difficulty of simulating a realistic non-homogeneous background inside the body (due to incomplete knowledge of the dielectric properties of the tissues between the antennas' position and the zone to observe) also prevents the use of Born and Rytov algorithms, given their limitations with a heterogeneous background. Instead, we decided to use a time-reversal algorithm (mostly used in geophysics) because it ignores heterogeneities in the background medium and focuses its generated field onto the scatterers. Therefore, a 2D time-reversed finite-difference time-domain (FDTD) method was developed, based on the time-reversal approach for microwave breast cancer detection. Simultaneously, an in-silico testbed was developed to compare ground-truth dielectric properties with the corresponding microwave imaging reconstruction. Forward and inverse problems were computed varying: the frequency used, related to a small zone to observe (7, 7.5, and 8 GHz); a small polyp diameter (5, 7, and 10 mm); two polyp positions with respect to the closest antenna (aligned or misaligned); and the (transmitter-to-receiver) antenna combination used for the reconstruction (1-1, 8-1, 8-8, or 8-3). Results indicate that when using the existing time-reversal method for breast cancer with the different combinations of transmitters and receivers, we found false positives due to the high degrees of freedom and the unusual configuration (and the possible violation of the Nyquist criterion). The false positives found in the 8-1 and 8-8 combinations were greatly reduced with the 1-1 and 8-3 combinations, the 8-3 configuration (three neighboring receivers at a time) being the most suitable. The 8-3 configuration creates a reduced region-of-interest problem, decreasing the ill-posedness of the inverse problem. To conclude, the proposed algorithm solves the main limitations of the described intrabody application, successfully detecting the angular position of targets inside the body tract.
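The refocusing property that motivated the time-reversal choice can be illustrated with a deliberately simplified 1D scalar-wave sketch (a conceptual analogue, not the authors' 2D electromagnetic implementation): record the field radiated by a source at two "antennas", then re-emit the recordings reversed in time; the re-emitted wavefronts coincide only at the original source location. All grid and pulse parameters are arbitrary assumptions:

import numpy as np

nx, nt = 400, 400
src, rec_l, rec_r = 200, 80, 330          # true source and the two recording antennas

def propagate(sources, record_at=()):
    """Leapfrog FDTD for the 1D wave equation at Courant number 1 (exact),
    with transparent ends; sources are additive ("soft") injections."""
    u_prev, u = np.zeros(nx), np.zeros(nx)
    traces = {i: np.zeros(nt) for i in record_at}
    snaps = np.zeros((nt, nx))
    for it in range(nt):
        u_next = np.roll(u, 1) + np.roll(u, -1) - u_prev
        u_next[0], u_next[-1] = u[1], u[-2]   # exact 1D absorbing boundaries
        for i, s in sources.items():
            u_next[i] += s[it]
        for i in record_at:
            traces[i][it] = u_next[i]
        snaps[it] = u_next
        u_prev, u = u, u_next
    return traces, snaps

# Forward run: a soft source integrates its input in 1D, so injecting the
# derivative of a Gaussian radiates a clean Gaussian pulse.
t = np.arange(nt, dtype=float)
gauss = np.exp(-(((t - 60.0) / 8.0) ** 2))
traces, _ = propagate({src: np.gradient(gauss)}, record_at=(rec_l, rec_r))

# Time-reversal run: re-emit the reversed recordings (differentiated again)
# from both antennas; the fronts arrive simultaneously only at the source.
rev_sources = {i: np.gradient(tr[::-1]) for i, tr in traces.items()}
_, snaps = propagate(rev_sources)

focus = np.abs(snaps).max(axis=0)
focus[rec_l - 5:rec_l + 6] = 0.0          # mask the re-emission points themselves
focus[rec_r - 5:rec_r + 6] = 0.0
print(f"refocus at x = {int(np.argmax(focus))}, true source at x = {src}")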

Keywords: FDTD, time-reversed, medical imaging, microwave imaging

Procedia PDF Downloads 127