Search results for: mechanical efficiency
534 A Comparison of Methods for Estimating Dichotomous Treatment Effects: A Simulation Study
Authors: Jacqueline Y. Thompson, Sam Watson, Lee Middleton, Karla Hemming
Abstract:
Introduction: The odds ratio (estimated via logistic regression) is a well-established and common approach for estimating covariate-adjusted binary treatment effects when comparing a treatment and a control group with dichotomous outcomes. Its popularity is primarily due to its stability and robustness to model misspecification. However, the situation is different for the relative risk and the risk difference, which are arguably easier to interpret and better suited to specific designs such as non-inferiority studies. So far, there is no equivalent, widely accepted approach for estimating an adjusted relative risk or risk difference in clinical trials, partly because the available candidate methods have not been comprehensively evaluated. Methods/Approach: A simulation study is designed to evaluate the performance of relevant candidate methods for estimating relative risks, covering both conditional and marginal estimation approaches. We consider the log-binomial generalised linear model (GLM) with iteratively weighted least squares (IWLS) and model-based standard errors (SEs); the log-binomial GLM with convex optimisation and model-based SEs; the log-binomial GLM with convex optimisation and permutation tests; the modified-Poisson GLM with IWLS and robust SEs; log-binomial generalised estimating equations (GEE) with robust SEs; marginal standardisation with delta-method SEs; and marginal standardisation with permutation-test SEs. Independent and identically distributed datasets are simulated from a randomised controlled trial design to evaluate these candidate methods. Simulations are replicated 10000 times for each scenario across all combinations of sample sizes (200, 1000, and 5000), outcome event rates (10%, 50%, and 80%), and covariate effects (ranging from -0.05 to 0.7) representing weak, moderate, or strong relationships.
Treatment effects (0, -0.5, and 1 on the log scale) cover the null (H0) and alternative (H1) hypotheses to allow evaluation of coverage and power in realistic scenarios. Performance measures (bias, mean square error (MSE), relative efficiency, and convergence rates) are evaluated across scenarios covering a range of sample sizes, event rates, covariate prognostic strengths, and model misspecifications. Potential Results, Relevance & Impact: There are several methods for estimating unadjusted and adjusted relative risks. However, it is unclear which method is the most efficient, preserves the type-I error rate, is robust to model misspecification, or is the most powerful when adjusting for non-prognostic and prognostic covariates. GEE estimates may be biased when the outcome distributions do not arise from marginal binary data. Also, it appears that marginal standardisation and convex optimisation may perform better than the GLM IWLS log-binomial approach.
Keywords: binary outcomes, statistical methods, clinical trials, simulation study
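Of the candidate methods the abstract lists, marginal standardisation is perhaps the easiest to sketch: a logistic model is fitted (here by IWLS, echoing the abstract's estimation machinery), and the adjusted relative risk is the ratio of average predicted risks with treatment set to 1 and to 0 for every subject. The Python sketch below is a minimal, self-contained illustration; all parameter values are invented for the example and are not the study's simulation settings.

```python
import numpy as np

def fit_logistic_irls(X, y, n_iter=25):
    """Fit logistic regression by iteratively (re)weighted least squares."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        w = p * (1.0 - p)
        # Newton step: beta += (X' W X)^-1 X' (y - p)
        beta = beta + np.linalg.solve((X * w[:, None]).T @ X, X.T @ (y - p))
    return beta

def marginal_rr(treat, covar, y):
    """Covariate-adjusted relative risk via marginal standardisation."""
    X = np.column_stack([np.ones_like(y, dtype=float), treat, covar])
    beta = fit_logistic_irls(X, y)
    X1, X0 = X.copy(), X.copy()
    X1[:, 1], X0[:, 1] = 1.0, 0.0          # everyone treated / untreated
    p1 = 1.0 / (1.0 + np.exp(-X1 @ beta))  # predicted risk under treatment
    p0 = 1.0 / (1.0 + np.exp(-X0 @ beta))  # predicted risk under control
    return p1.mean() / p0.mean()

# Simulate one RCT-like dataset (hypothetical parameters, not the study's)
rng = np.random.default_rng(42)
n = 5000
treat = rng.integers(0, 2, n).astype(float)
covar = rng.normal(size=n)
logit = -1.0 + 0.5 * treat + 0.7 * covar
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)
print(round(marginal_rr(treat, covar, y), 3))
```

Standard errors (delta method or permutation, as in the abstract) would be layered on top of this point estimate.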
Procedia PDF Downloads 112
533 A Comprehensive Key Performance Indicators Dashboard for Emergency Medical Services
Authors: Giada Feletti, Daniela Tedesco, Paolo Trucco
Abstract:
The present study aims to develop a dashboard of Key Performance Indicators (KPIs) to enhance the information and predictive capabilities of Emergency Medical Services (EMS) systems, supporting both operational and strategic decisions of different actors. The research methodology begins with a review of the technical-scientific literature on the indicators currently used to measure the performance of EMS systems. This literature analysis showed that current studies focus on two distinct perspectives: the ambulance service, a fundamental component of pre-hospital health treatment, and patient care in the Emergency Department (ED). The perspective proposed by this study is an integrated view of the ambulance service process and the ED process, both essential to ensuring high quality of care and patient safety. The proposal thus covers the entire healthcare service process and, as such, captures the interconnection between the two EMS processes, the pre-hospital and the hospital one, connected by the assignment of the patient to a specific ED. In this way, it is possible to optimize the entire patient management. Attention is therefore paid to dependencies between decisions that current EMS management models tend to neglect or underestimate. In particular, integrating the two processes makes it possible to evaluate the advantage of selecting an ED with visibility of EDs’ saturation status, thereby considering the distance, the available resources, and the expected waiting times. Starting from a critical review of the KPIs proposed in the extant literature, the dashboard was designed: the large number of analyzed KPIs was reduced by first eliminating those not in line with the aim of the study and then those supporting a similar functionality.
The KPIs finally selected were tested on a realistic dataset, which led us to exclude additional indicators because the data required for their computation were unavailable. The final dashboard, discussed and validated by experts in the field, includes a variety of KPIs able to support operational and planning decisions, early warning, and citizens’ real-time awareness of ED accessibility. By associating each KPI with the EMS phase it refers to, it was also possible to design a well-balanced dashboard covering both the efficiency and the effectiveness of the entire EMS process. However, only the initial phases, related to the interconnection between the ambulance service and patient care, are covered by traditional KPIs, compared to the subsequent phases taking place in the hospital ED; this could be addressed in a future development of the dashboard. Moreover, the research could proceed by building a multi-layer dashboard, with a first level holding a minimal set of KPIs measuring the basic performance of the EMS system at an aggregate level and further levels holding KPIs that bring additional, more detailed information.
Keywords: dashboard, decision support, emergency medical services, key performance indicators
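As an illustration of the kind of indicator such a dashboard might compute for the ambulance phase, the short Python sketch below reports the median and 90th-percentile response time. Both the indicator choice and the call data are hypothetical stand-ins, not the paper's validated KPI set.

```python
from statistics import median, quantiles

def response_time_kpis(times_min):
    """Two KPIs commonly reported for ambulance response times:
    the median and the 90th percentile (illustrative only)."""
    p90 = quantiles(times_min, n=10)[-1]  # last decile cut = 90th percentile
    return {"median_min": median(times_min), "p90_min": p90}

# Invented response times (minutes) for ten calls
calls = [6.5, 7.2, 8.0, 8.4, 9.1, 10.3, 11.0, 12.5, 14.2, 18.9]
print(response_time_kpis(calls))
```

A multi-layer dashboard, as suggested above, would surface such aggregates at the first level and drill down into per-district or per-ED breakdowns below.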
Procedia PDF Downloads 111
532 A Methodology Based on Image Processing and Deep Learning for Automatic Characterization of Graphene Oxide
Authors: Rafael do Amaral Teodoro, Leandro Augusto da Silva
Abstract:
Originating from graphite, graphene is a two-dimensional (2D) material that promises to revolutionize technology in many different areas, such as energy, telecommunications, civil construction, aviation, textile, and medicine. This is possible because its structure, formed by carbon bonds, provides desirable optical, thermal, and mechanical characteristics that are of interest to multiple areas of the market. Thus, several research and development centers are studying different manufacturing methods and material applications of graphene, which are often hampered by the scarcity of agile and accurate methodologies to characterize the material, that is, to determine its composition, shape, size, and the number of layers and crystals. To address this need, this study proposes a computational methodology that applies deep learning to identify graphene oxide crystals in order to characterize samples by crystal size. To achieve this, a fully convolutional neural network called U-net has been trained to segment SEM images of graphene oxide. The segmentation generated by the U-net is fine-tuned with a per-class standard-deviation technique, which allows crystals to be distinguished with different labels through an object delimitation algorithm. As a next step, the position, area, perimeter, and lateral measures of each detected crystal are extracted from the images. This information generates a database with the dimensions of the crystals that compose the samples. Finally, graphs are automatically created showing the frequency distributions of crystal area and perimeter. This methodological process resulted in a high capacity for segmenting graphene oxide crystals, with accuracy and F-score of 95% and 94%, respectively, over the test set.
Such performance demonstrates the method's strong generalization in crystal segmentation, since it holds up under significant changes in image acquisition quality. The measurement of non-overlapping crystals presented an average error of 6% across the different measurement metrics, suggesting that the model provides high-accuracy measurement for non-overlapping segmentations. For overlapping crystals, however, a limitation of the model was identified. To overcome this limitation, it is important to ensure that the samples to be analyzed are properly prepared. This will minimize crystal overlap during SEM image acquisition and guarantee a lower measurement error without greater data-handling effort. All in all, the method developed is a significant time saver, considering that it is capable of measuring hundreds of graphene oxide crystals in seconds, saving weeks of manual work.
Keywords: characterization, graphene oxide, nanomaterials, U-net, deep learning
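The measurement step described above, extracting area, perimeter, and lateral measures per detected crystal, can be sketched with plain NumPy once a labelled mask is available. The toy mask below stands in for the U-net plus object-delimitation output, and the boundary-pixel perimeter is one simple convention among several; neither is the authors' exact implementation.

```python
import numpy as np

def crystal_stats(labels):
    """Per-crystal area, perimeter (boundary-pixel count) and bounding-box
    lateral measures from a labelled segmentation mask (0 = background)."""
    padded = np.pad(labels, 1)  # zero border so edge crystals have neighbours
    stats = {}
    for lab in np.unique(labels):
        if lab == 0:
            continue
        mask = padded == lab
        # interior pixels have all four 4-neighbours inside the crystal
        interior = (mask & np.roll(mask, 1, 0) & np.roll(mask, -1, 0)
                         & np.roll(mask, 1, 1) & np.roll(mask, -1, 1))
        rows, cols = np.nonzero(mask)
        stats[int(lab)] = {
            "area": int(mask.sum()),
            "perimeter": int(mask.sum() - interior.sum()),
            "height": int(rows.max() - rows.min() + 1),
            "width": int(cols.max() - cols.min() + 1),
        }
    return stats

# Toy labelled mask standing in for the U-net + object-delimitation output
mask = np.zeros((8, 8), dtype=int)
mask[1:4, 1:4] = 1   # a 3x3 crystal
mask[5:7, 2:7] = 2   # a 2x5 crystal
print(crystal_stats(mask))
```

Histograms of the collected areas and perimeters then give the frequency distributions the abstract mentions.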
Procedia PDF Downloads 158
531 Hydroxyapatite Based Porous Scaffold for Tooth Tissue Engineering
Authors: Pakize Neslihan Taslı, Alev Cumbul, Gul Merve Yalcın, Fikrettin Sahin
Abstract:
A key experimental challenge in the regeneration of large oral and craniofacial defects is the neogenesis of osseous and ligamentous interfacial structures. Currently, oral regenerative medicine strategies are unpredictable for the repair of tooth-supporting tissues destroyed as a consequence of trauma, chronic infection, or surgical resection. A different approach, combining the gel-casting method with a hydroxyapatite (HA)-based scaffold and different cell lineages as a hybrid system, makes it possible to mimic the early stage of tooth development in vitro. HA is widely accepted as a bioactive material for guided bone and tooth regeneration. This study reports the preparation of a porous HA scaffold, its characterization, and the evaluation of its structural and chemical properties. HA is the main mineral present in teeth, and it is in harmony with their structural, biological, and mechanical characteristics. Here, this study shows the design and construction of HA scaffolds mimicking an immature tooth at the late bell stage, for transplantation of human adipose stem cells (hASCs), human bone marrow stem cells (hBMSCs), and gingival epithelial cells to form human tooth dentin-pulp-enamel complexes in vitro. Scaffold characterization was performed by SEM, FTIR, and pore size and density measurements. The biological interaction of the dental tissues with each other was demonstrated by mRNA gene expression, histopathologic observations, and protein release profiles measured by the ELISA technique. The tooth-shaped constructs, with pore sizes ranging from 150 to 300 µm obtained by combining the right amounts of materials, provide an interconnected macro-porous structure. The newly formed tissue-like structures grow and integrate within the designed HA constructs, forming tooth-cementum-like tissue, pulp, and bone structures. These findings are important as they emphasize the potential biological effect of the hybrid scaffold system.
In conclusion, this in vitro study clearly demonstrates that 3D scaffolds designed in the shape of an immature tooth at the late bell stage were essential to form enamel-dentin-pulp interfaces with an appropriate combination of cells and biodegradable material. The biomimetic architecture achieved here provides a promising platform for dental tissue engineering.
Keywords: tooth regeneration, tissue engineering, adipose stem cells, hydroxyapatite tooth engineering, porous scaffold
Procedia PDF Downloads 231
530 How Can Food Retailing Benefit from Neuromarketing Research: The Influence of Traditional and Innovative Tools of In-Store Communication on Consumer Reactions
Authors: Jakub Berčík, Elena Horská, Ľudmila Nagyová
Abstract:
Nowadays, the point of sale remains one of the few channels of communication that is not yet oversaturated and has great potential for the future. The fact that purchasing decisions are significantly affected by emotions, while up to 75% of them are made at the point of sale, only underlines its importance. The share of impulsive purchases is about 60-75%, depending on the particular product category. Nevertheless, it is above all habits that predetermine the content of the shopping cart, and hence the role of in-store communication is to disrupt the routine and compel the customer to try something new. This is the reason why it is essential to know how to work with this relatively young branch of marketing communication as efficiently as possible. A new global trend in this discipline is evaluating the effectiveness of particular tools of in-store communication. To increase efficiency, it is necessary to become familiar with the factors affecting the customer both consciously and unconsciously, and that is a task for neuromarketing and sensory marketing. It is generally known that customers remember a negative experience much longer and more intensely than positive ones; therefore, it is essential for marketers to avoid such negative experiences. The final effect of POP (Point of Purchase) or POS (Point of Sale) tools depends not only on their quality and design, but also on their location at the point of sale, which contributes to the overall positive atmosphere in the store. Therefore, in-store advertising is increasingly in the center of attention, and companies are willing to spend even a third of their marketing communication budget on it. The paper presents a comprehensive, interdisciplinary study of the impact of traditional as well as innovative tools of in-store communication on the attention and emotional state (valence and arousal) of consumers in the food market.
The research integrates measurements with an eye camera (eye tracker) and an electroencephalograph (EEG) in real grocery stores as well as in laboratory conditions, with the purpose of recognizing attention and emotional responses among respondents under the influence of selected tools of in-store communication. The objects of the research include traditional (e.g. wobblers, stoppers, floor graphics) and innovative (e.g. displays, wobblers with LED elements, interactive floor graphics) tools of in-store communication in the fresh unpackaged food segment. Using a mobile 16-channel electroencephalograph (EPOC EEG equipment), a mobile eye camera from the company Tobii, and a stationary eye camera from the company Gazepoint, we observe attention and emotional state (valence and arousal) to reveal true consumer preferences for traditional and new, unusual communication tools at the point of sale of the selected foodstuffs. The paper concludes by suggesting possibilities for a rational, effective, and energy-efficient combination of in-store communication tools, by which the retailer can accomplish not only a captivating and attractive presentation of the displayed goods, but ultimately also an increase in the retail sales of the store.
Keywords: electroencephalograph (EEG), emotion, eye tracker, in-store communication
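By way of illustration, one widely used EEG proxy for emotional valence is frontal alpha asymmetry, the log-ratio of right to left frontal alpha-band power. The sketch below computes this generic index with invented power values; it is not necessarily the metric the authors derive from their 16-channel recordings.

```python
import math

def frontal_alpha_asymmetry(left_alpha_power, right_alpha_power):
    """Generic EEG valence proxy: ln(right) - ln(left) frontal alpha power
    (e.g. electrodes F4 vs F3). Positive values are conventionally read
    as more positive valence. Illustrative index, not the study's metric."""
    return math.log(right_alpha_power) - math.log(left_alpha_power)

# Invented alpha-band power values (uV^2) for two display conditions
print(round(frontal_alpha_asymmetry(3.2, 4.1), 3))  # leaning positive
print(round(frontal_alpha_asymmetry(5.0, 3.5), 3))  # leaning negative
```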
Procedia PDF Downloads 387
529 Investigating the Process Kinetics and Nitrogen Gas Production in Anammox Hybrid Reactor with Special Emphasis on the Role of Filter Media
Authors: Swati Tomar, Sunil Kumar Gupta
Abstract:
Anammox is a novel and promising technology that has changed the traditional concept of biological nitrogen removal. The process facilitates direct oxidation of ammoniacal nitrogen under anaerobic conditions with nitrite as the electron acceptor, without the addition of external carbon sources. The present study investigated the feasibility of an anammox hybrid reactor (AHR) combining the dual advantages of suspended and attached growth media for the biodegradation of ammoniacal nitrogen in wastewater. The experimental setup consisted of four AHRs of 5 L capacity each, inoculated with a mixed seed culture containing anoxic and activated sludge (1:1). The process was established by feeding the reactors with synthetic wastewater containing NH4-N and NO2-N in a 1:1 ratio at a hydraulic retention time (HRT) of 1 day. The reactors were gradually acclimated to higher ammonium concentrations until they attained pseudo-steady-state removal at a total nitrogen concentration of 1200 mg/l. During this period, the performance of the AHR was monitored at twelve different HRTs varying from 0.25 to 3.0 d, with nitrogen loading rates (NLR) increasing from 0.4 to 4.8 kg N/m3d. The AHR demonstrated significantly higher nitrogen removal (95.1%) at the optimal HRT of 1 day. Filter media in the AHR contributed an additional 27.2% ammonium removal, in addition to a 72% reduction in the sludge washout rate. This may be attributed to the functional mechanism of the filter media, which acts as a mechanical sieve and reduces the sludge washout rate manyfold. This enhances the biomass retention capacity of the reactor by 25%, which is the key parameter for the successful operation of high-rate bioreactors. The effluent nitrate concentration, one of the bottlenecks of the anammox process, was also minimised significantly (42.3-52.3 mg/L). Process kinetics was evaluated using first-order and Grau second-order models. The first-order substrate removal rate constant was found to be 13.0 d-1.
Model validation revealed that the Grau second-order model was more precise and predicted effluent nitrogen concentration with the least error (1.84±10%). A new mathematical model based on mass balance was developed to predict N2 gas production in the AHR. The mass balance model derived from total nitrogen showed significantly higher correlation (R2=0.986) and predicted N2 gas with the least error of precision (0.12±8.49%). SEM study of the biomass indicated the presence of a heterogeneous population of cocci and rod-shaped bacteria with average diameters varying from 1.2 to 1.5 µm. Owing to the enhanced nitrogen removal efficiency coupled with meagre production of effluent nitrate and its ability to retain high biomass, the AHR proved to be a highly competitive reactor configuration for dealing with nitrogen-laden wastewater.
Keywords: anammox, filter media, kinetics, nitrogen removal
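The Grau second-order model mentioned above is commonly fitted through its linearised form S_in·HRT/(S_in − S_out) = a + b·HRT. The Python sketch below fits it by least squares to invented (HRT, effluent nitrogen) pairs, not the study's measurements, and back-predicts the effluent concentration.

```python
import numpy as np

# Hypothetical (HRT, influent N, effluent N) observations -- illustrative
# stand-ins, not the study's data.
hrt = np.array([0.25, 0.5, 1.0, 2.0, 3.0])          # d
s_in = np.full_like(hrt, 1200.0)                     # mg/L total N
s_out = np.array([260.0, 120.0, 59.0, 30.0, 20.0])   # mg/L

# Grau second-order linearisation: S_in*HRT/(S_in - S_out) = a + b*HRT
x = hrt
y = s_in * hrt / (s_in - s_out)
b, a = np.polyfit(x, y, 1)   # slope b, intercept a

# Predicted effluent N from the fitted model
s_pred = s_in * (1.0 - x / (a + b * x))
print("a =", round(a, 3), "b =", round(b, 3))
print("predicted effluent N (mg/L):", np.round(s_pred, 1))
```

Comparing such predictions against observations (e.g. by R² or mean error, as the authors do) is what separates the Grau fit from the first-order alternative.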
Procedia PDF Downloads 380
528 Robotic Process Automation in Accounting and Finance Processes: An Impact Assessment of Benefits
Authors: Rafał Szmajser, Katarzyna Świetla, Mariusz Andrzejewski
Abstract:
Robotic process automation (RPA) is a technology for performing repeatable business processes using computer programs, robots, that simulate the work of a human being. This approach assumes replacing an existing employee with dedicated software (software robots) to support activities that are primarily repetitive and uncomplicated, characterized by a low number of exceptions. RPA application is widespread in modern business services, particularly in the areas of finance, accounting, and human resources management. By utilizing this technology, the effectiveness of operations increases while the workload is reduced, possible errors in the process are minimized, and, as a result, a measurable decrease in the cost of providing services is achieved. Regardless of how the use of modern information technology is assessed, there are also doubts as to whether we should replace human activities when implementing automation in business processes. After the initial awe at the new technological concept, a reflection arises: to what extent does the implementation of RPA increase the efficiency of operations, and is there a business case for implementing it? If the business case is beneficial, in which business processes is the greatest potential for RPA? A closer look at these issues was provided by this research, which verified the respondents’ view of the perceived advantages resulting from the use of robotization and automation in financial and accounting processes. From an online survey addressed to over 500 respondents from international companies, 162 complete answers were returned from the most important types of organizations in the modern business services industry, i.e. Business or IT Process Outsourcing (BPO/ITO), Shared Service Centers (SSC), Consulting/Advisory, and their customers. Answers were provided by representatives of the following positions in their organizations: Members of the Board, Directors, Managers, and Experts/Specialists.
The structure of the survey allowed the respondents to supplement it with additional comments and observations. The results formed the basis for a business case calculating the tangible benefits associated with the implementation of automation in selected financial processes. The results of the statistical analyses carried out with regard to revenue growth confirmed the hypothesis that there is a correlation between job position and the perception of the impact of RPA implementation on individual benefits. The second hypothesis (H2), that there is a relationship between the type of company in the business services industry and the perceived impact of RPA on individual benefits, was not confirmed. Based on the survey results, the authors performed a simulation of the business case for implementing RPA in selected finance and accounting processes. The calculated payback period varied widely, ranging from 2 months for the Accounts Payable process, with 75% savings, to the extreme case of the Taxes process, where implementation and maintenance costs exceeded the savings resulting from the use of the robot.
Keywords: automation, outsourcing, business process automation, process automation, robotic process automation, RPA, RPA business case, RPA benefits
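The payback logic behind such a business case can be condensed to a few lines: the monthly saving is the share of the current process cost removed by the robot minus its maintenance, and payback is the one-off implementation cost divided by that margin. The figures below are invented for illustration, although the first case is tuned to mirror the abstract's 2-month, 75%-savings example.

```python
import math

def payback_months(implementation_cost, monthly_cost_before, savings_rate,
                   monthly_maintenance):
    """Months until cumulative savings cover the one-off implementation cost;
    None when maintenance eats the savings (no payback)."""
    monthly_saving = monthly_cost_before * savings_rate - monthly_maintenance
    if monthly_saving <= 0:
        return None  # the Taxes-like case: running the robot costs more
    return math.ceil(implementation_cost / monthly_saving)

# Hypothetical Accounts Payable-like case: 75% savings, fast payback
print(payback_months(30000, 22000, 0.75, 1500))
# Hypothetical Taxes-like case: maintenance exceeds the savings
print(payback_months(30000, 2000, 0.10, 500))
```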
Procedia PDF Downloads 136
527 Structural Analysis of a Composite Wind Turbine Blade
Abstract:
The design of an optimised 5-meter-long horizontal-axis wind turbine rotor blade in accordance with the IEC 61400-2 standard is a research and development project aiming at high efficiency of torque production from the wind and at structural components optimised to be as light and strong as possible. For this purpose, a research study is presented here focusing on the structural characteristics of a composite wind turbine blade via finite element modelling and analysis tools. In this work, first, the required data regarding the general geometrical parts are gathered. Then, the airfoil geometries are created at various sections along the span of the blade using CATIA software to obtain the two surfaces, namely the suction and the pressure side of the blade, within which there is a hat-shaped fibre-reinforced plastic spar beam, the so-called chassis, starting 0.5 m from the root of the blade, extending up to 4 m, and filled with a foam core. The root part connecting the blade to the main rotor differential metallic hub, with twelve hollow threaded studs, is then modelled. The materials are assigned as two different types of glass fabric, a polymeric foam core material, and a steel-balsa wood combination for the root connection parts. The glass fabrics, applied by hand wet lay-up lamination with epoxy resin, are METYX L600E10C-0, a unidirectional continuous-fibre fabric, and METYX XL800E10F, which has a tri-axial architecture with fibres in the 0, +45, and -45 degree orientations in a ratio of 2:1:1. Divinycell H45 is used as the polymeric foam. The finite element modelling of the blade is performed via MSC PATRAN software, with various meshes created on each structural part considering shell elements for all surface geometries, and lumped masses were added to simulate extra adhesive locations.
For the static analysis, the boundary conditions are assigned as fixed at the root through the aforementioned bolts, whereas for the dynamic analysis both fixed-free and free-free boundary conditions are applied. Taking mesh independency into account, MSC NASTRAN is used as the solver for both analyses. The static analysis determines the tip deflection of the blade under its own weight, and the dynamic analysis comprises a normal-mode analysis performed to obtain the natural frequencies and corresponding mode shapes, focusing on the first five in-plane and out-of-plane bending modes and the torsional modes of the blade. The results of these analyses are then used as a benchmark prior to modal testing, where the experiments on the produced wind turbine rotor blade have confirmed the analytical calculations.
Keywords: dynamic analysis, fiber reinforced composites, horizontal axis wind turbine blade, hand-wet layup, modal testing
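A common sanity check for the fixed-free modal results of such a blade is the closed-form Euler-Bernoulli cantilever solution, f_n = (λ_n/L)² √(EI/ρA) / 2π with λ₁ ≈ 1.875. The sketch below evaluates it with made-up, blade-like uniform section properties; the real blade is tapered and composite, so this only brackets the FE answer rather than reproducing the authors' results.

```python
import math

def cantilever_bending_freqs(E, I, rho_A, L, n_modes=3):
    """First bending natural frequencies (Hz) of a uniform fixed-free
    (cantilever) Euler-Bernoulli beam. E: Pa, I: m^4, rho_A: kg/m, L: m."""
    lambdas = [1.875104, 4.694091, 7.854757]  # roots of cos(lL)cosh(lL) = -1
    return [(lam / L) ** 2 * math.sqrt(E * I / rho_A) / (2 * math.pi)
            for lam in lambdas[:n_modes]]

# Illustrative, invented section properties for a 5 m glass-fibre blade spar
freqs = cantilever_bending_freqs(E=25e9, I=2.0e-5, rho_A=12.0, L=5.0)
print([round(f, 2) for f in freqs])
```

A large gap between such hand estimates and the FE natural frequencies would flag a modelling error before the modal test.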
Procedia PDF Downloads 423
526 Effects of Environmental and Genetic Factors on Growth Performance, Fertility Traits and Milk Yield/Composition in Saanen Goats
Authors: Deniz Dincel, Sena Ardicli, Hale Samli, Mustafa Ogan, Faruk Balci
Abstract:
The aim of the study was to determine the effects of some environmental and genetic factors on growth, fertility traits, and milk yield and composition in Saanen goats. For this purpose, a total of 173 Saanen goats and kids from the Marmara Region of Turkey were investigated for growth, fertility, and milk traits. Fertility parameters (n=70) were evaluated over two years. Milk samples were collected during lactation, and the milk yield and components (n=59) of each goat were calculated. For the CSN3 and AGPAT6 genes, the genotypes were determined by PCR-RFLP. Saanen kids (n=86-112) were measured from birth to 6 months of life, and the average live weights at birth, weaning, and the 60ᵗʰ, 90ᵗʰ, 120ᵗʰ, and 180ᵗʰ days were calculated. The effects of maternal age on pregnancy rate (p < 0.05), birth rate (p < 0.05), infertility rate (p < 0.05), single-born kidding (p < 0.001), twinning rate (p < 0.05), triplet rate (p < 0.05), survival rate of kids until weaning (p < 0.05), number of kids per parturition (p < 0.01), and number of kids per mating (p < 0.01) were found significant. The effects of year on birth rate (p < 0.05), abortion rate (p < 0.001), single-born kidding (p < 0.01), survival rate of kids until weaning (p < 0.01), and number of kids per mating (p < 0.01) were found significant for fertility traits. The effect of lactation length on all milk yield parameters (lactation milk, protein, fat, total solid, solid-not-fat, casein, and lactose yield) (p < 0.001) was found significant. The effects of age on all milk yield parameters (p < 0.001), protein rate (p < 0.05), fat rate (p < 0.05), total solid rate (p < 0.01), solid-not-fat rate (p < 0.05), casein rate (p < 0.05), and lactation length (p < 0.01) were found significant too. However, the effect of the AGPAT6 gene on milk yield and composition was not found significant in Saanen goats. The herd was found monomorphic (FF) for the CSN3 gene.
The effects of sex on live weights until the 90ᵗʰ day of life (birth, weaning, and 60ᵗʰ-day average weights) were found statistically significant (p < 0.001). Maternal age affected only birth weight (p < 0.001). The effects of birth month on all of the investigated days [birth, 120ᵗʰ, and 180ᵗʰ days (p < 0.05); weaning, 60ᵗʰ, and 90ᵗʰ days (p < 0.001)] were found significant. Birth type was found significant for the birth (p < 0.001), weaning (p < 0.01), 60ᵗʰ-day (p < 0.01), and 90ᵗʰ-day (p < 0.01) average live weights. As a result, screening the other regions of the CSN3 and AGPAT6 genes and also investigating their phenotypic associations should be useful to clarify the efficiency of the target genes. Environmental factors such as maternal age, year, sex, and birth type were found significant for some growth, fertility, and milk traits in Saanen goats, so these factors could be considered as selection criteria in dairy goat breeding.
Keywords: fertility, growth, milk yield, Saanen goats
Procedia PDF Downloads 165
525 Structure Conduct and Performance of Rice Milling Industry in Sri Lanka
Authors: W. A. Nalaka Wijesooriya
Abstract:
The increasing paddy production, the stabilization of domestic rice consumption, and the increasing dynamism of rice processing and domestic markets call for a rethinking of the general direction of the rice milling industry in Sri Lanka. The main purpose of the study was to explore the levels of concentration in the rice milling industry in Polonnaruwa and Hambanthota, the country's major rice milling hubs. Concentration indices reveal that the rice milling industry operates as a weak oligopsony in Polonnaruwa and is highly competitive in Hambanthota. In terms of actual paddy milling quantity per day, 47% of mills process less than 8 Mt/day, 34% process 8-20 Mt/day, and the rest (19%) process more than 20 Mt/day. In Hambanthota, nearly 50% of the mills fall in the 8-20 Mt/day range. Lack of experience in the milling industry, poor knowledge of milling technology, lack of capital, and difficulty finding an output market are the major entry barriers to the industry. The major problems faced by all rice millers are the lack of a uniform electricity supply and low-quality paddy. Many of the millers emphasized that the rice ceiling price is a constraint on producing quality rice. More than 80% of the millers in Polonnaruwa, the major parboiled rice producing area, have mechanical dryers. Nearly 22% of millers have modern machinery such as color sorters and water jet polishers. The major paddy purchasing method of large-scale millers in Polonnaruwa is through brokers, while in Hambanthota the major channel is millers purchasing directly from paddy farmers. Millers in both districts sell rice mainly in Colombo and its suburbs. Huge variation can be observed in the amount of pledge (paddy storage) loans, and there is a strong relationship among storage ability, credit affordability, and the scale of operation of rice millers. The inter-annual price fluctuation ranged from 30% to 35%.
Analysis of market margins using secondary data series shows that the farmers' share of the rice consumer price is stable or slightly increasing in both districts, with a greater share going to the farmer in Hambanthota. Only four mills have obtained the Good Manufacturing Practices (GMP) certification from the Sri Lanka Standards Institution, and all of them are small-quantity rice exporters. Priority should be given to small and medium scale millers in the distribution of storage paddy of the PMB during the off season. The industry needs a proper rice grading system, and it is recommended to introduce a ceiling price based on rice graded according to the standards. Both husk and rice bran were underutilized; encouraging investment to establish a rice oil manufacturing plant in the Polonnaruwa area is highly recommended. The current taxation procedure needs to be restructured in order to ensure the sustainability of the industry.
Keywords: conduct, performance, structure (SCP), rice millers
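The concentration indices such a structure-conduct-performance study relies on can be illustrated with a four-firm concentration ratio (CR4) and a Herfindahl-Hirschman index (HHI). The milling volumes below are invented to contrast a more concentrated district with a competitive one; they are not the survey data.

```python
def concentration_measures(volumes):
    """Four-firm concentration ratio (CR4, %) and Herfindahl-Hirschman
    index (HHI, 0-10000 scale) from firms' output volumes."""
    total = sum(volumes)
    shares = sorted((v / total for v in volumes), reverse=True)
    cr4 = 100 * sum(shares[:4])
    hhi = sum((100 * s) ** 2 for s in shares)
    return cr4, hhi

# Hypothetical daily milling volumes (Mt/day) for two districts
polonnaruwa = [40, 30, 25, 20, 8, 6, 5, 4, 3, 3]   # a few large mills
hambanthota = [12, 11, 10, 10, 9, 9, 8, 8, 8, 7]   # evenly sized mills
for name, vols in [("Polonnaruwa", polonnaruwa), ("Hambanthota", hambanthota)]:
    cr4, hhi = concentration_measures(vols)
    print(f"{name}: CR4 = {cr4:.1f}%, HHI = {hhi:.0f}")
```

The same indices computed on buying (oligopsony) rather than selling shares would mirror the study's comparison of the two districts.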
Procedia PDF Downloads 327
524 Wood as a Climate Buffer in a Supermarket
Authors: Kristine Nore, Alexander Severnisen, Petter Arnestad, Dimitris Kraniotis, Roy Rossebø
Abstract:
Natural materials like wood absorb and release moisture; thus, wood can buffer the indoor climate. When used wisely, this buffering potential can counteract the influence of the outdoor climate on the building. The mass of moisture used in the buffer is defined as the potential hygrothermal mass, which can serve as energy storage in a building. It works like a natural heat pump, where the moisture actively damps the diurnal changes. In Norway, the ability of wood to buffer climate is tested in several buildings with extensive use of wood, including supermarkets. This paper defines the potential of the hygrothermal mass in a supermarket building, including the chosen ventilation strategy and how the climate impact of the building is reduced. The building, located above the Arctic Circle, 50 m from the coastline, in Valnesfjord, was built in 2015 and has a shopping area of 975 m², including the toilet and entrance. The climate of the area is polar according to the Köppen classification, but the supermarket still needs cooling on hot summer days. To contribute to the total energy balance, the wood needs dynamic influence to activate its hygrothermal mass. Drying and moistening of the wood are energy intensive, and this energy potential can be exploited: examples are using solar heat for drying instead of heating the indoor air, and letting raw air with high enthalpy reach dry wooden surfaces so that they absorb moisture and release latent heat. Weather forecasts are used to define the need for future cooling or heating; thus, the potential energy buffering of the wood can be optimized with intelligent ventilation control. The ventilation control in Valnesfjord uses a five-day weather forecast and a two-day history, the latter to prevent adjustments to smaller weather changes. The ventilation control has three zones. During summer, the moisture is retained so that drying can dampen the effect of solar radiation.
In winter, moist air is let into the shopping area to contribute to the heating. When the temperature is lowered during the night, the moisture absorbed in the wood slows down the cooling. The ventilation system is shut down during the closing hours of the supermarket in this period. During autumn and spring, a regime of either storing moisture or drying out according to the weather prognoses is defined. To ensure indoor climate quality, measurements of CO₂ and VOC overrule the low-energy control if needed. Verified simulations of the Valnesfjord building will form a basic model for investigating wood as a climate-regulating material in other climates as well. Future knowledge on the hygrothermal mass potential of materials is promising. By including the time-dependent buffer capacity of materials, building operators can achieve optimal efficiency of their ventilation systems. The use of wood as a climate-regulating material, through its potential hygrothermal mass and connected to weather prognoses, may provide up to 25% energy savings related to heating, cooling, and ventilation of a building.
Keywords: climate buffer, energy, hygrothermal mass, ventilation, wood, weather forecast
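The forecast-plus-history control logic described above can be sketched as a small decision function. This is a minimal illustration with invented temperature thresholds; the actual Valnesfjord controller and its setpoints are not published in the abstract.

```python
from statistics import mean

def ventilation_mode(history_temps, forecast_temps,
                     heat_threshold=5.0, cool_threshold=18.0):
    """Choose a hygrothermal-buffer strategy from outdoor temperatures.

    history_temps: temperatures for the past two days
    forecast_temps: temperatures for the coming five days
    The thresholds are illustrative assumptions, not the controller's.
    """
    # Blending the forecast with recent history damps reactions to
    # short-lived weather changes, as described for Valnesfjord.
    blended = mean(list(history_temps) + list(forecast_temps))
    if blended >= cool_threshold:
        # Summer regime: retain moisture so evaporative drying
        # damps solar heat gains.
        return "retain-moisture"
    if blended <= heat_threshold:
        # Winter regime: admit moist air so absorption in the wood
        # releases latent heat.
        return "admit-moist-air"
    # Shoulder seasons: store or dry according to the prognosis.
    return "store-or-dry"
```

In a real controller, CO₂ and VOC measurements would overrule this low-energy mode, as the abstract notes.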
Procedia PDF Downloads 213
523 Designing Sustainable and Energy-Efficient Urban Network: A Passive Architectural Approach with Solar Integration and Urban Building Energy Modeling (UBEM) Tools
Authors: A. Maghoul, A. Rostampouryasouri, MR. Maghami
Abstract:
Urban design and power network planning have been gaining momentum in recent years. The integration of renewable energy with urban design is widely regarded as an increasingly important response to climate change and energy security. Through the use of passive strategies and solar integration with Urban Building Energy Modeling (UBEM) tools, architects and designers can create high-quality designs that meet the needs of clients and stakeholders. To determine the most effective ways of combining renewable energy with urban development, we analyze the relationship between urban form and renewable energy production. The procedures involved in this practice include passive solar gain (in building design and urban design), solar integration, location strategy, and 3D models, with a case study conducted in Tehran, Iran. The study emphasizes the importance of spatial and temporal considerations in the development of sector-coupling strategies for solar power establishment in arid and semi-arid regions. The substation considered in the research consists of two parallel transformers, 13 lines, and 38 connection points. Each urban load connection point is equipped with 500 kW of solar PV capacity and 1 kWh of battery energy storage (BES) to store excess power generated from solar, injecting it into the urban network during peak periods. The simulations and analyses were carried out in EnergyPlus. Passive solar gain involves maximizing the amount of sunlight that enters a building to reduce the need for artificial lighting and heating. Solar integration involves integrating solar photovoltaic (PV) power into smart grids to reduce emissions and increase energy efficiency. Location strategy is crucial to maximize the utilization of solar PV in an urban distribution feeder.
Additionally, 3D models are made in Revit, and they are key components of decision-making in areas including climate change mitigation, urban planning, and infrastructure. We applied these strategies in this research, and the results show that it is possible to create sustainable and energy-efficient urban environments. Furthermore, demand response programs can be used in conjunction with solar integration to optimize energy usage and reduce the strain on the power grid. This study highlights the influence of ancient Persian architecture on Iran's urban planning system, as well as the potential for reducing pollutants in building construction. Additionally, the paper explores the advances in eco-city planning and development and the emerging practices and strategies for integrating sustainability goals.
Keywords: energy-efficient urban planning, sustainable architecture, solar energy, sustainable urban design
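The store-excess/inject-at-peak behaviour of the per-connection-point battery can be illustrated with a greedy dispatch loop. This is a sketch only: conversion losses, inverter limits, and the EnergyPlus coupling used in the paper are all omitted, and the sign convention is an assumption.

```python
def dispatch_bes(load_kw, pv_kw, capacity_kwh=1.0, step_h=1.0):
    """Greedy charge/discharge of a small battery co-located with PV.

    Returns net power drawn from the urban network per time step
    (negative values are exports). The paper pairs 500 kW PV with a
    1 kWh BES per connection point; the default capacity mirrors that.
    """
    soc = 0.0          # battery state of charge, kWh
    grid = []          # net power drawn from the urban network, kW
    for load, pv in zip(load_kw, pv_kw):
        net = load - pv                      # positive means deficit
        if net < 0:                          # PV surplus: charge first
            charge = min(-net * step_h, capacity_kwh - soc)
            soc += charge
            net += charge / step_h           # remainder is exported
        elif soc > 0:                        # peak period: discharge
            discharge = min(net * step_h, soc)
            soc -= discharge
            net -= discharge / step_h
        grid.append(net)
    return grid
```

The test below charges 1 kWh during a surplus hour and discharges it in the following peak hour.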
Procedia PDF Downloads 74
522 Analyzing Global User Sentiments on Laptop Features: A Comparative Study of Preferences Across Economic Contexts
Authors: Mohammadreza Bakhtiari, Mehrdad Maghsoudi, Hamidreza Bakhtiari
Abstract:
The widespread adoption of laptops has become essential to modern lifestyles, supporting work, education, and entertainment. Social media platforms have emerged as key spaces where users share real-time feedback on laptop performance, providing a valuable source of data for understanding consumer preferences. This study leverages aspect-based sentiment analysis (ABSA) on 1.5 million tweets to examine how users from developed and developing countries perceive and prioritize 16 key laptop features. The analysis reveals that consumers in developing countries express higher satisfaction overall, emphasizing affordability, durability, and reliability. Conversely, users in developed countries demonstrate more critical attitudes, especially toward performance-related aspects such as cooling systems, battery life, and chargers. The study employs a mixed-methods approach, combining ABSA using the PyABSA framework with expert insights gathered through a Delphi panel of ten industry professionals. Data preprocessing included cleaning, filtering, and aspect extraction from tweets. Universal issues such as battery efficiency and fan performance were identified, reflecting shared challenges across markets. However, priorities diverge between regions: while users in developed countries demand high-performance models with advanced features, those in developing countries seek products that offer strong value for money and long-term durability. The findings suggest that laptop manufacturers should adopt a market-specific strategy by developing differentiated product lines. For developed markets, the focus should be on cutting-edge technologies, enhanced cooling solutions, and comprehensive warranty services. In developing markets, emphasis should be placed on affordability, versatile port options, and robust designs. Additionally, the study highlights the importance of universal charging solutions and continuous sentiment monitoring to adapt to evolving consumer needs.
This research offers practical insights for manufacturers seeking to optimize product development and marketing strategies for global markets, ensuring enhanced user satisfaction and long-term competitiveness. Future studies could explore multi-source data integration and conduct longitudinal analyses to capture changing trends over time.
Keywords: consumer behavior, durability, laptop industry, sentiment analysis, social media analytics
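The regional comparison described above comes down to aggregating aspect-level sentiment labels per region. The helper below is a stand-in for that aggregation step, not the PyABSA API; the record format is an assumption.

```python
from collections import defaultdict

def satisfaction_by_region(records):
    """Aggregate aspect-level sentiment into per-region satisfaction rates.

    records: iterable of (region, aspect, sentiment) triples with
    sentiment in {"positive", "negative", "neutral"} -- a hypothetical
    stand-in for ABSA output, not PyABSA's actual return type.
    Returns {(region, aspect): share of positive mentions}.
    """
    counts = defaultdict(lambda: [0, 0])  # (region, aspect) -> [pos, total]
    for region, aspect, sentiment in records:
        pos, total = counts[(region, aspect)]
        counts[(region, aspect)] = [pos + (sentiment == "positive"), total + 1]
    return {key: pos / total for key, (pos, total) in counts.items()}
```

Comparing these rates between "developed" and "developing" keys is exactly the satisfaction contrast the study reports.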
Procedia PDF Downloads 13
521 Estimating Estimators: An Empirical Comparison of Non-Invasive Analysis Methods
Authors: Yan Torres, Fernanda Simoes, Francisco Petrucci-Fonseca, Freddie-Jeanne Richard
Abstract:
Non-invasive samples are an alternative to collecting genetic samples directly. Non-invasive samples are collected without the manipulation of the animal (e.g., scats, feathers, and hairs). Nevertheless, the use of non-invasive samples has some limitations. The main issue is degraded DNA, leading to poorer extraction efficiency and genotyping. Those errors delayed a widespread use of non-invasive genetic information for some years. Genotyping errors can be limited by using analysis methods that can accommodate the errors and singularities of non-invasive samples. Genotype matching and population estimation algorithms can be highlighted as important analysis tools that have been adapted to deal with those errors. Despite this recent development of analysis methods, there is still a lack of empirical performance comparisons among them. A comparison of methods with datasets differing in size and structure can be useful for future studies, since non-invasive samples are a powerful tool for obtaining information, especially for endangered and rare populations. To compare the analysis methods, four different datasets obtained from the Dryad digital repository were used. Three different matching algorithms (Cervus, Colony, and Error Tolerant Likelihood Matching - ETLM) are used for matching genotypes and two different ones for population estimation (Capwire and BayesN). The three matching algorithms showed different patterns of results. The ETLM produced a smaller number of unique individuals and recaptures. A similarity in the matched genotypes between Colony and Cervus was observed. That is not a surprise, given the similarity between those methods in their pairwise likelihood and clustering algorithms. The matching of ETLM showed almost no similarity with the genotypes that were matched with the other methods.
The different clustering algorithm and error model of ETLM seem to lead to a more stringent selection, although the processing time and interface friendliness of ETLM were the worst among the compared methods. The population estimators performed differently depending on the dataset. There was a consensus between the different estimators for only one dataset. BayesN showed both higher and lower estimates when compared with Capwire. BayesN does not consider the total number of recaptures, as Capwire does, but only the recapture events. This makes the estimator sensitive to data heterogeneity, meaning different capture rates between individuals. In those examples, tolerance of heterogeneity seems to be crucial for BayesN to work properly. Both methods are user-friendly and have reasonable processing times. An extended analysis with simulated genotype data could clarify the sensitivity of the algorithms. The present comparison of the matching methods indicates that Colony seems to be more appropriate for general use, considering a time/interface/robustness balance. The heterogeneity of the recaptures strongly affected the BayesN estimates, leading to over- and underestimation of population numbers. Capwire is therefore advisable for general use, since it performs better in a wide range of situations.
Keywords: algorithms, genetics, matching, population
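The core idea behind error-tolerant genotype matching can be illustrated with a toy matcher that allows a bounded number of mismatching loci, absorbing the genotyping errors typical of degraded non-invasive DNA. This is a simplified sketch, not the actual Cervus, Colony, or ETLM algorithm.

```python
def match_genotypes(g1, g2, max_mismatch=1):
    """Decide whether two multilocus genotypes may belong to one individual.

    Each genotype is a list of per-locus allele pairs, e.g. ("A", "B").
    Alleles within a locus are unordered, so ("A", "B") == ("B", "A").
    Allowing up to max_mismatch differing loci tolerates dropout and
    other genotyping errors common in scat/hair/feather samples.
    """
    mismatches = sum(
        1 for a, b in zip(g1, g2)
        if frozenset(a) != frozenset(b)   # compare loci order-free
    )
    return mismatches <= max_mismatch
```

Likelihood-based matchers replace this hard mismatch count with per-locus error probabilities, which is what distinguishes the compared methods.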
Procedia PDF Downloads 142
520 Integrating Computational Modeling and Analysis with in Vivo Observations for Enhanced Hemodynamics Diagnostics and Prognosis
Authors: Shreyas S. Hegde, Anindya Deb, Suresh Nagesh
Abstract:
Computational bio-mechanics is developing rapidly as a non-invasive tool to assist the medical fraternity in both diagnosis and prognosis of human body related issues such as injuries, cardio-vascular dysfunction, atherosclerotic plaque, etc. Any system that would help either properly diagnose such problems or assist prognosis would be a boon to doctors and the medical community in general. Recently, a lot of work has been focused in this direction, which includes but is not limited to various finite element analyses related to dental implants, skull injuries, orthopedic problems involving bones and joints, etc. Such numerical solutions are helping medical practitioners to come up with alternate solutions for such problems and in most cases have also reduced the trauma on the patients. Some work has also been done in the area related to the use of computational fluid mechanics to understand the flow of blood through the human body, the area of hemodynamics. Since cardio-vascular diseases are one of the main causes of loss of human life, understanding blood flow with and without constraints (such as blockages), and providing alternate methods of prognosis and further solutions to take care of issues related to blood flow, would help save the valuable lives of such patients. This project is an attempt to use computational fluid dynamics (CFD) to solve specific problems related to hemodynamics. The hemodynamics simulation is used to gain a better understanding of functional, diagnostic, and theoretical aspects of blood flow. Because many fundamental issues of blood flow, like phenomena associated with pressure and viscous force fields, are still not fully understood or entirely described through mathematical formulations, the characterization of blood flow is still a challenging task.
The computational modeling of blood flow and of the mechanical interactions that strongly affect the flow patterns, based on medical data and imaging, represents the most accurate analysis of the complex behavior of blood flow. In this project, the mathematical modeling of blood flow in arteries in the presence of successive blockages has been analyzed using the CFD technique. Different cases of blockages, in terms of percentages, have been modeled using the commercial software CATIA V5R20 and simulated using the commercial software ANSYS 15.0 to study the effect of varying wall shear stress (WSS) values and also other parameters, like the effect of an increase in Reynolds number. The concept of fluid-structure interaction (FSI) has been used to solve such problems. The model simulation results were validated using in vivo measurement data from the existing literature.
Keywords: computational fluid dynamics, hemodynamics, blood flow, results validation, arteries
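Two of the quantities studied, Reynolds number and wall shear stress, have simple closed forms for idealized pipe flow that serve as sanity checks on CFD output. The blood-property values in the test below are typical textbook figures, not the paper's data.

```python
import math

def reynolds_number(density, velocity, diameter, viscosity):
    """Re = rho * v * D / mu for pipe-like flow (SI units)."""
    return density * velocity * diameter / viscosity

def poiseuille_wss(viscosity, flow_rate, radius):
    """Wall shear stress for fully developed laminar (Poiseuille) flow:

        tau_w = 4 * mu * Q / (pi * r^3)

    A textbook estimate for a straight, rigid, circular vessel; the
    paper's WSS values come from full ANSYS CFD/FSI simulations of
    stenosed geometries, which this cannot reproduce.
    """
    return 4.0 * viscosity * flow_rate / (math.pi * radius ** 3)
```

With typical values for a medium artery (rho ≈ 1060 kg/m³, v ≈ 0.4 m/s, D ≈ 4 mm, mu ≈ 3.5 mPa·s), Re lands well inside the laminar regime.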
Procedia PDF Downloads 403
519 Use of Misoprostol in Pregnancy Termination in the Third Trimester: Oral versus Vaginal Route
Authors: Saimir Cenameri, Arjana Tereziu, Kastriot Dallaku
Abstract:
Introduction: Intra-uterine death is a common problem in obstetrical practice and can lead to complications if left to resolve spontaneously. The cervix is unprepared, making induction of labor difficult. Misoprostol is a synthetic prostaglandin E1 analogue, inexpensive, and considered effective thanks to its ability to bring about changes in the cervix that lead to the induction of uterine contractions. Misoprostol is quickly absorbed when taken orally, resulting in high initial peak serum concentrations compared with the vaginal route. The vaginal misoprostol peak serum concentration is not as high and demonstrates a more gradual serum concentration decline. This is associated with many benefits for the patient: fast induction of labor, smaller doses, and fewer (dose-dependent) side effects. The most commonly used regimen has been 50 μg every 4 hours, with a high percentage of success and limited side effects. Objective: To evaluate the efficiency of oral and vaginal misoprostol in inducing labor, and to compare it with use not following a previously defined protocol. Methods: Participants in this study included patients at U.H.O.G. 'Koco Gliozheni', Tirana, from April 2004 to July 2006, presenting with an indication for inducing labor in the third trimester for pregnancy termination. A total of 37 patients were randomly admitted for labor induction: 26 according to protocol (10 oral vs. 16 vaginal), and a control group of 11 patients not subject to the protocol. Oral or vaginal misoprostol was administered at a dose of 50 μg/4 h, while the control group participants were treated individually by the members of the medical staff. The main outcome of interest was the time from induction of labor to birth. The Kruskal-Wallis test was used to compare the average age, parity, women's weight, gestational age, Bishop's score, the size of the uterus, and the weight of the fetus between the four groups in the study.
The Fisher exact test was used to compare length of stay and causes in the four groups. The Mann-Whitney test was used to compare the time of expulsion and the number of doses between the oral and vaginal groups. For all statistical tests used, a value of P ≤ 0.05 was considered statistically significant. Results: The four groups were comparable with regard to woman's age and weight, parity, abortion indication, Bishop's score, fetal weight, and gestational age. There was a significant difference in the percentage of deliveries within 24 hours. The average time from induction to birth per route (vaginal, oral, according to protocol, and not according to protocol) was, respectively, 10.43 h, 21.10 h, 15.77 h, and 21.57 h. There was no difference in maternal complications between groups. Conclusions: Use of vaginal misoprostol for inducing labor in the third trimester for termination of pregnancy appears to be more effective than the oral route, and even more so than use not according to previously approved protocols, where complications are greater and unjustified.
Keywords: inducing labor, misoprostol, pregnancy termination, third trimester
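As a reminder of what the Mann-Whitney comparison of induction-to-birth times actually computes, the U statistic can be written out directly; in practice one would use scipy.stats.mannwhitneyu for the P-value, so the hand-rolled version below is only a sketch of the rank logic.

```python
def mann_whitney_u(x, y):
    """U statistic for two independent samples (e.g. induction-to-birth
    times in the oral vs. vaginal groups).

    U counts, over all cross-pairs, how often an x value exceeds a y
    value; ties contribute one half. Large or small U relative to
    len(x) * len(y) / 2 indicates a shift between the groups.
    """
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u
```

If every oral time exceeds every vaginal time, U equals len(x) * len(y), the maximal evidence of a shift.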
Procedia PDF Downloads 184
518 The Location-Routing Problem with Pickup Facilities and Heterogeneous Demand: Formulation and Heuristics Approach
Authors: Mao Zhaofang, Xu Yida, Fang Kan, Fu Enyuan, Zhao Zhao
Abstract:
Nowadays, last-mile distribution plays an increasingly important role in the delivery link of the whole industrial chain and accounts for a large proportion of the cost of the whole distribution process. Promoting the upgrading of logistics networks and improving the layout of final distribution points has become one of the trends in the development of modern logistics. Due to the discrete and heterogeneous needs and spatial distribution of customer demand, which lead to a higher delivery failure rate and lower vehicle utilization, last-mile delivery has become a time-consuming and uncertain process. As a result, courier companies have introduced a range of innovative parcel storage facilities, including pick-up points and lockers. The introduction of pick-up points and lockers has not only improved the users' experience but has also helped logistics and courier companies achieve economies of scale. Against the backdrop of the COVID-19 pandemic, contactless delivery has become a new hotspot, which has also created new opportunities for the development of collection services. Therefore, a key issue for logistics companies is how to design/redesign their last-mile distribution network systems to create integrated logistics and distribution networks that consider pick-up points and lockers. This paper focuses on the introduction of self-pickup facilities in new logistics and distribution scenarios and the heterogeneous demands of customers. In this paper, we consider two types of demand, ordinary products and refrigerated products, as well as the corresponding transportation vehicles. We consider the constraints associated with self-pickup points and lockers and then address the location-routing problem with self-pickup facilities and heterogeneous demands (LRP-PFHD).
To solve this challenging problem, we propose a mixed integer linear programming (MILP) model that aims to minimize the total cost, which includes the facility opening cost, the variable transport cost, and the fixed transport cost. Due to the NP-hardness of the problem, we propose a hybrid adaptive large neighborhood search algorithm to solve LRP-PFHD. We evaluate the effectiveness and efficiency of the proposed algorithm by using instances generated based on benchmark instances. The results demonstrate that the hybrid adaptive large neighborhood search algorithm is more efficient than MILP solvers such as Gurobi for LRP-PFHD, especially for large-scale instances. In addition, we conducted a comprehensive analysis of some important parameters (e.g., facility opening cost and transportation cost) to explore their impacts on the results and suggested helpful managerial insights for courier companies.
Keywords: city logistics, last-mile delivery, location-routing, adaptive large neighborhood search
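The adaptive part of an ALNS is the operator-weight update. A minimal, greedy-acceptance version of the loop looks like the following; this is a generic sketch, not the paper's hybrid algorithm, which uses richer acceptance rules and problem-specific destroy/repair operators.

```python
import random

def alns_sketch(initial, cost, destroy_ops, repair_ops,
                iterations=200, seed=0):
    """Minimal adaptive large neighborhood search loop.

    Destroy/repair operator pairs are drawn with weights that grow
    when an operator produces an improving solution -- the core
    adaptive mechanism. Acceptance here is purely greedy.
    """
    rng = random.Random(seed)
    best = current = initial
    weights_d = [1.0] * len(destroy_ops)
    weights_r = [1.0] * len(repair_ops)
    for _ in range(iterations):
        i = rng.choices(range(len(destroy_ops)), weights_d)[0]
        j = rng.choices(range(len(repair_ops)), weights_r)[0]
        candidate = repair_ops[j](destroy_ops[i](current, rng), rng)
        if cost(candidate) < cost(current):
            current = candidate
            weights_d[i] += 1.0          # reward successful operators
            weights_r[j] += 1.0
            if cost(current) < cost(best):
                best = current
    return best
```

The test exercises it on a toy sequencing problem: remove a random element, reinsert it at the cheapest position, with total adjacent difference as the cost.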
Procedia PDF Downloads 78
517 Fatigue Influence on the Residual Stress State in Shot Peened Duplex Stainless Steel
Authors: P. D. Pedrosa, J. M. A. Rebello, M. P. Cindra Fonseca
Abstract:
Duplex stainless steels (DSS) exhibit a biphasic microstructure consisting of austenite and delta ferrite. Their high resistance to oxidation and corrosion, even in H2S-containing environments, allied to low cost when compared to conventional stainless steel, are some of the properties which make this material very attractive for several industrial applications. However, several of these industrial applications impose cyclic loading on the equipment, and in consequence fatigue damage needs to be a concern. A well-known way of improving the fatigue life of a component is by introducing compressive residual stress at its surface. Shot peening is an industrial working process which brings the material directly beneath the component surface into a state of high mechanical compression, thus inhibiting fatigue crack initiation. However, one must take into account the fact that the cyclic loading itself can reduce and even suppress these residual stresses, thus having undesirable consequences for the process of improving fatigue life by the introduction of compressive residual stresses. In the present work, shot peening was used to introduce residual stresses in several DSS samples. These were thereafter submitted to three different fatigue regimes: low-, medium-, and high-cycle fatigue. The evolution of the residual stress during loading was then examined on both the surface and subsurface of the samples. The DSS UNS S31803 was used, with a microstructure composed of 49% austenite and 51% ferrite. Shot peening was accomplished by blasting at two Almen intensities, 0.25 A and 0.39 A. The residual stresses were measured by X-ray diffraction using the double-exposure method and portable equipment with CrK radiation, using the (211) diffracting plane for the austenite phase and the (220) plane for the ferrite phase. It is known that residual stresses may arise when two regions of the same material have experienced different degrees of plastic deformation.
When these regions are separated from each other on a scale that is large compared with the material's microstructure, the stresses are called macrostresses. In contrast, microstresses can vary largely over distances that are small compared with the scale of the material's microstructure and must balance to zero between the phases present. In the present work, special attention is paid to the measurement of residual microstresses. Residual stress measurements were carried out in test pieces submitted to low-, medium-, and high-cycle fatigue, in both the longitudinal and transverse directions of the test pieces. It was found that after shot peening, the residual microstress is tensile in the austenite phase and compressive in the ferrite phase. It was hypothesized that the hardening behavior of the austenite after shot peening was probably due to its higher nitrogen content. Fatigue cycling can effectively change this stress state, but this effect was found to be dependent on the shot-peening intensity as well as the fatigue range.
Keywords: residual stresses, fatigue, duplex steel, shot peening
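The macro/micro decomposition described above can be made concrete: the macrostress is the volume-fraction-weighted mean of the measured phase stresses, and the remaining phase microstresses then balance to zero by construction. The stress values in the test are illustrative, not the paper's measurements.

```python
def separate_stresses(phase_stresses, fractions):
    """Split measured per-phase residual stresses into a macrostress
    and phase microstresses.

    phase_stresses: stress measured in each phase (e.g. by XRD on the
    austenite (211) and ferrite (220) reflections), in MPa.
    fractions: volume fraction of each phase (must sum to 1).

    sigma_phase = sigma_macro + sigma_micro,phase, with the constraint
    sum_i f_i * sigma_micro,i = 0, which the weighted-mean choice of
    sigma_macro satisfies automatically.
    """
    macro = sum(f * s for f, s in zip(fractions, phase_stresses))
    micro = [s - macro for s in phase_stresses]
    return macro, micro
```

For the 49/51 austenite/ferrite split of UNS S31803, a tensile austenite and compressive ferrite reading decompose into a near-zero macrostress plus balanced microstresses, matching the post-peening pattern the paper reports qualitatively.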
Procedia PDF Downloads 228
516 Improving Literacy Level Through Digital Books for Deaf and Hard of Hearing Students
Authors: Majed A. Alsalem
Abstract:
In our contemporary world, literacy is an essential skill that enables students to increase their efficiency in managing the many assignments they receive that require understanding and knowledge of the world around them. In addition, literacy enhances student participation in society, improving their ability to learn about the world and interact with others and facilitating the exchange of ideas and sharing of knowledge. Therefore, literacy needs to be studied and understood in its full range of contexts. It should be seen as a set of social and cultural practices with historical, political, and economic implications. This study aims to rebuild and reorganize the instructional designs that have been used for deaf and hard-of-hearing (DHH) students to improve their literacy level. The most critical part of this process is the teachers; therefore, teachers will be the central focus of this study. Teachers' main job is to increase students' performance by fostering strategies through collaborative teamwork, higher-order thinking, and effective use of new information technologies. Teachers, as primary leaders in the learning process, should be aware of new strategies, approaches, methods, and frameworks of teaching in order to apply them to their instruction. Literacy, in a wider view, means the acquisition of adequate and relevant reading skills that enable progression in one's career and lifestyle while keeping up with current and emerging innovations and trends. Moreover, the nature of literacy is changing rapidly. The notion of new literacy has changed the traditional meaning of literacy, which is the ability to read and write. New literacy refers to the ability to effectively and critically navigate, evaluate, and create information using a range of digital technologies. The term new literacy has received a lot of attention in the education field over the last few years.
New literacy provides multiple ways of engagement, especially to those with disabilities and other diverse learning needs. For example, using a number of online tools in the classroom provides students with disabilities new ways to engage with the content, take in information, and express their understanding of this content. This study will provide teachers with the highest quality of training sessions to meet the needs of DHH students so as to increase their literacy levels. This study will build a platform between regular instructional designs and digital materials that students can interact with. The intervention applied in this study will be to train teachers of DHH students to base their instructional designs on the Technology Acceptance Model (TAM). Based on the power analysis that has been done for this study, 98 teachers need to be included. Teachers will be chosen randomly to increase internal and external validity and to provide a representative sample of the population that this study aims to measure, as well as a basis for future and further studies. This study is still in process, and the initial results are promising, showing that students have engaged with digital books.
Keywords: deaf and hard of hearing, digital books, literacy, technology
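An a-priori power analysis of the kind the study reports (yielding 98 teachers) can be sketched with the standard normal-approximation sample-size formula for comparing two group means. The effect size, alpha, and power used below are conventional defaults, not the study's actual inputs, which the abstract does not state.

```python
from math import ceil
from statistics import NormalDist

def sample_size_two_groups(effect_size, alpha=0.05, power=0.80):
    """Per-group n for a two-sample comparison of means
    (normal approximation):

        n = 2 * ((z_{1 - alpha/2} + z_{power}) / d) ** 2

    where d is Cohen's standardized effect size. A t-based or
    software power analysis (as the study presumably used) gives
    slightly larger values.
    """
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_b = NormalDist().inv_cdf(power)          # quantile for target power
    return ceil(2 * ((z_a + z_b) / effect_size) ** 2)
```

For a medium effect (d = 0.5) at the default alpha and power, this gives 63 per group; a total near 98 would correspond to a somewhat larger assumed effect.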
Procedia PDF Downloads 487
515 University Building: Discussion about the Effect of Numerical Modelling Assumptions for Occupant Behavior
Authors: Fabrizio Ascione, Martina Borrelli, Rosa Francesca De Masi, Silvia Ruggiero, Giuseppe Peter Vanoli
Abstract:
The refurbishment of public buildings is one of the key factors of the energy efficiency policies of European states. Educational buildings account for the largest share of the oldest building stock, with interesting potential for demonstrating best practice with regard to high-performance, low- and zero-carbon design and for becoming exemplary cases within the community. In this context, this paper discusses the critical issue of dealing with the energy refurbishment of a university building in the heating-dominated climate of South Italy. More in detail, the importance of using validated models is examined exhaustively by proposing an analysis of the uncertainties due to modelling assumptions, mainly referring to the adoption of stochastic schedules for occupant behavior and equipment or lighting usage. Indeed, today, most commercial tools provide designers with a library of possible schedules with which thermal zones can be described. Very often, users do not pay close attention to diversifying thermal zones or to modifying and adapting predefined profiles, and design results are affected, positively or negatively, without any warning. Data such as occupancy schedules, internal loads, and the interaction between people and windows or plant systems represent some of the largest sources of variability in energy modelling and in understanding calibration results. This is mainly due to the adoption of discrete, standardized, and conventional schedules, with important consequences for the prediction of energy consumption. The problem is surely difficult to examine and to solve. In this paper, a sensitivity analysis is presented to understand the order of magnitude of the error that is committed by varying the deterministic schedules used for occupancy, internal loads, and the lighting system. This could be a typical uncertainty for a case study such as the presented one, where there is no regulation system for the HVAC, so the occupants cannot interact with it.
More in detail, starting from the adopted schedules, created according to questionnaire responses, which allowed a good calibration of the energy simulation model, several different scenarios are tested. Two types of analysis are presented: the reference building is compared with these scenarios in terms of the percentage difference in the projected total electric energy need and natural gas request. Then the different entries of consumption are analyzed, and for the most interesting cases the calibration indexes are also compared. Moreover, the same simulations are performed for the optimal refurbishment solution. The variation in the predicted energy saving and global cost reduction is evidenced. This parametric study aims to underline the effect of the modelling assumptions made during the description of thermal zones on the evaluation of performance indexes.
Keywords: energy simulation, modelling calibration, occupant behavior, university building
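The calibration indexes compared across scenarios are commonly NMBE and CV(RMSE), as defined in ASHRAE Guideline 14; assuming those are the indexes meant here, they can be computed as follows. The sign convention (simulated minus measured) and normalisation by the measured mean follow common practice.

```python
from math import sqrt

def calibration_indexes(measured, simulated):
    """NMBE and CV(RMSE) between measured and simulated energy use.

    NMBE: normalized mean bias error, signed; CV(RMSE): coefficient of
    variation of the root-mean-square error. Guideline-14-style
    acceptance thresholds (e.g. +-5% / 15% for monthly data) are not
    checked here, only the index values are returned.
    """
    n = len(measured)
    mean_m = sum(measured) / n
    nmbe = sum(s - m for m, s in zip(measured, simulated)) / (n * mean_m)
    rmse = sqrt(sum((s - m) ** 2 for m, s in zip(measured, simulated)) / n)
    return nmbe, rmse / mean_m
```

A model can have near-zero NMBE (errors cancel) while CV(RMSE) exposes the scatter, which is why both indexes are compared.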
Procedia PDF Downloads 138
514 To Corelate Thyroid Dysfunction in Pregnancy with Preterm Labor
Authors: Pushp Lata Sankhwar
Abstract:
INTRODUCTION: Maternal hypothyroidism is the most frequent endocrine disorder in pregnancy, varying from 2.5% in the West to 11.0% in India. Maternal hypothyroidism can have detrimental maternal effects, like an increased risk of preterm labor and PPROM, leading to increased maternal morbidity, and also effects on the neonate in the form of prematurity and its complications, prolonged hospital stay, neurological developmental problems, delayed milestones, mental retardation, etc. Hence, the study was planned to evaluate the role of hypothyroidism in preterm labor and its effect on neonates. AIMS AND OBJECTIVES: To correlate overt hypothyroidism, subclinical hypothyroidism, and isolated hypothyroxinemia with preterm labor and neonatal outcome. MATERIAL AND METHODS: A case-control study of singleton pregnancies was performed over a year, in which a total of 500 patients presenting in the emergency department with preterm labor were enrolled. The thyroid profile of these patients was sent at the time of admission, on the basis of which they were divided into cases (hypothyroid mothers) and controls (euthyroid mothers). The cases were further divided into subclinical hypothyroidism, overt hypothyroidism, and isolated hypothyroxinemia. The neonatal outcomes of these groups were also compared on the basis of the incidence and severity of neonatal morbidity, neonatal respiratory distress, the incidence of neonatal hypothyroidism, and early complications. The feto-maternal data were collected and analysed. RESULTS: In the study, a total of 500 antenatal patients with a history of preterm labor were enrolled, out of which 67 (13.8%) patients were found to be hypothyroid. The majority of the mothers had subclinical hypothyroidism (12.2%), followed by overt hypothyroidism, seen in 1% of the mothers, and isolated hypothyroxinemia in 0.6% of cases.
The neonates of hypothyroid mothers had higher levels of cord blood TSH, and the mean cord blood TSH levels were highest in neonates of mothers with overt hypothyroidism. The need for resuscitation of the neonates at the time of birth was higher for neonates of hypothyroid mothers, especially those with subclinical hypothyroidism. Also, it was found that the requirement for oxygen therapy, in the form of oxygen by nasal prongs, oxygen by hood, CPAP, CPAP along with surfactant therapy, and mechanical ventilation along with surfactant therapy, was significantly higher in neonates of hypothyroid mothers. CONCLUSION: The results of our study imply that uncontrolled and untreated maternal hypothyroidism may lead to preterm delivery. The neonates of mothers with hypothyroidism have higher cord blood TSH levels. The study also shows that there is an increased incidence and severity of respiratory distress in the neonates of hypothyroid mothers with untreated subclinical hypothyroidism. Hence, we propose that routine screening for thyroid dysfunction in pregnant women should be done to prevent thyroid-related feto-maternal complications.
Keywords: high-risk pregnancy, thyroid dysfunction, hypothyroidism, preterm labor
513 Chemical Technology Approach for Obtaining Carbon Structures Containing Reinforced Ceramic Materials Based on Alumina
Authors: T. Kuchukhidze, N. Jalagonia, T. Archuadze, G. Bokuchava
Abstract:
The growing scientific and technological progress of modern civilization creates demand for construction materials that can work successfully under conditions of high temperature, radiation, pressure, speed, and chemically aggressive environments. Very few classes of materials can withstand such extreme conditions, and among them ceramic materials rank first. Corundum ceramics is the most widely used material for structural components and products of various purposes owing to its low cost, readily available raw materials and good combination of physical-chemical properties. However, ceramic composite materials have one disadvantage: they are less plastic and have lower toughness. To increase plasticity, the ceramics are reinforced with various dopants that retard crack growth. It has been shown that adding even a small amount of carbon fibers or carbon nanotubes (CNT) as reinforcing material significantly improves the mechanical properties of the products while retaining the advantages of alumina ceramics. Graphene in a composite material acts in the same way as inorganic dopants (MgO, ZrO2, SiC and others) and performs the role of an aluminum oxide growth inhibitor: it creates a shell that makes it possible to reduce the sintering temperature, and at the same time it acts as a damper, because shock waves are scattered on the carbon structures. The use of different structural modifications of carbon (graphene, nanotubes and others) as reinforcement makes it possible to create multi-purpose, highly demanded composite materials based on alumina ceramics. The present work offers a simplified technology for obtaining aluminum oxide ceramics reinforced with carbon nanostructures, in which chemical modification with the doping carbon nanostructures is implemented during the synthesis of the final powdery composite, alumina.
In the charge, the doping carbon nanostructures are connected to the matrix substance through C-O-Al bonds, which ensures their homogeneous spatial distribution. In ceramics obtained by consolidating such powders, the carbon fragments are distributed uniformly throughout the aluminum oxide matrix, which increases bending strength and crack resistance. The proposed way of preparing the charge simplifies the technological process and decreases energy consumption and synthesis duration, and therefore requires less financial expense. In this work, modern instrumental methods were used: electron and optical microscopy, X-ray structural and granulometric analysis, and UV, IR and Raman spectroscopy.
Keywords: ceramic materials, α-Al₂O₃, carbon nanostructures, composites, characterization, hot-pressing
512 A Case of Myelofibrosis-Related Arthropathy: A Rare and Underrecognized Entity
Authors: Geum Yeon Sim, Jasal Patel, Anand Kumthekar, Stanley Wainapel
Abstract:
A 65-year-old, right-hand-dominant African-American man, formerly employed as a security guard, was referred to Rehabilitation Medicine with bilateral hand stiffness and weakness. His past medical history was significant only for myelofibrosis, diagnosed 4 years earlier, for which he was receiving scheduled blood transfusions. Approximately 2 years earlier, he had begun to notice stiffness and swelling in his non-dominant hand that progressed to pain and decreased strength, limiting his hand function. Similar but milder symptoms developed in his right hand several months later. There was no history of prior injury or exposure to cold. Physical examination showed enlargement of the metacarpophalangeal (MCP) and proximal interphalangeal (PIP) joints with finger flexion contractures, swan-neck and boutonniere deformities, and associated joint tenderness. Changes were more prominent in the left hand. X-rays showed mild osteoarthritis of several PIP joints bilaterally. Anti-nuclear antibodies, rheumatoid factor, and cyclic citrullinated peptide antibodies were negative. MRI of the hand showed no erosions or synovitis. A rheumatology consultation was obtained, and the cause of his symptoms was attributed to myelofibrosis-related arthropathy with secondary osteoarthritis. The patient was tried on diclofenac cream and received a few courses of occupational therapy with limited functional improvement. Primary myelofibrosis (PMF) is a rare myeloproliferative neoplasm characterized by clonal proliferation of myeloid cells with variable morphologic maturity and hematopoietic efficiency. Rheumatic manifestations of malignancies include direct invasion, paraneoplastic presentations, secondary gout, and hypertrophic osteoarthropathy. PMF causes gradual bone marrow fibrosis with extramedullary metaplastic hematopoiesis in the liver, spleen, or lymph nodes. Musculoskeletal symptoms are uncommon and not well described in the literature.
The first reported case of myelofibrosis-related arthritis was a seronegative arthritis due to synovial invasion by myeloproliferative elements. Myelofibrosis has been associated with autoimmune diseases such as systemic lupus erythematosus, progressive systemic sclerosis, and rheumatoid arthritis. Gout has been reported in patients with myelofibrosis; the underlying mechanism is thought to be related to the high turnover of nucleic acids, which is greatly augmented in this disease. X-ray findings in these patients usually include erosive arthritis with synovitis. Treatment of the underlying PMF is the mainstay of management, along with anti-inflammatory medications. Physicians should be alert to this rare entity in patients with PMF while maintaining clinical suspicion for more common causes of joint deformities, such as rheumatic diseases.
Keywords: myelofibrosis, arthritis, arthralgia, malignancy
511 Application of a Multidimensional Model for Evaluating Organisational Performance in Moroccan Sport Clubs
Authors: Zineb Jibraili, Said Ouhadi, Jorge Arana
Abstract:
Introduction: Organizational performance is regarded by some theorists as a one-dimensional concept, and by others as multidimensional. This concept, already difficult to apply in traditional companies, is even harder to identify, measure and manage in voluntary organizations, essentially because of the complexity of that form of organization: sport clubs are characterized by multiple goals and multiple constituencies. Indeed, the new culture of professionalization and modernization around organizational performance has created new pressures from the state, sponsors, members and other stakeholders, requiring these sport organizations to become more performance oriented, or to build their capacity to better manage their organizational performance. Performance can be evaluated by assessing the input (e.g. available resources), throughput (e.g. processing of the input) and output (e.g. goals achieved) of the organization. In non-profit organizations (NPOs), questions of performance have become increasingly important in the world of practice. To our knowledge, most studies have used the same methods to evaluate performance in non-profit sport organizations (NPSOs), and no recent study has proposed a club-specific model. Based on a review of the studies that specifically addressed the organizational performance (and effectiveness) of NPSOs at the operational level, the present paper aims to provide a multidimensional framework to understand, analyse and measure the organizational performance of sport clubs. The paper combines the dimensions found in the literature and selects those best suited to the model we develop for the case of Moroccan sport clubs. Method: We apply our unified model of evaluating organizational performance, which takes into account the limitations found in the literature.
The study uses a qualitative approach on a sample of Moroccan sport clubs (football, basketball, handball and volleyball). The sample comprises data from clubs participating in the first division of the professional football league over the period from 2011 to 2016. Each club had to meet specific criteria to be included in the sample: 1. Each club must have full financial data published in its annual financial statements, audited by an independent chartered accountant. 2. Each club must have sufficient data regarding its sport and financial performance. 3. Each club must have participated at least once in the first division of the professional football league. Result: The study showed that the dimensions that constitute the model exist in the field, with some small modifications. The correlations between the different dimensions are positive. Discussion: The aim of this study was to test, for the Moroccan case, the unified model that emerged from earlier and narrower approaches. Using the input-throughput-output model as a sketch of efficiency, it was possible to identify and define five dimensions of organizational effectiveness applied to this field of study.
Keywords: organisational performance, multidimensional model, evaluation of organizational performance, sport clubs
510 Phase Synchronization of Skin Blood Flow Oscillations under Deep Controlled Breathing in Humans
Authors: Arina V. Tankanag, Gennady V. Krasnikov, Nikolai K. Chemeris
Abstract:
Respiration-dependent oscillations in the peripheral blood flow may develop through at least two mechanisms. The first is related to changes of venous pressure due to the mechanical activity of the lungs. This phenomenon, known as the 'respiratory pump', is one of the mechanisms of venous return of blood from the peripheral vessels to the heart. The second is related to vasomotor reflexes controlled by respiratory modulation of the activity of centers of the autonomic nervous system. High phase synchronization of respiration-dependent blood flow oscillations in the skin of the left and right forearms of healthy volunteers at rest was shown earlier. The aim of this work was to study the effect of deep controlled breathing on the phase synchronization of skin blood flow oscillations. Twenty-nine normotensive, non-smoking young women (18-25 years old) of normal constitution, without diagnosed pathologies of the skin, cardiovascular or respiratory systems, participated in the study. Six recording sessions were carried out for each participant: the first at the spontaneous breathing rate, and the next five under controlled breathing with a fixed breathing depth and different enforced breathing rates: 0.25, 0.16, 0.10, 0.07 and 0.05 Hz. The breathing depth amounted to 40% of the maximal chest excursion. Blood perfusion was recorded by a LAKK-02 laser flowmeter (LAZMA, Russia) with two identical channels (wavelength 0.63 µm; emission power 0.5 mW). The first probe was fastened to the palmar surface of the distal phalanx of the left forefinger; the second probe was attached to the outer surface of the left forearm near the wrist joint. These skin zones were chosen as zones with different dominant mechanisms of vascular tone regulation. The degree of phase synchronization of the recorded signals was estimated from the value of the wavelet phase coherence.
The duration of each recording was 5 min, and the sampling frequency of the signals was 16 Hz. Synchronization of the respiration-dependent skin blood flow oscillations increased under all controlled breathing regimes. Since the formation of respiration-dependent oscillations in the peripheral blood flow is mainly caused by respiratory modulation of systemic blood pressure, the observed effects most likely depend on the breathing depth. It should be noted that during spontaneous breathing the depth does not exceed 15% of the maximal chest excursion, whereas in the present study it was 40%. We therefore suggest that the observed significant increase in the phase synchronization of blood flow oscillations under our conditions is primarily due to the increased breathing depth, which enhances both potential mechanisms of respiratory oscillation generation: venous pressure and sympathetic modulation of vascular tone.
Keywords: deep controlled breathing, peripheral blood flow oscillations, phase synchronization, wavelet phase coherence
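The wavelet phase coherence measure used in the abstract can be illustrated with a short sketch. The code below is a minimal, generic implementation (not the authors' actual analysis pipeline): it extracts the instantaneous phase of each signal at one frequency with a complex Morlet wavelet, then takes the magnitude of the mean phase-difference vector, which equals 1 for perfectly phase-locked signals and approaches 0 for unrelated ones. The function names, the number of wavelet cycles, and the example signals are all illustrative assumptions.

```python
import numpy as np

def morlet_phase(x, freq, fs, cycles=6.0):
    """Instantaneous phase of `x` at `freq` (Hz), via a complex Morlet wavelet."""
    n = len(x)
    t = (np.arange(n) - n // 2) / fs               # time axis centred on zero
    sigma = cycles / (2.0 * np.pi * freq)          # Gaussian width in seconds
    w = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2.0 * sigma**2))
    return np.angle(np.convolve(x - np.mean(x), w, mode="same"))

def wavelet_phase_coherence(x, y, freq, fs):
    """|<exp(i*dphi)>| over time: 1 = constant phase difference, ~0 = no relation."""
    dphi = morlet_phase(x, freq, fs) - morlet_phase(y, freq, fs)
    return float(np.abs(np.mean(np.exp(1j * dphi))))

# Example: two 5-min signals sampled at 16 Hz sharing a 0.1 Hz rhythm.
fs, f = 16.0, 0.1
t = np.arange(int(300 * fs)) / fs
x = np.sin(2 * np.pi * f * t)
y = np.sin(2 * np.pi * f * t + 0.5)                # same rhythm, shifted phase
print(wavelet_phase_coherence(x, y, f, fs))        # close to 1
```

In practice the coherence is computed across a whole range of frequencies (one value per wavelet scale) and tested against surrogate data; the single-frequency version above only shows the core of the calculation.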
509 Bioactive Substances-Loaded Water-in-Oil/Oil-in-Water Emulsions for Dietary Supplementation in the Elderly
Authors: Agnieszka Markowska-Radomska, Ewa Dluska
Abstract:
Maintaining a diet dense in bioactive substances is important for the elderly, especially to prevent disease and support healthy ageing. Adequate intake of bioactive substances can reduce the risk of developing chronic diseases (e.g. cardiovascular disease, osteoporosis, neurodegenerative syndromes, diseases of the oral cavity, gastrointestinal (GI) disorders, diabetes, and cancer). This can be achieved by introducing comprehensive supplementation of the components necessary for the proper functioning of the ageing body. The paper proposes multiple emulsions of the W1/O/W2 (water-in-oil-in-water) type as carriers for effective co-encapsulation and co-delivery of bioactive substances in supplementation of the elderly. Multiple emulsions are complex structured systems ("drops in drops"). The functional structure of the W1/O/W2 emulsion enables (i) incorporation of one or more bioactive components (lipophilic and hydrophilic); (ii) enhancement of the stability and bioavailability of the encapsulated substances; (iii) prevention of interactions between the substances, as well as with the external environment; (iv) delivery to a specific location; and (v) release in a controlled manner. While a two-step emulsification process is generally used to obtain multiple emulsions, here they were prepared by a one-step method in the Couette-Taylor flow (CTF) contactor in a continuous manner. The paper proposes functionalization of the emulsions by introducing a pH-responsive biopolymer, carboxymethylcellulose sodium salt (CMC-Na), into the external phase, which makes it possible to achieve release of the components controlled by the pH of the gastrointestinal environment. The membrane phase of the emulsions was soybean oil. The W1/O/W2 emulsions were evaluated for their characteristics (drop size/drop size distribution, volume packing fraction), encapsulation efficiency and stability during storage (up to 30 days) at 4ºC and 25ºC.
The in vitro co-release of the substances was also investigated in a simulated gastrointestinal environment (different pH values and compositions of the release medium). Three groups of stable multiple emulsions were obtained: emulsions I with co-encapsulated vitamins B12, B6 and resveratrol; emulsions II with vitamin A and β-carotene; and emulsions III with vitamins C, E and D3. The substances were encapsulated in the appropriate emulsion phases depending on their solubility. For all emulsions, high encapsulation efficiency (over 95%) and a high volume packing fraction of internal droplets (0.54-0.76) were reached. In addition, owing to the presence of a polymer (CMC-Na) with adhesive properties, high encapsulation stability during storage was achieved. The co-release study of the encapsulated bioactive substances confirmed the possibility of modifying the release profiles. It was found that the release process can be controlled through the composition, structure and physicochemical parameters of the emulsions and the pH of the release medium. The results showed that the obtained multiple emulsions may be used as potential liquid complex carriers for controlled/modified/site-specific co-delivery of bioactive substances in dietary supplementation of the elderly.
Keywords: bioactive substance co-release, co-encapsulation, elderly supplementation, multiple emulsion
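pH-dependent release profiles of this kind are often summarized with simple kinetic models. The sketch below is a generic first-order release model, not the authors' analysis; the rate constants for the two simulated GI environments are purely hypothetical and serve only to show how a pH-responsive shell (such as CMC-Na) maps onto a slower or faster release curve.

```python
import numpy as np

def first_order_release(t, k, m_inf=1.0):
    """Cumulative fraction released at time t (h): M(t)/M_inf = 1 - exp(-k*t)."""
    return m_inf * (1.0 - np.exp(-k * np.asarray(t, dtype=float)))

# Hypothetical rate constants (per hour) for two simulated GI environments;
# a pH-responsive CMC-Na shell would correspond to a smaller k at gastric pH.
t = np.linspace(0.0, 8.0, 9)                  # hours
gastric = first_order_release(t, k=0.1)       # slow release at low pH (illustrative)
intestinal = first_order_release(t, k=0.8)    # faster release at intestinal pH
print(intestinal[-1] > gastric[-1])           # the profile depends on k, i.e. on pH
```

Fitting such a model to each release medium gives one rate constant per pH condition, which is a compact way to compare how strongly the emulsion structure modifies the profile.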
508 Targeting the Glucocorticoid Receptor Eliminates Dormant Chemoresistant Cancer Stem Cells in Glioblastoma
Authors: Aoxue Yang, Weili Tian, Haikun Liu
Abstract:
Brain tumor stem cells (BTSCs) are resistant to therapy and give rise to recurrent tumors. These rare and elusive cells are likely to disseminate during cancer progression, and some may enter dormancy, remaining viable but not proliferating. The identification of dormant BTSCs is thus necessary to design effective therapies for glioblastoma (GBM) patients. Glucocorticoids (GCs) are used to treat GBM-associated edema. However, glucocorticoids participate in the physiological response to psychosocial stress, which is linked to poor cancer prognosis. This raises the concern that glucocorticoids affect the tumor and BTSCs. Identifying markers specifically expressed by BTSCs may enable specific therapies that spare their normal tissue-resident counterparts. By ribosome profiling analysis, we identified that glycerol-3-phosphate dehydrogenase 1 (GPD1) is expressed by dormant BTSCs but not by neural stem cells (NSCs). In stress-induction experiments in vitro, we found that only dexamethasone (DEXA) significantly increased the expression of GPD1 in NSCs. Conversely, mifepristone (MIFE), a glucocorticoid receptor (GR) antagonist, decreased GPD1 protein levels and weakened proliferation and stemness in BTSCs. Furthermore, DEXA induced GPD1 expression in tumor-bearing mouse brains and shortened animal survival, whereas MIFE had the opposite effect, prolonging the lifespan of the mice. Knocking out GR in NSCs blocked the upregulation of GPD1 induced by DEXA, and ChIP-Seq identified specific sequences on the GPD1 promoter bound by GR that enhance the efficiency of GPD1 transcription. Moreover, GR and GPD1 are highly co-stained on GBM sections obtained from patients and mice. All these findings confirmed that GR regulates GPD1 and that loss of GPD1 impairs multiple pathways important for BTSC maintenance. GPD1 is also a critical enzyme regulating glycolysis and lipid synthesis.
We observed that DEXA and MIFE changed the metabolic profiles of BTSCs by regulating GPD1, shifting the transition into and out of cell dormancy. Our transcriptome and lipidomics analyses demonstrated that cell cycle signaling and phosphoglyceride synthesis pathways contributed substantially to the effects of GPD1 inhibition by MIFE. In conclusion, our findings raise the concern that treatment of GBM with GCs may compromise the efficacy of chemotherapy and contribute to BTSC dormancy. Inhibition of GR dramatically reduced GPD1 and extended the survival of GBM-bearing mice. The molecular link between GR and GPD1 may provide an attractive therapeutic target for glioblastoma.
Keywords: cancer stem cell, dormancy, glioblastoma, glycerol-3-phosphate dehydrogenase 1, glucocorticoid receptor, dexamethasone, RNA-sequencing, phosphoglycerides
507 Signal Transduction in a Myenteric Ganglion
Authors: I. M. Salama, R. N. Miftahof
Abstract:
A functional element of the myenteric nervous plexus is the morphologically distinct ganglion. Composed of sensory, inter- and motor neurons arranged via synapses into neuronal circuits, its task is to decipher and integrate spike-coded information within the plexus into regulatory output signals. The stability of signal processing in response to a wide range of internal and external perturbations depends on the plasticity of individual neurons. Any aberration in this inherent property may lead to instability, with the development of dynamic chaos, and can be manifested as pathological conditions such as intestinal dysrhythmia or irritable bowel syndrome. The aim of this study is to investigate patterns of signal transduction within a two-neuron chain (a ganglion) under normal physiological and structurally altered states. The ganglion contains a primary sensory (AH-type) and a motor (S-type) neuron linked through a cholinergic dendrosomatic synapse. The neurons have distinct electrophysiological characteristics, including levels of the resting and threshold membrane potentials and spiking activity. These result from the dynamics of the ionic channels, namely Na+, K+, Ca++-activated K+, Ca++ and Cl-. Mechanical stretches of various intensities and frequencies applied at the receptive field of the AH-neuron generate a cascade of electrochemical events along the chain. At low frequencies, ν < 0.3 Hz, the neurons demonstrate strong connectivity and coherent firing: the AH-neuron shows phasic bursting with spike frequency adaptation, while the S-neuron responds with tonic bursts. At high frequencies, ν > 0.5 Hz, the patterns of electrical activity change to rebound and mixed-mode bursting, respectively, indicating ganglionic loss of plasticity and adaptability. A simultaneous increase in neuronal conductivity for Na+, K+ and Ca++ ions results in tonic mixed spiking of the sensory neuron and class 2 excitability of the motor neuron.
Although signal transduction along the chain remains stable, synchrony in the firing pattern is not maintained, and the number of discharges of the S-type neuron is significantly reduced. A concomitant increase in Ca++-activated K+ conductivity and a decrease in K+ conductivity re-establishes weak connectivity between the two neurons and converts their firing pattern to a bistable mode. It is thus demonstrated that neuronal plasticity and adaptability have a stabilizing effect on the dynamics of signal processing in the ganglion. Functional modulation of neuronal ion channel permeability, achieved pharmacologically in vivo and in vitro, can improve connectivity between neurons. These findings are consistent with experimental electrophysiological recordings from myenteric ganglia in intestinal dysrhythmia and suggest possible pathophysiological mechanisms.
Keywords: neuronal chain, signal transduction, plasticity, stability
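The basic sensory-to-motor signal transduction described above can be sketched computationally. The code below is a drastically simplified illustration, not the authors' conductance-based (Na+, K+, Ca++-activated K+, Ca++, Cl-) model: it uses two leaky integrate-and-fire units, with a rhythmic "stretch" drive on the AH-type neuron and a decaying synaptic drive onto the S-type neuron. All parameter values are hypothetical placeholders chosen only to produce spiking.

```python
import numpy as np

def simulate_chain(stim_freq=0.2, t_max=60.0, dt=1e-3):
    """Two-neuron (sensory AH -> motor S) chain as leaky integrate-and-fire units.

    Illustrative sketch only; parameters are not fitted to myenteric data."""
    v_rest, v_th, v_reset = -60.0, -40.0, -65.0    # membrane potentials (mV)
    tau, tau_syn = 20e-3, 50e-3                    # membrane / synaptic time constants (s)
    v_ah, v_s, syn = v_rest, v_rest, 0.0
    spikes_ah, spikes_s = [], []
    for i in range(int(t_max / dt)):
        t = i * dt
        # half-wave-rectified sinusoidal "stretch" drive on the AH-neuron
        stretch = 30.0 * max(0.0, np.sin(2 * np.pi * stim_freq * t))
        v_ah += dt / tau * (v_rest - v_ah + stretch)
        if v_ah >= v_th:
            spikes_ah.append(t)
            v_ah = v_reset
            syn += 15.0                            # cholinergic EPSP onto the S-neuron
        syn -= dt / tau_syn * syn                  # synaptic drive decays
        v_s += dt / tau * (v_rest - v_s + syn)
        if v_s >= v_th:
            spikes_s.append(t)
            v_s = v_reset
    return spikes_ah, spikes_s

ah, s = simulate_chain()
print(len(ah), len(s))    # sensory and motor spike counts over 60 s
```

Even this toy model reproduces the qualitative chain behaviour: the motor neuron fires only after the sensory neuron does, and weakening the synaptic increment (the role played by altered ion-channel conductivities in the full model) reduces or abolishes the S-neuron's discharges.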
506 Changing from Crude (Rudimentary) to Modern Methods of Cassava Processing in Ngwo Village, Njikwa Sub-Division, North West Region of Cameroon
Authors: Loveline Ambo Angwah
Abstract:
The processing of cassava tubers or roots into food using crude, rudimentary methods (hand peeling, grating, frying and sun drying) is a cumbersome and difficult process. The crude methods are time-consuming and labour-intensive, whereas modern processing, using machines to perform the various operations of washing, peeling, grinding, oven drying, fermentation and frying, is easier, less time-consuming and less labour-intensive. Using rudimentary methods, cassava roots are processed into numerous products and utilized in various ways according to local customs and preferences. For the people of Ngwo village, cassava is transformed locally into a flour or powder called 'cumcum'. It is also soaked in water to give a food called 'water fufu' and fried to give 'garri'. The leaves are consumed as vegetables. Added to these, its relatively high yields and ability to stay underground after maturity for long periods give cassava a considerable advantage as a commodity used by poor rural folk in the community to fight poverty. It plays a major role in efforts to alleviate the food crisis because of its efficient production of food energy, year-round availability, tolerance to extreme stress conditions, and suitability to present farming and food systems in Africa. Improvement of cassava processing and utilization techniques would greatly increase labour efficiency, incomes, and living standards of cassava farmers and the rural poor, as well as enhance the shelf life of products, facilitate their transportation, increase marketing opportunities, and help improve human and livestock nutrition. This paper presents a general overview of the crude cassava processing and utilization methods now used by subsistence and small-scale farmers in Ngwo village of the North West region of Cameroon, and examines the opportunities for improving processing technologies.
Cassava needs processing because the roots cannot be stored for long: they rot within 3-4 days of harvest. They are bulky, with about 70% moisture content, so transportation of the tubers to markets is difficult and expensive. The roots and leaves contain varying amounts of cyanide, which is toxic to humans and animals, and the raw roots and uncooked leaves are unpalatable. Cassava must therefore be processed into various forms to increase the shelf life of the products, facilitate transportation and marketing, reduce cyanide content and improve palatability.
Keywords: cassava roots, crude ways, food system, poverty
505 Malaysia as a Case Study for Climate Policy Integration into Energy Policy
Authors: Marcus Lee
Abstract:
The energy sector is the largest contributor to greenhouse gas emissions in Malaysia, and these emissions drive climate change. The climate change problem is therefore an energy sector problem, and tackling climate change successfully is contingent on actions taken in the energy sector. The researcher propounds that 'Climate Policy Integration' (CPI) into energy policy is a viable but insufficiently developed strategy in Malaysia that promotes the synergies between climate change and energy objectives, in order to achieve the targets found in both climate change and energy policies. In exploring this hypothesis, this paper presentation will focus on two particular aspects. Firstly, the meaning of CPI as an approach and as a concept will be explored. As an approach, CPI into energy policy means the integration of climate change objectives into the energy policy area. Its subject matter focuses on establishing the functional interrelations between climate change and energy objectives, promoting their synergies and minimising their contradictions. However, its conceptual underpinnings are less than straightforward. Drawing from the 'principle of integration' found in international treaties and declarations such as the Stockholm Declaration 1972, the Rio Declaration 1992 and the United Nations Framework Convention on Climate Change 1992 ('UNFCCC'), this paper presentation will explore the contradictions in international standards on how the sustainable development tenets of environmental sustainability, social development and economic development are to be balanced, and the relevance of those standards to CPI. Further, the researcher will consider whether authority may be derived from international treaties and declarations to argue for the prioritisation of environmental sustainability over the other sustainable development tenets through CPI. Secondly, this paper presentation will explore the degree to which CPI into energy policy has been pursued and achieved in Malaysia.
In particular, the strength of the conceptual framework for CPI in Malaysian governance will be assessed through Malaysia's National Policy on Climate Change (2009) ('NPCC 2009'). The development (or lack thereof) of CPI as an approach since the publication of the NPCC 2009 will also be assessed from official government documents and policies that may carry a climate change and/or energy agenda: Malaysia's National Renewable Energy Policy and Action Plan (2010), the draft National Energy Efficiency Action Plan (2014), the Intended Nationally Determined Contributions (2015) in relation to the Paris Agreement, the 11th Malaysia Plan (2015) and the Biennial Update Report to the UNFCCC (2015). These documents will be assessed for the presence of CPI based on their language and drafting as well as the extent of CPI subject matter they express. Based on the analysis, the researcher will propose solutions for improving Malaysia's climate change and energy governance. The theory of reflexive governance will be applied to CPI, and the concluding remarks will consider whether CPI reflects reflexive governance by demonstrating how the governance process can itself become the object of shaping outcomes.
Keywords: climate policy integration, mainstreaming, policy coherence, Malaysian energy governance