Search results for: factor models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11542

352 Qualitative Narrative Framework as a Tool for the Reduction of Stigma and Prejudice

Authors: Anastasia Schnitzer, Oliver Rehren

Abstract:

Mental health has become an increasingly important topic in society in recent years, not least due to the challenges posed by the coronavirus pandemic. Along with this, the public has become more and more aware that a lack of education and proper coping mechanisms may result in a notable risk of developing mental disorders. Yet, there are still many biases against those affected, which are further connected to issues of stigmatization and societal exclusion. One of the main strategies to combat these forms of prejudice and stigma is to induce intergroup contact. More specifically, the Intergroup Contact Theory states that engaging in certain types of contact with members of marginalized groups may be an effective way to improve attitudes towards these groups. However, due to persistent prejudice and stigmatization, affected individuals often do not dare to speak openly about their mental disorders, so intergroup contact often goes unnoticed. As a result, many people only experience conscious contact with individuals with a mental disorder through media. As an analogy to the Intergroup Contact Theory, the Parasocial Contact Hypothesis proposes that repeatedly being exposed to positive media representations of outgroup members can reduce negative prejudices and attitudes towards this outgroup. While there is a growing body of research on the merit of this mechanism, measurements often consist only of 'positive' or 'negative' parasocial contact conditions (or examine the valence or quality of previous contact with the outgroup), while more specific conditions are often neglected. The current study aims to tackle this shortcoming. By scrutinizing the potential of contemporary series as a narrative framework of high quality, we strive to elucidate more detailed aspects of beneficial parasocial contact, for the sake of reducing prejudice and stigma towards individuals with mental disorders. Thus, a two-factorial between-subject online panel study with three measurement points was conducted (N = 95). Participants were randomly assigned to one of two groups and watched episodes of a series with either a high-quality (Quality-TV) or low-quality (Continental-TV) narrative framework, with a one-week interval between episodes. Suitable series were determined with the help of a pretest. Prejudice and stigma towards people with mental disorders were measured at the beginning of the study, before and after each episode, and in a final follow-up one week after the last two episodes. Additionally, parasocial interaction (PSI), quality of contact (QoC), and transportation were measured several times. Based on these data, multivariate multilevel analyses were performed in R using the lavaan package. Latent growth models showed moderate to high increases in QoC and PSI as well as small to moderate decreases in stigma and prejudice over time. Multilevel path analysis with individual and group levels further revealed that a qualitative narrative framework leads to a higher-quality contact experience, which in turn leads to lower prejudice and stigma, with effects ranging from moderate to high.
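A minimal growth-model sketch, not the authors' lavaan code: a random-intercept, random-slope model of stigma across measurement waves fitted with Python statsmodels, analogous to the latent growth curves reported above. The data, column names, and effect sizes below are simulated placeholders.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n, waves = 95, 5
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n), waves),
    "wave": np.tile(np.arange(waves), n),
    "condition": np.repeat(rng.integers(0, 2, n), waves),  # 0 = low, 1 = high narrative quality
})
# simulated stigma scores that decline slightly faster in the high-quality condition
df["stigma"] = (3.0 - 0.10 * df["wave"] - 0.05 * df["wave"] * df["condition"]
                + rng.normal(0, 0.3, len(df)))

# random intercept and random slope for wave per participant (growth-curve analogue)
model = smf.mixedlm("stigma ~ wave * condition", data=df,
                    groups=df["participant"], re_formula="~wave")
result = model.fit(method="lbfgs")
print(result.summary())  # a negative wave coefficient indicates declining stigma over time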

Keywords: prejudice, quality of contact, parasocial contact, narrative framework

Procedia PDF Downloads 83
351 The Effectiveness of an Occupational Therapy Metacognitive-Functional Intervention for the Improvement of Human Risk Factors of Bus Drivers

Authors: Navah Z. Ratzon, Rachel Shichrur

Abstract:

Background: Many studies have assessed and identified the risk factors of safe driving, but there is relatively little research-based evidence concerning the ability to improve the driving skills of drivers in general, and in particular of bus drivers, who are defined as a population at risk. Accidents involving bus drivers can endanger dozens of passengers and cause high direct and indirect damages. Objective: To examine the effectiveness of a metacognitive-functional intervention program for the reduction of risk factors among professional drivers relative to a control group. Methods: The study examined 77 bus drivers, aged 27-69, working for a large public company in the center of the country. Twenty-one drivers continued to the intervention stage; four of them dropped out before the end of the intervention. The intervention program we developed was based on previous driving models and on the occupational therapy practice framework model guiding practice in Israel, adjusted to professional driving in public transportation and its particular risk factors. Treatment focused on raising awareness of the safe-driving risk factors identified at prescreening (ergonomic, perceptual-cognitive and on-road driving data), with reference to the difficulties that the driver raised, and on providing coping strategies. The intervention was customized for each driver and included three sessions of two hours each. The effectiveness of the intervention was tested using objective measures: In-Vehicle Data Recorders (IVDR) for monitoring natural driving data and traffic accident data before and after the intervention, and a subjective measure (an occupational performance questionnaire for bus drivers). Results: Statistical analysis found a significant difference in the degree of change in the rate of IVDR perilous events before and after the intervention (t(17) = 2.14, p = 0.046). There was a significant difference in the number of accidents per year before and after the intervention in the intervention group (t(17) = 2.11, p = 0.05), but no significant change in the control group. Subjective ratings of the level of performance and of satisfaction with performance improved in all areas tested following the intervention. The change in the 'human factors/person' field was significant (performance: t = -2.30, p = 0.04; satisfaction with performance: t = -3.18, p = 0.009). The change in the 'driving occupation/tasks' field was not significant but showed a tendency toward significance (t = -1.94, p = 0.07). No significant differences were found in driving environment-related variables. Conclusions: The metacognitive-functional intervention significantly improved the objective and subjective measures of the safety of bus drivers' driving. These novel results highlight the potential contribution of occupational therapists, using metacognitive-functional treatment, to preventing car accidents among the healthy driver population and improving the well-being of these drivers. This study also provides familiarity with advanced IVDR technologies and enriches the knowledge of occupational therapists with regard to using a wide variety of driving assessment tools and making best-practice decisions.
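For illustration only, a paired before/after comparison of the kind reported above can be computed as follows; the event-rate arrays are placeholder values, not the study's IVDR measurements.

import numpy as np
from scipy import stats

# placeholder per-driver rates of IVDR perilous events (events per driving hour)
events_before = np.array([4.2, 3.8, 5.1, 2.9, 4.7, 3.3, 5.6, 4.0, 3.1])
events_after = np.array([3.1, 3.5, 4.0, 2.2, 3.9, 2.8, 4.4, 3.0, 2.6])

# paired t-test on the same drivers measured before and after the intervention
t_stat, p_value = stats.ttest_rel(events_before, events_after)
mean_change = (events_after - events_before).mean()
print(f"t = {t_stat:.2f}, p = {p_value:.3f}, mean change = {mean_change:.2f}")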

Keywords: bus drivers, IVDR, human risk factors, metacognitive-functional intervention

Procedia PDF Downloads 346
350 A Fast Multi-Scale Finite Element Method for Geophysical Resistivity Measurements

Authors: Mostafa Shahriari, Sergio Rojas, David Pardo, Angel Rodriguez-Rozas, Shaaban A. Bakr, Victor M. Calo, Ignacio Muga

Abstract:

Logging-While-Drilling (LWD) is a technique for recording down-hole logging measurements while drilling the well. Nowadays, LWD devices (e.g., nuclear, sonic, resistivity) are mostly used commercially for geo-steering applications. Modern borehole resistivity tools are able to measure all components of the magnetic field by incorporating tilted coils. The depth of investigation of LWD tools is limited compared to the thickness of the geological layers. Thus, it is common practice to approximate the Earth's subsurface with a sequence of 1D models. For a 1D model, we can reduce the dimensionality of the problem using a Hankel transform. We can solve the resulting system of ordinary differential equations (ODEs) either (a) analytically, which results in a so-called semi-analytic method after performing a numerical inverse Hankel transform, or (b) numerically. Semi-analytic methods are used by the industry due to their high performance. However, they have major limitations, namely: (i) the analytical solution of the aforementioned system of ODEs exists only for piecewise constant resistivity distributions, and for arbitrary resistivity distributions the solution of the system of ODEs remains unknown; and (ii) in geo-steering, we need to solve inverse problems with respect to the inversion variables (e.g., the constant resistivity value of each layer and the bed boundary positions) using a gradient-based inversion method, and thus we need to compute the corresponding derivatives; however, to the best of our knowledge, the analytical derivatives for cross-bedded formations and the analytical derivatives with respect to the bed boundary positions have not been published. The main contribution of this work is to overcome the aforementioned limitations of semi-analytic methods by solving each 1D model (associated with each Hankel mode) using an efficient multi-scale finite element method. The main idea is to divide our computations into two parts: (a) offline computations, which are independent of the tool positions and are precomputed only once and used for all logging positions, and (b) online computations, which depend upon the logging position. With the above method, (a) we can consider arbitrary resistivity distributions along the 1D model, and (b) we can easily and rapidly compute the derivatives with respect to any inversion variable at negligible additional cost by using an adjoint state formulation. Although the proposed method is slower than semi-analytic methods, its computational efficiency is still high. In the presentation, we shall derive the mathematical variational formulation, describe the proposed multi-scale finite element method, and verify the accuracy and efficiency of our method by performing a wide range of numerical experiments and comparing the numerical solutions to semi-analytic ones where the latter are available.
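As a toy illustration of the Hankel-transform step mentioned above (not the authors' multi-scale FEM), the order-0 Hankel transform H(k) = ∫ f(r) J0(kr) r dr can be approximated by quadrature on a truncated radial grid; the test function and grid sizes are arbitrary choices.

import numpy as np
from scipy.special import j0

def hankel0(f_vals, r, k):
    """Approximate order-0 Hankel transform of f(r) at wavenumbers k."""
    return np.array([np.trapz(f_vals * j0(ki * r) * r, r) for ki in k])

r = np.linspace(1e-6, 50.0, 4000)       # truncated radial grid
f = np.exp(-r**2)                       # example radial function
k = np.linspace(0.0, 5.0, 50)

numeric = hankel0(f, r, k)
analytic = 0.5 * np.exp(-k**2 / 4.0)    # known transform of exp(-r^2)
print("max discretization error:", np.max(np.abs(numeric - analytic)))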

Keywords: logging-while-drilling, resistivity measurements, multi-scale finite elements, Hankel transform

Procedia PDF Downloads 386
349 Harnessing Artificial Intelligence for Early Detection and Management of Infectious Disease Outbreaks

Authors: Amarachukwu B. Isiaka, Vivian N. Anakwenze, Chinyere C. Ezemba, Chiamaka R. Ilodinso, Chikodili G. Anaukwu, Chukwuebuka M. Ezeokoli, Ugonna H. Uzoka

Abstract:

Infectious diseases continue to pose significant threats to global public health, necessitating advanced and timely detection methods for effective outbreak management. This study explores the integration of artificial intelligence (AI) in the early detection and management of infectious disease outbreaks. Leveraging vast datasets from diverse sources, including electronic health records, social media, and environmental monitoring, AI-driven algorithms are employed to analyze patterns and anomalies indicative of potential outbreaks. Machine learning models, trained on historical data and continuously updated with real-time information, contribute to the identification of emerging threats. The implementation of AI extends beyond detection, encompassing predictive analytics for disease spread and severity assessment. Furthermore, the paper discusses the role of AI in predictive modeling, enabling public health officials to anticipate the spread of infectious diseases and allocate resources proactively. Machine learning algorithms can analyze historical data, climatic conditions, and human mobility patterns to predict potential hotspots and optimize intervention strategies. The study evaluates the current landscape of AI applications in infectious disease surveillance and proposes a comprehensive framework for their integration into existing public health infrastructures. The implementation of an AI-driven early detection system requires collaboration between public health agencies, healthcare providers, and technology experts. Ethical considerations, privacy protection, and data security are paramount in developing a framework that balances the benefits of AI with the protection of individual rights. The synergistic collaboration between AI technologies and traditional epidemiological methods is emphasized, highlighting the potential to enhance a nation's ability to detect, respond to, and manage infectious disease outbreaks in a proactive and data-driven manner. The findings of this research underscore the transformative impact of harnessing AI for early detection and management, offering a promising avenue for strengthening the resilience of public health systems in the face of evolving infectious disease challenges. This paper advocates for the integration of artificial intelligence into the existing public health infrastructure for early detection and management of infectious disease outbreaks. The proposed AI-driven system has the potential to revolutionize the way we approach infectious disease surveillance, providing a more proactive and effective response to safeguard public health.
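A toy sketch of one anomaly-flagging step of the kind described above, with simulated weekly case counts and an assumed z-score threshold; it is not the framework proposed in the paper.

import numpy as np

rng = np.random.default_rng(0)
weekly_cases = rng.poisson(lam=20, size=104).astype(float)   # two years of baseline counts
weekly_cases[90:] += np.linspace(5, 60, 14)                  # injected simulated outbreak

window = 8          # recent-baseline window (weeks), an assumed tuning choice
alerts = []
for t in range(window, len(weekly_cases)):
    baseline = weekly_cases[t - window:t]
    z = (weekly_cases[t] - baseline.mean()) / (baseline.std() + 1e-9)
    if z > 3.0:     # flag weeks far above the recent baseline
        alerts.append(t)

print("Weeks flagged as potential outbreak signal:", alerts)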

Keywords: artificial intelligence, early detection, disease surveillance, infectious diseases, outbreak management

Procedia PDF Downloads 66
348 Application of Flow Cytometry for Detection of Influence of Abiotic Stress on Plants

Authors: Dace Grauda, Inta Belogrudova, Alexei Katashev, Linda Lancere, Isaak Rashal

Abstract:

The goal of the study was to elaborate an easily applicable flow cytometry method for detecting the influence of abiotic stress factors on plants, which could be useful for detecting environmental stresses in urban areas. The lime tree Tilia vulgaris H. is a popular tree species used for urban landscaping in Europe and is one of the main species of street greenery in Riga, Latvia. Tree decline and low vitality have been observed in the central part of Riga. For this reason, lime trees were selected as a model object for the investigation. Between the end of June and the beginning of July, 12 samples were collected from different urban locations, along with plant material from a greenhouse. A BD FACSJazz® cell sorter (BD Biosciences, USA) with a flow cytometer function was used to test the viability of plant cells. The method was based on changes in the relative fluorescence intensity of cells under blue laser excitation (488 nm) after the influence of stress factors. Sphero™ rainbow calibration particles (3.0-3.4 μm, BD Biosciences, USA) in phosphate-buffered saline (PBS) were used to calibrate the flow cytometer. BD Pharmingen™ PBS (BD Biosciences, USA) was used for the flow cytometry assays. The mean fluorescence intensity of the purified cell suspension samples was recorded. Preliminarily, multiple gate sizes and shapes were tested to find the one with the lowest CV. It was found that a low CV can be obtained if only the densest part of the plant cells' forward scatter/side scatter profile is analysed, because in this case the plant cells are most similar in size and shape. Young pollen cells at the one-nucleus stage were found to be the best for detecting the influence of abiotic stress. Only fresh plant material was used for the experiments: buds of Tilia vulgaris with a diameter of 2 mm. For the establishment of the cell suspension (in vitro culture), a modified microspore culture protocol was applied. The cells were suspended in MS (Murashige and Skoog) medium. To imitate the dust of an urban area, SiO2 nanoparticles at a concentration of 0.001 g/ml were dispersed in distilled water; 1 ml of the SiO2 nanoparticle suspension was added to 10 ml of cell suspension, and the cells were then incubated under rapid shaking for 1 and 3 hours. As a further stress factor, UV irradiation of the cells for 20 min was used (Hamamatsu light source L9566-02A, L10852 lamp, A10014-50-0110), with maximum relative intensity (100%) at 365 nm and ~310 nm (75%). Before UV irradiation, the cell suspension was spread in a thin layer on a filter paper disk (diameter 45 mm) in a Petri dish with solid MS medium. Cells without treatment were used as a control. Experiments were performed at room temperature (23-25 °C). Using the BD FACS software, a cell plot was created to determine the densest part, which was then gated using an oval-shaped gate. The gate included 95 to 99% of all cells. To determine the relative fluorescence of cells, a logarithmic fluorescence scale in arbitrary fluorescence units was used. 3×10³ gated cells were analysed from each sample. Significant differences were found in the relative fluorescence of cells from different trees after treatment with SiO2 nanoparticles and UV irradiation in comparison with the control.
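As a rough illustration of the gating and comparison steps described above (not the BD FACS software workflow), events in the densest part of the FSC/SSC profile can be selected with a kernel density estimate and their mean log fluorescence compared between samples; the event data and the 95% density cut-off are assumptions.

import numpy as np
from scipy.stats import gaussian_kde

def gate_densest(fsc, ssc, keep_fraction=0.95):
    """Keep events lying in the densest part of the FSC/SSC profile."""
    xy = np.vstack([fsc, ssc])
    density = gaussian_kde(xy)(xy)
    return density >= np.quantile(density, 1.0 - keep_fraction)

# placeholder events standing in for exported cytometry data
rng = np.random.default_rng(1)
fsc = rng.normal(100, 15, 3000)
ssc = rng.normal(80, 12, 3000)
fl_control = rng.lognormal(3.0, 0.4, 3000)   # relative fluorescence, arbitrary units
fl_treated = rng.lognormal(2.8, 0.4, 3000)

gate = gate_densest(fsc, ssc)
for label, fl in [("control", fl_control), ("treated", fl_treated)]:
    print(label, "mean log10 fluorescence of gated cells:",
          round(np.mean(np.log10(fl[gate])), 3))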

Keywords: flow cytometry, fluorescence, SiO2 nanoparticles, UV irradiation

Procedia PDF Downloads 412
347 Spatial Analysis and Determinants of the Number of Antenatal Health Care Visits Among Pregnant Women in Ethiopia: Application of Spatial Multilevel Count Regression Models

Authors: Muluwerk Ayele Derebe

Abstract:

Background: Antenatal care (ANC) is an essential element in the continuum of reproductive health care for preventing pregnancy-related morbidity and mortality. Objective: The aim of this study is to assess the spatial pattern and predictors of ANC visits in Ethiopia. Method: This study used data from the 2016 Ethiopian Demographic and Health Survey, a nationwide community-based cross-sectional survey, covering 7,174 pregnant women aged 15-49 years. Spatial analysis was performed using Getis-Ord Gi* statistics to identify hot- and cold-spot areas of ANC visits. The multilevel glmmTMB package, adjusted for spatial effects, was used in R. Spatial multilevel count regression was conducted to identify predictors of antenatal care visits for pregnant women, and the proportional change in variance was computed to uncover the effects of individual- and community-level factors on ANC visits. Results: The distribution of ANC visits was spatially clustered (Moran's I = 0.271, p < 0.001; ICC = 0.497, p < 0.001). The highest spatial outlier areas of ANC visits were found in the Amhara (South Wollo, West Gojjam, North Shewa), Oromia (West Arsi and East Hararghe), Tigray (Central Tigray) and Benishangul-Gumuz (Asosa and Metekel) regions. The data showed excess zeros (34.6%) and over-dispersion. The expected number of ANC visits for pregnant women with pregnancy complications was higher [regression coefficient 0.7868; ARR = 2.1964, 95% CI: 1.8605-2.5928, p < 0.0001] compared with pregnant women who had no pregnancy complications. The expected number of ANC visits for a pregnant woman living in a rural area was also higher [regression coefficient 1.2254; ARR = 3.4057, 95% CI: 2.1462-5.4041, p < 0.0001] compared with a pregnant woman living in an urban area. The study found dissimilar clusters with a low number of zero counts for the mean number of ANC visits surrounded by clusters with a higher number of counts of the average number of ANC visits, when other variables were held constant. Conclusion: This study found that the number of ANC visits in Ethiopia has a spatial pattern associated with socioeconomic, demographic, and geographic risk factors. Spatial clustering of ANC visits exists in all regions of Ethiopia. At the individual level, the mother's age, religion, mother's education, husband's education, mother's occupation, husband's occupation, signs of pregnancy complications, wealth index and marital status had strong associations with the number of ANC visits. At the community level, place of residence, region, age of the mother, sex of the household head, signs of pregnancy complications and distance to a health facility had strong associations with the number of ANC visits.
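A simplified count-model sketch in Python statsmodels, not the spatially adjusted multilevel glmmTMB model used in the study: a single-level zero-inflated Poisson for the number of ANC visits, reflecting the excess-zero feature reported above. The data and predictor names are simulated placeholders.

import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(7)
n = 2000
rural = rng.integers(0, 2, n)
complications = rng.integers(0, 2, n)
never_user = rng.random(n) < 0.30                       # structural zeros (no ANC at all)
lam = np.exp(0.4 + 0.8 * complications - 0.2 * rural)   # Poisson mean among users
visits = np.where(never_user, 0, rng.poisson(lam))

exog = sm.add_constant(np.column_stack([rural, complications]))
exog_infl = sm.add_constant(rural)                      # predictors of the zero-inflation part

zip_model = ZeroInflatedPoisson(endog=visits, exog=exog, exog_infl=exog_infl)
zip_fit = zip_model.fit(maxiter=500, disp=False)
print(zip_fit.summary())
print(np.exp(zip_fit.params))   # exponentiated count-model coefficients ~ adjusted rate ratios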

Keywords: Ethiopia, ANC, spatial, multilevel, zero inflated Poisson

Procedia PDF Downloads 74
346 Unlocking New Room of Production in Brown Field: Integration of Geological Data Conditioned 3D Reservoir Modelling of Lower Senonian Matulla Formation, Ras Budran Field, East Central Gulf of Suez, Egypt

Authors: Nader Mohamed

Abstract:

The Late Cretaceous deposits are well developed throughout Egypt. This is due to a transgression phase associated with the subsidence caused by the neo-Tethyan rift event that took place across the northern margin of Africa, resulting in a period of dominantly marine deposition in the Gulf of Suez. The Late Cretaceous Nezzazat Group represents the Cenomanian, Turonian and Lower Senonian clastic sediments. The Nezzazat Group has been divided into four formations, namely, from base to top, the Raha Formation, the Abu Qada Formation, the Wata Formation and the Matulla Formation. The Cenomanian Raha and the Lower Senonian Matulla formations are the most important clastic sequence in the Nezzazat Group because they provide the highest net reservoir thickness and the highest net/gross ratio. This study focuses on the Matulla Formation in the eastern part of the Gulf of Suez. The three stratigraphic surface sections (Wadi Sudr, Wadi Matulla and Gabal Nezzazat), which represent the exposed Coniacian-Santonian sediments in Sinai, are used for correlating the Matulla sediments of the Ras Budran field. Cutting descriptions, petrographic examination, log behavior and biostratigraphy, together with the outcrops, are used to identify the reservoir characteristics, lithology, facies environments and logs, and to subdivide the Matulla Formation into three units. The lower unit is believed to be the main reservoir, as it consists mainly of sands with shale and sandy carbonates, while the other units are mainly carbonate with some streaks of shale and sand. Reservoir modeling is an effective technique that assists in reservoir management, as decisions concerning the development and depletion of hydrocarbon reserves are based on it, so it was essential to model the Matulla reservoir as accurately as possible in order to better evaluate and calculate the reserves and to determine the most effective way of recovering as much of the petroleum as economically as possible. All available data on the Matulla Formation are used to build the reservoir structure model and the lithofacies, porosity, permeability and water saturation models, which are the main parameters that describe the reservoirs and provide information for effectively evaluating the need to develop the oil potential of the reservoir. This study has shown the effectiveness of: (1) the integration of geological data to evaluate and subdivide the Matulla Formation into three units; (2) lithology and facies environment interpretation, which helped in defining the nature of deposition of the Matulla Formation; (3) 3D reservoir modeling technology as a tool for adequately understanding the spatial distribution of properties and, in addition, for evaluating the unlocked new reservoir areas of the Matulla Formation, which have to be drilled to investigate and exploit the un-drained oil; and (4) adding a new room of production and additional reserves to the Ras Budran field.

Keywords: geology, oil and gas, geoscience, sequence stratigraphy

Procedia PDF Downloads 105
345 Optimization of Biomass Production and Lipid Formation from Chlorococcum sp. Cultivation on Dairy and Paper-Pulp Wastewater

Authors: Emmanuel C. Ngerem

Abstract:

The ever-increasing depletion of the dominant global form of energy (fossil fuels) calls for the development of sustainable and green alternative energy sources such as bioethanol, biohydrogen, and biodiesel. The production of the major biofuels relies on biomass feedstocks that are mainly derived from edible food crops and some inedible plants. One suitable feedstock with great potential as a raw material for biofuel production is microalgal biomass. Despite the tremendous attributes of microalgae as a source of biofuel, their cultivation requires huge volumes of freshwater, thus posing a serious threat to commercial-scale production and utilization of algal biomass. In this study, a multi-media wastewater mixture for microalgae growth was formulated and optimized. Moreover, the obtained microalgae biomass was pre-treated for reducing-sugar recovery and compared with previous studies on microalgae biomass pre-treatment. The formulated and optimized mixed wastewater medium for biomass and lipid accumulation was established using a simplex lattice mixture design. Based on the superposition approach applied to the results, numerical optimization was conducted, followed by the analysis of biomass concentration and lipid accumulation. Coefficients of determination (R²) of 0.91 and 0.98 were obtained for the biomass concentration and lipid accumulation models, respectively. The developed optimization model predicted an optimal biomass concentration and lipid accumulation of 1.17 g/L and 0.39 g/g, respectively. It suggested a mixture of 64.69% dairy wastewater (DWW) and 35.31% paper and pulp wastewater (PWW) for biomass concentration, and 34.21% DWW and 65.79% PWW for lipid accumulation. Experimental validation generated 0.94 g/L and 0.39 g/g of biomass concentration and lipid accumulation, respectively. The obtained microalgae biomass was pre-treated, enzymatically hydrolysed, and subsequently assessed for reducing sugars. The optimization of the microwave pre-treatment of Chlorococcum sp. was achieved using response surface methodology (RSM). Microwave power (100-700 W), pre-treatment time (1-7 min), and acid-liquid ratio (1-5%) were selected as independent variables for the RSM optimization. The optimum conditions were achieved at a microwave power, pre-treatment time, and acid-liquid ratio of 700 W, 7 min, and 32.33:1, respectively. These conditions provided the highest amount of reducing sugars at 10.73 g/L. Process optimization predicted a reducing sugar yield of 11.14 g/L for microwave-assisted pre-treatment with 2.52% HCl for 4.06 min at 700 W. Experimental validation yielded reducing sugars of 15.67 g/L. These findings demonstrate that dairy wastewater and paper and pulp wastewater, which could otherwise pose a serious environmental nuisance, can be blended to form a suitable microalgae growth medium, consolidating the potency of microalgae as a viable feedstock for fermentable sugars. The outcome of this study also supports the microalgal wastewater biorefinery concept, in which wastewater remediation is coupled with bioenergy production.
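A hedged sketch of the response-surface step described above: fit a quadratic model of reducing-sugar yield as a function of microwave power, time and acid ratio, then locate the optimum within the design bounds. The small data set and the fitted optimum are placeholders, not the study's experimental design.

import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from scipy.optimize import minimize

# columns: power (W), time (min), acid ratio (%); response: reducing sugars (g/L) -- placeholder runs
X = np.array([[100, 1, 1], [100, 4, 3], [100, 7, 5], [400, 1, 3], [400, 4, 1],
              [400, 7, 3], [700, 1, 5], [700, 4, 3], [700, 7, 1], [400, 4, 5], [700, 7, 5]])
y = np.array([2.8, 4.5, 6.0, 5.2, 6.8, 8.5, 7.4, 9.2, 9.8, 7.0, 10.7])

poly = PolynomialFeatures(degree=2, include_bias=False)   # full quadratic response surface
model = LinearRegression().fit(poly.fit_transform(X), y)

def neg_yield(x):
    return -model.predict(poly.transform(x.reshape(1, -1)))[0]

bounds = [(100, 700), (1, 7), (1, 5)]
opt = minimize(neg_yield, x0=np.array([400.0, 4.0, 3.0]), bounds=bounds)
print("Predicted optimum (power, time, acid %):", np.round(opt.x, 2), "yield:", round(-opt.fun, 2))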

Keywords: wastewater cultivation, mixture design, lipid, biomass, nutrient removal, microwave, Chlorococcum, raceway pond, fermentable sugar, modelling, optimization

Procedia PDF Downloads 40
344 Sensor and Sensor System Design, Selection and Data Fusion Using Non-Deterministic Multi-Attribute Tradespace Exploration

Authors: Matthew Yeager, Christopher Willy, John Bischoff

Abstract:

The conceptualization and design phases of a system lifecycle consume a significant amount of the lifecycle budget in the form of direct tasking and capital, as well as the implicit costs associated with unforeseeable design errors that are only realized during downstream phases. Ad hoc or iterative approaches to generating system requirements oftentimes fail to consider the full array of feasible systems or product designs for a variety of reasons, including, but not limited to: initial conceptualization that oftentimes incorporates a priori or legacy features; the inability to capture, communicate and accommodate stakeholder preferences; inadequate technical designs and/or feasibility studies; and locally, but not globally, optimized subsystems and components. These design pitfalls can beget unanticipated developmental or system alterations with added costs, risks and support activities, heightening the risk of suboptimal system performance, premature obsolescence or forgone development. Supported by rapid advances in learning algorithms and hardware technology, sensors and sensor systems have become commonplace in both commercial and industrial products. The evolving array of hardware components (i.e., sensors, CPUs, modular/auxiliary access, etc.) as well as recognition, data fusion and communication protocols have all become increasingly complex and critical for design engineers during both conceptualization and implementation. This work seeks to develop and utilize a non-deterministic approach for sensor system design within the multi-attribute tradespace exploration (MATE) paradigm, a technique that incorporates decision theory into model-based techniques in order to explore complex design environments and discover better system designs. Developed to address the inherent design constraints in complex aerospace systems, MATE techniques enable project engineers to examine all viable system designs, assess attribute utility and system performance, and better align with stakeholder requirements. Whereas such previous work has focused on aerospace systems and was conducted in a deterministic fashion, this study addresses a wider array of system design elements by incorporating both traditional tradespace elements (e.g., hardware components) and popular multi-sensor data fusion models and techniques. Furthermore, adding statistical performance features to this model-based MATE approach will enable non-deterministic techniques for various commercial systems that range in application, complexity and system behavior, demonstrating significant utility within the realm of formal systems decision-making.
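An assumption-laden illustration of a MATE-style evaluation step, not the authors' model: candidate sensor-system designs are scored with a weighted multi-attribute utility and the Pareto-efficient designs in (cost, utility) are retained. Attributes, weights and costs are invented placeholders.

import numpy as np

# rows: candidate designs; columns: normalized attributes in [0, 1]
# (detection accuracy, update rate, reliability) -- placeholder values
attributes = np.array([
    [0.90, 0.60, 0.80],
    [0.75, 0.90, 0.70],
    [0.95, 0.40, 0.95],
    [0.60, 0.80, 0.60],
])
cost = np.array([8.0, 5.5, 9.5, 4.0])       # arbitrary cost units
weights = np.array([0.5, 0.2, 0.3])         # assumed stakeholder-elicited weights

utility = attributes @ weights              # simple linear-additive utility model

def pareto_efficient(cost, utility):
    """Indices of designs not dominated in (lower cost, higher utility)."""
    keep = []
    for i in range(len(cost)):
        dominated = np.any((cost <= cost[i]) & (utility >= utility[i]) &
                           ((cost < cost[i]) | (utility > utility[i])))
        if not dominated:
            keep.append(i)
    return keep

print("Utilities:", np.round(utility, 3))
print("Pareto-efficient designs:", pareto_efficient(cost, utility))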

Keywords: multi-attribute tradespace exploration, data fusion, sensors, systems engineering, system design

Procedia PDF Downloads 183
343 Impact of Customer Experience Quality on Loyalty of Mobile and Fixed Broadband Services: Case Study of Telecom Egypt Group

Authors: Nawal Alawad, Passent Ibrahim Tantawi, Mohamed Abdel Salam Ragheb

Abstract:

Providing customers with quality experiences has been confirmed to be a sustainable competitive advantage with a distinct financial impact for companies. The success of service providers now relies on their ability to provide customer-centric services. The importance of perceived service quality and customer experience is widely recognized. The focus of this research is the area of mobile and fixed broadband services. This study is of dual importance, both academically and practically. Academically, this research applies a new model investigating the impact of customer experience quality on loyalty, based on modifying the multiple-item scale for measuring customers' service experience in a new area, rather than depending on the traditional models. The integrated scale embraces four dimensions: service experience, outcome focus, moments of truth and peace of mind. In addition, it gives a scientific explanation for this relationship, so this research fills a gap, since no previous work has correlated or explained these relations using such an integrated model, and this is the first time such a modified, integrated model has been applied in the telecom field. Practically, this research gives marketers and practitioners insights to improve customer loyalty by developing the experience quality of broadband customers, which is interpreted through the suggested outcomes: purchase, commitment, repeat purchase and word of mouth; this approach is one of the emerging topics in service marketing. Data were collected through 412 questionnaires and analyzed using structural equation modeling. Findings revealed that both outcome focus and moments of truth have a significant impact on loyalty, while both service experience and peace of mind have an insignificant impact on loyalty. In addition, it was found that 72% of the variation in loyalty is explained by the model. The researcher also measured the net promoter score and gave an explanation for the results, and furthermore assessed customers' priorities for broadband services. The researcher recommends that the findings of this research be considered in the future plans of Telecom Egypt Group, and that they be applied in the same industry, especially in developing countries with similar circumstances and service settings. This research is a positive contribution to service marketing, particularly in the telecom industry, for making marketing more reliable, as managers can relate investments in service experience directly to the performance outcomes closest to income, for instance, repurchase behavior, positive word of mouth and commitment. Finally, the researcher recommends that future studies consider this model to explain significant marketing outcomes such as share of wallet and, ultimately, profitability.
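For reference, the Net Promoter Score mentioned above is conventionally computed as the percentage of promoters (ratings 9-10) minus the percentage of detractors (ratings 0-6); the ratings below are placeholder values, not the survey data.

import numpy as np

ratings = np.array([10, 9, 8, 7, 6, 9, 10, 5, 8, 9, 3, 10, 7, 6, 9])   # 0-10 likelihood-to-recommend

promoters = np.mean(ratings >= 9)
detractors = np.mean(ratings <= 6)
nps = (promoters - detractors) * 100
print(f"NPS = {nps:.1f}")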

Keywords: broadband services, customer experience quality, loyalty, net promoter score

Procedia PDF Downloads 266
342 Active Development of Tacit Knowledge: Knowledge Management, High Impact Practices and Experiential Learning

Authors: John Zanetich

Abstract:

Due to their positive associations with student learning and retention, certain undergraduate opportunities are designated 'high-impact.' High-Impact Practices (HIPs), such as learning communities, community-based projects, research, internships, study abroad and culminating senior experiences, share several traits in common: they demand considerable time and effort, learning occurs outside of the classroom, they require meaningful interactions between faculty and students, they encourage collaboration with diverse others, and they provide frequent and substantive feedback. As a result of the experiential learning in these practices, participation can be life-changing. High-impact learning helps individuals locate tacit knowledge and build mental models that support the accumulation of knowledge. Ongoing learning from experience and knowledge conversion provides the individual with a way to implicitly organize knowledge and share knowledge over a lifetime. Knowledge conversion is a knowledge management component which focuses on the explication of the tacit knowledge that exists in the minds of students and the knowledge that is embedded in the processes and relationships of the classroom educational experience. Knowledge conversion is required when working with tacit knowledge and when a learner must align deeply held beliefs with the cognitive dissonance created by new information. Knowledge conversion and tacit knowledge result from the fact that an individual's way of knowing, that is, their core belief structure, is considered generalized and tacit instead of explicit and specific. As a phenomenon, tacit knowledge is not readily available to the learner for explicit description unless evoked by an external source. The development of knowledge-related capabilities such as Aggressive Development of Tacit Knowledge (ADTK) can be used in experiential educational programs to enhance knowledge, foster behavioral change, improve decision making, and improve overall performance. ADTK allows students in HIPs to use their existing knowledge in a way that allows them to evaluate and make any necessary modifications to their core construct of reality in order to amalgamate new information. Based on the Lewin/Schein change theory, the learner will reach for tacit knowledge as a stabilizing mechanism when challenged by new information that puts them slightly off balance. As in word-association drills, the important concept is the first thought. The reactionary outpouring to an experience is the programmed or tacit memory and knowledge of their core belief structure. ADTK is a way to help teachers design their own methods and activities to unfreeze, create new learning, and then refreeze the core constructs upon which future learning in a subject area is built. This paper will explore the use of ADTK as a technique for knowledge conversion in the classroom in general and in HIP programs specifically. It will focus on knowledge conversion in curriculum development and propose the use of one-time educational experiences, multi-session experiences and sequential program experiences focusing on tacit knowledge in educational programs.

Keywords: tacit knowledge, knowledge management, college programs, experiential learning

Procedia PDF Downloads 262
341 Widely Diversified Macroeconomies in the Super-Long Run Cast Doubt on the Path-Independent Equilibrium Growth Model

Authors: Ichiro Takahashi

Abstract:

One of the major assumptions of mainstream macroeconomics is the path independence of the capital stock. This paper challenges this assumption by employing an agent-based approach. The simulation results showed the existence of multiple "quasi-steady state" equilibria of the capital stock, which may cast serious doubt on the validity of the assumption. The finding would give a better understanding of many phenomena that involve hysteresis, including the causes of poverty. The "market-clearing view" has been widely shared among major schools of macroeconomics. They understand that the capital stock, the labor force, and technology determine the "full-employment" equilibrium growth path and that demand/supply shocks can move the economy away from the path only temporarily: the dichotomy between short-run business cycles and the long-run equilibrium path. The view thus implicitly assumes the long-run capital stock to be independent of how the economy has evolved. In contrast, "Old Keynesians" have recognized fluctuations in output as arising largely from fluctuations in real aggregate demand. It is then an interesting question to ask whether an agent-based macroeconomic model, which is known to exhibit path dependence, can generate multiple full-employment equilibrium trajectories of the capital stock in the super-long run. If the answer is yes, the equilibrium level of the capital stock, an important supply-side factor, would no longer be independent of the business cycle phenomenon. This paper attempts to answer the above question by using the agent-based macroeconomic model developed by Takahashi and Okada (2010). The model serves this purpose well because it has neither population growth nor technological progress. The objective of the paper is twofold: (1) to explore the causes of the long-term business cycle, and (2) to examine the super-long-run behavior of the capital stock in full-employment economies. (1) The simulated behaviors of the key macroeconomic variables, such as output, employment, and real wages, showed widely diversified macro-economies. They were often remarkably stable but exhibited both short-term and long-term fluctuations. The long-term fluctuations occur through the following two adjustments: the quantity adjustment and the relative-cost adjustment of the capital stock. The first one is obvious and assumed by many business cycle theorists. In the second, reduced aggregate demand lowers prices, which raises real wages, thereby decreasing the relative cost of capital with respect to labor. (2) The long-term business cycles/fluctuations were synthesized with the hysteresis of real wages, interest rates, and investments. In particular, a sequence of simulation runs with a super-long simulation period generated a wide range of perfectly stable paths, many of which achieved full employment: all the macroeconomic trajectories, including capital stock, output, and employment, were perfectly horizontal over 100,000 periods. Moreover, the full-employment level of the capital stock was influenced by the history of unemployment, which was itself path-dependent. Thus, an experience of severe unemployment in the past kept the real wage low, which discouraged relatively costly investment in the capital stock. Meanwhile, a history of good performance sometimes brought about a low capital stock due to a high interest rate that was consistent with strong investment.
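A toy illustration of the path-dependence mechanism described above, not the Takahashi-Okada model: spells of unemployment permanently depress the real wage, a low wage lowers the investment rate, and two economies with identical parameters but different shock histories settle on different long-run capital stocks. All parameters are invented for illustration.

import numpy as np

ALPHA, DELTA = 0.3, 0.05   # output elasticity of capital, depreciation rate (assumed)

def simulate(shock_periods, T=4000):
    K, wage = 10.0, 1.0
    for t in range(T):
        demand = 0.6 if t in shock_periods else 1.0
        unemployment = 1.0 - min(1.0, demand)
        # hysteresis channel: unemployment spells permanently lower the real wage
        wage = max(0.4, wage - 0.02 * unemployment)
        # a low wage makes capital relatively costly, reducing the investment rate
        saving_rate = 0.25 * wage
        K = (1 - DELTA) * K + saving_rate * K ** ALPHA
    return K

print("no recession        :", round(simulate(set()), 2))
print("recession, t=500-560:", round(simulate(set(range(500, 560))), 2))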

Keywords: agent-based macroeconomic model, business cycle, hysteresis, stability

Procedia PDF Downloads 210
340 Constructing and Circulating Knowledge in Continuous Education: A Study of Norwegian Educational-Psychological Counsellors' Reflection Logs in Post-Graduate Education

Authors: Moen Torill, Rismark Marit, Astrid M. Solvberg

Abstract:

In Norway, every municipality is required to provide an educational-psychological service (EPS) to support kindergartens and schools in their work with children and youths with special needs. The EPS focuses its work on individuals, aiming to identify special needs and to advise teachers and parents when they ask for it. In addition, the service also gives priority to prevention and system intervention in kindergartens and schools. To support the mastery of these substantial tasks, university courses have been established to support EPS counsellors' continuous learning. There is, however, a need for more in-depth and systematic knowledge about how they experience the courses they attend. In this study, EPS counsellors' reflection logs from a particular course are investigated. The research question is: what are the content and priorities of the reflections communicated in the logs produced by the educational-psychological counsellors during a post-graduate course? The investigated course is a credit course organized over a one-year period in two one-semester modules. The 55 students enrolled in the course work as EPS counsellors in various municipalities across Norway. At the end of each day throughout the course period, the participants wrote reflection logs about what they had experienced during the day. The data material consists of 165 pages of typed text. The collaborating researchers studied the data material to ascertain, differentiate and understand the meaning of the content in each log. The analysis also involved searching for similarities in content and developing analytical categories that described the focus and primary concerns in each of the written logs. This involved constant 'critical and sustained discussions' for the mutual construction of meaning between the co-researchers while developing the categories. The process was inspired by Grounded Theory, meaning that the concepts developed during the analysis were derived from the data material and not chosen prior to the investigation. The analysis revealed that the concept 'useful' frequently appeared in the participants' reflections and, as such, 'useful' serves as a core category. The core category is described through three major categories: (1) knowledge sharing with colleagues (concerning direct and indirect work with students with special needs) is useful, (2) reflections on models and theoretical concepts (concerning students with special needs) are useful, and (3) reflection on the role of the EPS counsellor is useful. In all the categories, the notion of usefulness occurs in the participants' emphasis on and acknowledgement of the immediate and direct link between the university course content and their daily work practice. Even though each category has an importance and value of its own, it is crucial that they are understood in connection with one another and as interwoven. It is this connectedness that gives the core category its overarching explanatory power. The knowledge from this study may be a relevant contribution to designing new courses that support continuing professional development for EPS counsellors, whether post-graduate university courses or local courses at EPS offices, and whether in Norway or in other countries.

Keywords: constructing and circulating knowledge, educational-psychological counsellor, higher education, professional development

Procedia PDF Downloads 115
339 Analyzing Data Protection in the Era of Big Data under the Framework of Virtual Property Layer Theory

Authors: Xiaochen Mu

Abstract:

Data rights confirmation, as a key legal issue in the development of the digital economy, is undergoing a transition from a traditional rights paradigm to a more complex private-economic paradigm. In this process, data rights confirmation has evolved from a simple claim of rights to a complex structure encompassing multiple dimensions of personality rights and property rights. Current data rights confirmation practices are primarily reflected in two models: holistic rights confirmation and process rights confirmation. The holistic rights confirmation model continues the traditional "one object, one right" theory, while the process rights confirmation model, through contractual relationships in the data processing process, recognizes rights that are more adaptable to the needs of data circulation and value release. In the design of the data property rights system, there is a hierarchical characteristic aimed at decoupling from raw data to data applications through horizontal stratification and vertical staging. This design not only respects the ownership rights of data originators but also, based on the usufructuary rights of enterprises, constructs a corresponding rights system for different stages of data processing activities. The subjects of data property rights include both data originators, such as users, and data producers, such as enterprises, who enjoy different rights at different stages of data processing. The intellectual property rights system, with the mission of incentivizing innovation and promoting the advancement of science, culture, and the arts, provides a complete set of mechanisms for protecting innovative results. However, unlike traditional private property rights, the granting of intellectual property rights is not an end in itself; the purpose of the intellectual property system is to balance the exclusive rights of the rights holders with the prosperity and long-term development of society's public learning and the entire field of science, culture, and the arts. Therefore, the intellectual property granting mechanism provides both protection and limitations for the rights holder. This perfectly aligns with the dual attributes of data. In terms of achieving the protection of data property rights, the granting of intellectual property rights is an important institutional choice that can enhance the effectiveness of the data property exchange mechanism. Although this is not the only path, the granting of data property rights within the framework of the intellectual property rights system helps to establish fundamental legal relationships and rights confirmation mechanisms and is more compatible with the classification and grading system of data. The modernity of the intellectual property rights system allows it to adapt to the needs of big data technology development through special clauses or industry guidelines, thus promoting the comprehensive advancement of data intellectual property rights legislation. This paper analyzes data protection under the virtual property layer theory and the two-fold virtual property rights system. Based on the "bundle of rights" theory, this paper establishes a specific three-level structure of data rights. This paper analyzes the cases Google v. Vidal-Hall, Halliday v Creation Consumer Finance, Douglas v Hello Limited, Campbell v MGN and Imerman v Tchenquiz.
This paper concludes that recognizing property rights over personal data and protecting data under the framework of intellectual property will be beneficial for establishing the tort of misuse of personal information.

Keywords: data protection, property rights, intellectual property, big data

Procedia PDF Downloads 39
338 Development of a Novel Ankle-Foot Orthotic Using a User Centered Approach for Improved Satisfaction

Authors: Ahlad Neti, Elisa Arch, Martha Hall

Abstract:

Studies have shown that individuals who use Ankle-Foot-Orthoses (AFOs) have a high level of dissatisfaction regarding their current AFOs. Studies point to the focus on technical design with little attention given to the user perspective as a source of AFO designs that leave users dissatisfied. To design a new AFO that satisfies users and thereby improves their quality of life, the reasons for their dissatisfaction and their wants and needs for an improved AFO design must be identified. There has been little research into the user perspective on AFO use and desired improvements, so the relationship between AFO design and satisfaction in daily use must be assessed to develop appropriate metrics and constraints prior to designing a novel AFO. To assess the user perspective on AFO design, structured interviews were conducted with 7 individuals (average age of 64.29±8.81 years) who use AFOs. All interviews were transcribed and coded to identify common themes using Grounded Theory Method in NVivo 12. Qualitative analysis of these results identified sources of user dissatisfaction such as heaviness, bulk, and uncomfortable material and overall needs and wants for an AFO. Beyond the user perspective, certain objective factors must be considered in the construction of metrics and constraints to ensure that the AFO fulfills its medical purpose. These more objective metrics are rooted in a common medical device market and technical standards. Given the large body of research concerning these standards, these objective metrics and constraints were derived through a literature review. Through these two methods, a comprehensive list of metrics and constraints accounting for both the user perspective on AFO design and the AFO’s medical purpose was compiled. These metrics and constraints will establish the framework for designing a new AFO that carries out its medical purpose while also improving the user experience. The metrics can be categorized into several overarching areas for AFO improvement. Categories of user perspective related metrics include comfort, discreteness, aesthetics, ease of use, and compatibility with clothing. Categories of medical purpose related metrics include biomechanical functionality, durability, and affordability. These metrics were used to guide an iterative prototyping process. Six concepts were ideated and compared using system-level analysis. From these six concepts, two concepts – the piano wire model and the segmented model – were selected to move forward into prototyping. Evaluation of non-functional prototypes of the piano wire and segmented models determined that the piano wire model better fulfilled the metrics by offering increased stability, longer durability, fewer points for failure, and a strong enough core component to allow a sock to cover over the AFO while maintaining the overall structure. As such, the piano wire AFO has moved forward into the functional prototyping phase, and healthy subject testing is being designed and recruited to conduct design validation and verification.

Keywords: ankle-foot orthotic, assistive technology, human centered design, medical devices

Procedia PDF Downloads 156
337 An Investigation of Wind Loading Effects on the Design of Elevated Steel Tanks with Lattice Tower Supporting Structures

Authors: J. van Vuuren, D. J. van Vuuren, R. Muigai

Abstract:

In recent times, South Africa has experienced extensive droughts that have created the need for reliable small water reservoirs. These reservoirs have comparatively quick fabrication and installation times relative to market alternatives. An elevated water tank has inherent potential energy, meaning that no additional water pumps are required to sustain water pressure at the outlet point, thus ensuring that a water source is available even without electricity. The initial construction formwork and the complex geometric shape of concrete towers that require casting can become time-consuming, rendering steel towers preferable. Reinforced concrete foundations, cast in advance, are required to be of sufficient strength. Thereafter, the prefabricated steel supporting structure and tank, which consist of steel panels, can be assembled and erected on site within a couple of days. Due to the time effectiveness of this system, it has become a popular solution to aid drought-stricken areas. These sites are normally in rural areas, at schools or on farmland. As these tanks can contain up to 2000 kL (approximately 19.62 MN) of water, combined with supporting lattice steel structures ranging between 5 m and 30 m in height, failure of one of the supporting members will result in system failure. Thus, there is a need to gain a comprehensive understanding of the operating conditions arising from wind loading on both the tank and the supporting structure. The aim of the research is to investigate the relationship between the theoretical wind loading on a lattice steel tower in combination with an elevated sectional steel tank and the current wind loading codes applicable to South Africa. The research compares the respective design parameters (both theoretical and from the wind loading codes), whereby FEA analyses are conducted on the various design solutions. The currently available wind loading codes are not sufficient for designing slender cantilevered latticed steel towers that support elevated water storage tanks. Numerous factors in the design codes are not comprehensively considered when designing the system, as these codes depend on various assumptions. Factors that require investigation in this study are: the wind loading angle to the face of the structure that will result in the maximum load; the internal structural effects on models with different bracing patterns; the loading influence of the aspect ratio of the tank; and the effect of the clearance height of the tank on the structural members. Wind loads, as the variable that results in the highest failure rate of cantilevered lattice steel tower structures, require greater understanding. This study aims to contribute towards the design process of elevated steel tanks with lattice tower supporting structures.
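A generic illustration of the kind of wind-load calculation the study examines, not the South African code procedure: a peak velocity pressure from a basic wind speed, and the resulting force on one lattice face from a force coefficient and a solidity-based projected area. Every coefficient below is a placeholder assumption.

RHO_AIR = 1.2               # air density, kg/m^3
basic_wind_speed = 36.0     # m/s, site-dependent assumption
terrain_factor = 1.1        # placeholder exposure/terrain multiplier
force_coefficient = 2.8     # placeholder Cf for a square lattice face
solidity_ratio = 0.25       # projected member area / enclosed face area
face_envelope_area = 5.0 * 20.0   # m^2, assumed tower face width x height

peak_pressure = 0.5 * RHO_AIR * (terrain_factor * basic_wind_speed) ** 2   # N/m^2
projected_area = solidity_ratio * face_envelope_area
face_force = force_coefficient * peak_pressure * projected_area            # N

print(f"Peak velocity pressure: {peak_pressure / 1000:.2f} kPa")
print(f"Wind force on one lattice face: {face_force / 1000:.1f} kN")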

Keywords: aspect ratio, bracing patterns, clearance height, elevated steel tanks, lattice steel tower, wind loads

Procedia PDF Downloads 150
336 Evaluation of Coupled CFD-FEA Simulation for Fire Determination

Authors: Daniel Martin Fellows, Sean P. Walton, Jennifer Thompson, Oubay Hassan, Ella Quigley, Kevin Tinkham

Abstract:

Fire performance is a crucial aspect to consider when designing cladding products, and testing this performance is extremely expensive. Appropriate use of numerical simulation of fire performance has the potential to reduce the total number of fire tests required when designing a product by eliminating poor-performing design ideas early in the design phase. Due to the complexity of fire and the large spectrum of failures it can cause, multi-disciplinary models are needed to capture the complex fire behavior and its structural effects on the surroundings. Working alongside Tata Steel U.K., the authors have focused on completing a coupled CFD-FEA simulation model suited to testing polyisocyanurate (PIR) based sandwich panel products, to gain confidence before costly experimental standards testing. The sandwich panels are part of a thermally insulating façade system intended primarily for large non-domestic buildings. The work presented in this paper compares two coupling methodologies on a replication of the physical experimental standards test LPS 1181-1, carried out by Tata Steel U.K. The two coupling methodologies considered within this research are one-way and two-way coupling. A one-way coupled analysis consists of importing thermal data from the CFD solver into the FEA solver. A two-way coupled analysis consists of continuously importing the updated thermal data, reflecting the fire's behavior, into the FEA solver throughout the simulation; likewise, the mechanical changes are updated back to the CFD solver so that geometric changes are included in the solution. For the CFD calculations, the Fire Dynamics Simulator (FDS) has been chosen due to its numerical scheme being adapted to focus solely on fire problems. The applicability of FDS has been validated in past benchmark cases. In addition, the FEA solver ABAQUS has been chosen to model the structural response to the fire due to its crushable-foam plasticity model, which can accurately model the compressibility of PIR foam. An open-source code called FDS-2-ABAQUS is used to couple the two solvers, using several Python modules to complete the process, including failure checks. The coupling methodologies and the experimental data acquired from Tata Steel U.K. are compared using several variables, including gas temperatures, surface temperatures, and mechanical deformation of the panels. Conclusions are drawn, noting improvements to be made to the current open-source coupling code FDS-2-ABAQUS to make it more applicable to Tata Steel U.K. sandwich panel products. Future directions for reducing the computational cost of the simulation are also considered.
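A runnable toy sketch of the one-way versus two-way coupling distinction, purely conceptual: it is not FDS, ABAQUS, or the FDS-2-ABAQUS code. The "CFD" side is a prescribed gas temperature history and the "FEA" side is a lumped panel model whose deflection, in the two-way case only, feeds back and increases the heat transfer. All parameter values are assumptions.

import numpy as np

def simulate(two_way, t_end=1800.0, dt=1.0):
    panel_temp, deflection = 20.0, 0.0
    h0, capacity = 25.0, 4.0e4          # W/m^2K and J/m^2K, assumed lumped values
    for step in range(int(t_end / dt)):
        t = step * dt
        gas_temp = 20.0 + 600.0 * (1 - np.exp(-t / 300.0))       # stand-in for CFD output
        h = h0 * (1.0 + 5.0 * deflection) if two_way else h0      # geometric feedback (two-way only)
        panel_temp += h * (gas_temp - panel_temp) * dt / capacity # thermal load passed to "FEA" side
        deflection = 1e-4 * max(0.0, panel_temp - 100.0)          # crude mechanical response, metres
    return round(panel_temp, 1), round(deflection, 4)

print("one-way coupling:", simulate(two_way=False))
print("two-way coupling:", simulate(two_way=True))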

Keywords: fire engineering, numerical coupling, sandwich panels, thermo fluids

Procedia PDF Downloads 89
335 Business Intelligent to a Decision Support Tool for Green Entrepreneurship: Meso and Macro Regions

Authors: Anishur Rahman, Maria Areias, Diogo Simões, Ana Figeuiredo, Filipa Figueiredo, João Nunes

Abstract:

The circular economy (CE) has gained increased awareness among academics, businesses, and decision-makers as it stimulates resource circularity in production and consumption systems. A large body of epistemological work has explored the principles of CE, but scant attention has been paid to analysing how CE is evaluated, consented to, and enforced using economic metabolism data and a business intelligence framework. Economic metabolism involves the ongoing exchange of materials and energy within and across socio-economic systems and requires the assessment of vast amounts of data to provide quantitative analysis related to effective resource management. To address this limited concern, the present work focuses on the regional flows of a pilot region in Portugal. By addressing this gap, this study aims to promote eco-innovation and sustainability in the regions of the Intermunicipal Communities Região de Coimbra, Viseu Dão Lafões and Beiras e Serra da Estrela, using these data to find precise synergies in terms of material flows and to give companies a competitive advantage in the form of valuable waste destinations, access to new resources and new markets, cost reduction, and risk-sharing benefits. In our work, emphasis is placed on applying artificial intelligence (AI) and, more specifically, on implementing state-of-the-art deep learning algorithms, contributing to the construction of a business intelligence approach. With the emergence of new approaches generally grouped under the headings of AI and machine learning (ML), the methods for statistical analysis of complex and uncertain production systems are facing significant changes. Therefore, various definitions of AI and its differences from traditional statistics are presented; ML is introduced to identify its place in data science; and related topics such as big data analytics, together with the production problems in which AI and ML are used, are identified. A lifecycle-based approach is then taken to analyse the use of different methods in each phase, to identify the most useful technologies and the unifying attributes of AI in manufacturing. Most macroeconomic metabolism models are directed mainly at the context of large metropolises, neglecting rural territories; therefore, within this project, a dynamic decision support model coupled with artificial intelligence tools and information platforms will be developed, focused on the reality of these transition zones between the rural and the urban. Thus, a real decision support tool is under development, which will go beyond the scientific developments carried out to date and will make it possible to overcome limitations related to the availability and reliability of data.
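
As a purely illustrative sketch of the synergy search such a tool performs, the snippet below pairs companies' waste output flows with other companies' material demands within a region. The company names, materials, quantities, and matching rule are all invented for illustration and are not taken from the project.

```python
# Hypothetical sketch of material-flow synergy matching: pair each company's
# waste stream with companies that demand the same material. All data and the
# matching rule are illustrative only.

waste_outputs = [        # (company, material, tonnes/year)
    ("DairyCoop", "organic_sludge", 120.0),
    ("SawmillA", "wood_chips", 300.0),
]
material_demands = [     # (company, material, tonnes/year)
    ("BiogasPlant", "organic_sludge", 200.0),
    ("PelletFactory", "wood_chips", 250.0),
]

def find_synergies(outputs, demands):
    """Return (supplier, consumer, material, matched tonnage) tuples."""
    synergies = []
    for supplier, mat_out, qty_out in outputs:
        for consumer, mat_in, qty_in in demands:
            if mat_out == mat_in:
                synergies.append((supplier, consumer, mat_out,
                                  min(qty_out, qty_in)))
    return synergies

for s in find_synergies(waste_outputs, material_demands):
    print(s)
```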

Keywords: circular economy, artificial intelligence, economic metabolisms, machine learning

Procedia PDF Downloads 72
334 Methodological Deficiencies in Knowledge Representation Conceptual Theories of Artificial Intelligence

Authors: Nasser Salah Eldin Mohammed Salih Shebka

Abstract:

Current problematic issues in AI fields are mainly due to those of knowledge representation conceptual theories, which are in turn reflected across the entire scope of the cognitive sciences. Knowledge representation methods and tools are derived from theoretical concepts regarding the human scientific perception of the conception, nature, and process of knowledge acquisition, knowledge engineering, and knowledge generation. Although these theoretical conceptions were themselves derived from the study of the human knowledge representation process and related theories, some essential factors were overlooked or underestimated, thus causing critical methodological deficiencies in the conceptual theories of human knowledge and knowledge representation. The evaluation criteria of human cumulative knowledge, from the perspectives of the nature and theoretical aspects of knowledge representation conceptions, are affected greatly by the very materialistic nature of the cognitive sciences. This nature caused what we define as methodological deficiencies in the theoretical aspects of knowledge representation concepts in AI. These methodological deficiencies are not confined to applications of knowledge representation theories throughout AI fields but also extend to the scientific nature of the cognitive sciences. The methodological deficiencies investigated in our work are: the segregation between cognitive abilities in knowledge-driven models; the insufficiency of the two-valued logic used to represent knowledge, particularly at the machine-language level, in relation to the problematic issues of semantics and meaning theories; and the deficient consideration of the parameters of existence and time in the structure of knowledge. The latter requires a more detailed introduction of the manner in which the meanings of existence and time are to be considered in the structure of knowledge. This does not imply that it is easy to apply them in knowledge representation systems; rather, outlining a deficiency caused by the absence of such essential parameters can be considered an attempt to redefine knowledge representation conceptual approaches or, if this proves impossible, to construct a perspective on the possibility of simulating human cognition on machines. Furthermore, a redirection of the aforementioned expressions is required in order to formulate the exact meaning under discussion. This redirection of meaning shifts the role of the existence and time factors to the framework environment of the knowledge structure and, therefore, to knowledge representation conceptual theories. The findings of our work indicate the necessity of differentiating between two comparative concepts when addressing the relation between the existence and time parameters and the structure of human knowledge. The topics presented throughout the paper can also be viewed as an evaluation criterion for determining AI's capability to achieve its ultimate objectives. Ultimately, we argue some of the implications of our findings: although scientific progress may not have reached its peak, and human scientific evolution may not yet allow the discovery of evolutionary facts about the human brain and detailed descriptions of how it represents knowledge, our findings simply imply that, unless these methodological deficiencies are properly addressed, the future of AI's qualitative progress remains questionable.
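
One way to make the point about the missing existence and time parameters concrete is to contrast a plain two-valued triple with an assertion qualified by an existence status and a validity interval, as in the sketch below. The data structures and the example fact are illustrative assumptions, not the authors' formalism.

```python
# Illustrative contrast between a plain two-valued knowledge triple and an
# assertion extended with (existence, time) qualifiers, echoing the parameters
# the abstract argues are missing. This is not the authors' formalism.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class PlainTriple:                     # classical two-valued representation
    subject: str
    predicate: str
    obj: str                           # the triple is simply asserted as true

@dataclass
class QualifiedAssertion(PlainTriple):
    existence: str = "actual"          # e.g. "actual", "hypothetical", "fictional"
    valid_time: Optional[Tuple[int, int]] = None   # (from_year, to_year)

    def holds_at(self, year: int) -> bool:
        """An assertion only 'holds' within its validity interval."""
        if self.valid_time is None:
            return True
        start, end = self.valid_time
        return start <= year <= end

fact = QualifiedAssertion("Pluto", "classified_as", "planet",
                          existence="actual", valid_time=(1930, 2006))
print(fact.holds_at(1990), fact.holds_at(2020))   # True False
```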

Keywords: cognitive sciences, knowledge representation, ontological reasoning, temporal logic

Procedia PDF Downloads 112
333 Development of an Interface between BIM-model and an AI-based Control System for Building Facades with Integrated PV Technology

Authors: Moser Stephan, Lukasser Gerald, Weitlaner Robert

Abstract:

Urban structures will be used more intensively in the future through redensification or newly planned districts with high building densities. In particular, to achieve positive energy balances such as those required for Positive Energy Districts (PEDs), the use of roofs alone is not sufficient in dense urban areas. At the same time, the increasing share of window area significantly reduces the facade area available for PV generation. Through the use of PV technology on other building components, such as external venetian blinds, on-site generation can be maximized and the standard functionality of this product can be usefully extended. While offering advantages in terms of infrastructure, sustainable use of resources, and efficiency, these systems require increased optimization in the planning and control strategies of buildings. External venetian blinds with PV technology require an intelligent control concept to meet the required demands, such as maximum power generation, glare prevention, high daylight autonomy, avoidance of summer overheating, and use of passive solar gains in wintertime. Today, three-dimensional geometric information describing outdoor spaces and the building level is available for planning through Building Information Modeling (BIM). In a research project, a web application called HELLA DECART was developed to extract the data required for the simulations from the BIM models and to make it usable for the calculations and coupled simulations. The investigated object is uploaded to this web application as an IFC file, which includes the object itself as well as the neighboring buildings and possible remote shading. The tool uses a ray-tracing method to determine possible glare from solar reflections off neighboring buildings as well as near and far shadows per window on the object. Subsequently, an annual estimate of the sunlight per window is calculated by taking weather data into account. This optimized daylight assessment per window makes it possible to estimate the potential power generation of the PV integrated in the venetian blinds as well as the daylight and solar entry. As a next step, these calculation results, together with all the parameters necessary for the thermal simulation, can be provided. The overall aim of this workflow is to improve the coordination between the BIM model and the coupled building simulation, bringing together the resulting shading and daylighting system, the artificial lighting system, and maximum power generation in one control system. In the research project Powershade, an AI-based control concept for PV-integrated façade elements with coupled simulation results is investigated. The automated workflow concept developed in this paper is tested using an office living lab at the HELLA company.
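
A minimal sketch of the kind of annual per-window estimate described is given below: hourly irradiance from a weather file is weighted by a per-window shading factor (as would be produced by the ray-tracing step) and accumulated over the year, after which the PV yield of a blind is roughly estimated. The file layout, PV area, efficiency, and performance ratio are assumptions, not values from the HELLA DECART tool.

```python
# Minimal sketch of an annual per-window assessment: hourly global irradiance
# is weighted by a shading factor obtained from ray tracing and accumulated
# over the year; the PV yield of a blind-integrated module is then roughly
# estimated. The weather-file layout and all numbers are assumptions.
import csv

def annual_irradiation_kwh_m2(weather_csv, shading_factor_by_hour):
    """Sum hourly global irradiance [W/m^2] x shading factor over the year."""
    total_wh = 0.0
    with open(weather_csv, newline="") as f:
        for hour, row in enumerate(csv.reader(f)):
            ghi = float(row[0])                       # W/m^2 for this hour
            total_wh += ghi * shading_factor_by_hour.get(hour, 1.0)
    return total_wh / 1000.0                          # kWh/m^2 per year

def blind_pv_yield_kwh(irradiation_kwh_m2, pv_area_m2=1.5, efficiency=0.18,
                       performance_ratio=0.75):
    """Rough annual PV yield for one blind-integrated module (assumed values)."""
    return irradiation_kwh_m2 * pv_area_m2 * efficiency * performance_ratio
```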

Keywords: BIPV, building simulation, optimized control strategy, planning tool

Procedia PDF Downloads 110
332 Screens Design and Application for Sustainable Buildings

Authors: Fida Isam Abdulhafiz

Abstract:

Traditional vernacular architecture in the United Arab Emirates consisted mainly of adobe houses with a limited number of openings in their facades. The thick mud and rubble walls and wooden window screens protected the inhabitants from the harsh desert climate, provided them with privacy, and met their comfort needs to an extent. However, with the rise of the immediate post-petroleum era, reinforced concrete villas with glass and steel technology replaced traditional vernacular dwellings, and more load was placed on mechanical cooling systems to satisfy today's more demanding dwelling inhabitants. In the early 21st century, professionals started to pay more attention to the carbon footprint caused by the built environment, and many studies and innovative approaches are now dedicated to lowering the impact of existing operating buildings on their surrounding environments. UAE government agencies have introduced regulations that aim to revive sustainable and environmental design through local and international building codes and urban design policies such as Estidama and LEED. The focus in this paper is on the reduction of the emissions resulting from the energy used by cooling and heating systems, achieved through innovative screen designs and façade solutions that provide a green footprint and aesthetic architectural icons. Screens are one of the popular innovative techniques that can be introduced during the design process or applied to existing buildings as a renovation technique to develop passive green buildings. Preparing future architects to understand the importance of environmental design was attempted through physical modelling of window screens, as an educational means of combining theory with a hands-on teaching approach. Designing screens proved to be a popular technique that helped students understand the importance of sustainable design and passive cooling. After creating models of prototype screens, several tests were conducted to calculate the amount of sun, light, and wind that passes through the screens, affecting the heat load and the light entering the building. Theory classes further explored concepts of green buildings and materials that produce low carbon emissions. This paper highlights the importance of hands-on experience for student architects and how physical modelling helped raise eco-awareness in the design studio. The paper studies different types of façade screens and shading devices developed by architecture students and explains how student architects produced diverse patterns for traditional screens based on sustainable design concepts suited to the climate requirements of the Middle East region.
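
As a simple illustration of the kind of estimate the students' screen tests support, the sketch below scales the solar gain admitted through a window by the screen's openness ratio (open area over total area). The values are invented for illustration and the relation is a deliberately crude approximation.

```python
# Illustrative estimate of how a screen's openness ratio scales solar gain:
# transmitted gain ~ incident irradiance x glazed area x openness ratio.
# All values are assumed for illustration; this ignores angular effects.

def openness_ratio(open_area_m2: float, total_area_m2: float) -> float:
    return open_area_m2 / total_area_m2

def transmitted_solar_gain_w(irradiance_w_m2: float, window_area_m2: float,
                             ratio: float) -> float:
    return irradiance_w_m2 * window_area_m2 * ratio

ratio = openness_ratio(0.6, 2.0)                     # 30% open screen
print(transmitted_solar_gain_w(700.0, 2.0, ratio))   # ~420 W instead of 1400 W
```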

Keywords: building’s screens modeling, façade design, sustainable architecture, sustainable dwellings, sustainable education

Procedia PDF Downloads 298
331 Assessment of DNA Sequence Encoding Techniques for Machine Learning Algorithms Using a Universal Bacterial Marker

Authors: Diego Santibañez Oyarce, Fernanda Bravo Cornejo, Camilo Cerda Sarabia, Belén Díaz Díaz, Esteban Gómez Terán, Hugo Osses Prado, Raúl Caulier-Cisterna, Jorge Vergara-Quezada, Ana Moya-Beltrán

Abstract:

The advent of high-throughput sequencing technologies has revolutionized genomics, generating vast amounts of genetic data that challenge traditional bioinformatics methods. Machine learning addresses these challenges by leveraging computational power to identify patterns and extract information from large datasets. However, biological sequence data, being symbolic and non-numeric, must be converted into numerical formats for machine learning algorithms to process effectively. So far, encoding methods such as one-hot encoding or k-mers have been explored. This work proposes additional approaches for encoding DNA sequences in order to compare them with existing techniques and to determine whether they can provide improvements or whether current methods offer superior results. Data from the 16S rRNA gene, a universal marker, were used to analyze eight bacterial groups that are significant in the pulmonary environment and have clinical implications. The bacterial genera included in this analysis are Prevotella, Abiotrophia, Acidovorax, Streptococcus, Neisseria, Veillonella, Mycobacterium, and Megasphaera. These data were downloaded from the NCBI database in GenBank file format, followed by a syntactic analysis to selectively extract relevant information from each file. For data encoding, a sequence normalization process was carried out as the first step. From approximately 22,000 initial data points, a subset was generated for testing purposes: 55 sequences from each bacterial group met the length criteria, resulting in an initial sample of 440 sequences. The sequences were encoded using different methods, including one-hot encoding, k-mers, the Fourier transform, and the wavelet transform. Various machine learning algorithms, such as support vector machines, random forests, and neural networks, were trained to evaluate these encoding methods. The performance of these models was assessed using multiple metrics, including the confusion matrix, ROC curve, and F1 score, providing a comprehensive evaluation of their classification capabilities. The results show that accuracies differ between encoding methods by up to approximately 15%, with the Fourier transform obtaining the best results for the evaluated machine learning algorithms. These findings, supported by the detailed analysis using the confusion matrix, ROC curve, and F1 score, provide valuable insights into the effectiveness of different encoding methods and machine learning algorithms for genomic data analysis, potentially improving the accuracy and efficiency of bacterial classification and related genomic studies.
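
A minimal sketch of three of the encodings compared in the abstract is given below: one-hot, k-mer counts, and a Fourier (FFT) representation built from a simple numeric mapping of the bases. The particular base-to-number mapping and the number of retained coefficients are assumptions; several conventions exist, and the study's exact choices are not stated here.

```python
# Minimal sketch of three DNA encodings discussed in the abstract: one-hot,
# k-mer counts, and a Fourier (FFT) representation of a numeric base mapping.
# The base-to-number mapping is one common convention, assumed here.
from collections import Counter
from itertools import product
import numpy as np

BASES = "ACGT"

def one_hot(seq):
    """(len(seq), 4) binary matrix, one column per base."""
    idx = {b: i for i, b in enumerate(BASES)}
    mat = np.zeros((len(seq), 4))
    for i, b in enumerate(seq):
        if b in idx:
            mat[i, idx[b]] = 1.0
    return mat

def kmer_counts(seq, k=3):
    """Fixed-length vector of counts over all 4**k possible k-mers."""
    vocab = ["".join(p) for p in product(BASES, repeat=k)]
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    return np.array([counts[km] for km in vocab], dtype=float)

def fourier_features(seq, n_coeffs=32):
    """Magnitudes of the first FFT coefficients of a numeric base mapping."""
    mapping = {"A": 0.0, "C": 1.0, "G": 2.0, "T": 3.0}   # assumed convention
    signal = np.array([mapping.get(b, 0.0) for b in seq])
    spectrum = np.abs(np.fft.rfft(signal))
    return spectrum[:n_coeffs]

seq = "ACGTACGTGGCCTTAA"
print(one_hot(seq).shape, kmer_counts(seq).shape, fourier_features(seq, 8))
```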

Keywords: DNA encoding, machine learning, Fourier transform, Fourier transformation

Procedia PDF Downloads 23
330 Optimization of Structures with Mixed Integer Non-linear Programming (MINLP)

Authors: Stojan Kravanja, Andrej Ivanič, Tomaž Žula

Abstract:

This contribution focuses on structural optimization in civil engineering using mixed-integer non-linear programming (MINLP). MINLP is a versatile method that can handle both continuous and discrete optimization variables simultaneously. Continuous variables are used to optimize parameters such as dimensions, stresses, masses, or costs, while discrete variables represent binary decisions determining the presence or absence of structural elements within a structure, as well as the choice of discrete materials and standard sections. The optimization process is divided into three main steps. First, a mechanical superstructure is generated, comprising a variety of different topology, material, and dimension alternatives. Next, a MINLP model is formulated to encapsulate the optimization problem. Finally, an optimal solution is sought in the direction of the defined objective function while respecting the structural constraints. The economic objective function (the material and labor costs of a structure) or the mass objective function is subjected to constraints known from structural analysis. These constraints include equations for the calculation of internal forces and deflections, as well as equations for the dimensioning of structural components (in accordance with the Eurocode standards). Given the complex, non-convex, and highly non-linear nature of optimization problems in civil engineering, the Modified Outer-Approximation/Equality-Relaxation (OA/ER) algorithm is applied. This algorithm alternately solves subproblems of non-linear programming (NLP) and main problems of mixed-integer linear programming (MILP), and in this way gradually refines the solution space up to the optimal solution. The NLP corresponds to the continuous optimization of parameters (with fixed topology, discrete materials, and standard dimensions, all determined in the previous MILP), while the MILP involves a global approximation to the superstructure of alternatives, in which a new topology and new materials and standard dimensions are determined. For a convex problem, the optimization is stopped when the MILP solution can no longer improve on the best NLP solution; otherwise, it is terminated when the NLP solution can no longer be improved. While the OA/ER algorithm, like all other algorithms, does not guarantee global optimality in the presence of non-convex functions, various modifications, including convexity tests, are implemented in OA/ER to mitigate these difficulties. The effectiveness of the proposed MINLP approach is demonstrated by its application to various structural optimization tasks, such as the mass optimization of steel buildings, the cost optimization of timber halls, composite floor systems, etc. Special optimization models have been developed for the optimization of these structures. The MINLP optimizations, facilitated by the user-friendly software package MIPSYN, provide insights into mass- or cost-optimal solutions, optimal structural topologies, and optimal material and standard cross-section choices, confirming MINLP as a valuable method for the optimization of structures in civil engineering.
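
The alternation between NLP subproblems and MILP master problems can be outlined schematically as below. The solver calls are left as stubs, and the convexity tests and equality relaxation of the full OA/ER algorithm are omitted; this is a structural outline under those assumptions, not MIPSYN or a complete implementation.

```python
# Schematic outline of the OA/ER-style alternation described in the abstract:
# an NLP subproblem optimises continuous dimensions for fixed discrete choices,
# and a MILP master proposes new discrete choices from accumulated
# linearisations. Solver calls are stubbed; equality relaxation and convexity
# tests are omitted.

def solve_nlp(discrete_choice):
    """Stub: continuous optimisation for a fixed topology/section choice.
    Should return (objective_value, continuous_solution, linearisation_cut)."""
    raise NotImplementedError

def solve_milp_master(cuts):
    """Stub: mixed-integer linear master over the accumulated cuts.
    Should return (lower_bound, new_discrete_choice)."""
    raise NotImplementedError

def oa_er_loop(initial_choice, max_iter=20, tol=1e-3):
    best_obj, best_design = float("inf"), None
    cuts, choice = [], initial_choice
    for _ in range(max_iter):
        obj, cont_sol, cut = solve_nlp(choice)         # NLP subproblem
        cuts.append(cut)
        if obj < best_obj:                             # new incumbent design
            best_obj, best_design = obj, (choice, cont_sol)
        lower_bound, choice = solve_milp_master(cuts)  # MILP master problem
        if lower_bound >= best_obj - tol:              # bounds have crossed
            break
    return best_obj, best_design
```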

Keywords: MINLP, mixed-integer non-linear programming, optimization, structures

Procedia PDF Downloads 46
329 Comparative Production of Secondary Metabolites by Prunus africana (Hook. F.) Kalkman Provenances in Cameroon and Some Associated Endophytic Fungi

Authors: Gloria M. Ntuba-Jua, Afui M. Mih, Eneke E. T. Bechem

Abstract:

Prunus africana (Hook. F.) Kalkman, commonly known as Pygeum or African cherry, belongs to the Rosaceae family. It is a medium to large, evergreen tree with a spreading crown of 10 to 20 m. It is used by traditional medical practitioners for the treatment of over 45 ailments in Cameroon and sub-Saharan Africa. In modern medicine, it is used in the treatment of benign prostatic hyperplasia (BPH), i.e., prostate gland hypertrophy (enlarged prostate gland). This is possible because of its ability to produce secondary metabolites that are believed to have bioactivity against these ailments. The ready international market for the sale of Prunus bark, uncontrolled exploitation, illegal harvesting using inappropriate techniques, and poor timing of harvesting have contributed enormously to making the plant endangered. It is known to harbor a large number of endophytic fungi with the potential to produce secondary metabolites similar to those of the parent plant. Alternative sourcing of medicinal principles through endophytic fungi requires sound knowledge of those fungi and would serve as a conservation measure for Prunus africana by reducing dependence on Prunus bark for such metabolites. This work therefore sought to compare the production of some major secondary metabolites by P. africana and some of its associated endophytic fungi. The leaves and stem bark of the plant from different provenances were soaked in methanol for 72 hrs to yield the methanolic crude extracts. Phytochemical screening of the methanolic crude extracts using standard procedures revealed the presence of tannins, flavonoids, terpenoids, saponins, phenolics, and steroids. Pure cultures of some predominantly isolated endophyte species from the different Prunus provenances, such as Curvularia sp. and Morphospecies P001, were also grown in Potato Dextrose Broth (PDB) for 21 days and later extracted with methylene dichloride (MDC) solvent after 24 hrs to produce crude culture extracts. Qualitative assessment of the crude culture extracts showed the presence of tannins, terpenoids, phenolics, and steroids, particularly β-sitosterol (a major bioactive metabolite), as did the plant tissues. Qualitative analysis by thin layer chromatography (TLC) was done to confirm and compare the production of β-sitosterol (as a marker compound) in the crude extracts of the plant and of the endophytes. Samples were loaded on TLC silica gel aluminium-backed plates (Kieselgel 60 F254, 0.2 mm, Merck) using an acetone/hexane (3.0:7.0) solvent system and visualized under an ultraviolet lamp (UV254 and UV360). TLC revealed that leaves had a higher concentration of β-sitosterol, in terms of band intensity, than stem bark from the different provenances. The intensity of the β-sitosterol bands in the culture extracts of the endophytes was comparable to that of the plant extracts, except for Curvularia sp., whose band was very faint. The ability of these fungi to produce β-sitosterol was confirmed by TLC analysis, with the compound having chromatographic properties (retention factor) similar to those of the β-sitosterol standard. The ability of these major endophytes to produce secondary metabolites similar to those of the host has therefore been demonstrated. There is, therefore, potential to develop an in vitro production system for Prunus secondary metabolites, thereby enhancing the conservation of the species.
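
The retention factor on which the TLC comparison rests is simply the distance travelled by a spot divided by the distance travelled by the solvent front. The sketch below shows that comparison against a standard; the distances and tolerance are invented for illustration and are not measured values from the study.

```python
# Minimal sketch of the TLC comparison described: a spot is attributed to
# beta-sitosterol when its retention factor (Rf = spot distance / solvent-front
# distance) matches the standard within a tolerance. Distances and the
# tolerance are illustrative, not values from the study.

def retention_factor(spot_distance_cm: float, solvent_front_cm: float) -> float:
    return spot_distance_cm / solvent_front_cm

def matches_standard(rf_sample: float, rf_standard: float,
                     tol: float = 0.05) -> bool:
    return abs(rf_sample - rf_standard) <= tol

rf_std = retention_factor(4.2, 8.0)        # hypothetical standard spot
rf_extract = retention_factor(4.3, 8.0)    # hypothetical culture-extract spot
print(rf_std, rf_extract, matches_standard(rf_extract, rf_std))
```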

Keywords: Cameroon, endophytic fungi, Prunus africana, secondary metabolite

Procedia PDF Downloads 230
328 A Rare Case of Dissection of Cervical Portion of Internal Carotid Artery, Diagnosed Postpartum

Authors: Bidisha Chatterjee, Sonal Grover, Rekha Gurung

Abstract:

Postpartum dissection of the internal carotid artery is a relatively rare condition and is considered an underlying aetiology in 5% to 25% of strokes under the age of 30 to 45 years. However, 86% of these cases recover completely and 14% are left with mild focal neurological symptoms. Prognosis is generally good with early intervention. The quoted risk of a repeat carotid artery dissection in subsequent pregnancies is less than 2%. A 36-year-old Caucasian primipara presented on postnatal day one after a forceps delivery with tachycardia. In the intrapartum period, she had prolonged rupture of membranes, developed intrapartum sepsis, and was treated with antibiotics. A postpartum ECG showed septal and inferior T-wave inversion, with a troponin level of 19. A subsequent echocardiogram ruled out postpartum cardiomyopathy. A repeat ECG showed improvement of the previous changes, and in the absence of symptoms, no intervention was warranted. On day 4 post-delivery, she developed a droopy right eyelid, pain around the right eye, and itching in the right ear. On examination, she had right-sided ptosis and unequal pupils (miotic right pupil). Cranial nerve examination, reflexes, sensory examination, and muscle power were normal. Apart from migraine, there was no medical or family history of note. In view of the right-sided Horner's syndrome, she underwent a CT angiogram and subsequently MRI/MRA and was diagnosed with dissection of the cervical portion of the right internal carotid artery. She was discharged on a course of aspirin 75 mg. By the 6-week postnatal follow-up, the patient had recovered significantly, with occasional episodes of unequal pupils and tingling of the right toes that resolved spontaneously. Cervical artery dissection, including vertebral artery dissection (VAD) and carotid artery dissection, is a rare complication of pregnancy, with an estimated annual incidence of 2.6-3 per 100,000 pregnancy hospitalizations. The aetiology remains unclear, though trauma from straining during labour, underlying arterial disease, and preeclampsia have been implicated. The hypercoagulable state of pregnancy and the puerperium could also be an important factor. Sixty to ninety percent of cases present with severe headache and neck pain, which generally precede neurological symptoms such as ipsilateral Horner's syndrome, retro-orbital pain, tinnitus, and cranial nerve palsy. Although rare, delayed diagnosis and management can lead to severe and permanent neurological deficits. Where there is a strong index of suspicion, patients should undergo MRI or MRA of the head and neck. Antithrombotic and antiplatelet therapy forms the mainstay of treatment, with selected cases needing endovascular stenting. The long-term prognosis is favourable, with either complete resolution or minimal deficit if treatment is prompt. Patients should be counselled about the recurrence risk and the possibility of stroke in a future pregnancy. Carotid artery dissection is rare and treatable but needs early diagnosis and treatment. Postpartum headache and neck pain with neurological symptoms should prompt urgent imaging followed by antithrombotic and/or antiplatelet therapy. Most cases resolve completely or with minimal sequelae.

Keywords: postpartum, dissection of internal carotid artery, magnetic resonance angiogram, magnetic resonance imaging, antiplatelet, antithrombotic

Procedia PDF Downloads 97
327 The Power-Knowledge Relationship in the Italian Education System between the 19th and 20th Century

Authors: G. Iacoviello, A. Lazzini

Abstract:

This paper focuses on the development of the study of accounting in the Italian education system between the 19th and 20th centuries, and on the subsequent formation of a scientific and experimental forma mentis that would prepare students for administrative and managerial activities in industry, commerce, and public administration. From a political perspective, the period was characterized by two dominant movements - liberalism (1861-1922) and fascism (1922-1945) - that deeply influenced accounting practices and the entire Italian education system. The materials used in the study include both primary and secondary sources. The primary sources used to inform this study are numerous original documents issued from 1890-1935 by the government and maintained in the Historical Archive of the State in Rome. The secondary sources have supported both the development of the theoretical framework and the definition of the historical context. This paper assigns to the educational system the role of cultural producer. Foucauldian analysis identifies the problem confronted by the critical intellectual in finding a way to deploy knowledge through a 'patient labour of investigation' that highlights the contingency and fragility of the circumstances that have shaped current practices and theories. Education can be considered a powerful and political process providing students with values, ideas, and models that they will subsequently use to discipline themselves, remaining as close to them as possible. It is impossible for power to be exercised without knowledge, just as it is impossible for knowledge not to engender power. The power-knowledge relationship can be usefully employed to explain how power operates within society and how mechanisms of power affect everyday lives. Power is employed at all levels and through many dimensions, including government. Schools exercise 'epistemological power' - a power to extract a knowledge of individuals from individuals. Because knowledge is a key element in the operation of power, the procedures applied to the formation and accumulation of knowledge cannot be considered neutral instruments for the presentation of the real. Consequently, the same institutions that produce and spread knowledge can be considered part of the 'power-knowledge' interrelation. Individuals have become both objects and subjects in the development of knowledge. Just as education plays a fundamental role in shaping all aspects of communities, so the structural changes resulting from economic, social, and cultural development affect educational systems. Analogously, the important changes related to social and economic development required legislative intervention to regulate the functioning of different areas of society. Knowledge can become a means of social control used by the government to manage populations. It can be argued that the evolution of Italy's education system is coherent with the idea that power and knowledge do not exist independently but instead are coterminous. This research aims to reduce the gap in this area by analysing the role of the state in the development of accounting education in Italy.

Keywords: education system, government, knowledge, power

Procedia PDF Downloads 139
326 Assessing the Efficiency of Pre-Hospital Scoring System with Conventional Coagulation Tests Based Definition of Acute Traumatic Coagulopathy

Authors: Venencia Albert, Arulselvi Subramanian, Hara Prasad Pati, Asok K. Mukhophadhyay

Abstract:

Acute traumatic coagulopathy is an endogenous dysregulation of the intrinsic coagulation system in response to injury; it is associated with a three-fold risk of poor outcome and is more amenable to corrective interventions following early identification and management. Multiple definitions for stratifying patients' risk of early acute coagulopathy have been proposed, with considerable variation in the defining criteria, including several trauma-scoring systems based on prehospital data. We aimed to develop a clinically relevant definition for acute coagulopathy of trauma based on conventional coagulation assays and to assess its efficacy in comparison to recently established prehospital prediction models. Methodology: Retrospective data of all trauma patients (n = 490) presented to our level I trauma center in 2014 were extracted. Receiver operating characteristic curve analysis was done to establish cut-offs for conventional coagulation assays for the identification of patients with acute traumatic coagulopathy. Prospectively, data of 100 adult trauma patients were collected; the cohort was stratified by the established definition, classified as "coagulopathic" or "non-coagulopathic", and correlated with the Prediction of Acute Coagulopathy of Trauma score and the Trauma-Induced Coagulopathy Clinical Score for identifying trauma coagulopathy and the subsequent risk of mortality. Results: Data of 490 trauma patients (average age 31.85±9.04; 86.7% males) were extracted. 53.3% had head injury, 26.6% had fractures, and 7.5% had chest and abdominal injury. Acute traumatic coagulopathy was defined as international normalized ratio ≥ 1.19; prothrombin time ≥ 15.5 s; activated partial thromboplastin time ≥ 29 s. Of the 100 adult trauma patients (average age 36.5±14.2; 94% males), 63% had early coagulopathy based on our conventional coagulation assay definition. The overall Prediction of Acute Coagulopathy of Trauma score was 118.7±58.5, and the Trauma-Induced Coagulopathy Clinical Score was 3 (0-8). Both scores were higher in coagulopathic than in non-coagulopathic patients (Prediction of Acute Coagulopathy of Trauma score 123.2±8.3 vs. 110.9±6.8, p-value = 0.31; Trauma-Induced Coagulopathy Clinical Score 4 (3-8) vs. 3 (0-8), p-value = 0.89), but the differences were not statistically significant. Overall mortality was 41%. The mortality rate was significantly higher in coagulopathic than in non-coagulopathic patients (75.5% vs. 54.2%, p-value = 0.04). A high Prediction of Acute Coagulopathy of Trauma score was also significantly associated with mortality (134.2±9.95 vs. 107.8±6.82, p-value = 0.02), whereas the Trauma-Induced Coagulopathy Clinical Score did not vary between survivors and non-survivors. Conclusion: Early coagulopathy was seen in 63% of trauma patients and was significantly associated with mortality. Acute traumatic coagulopathy defined by conventional coagulation assays (international normalized ratio ≥ 1.19; prothrombin time ≥ 15.5 s; activated partial thromboplastin time ≥ 29 s) demonstrated good ability to identify coagulopathy and subsequent mortality in comparison to the prehospital parameter-based scoring systems. The Prediction of Acute Coagulopathy of Trauma score may be better suited to predicting mortality than early coagulopathy. In emergency trauma situations, where immediate corrective measures need to be taken, complex multivariable scoring algorithms may cause delay, whereas coagulation parameters and conventional coagulation tests give highly specific results.
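
A small sketch of how the conventional-assay definition quoted in the abstract (INR ≥ 1.19, PT ≥ 15.5 s, aPTT ≥ 29 s) could be applied to stratify patients is given below. The abstract does not state how the three criteria are combined; flagging a patient when any cut-off is met is an assumption made here, and the patient records are hypothetical.

```python
# Sketch of applying the conventional-assay definition quoted in the abstract
# (INR >= 1.19, PT >= 15.5 s, aPTT >= 29 s). Combining the criteria with ANY
# is an assumption for illustration; the patient values are hypothetical.

CUTOFFS = {"inr": 1.19, "pt_s": 15.5, "aptt_s": 29.0}

def is_coagulopathic(patient: dict) -> bool:
    return any(patient[key] >= cutoff for key, cutoff in CUTOFFS.items())

patients = [
    {"id": "T-01", "inr": 1.25, "pt_s": 14.8, "aptt_s": 27.0},  # hypothetical
    {"id": "T-02", "inr": 1.05, "pt_s": 13.9, "aptt_s": 25.5},  # hypothetical
]
for p in patients:
    label = "coagulopathic" if is_coagulopathic(p) else "non-coagulopathic"
    print(p["id"], label)
```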

Keywords: trauma, coagulopathy, prediction, model

Procedia PDF Downloads 176
325 MTT Assay-Guided Isolation of a Cytotoxic Lead from Hedyotis umbellata and Its Mechanism of Action against Non-Small Cell Lung Cancer A549 Cells

Authors: Kirti Hira, A. Sajeli Begum, S. Mahibalan, Poorna Chandra Rao

Abstract:

Introduction: Cancer is one of the leading causes of death worldwide. Although existing therapies effectively kill cancer cells, they also affect normally growing cells, leading to many undesirable side effects. Hence, there is a need to develop effective as well as safe drug molecules to combat cancer, which is possible through phytochemical research. The currently available plant-derived blockbuster drugs are examples of this. In view of this, an investigation was carried out to identify cytotoxic lead molecules from Hedyotis umbellata (family Rubiaceae), a widely distributed weed in India. Materials and Methods: The methanolic extract of the whole plant of H. umbellata (MHU), prepared by the Soxhlet extraction method, was further fractionated with diethyl ether and n-butanol, successively. MHU, the ether fraction (EMHU), and the butanol fraction (BMHU) were lyophilized and tested for cytotoxic effect using the 3-(4,5-dimethyl-2-thiazolyl)-2,5-diphenyl-2H-tetrazolium bromide (MTT) assay against non-small cell lung cancer (NSCLC) A549 cell lines. The potentially active EMHU was subjected to chromatographic purification using normal-phase silica columns in order to isolate the responsible bioactive compounds. The isolated pure compounds were tested for their cytotoxic effect by MTT assay against A549 cells. Compound-3, which was found to be the most active, was characterized using IR, 1H- and 13C-NMR, and MS analysis. The study was further extended to decipher the mechanism of cytotoxicity of compound-3 against A549 cells through various in vitro cellular models. Cell cycle analysis was done using flow cytometry following propidium iodide (PI) staining. Protein analysis was done using the Western blot technique. Results: Among MHU, EMHU, and BMHU, the non-polar fraction EMHU demonstrated a significant dose-dependent cytotoxic effect with an IC50 of 67.7 μg/ml. Chromatography of EMHU yielded seven compounds. The MTT assay of the isolated compounds identified compound-3 as the most potent, inhibiting the growth of A549 cells with an IC50 value of 14.2 μM. Further, compound-3 was identified as cedrelopsin, a coumarin derivative with a molecular weight of 260. In vitro mechanistic studies showed that cedrelopsin induced cell cycle arrest at the G2/M phase and dose-dependently down-regulated the expression of G2/M regulatory proteins such as cyclin B1, cdc2, and cdc25C. This is the first report that explores the cytotoxic mechanism of cedrelopsin. Conclusion: Thus, a potential small lead molecule, cedrelopsin, isolated from H. umbellata and showing an antiproliferative effect mediated by G2/M arrest in A549 cells, was discovered. The effect of cedrelopsin against other cancer cell lines, followed by in vivo studies, can be examined in the future to develop a new drug candidate.
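
An IC50 such as the 14.2 μM reported for compound-3 is typically estimated by fitting a dose-response curve to MTT viability data. The sketch below fits a four-parameter logistic (Hill) model; the data points are invented and the SciPy-based fit is an assumption about tooling, not the study's exact workflow.

```python
# Minimal sketch of IC50 estimation from MTT dose-response data by fitting a
# four-parameter logistic (Hill) curve. The data points are invented and the
# SciPy-based fit is an assumption, not the study's workflow.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ic50, slope):
    """Four-parameter logistic: % viability as a function of concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** slope)

# Hypothetical % viability at increasing concentrations (uM)
conc = np.array([1.0, 3.0, 10.0, 30.0, 100.0])
viability = np.array([95.0, 85.0, 60.0, 30.0, 12.0])

params, _ = curve_fit(hill, conc, viability, p0=[10.0, 100.0, 15.0, 1.0])
print(f"Estimated IC50 ~ {params[2]:.1f} uM")
```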

Keywords: A549, cedrelopsin, G2/M phase, Hedyotis umbellata

Procedia PDF Downloads 175
324 Energy Refurbishment of University Building in Cold Italian Climate: Energy Audit and Performance Optimization

Authors: Fabrizio Ascione, Martina Borrelli, Rosa Francesca De Masi, Silvia Ruggiero, Giuseppe Peter Vanoli

Abstract:

Directive 2010/31/EU of the European Parliament and of the Council of 19 May 2010 on the energy performance of buildings moved the targets of the previous version toward more ambitious goals, for instance by establishing that, by 31 December 2020, all new buildings should be nearly zero-energy buildings. Moreover, the demonstrative role of public buildings is strongly affirmed, so that for these buildings the nearly zero-energy target is brought forward to January 2019. On the other hand, given the very low turnover rate of buildings (in Europe, it ranges between 1% and 3% per year), any policy that does not consider the renovation of the existing building stock cannot be effective in the short and medium term. Accordingly, the study provides a novel, holistic approach to designing the refurbishment of educational buildings in the colder cities of Mediterranean regions, enabling stakeholders to understand the uncertainty involved in using numerical modelling and the real environmental and economic impacts of adopting energy efficiency technologies. The case study is a university building in the Molise region, in central Italy. The proposed approach is based on the application of the cost-optimal methodology, as set out in Delegated Regulation (EU) No. 244/2012 and the accompanying Guidelines of the European Commission, for evaluating the cost-optimal level of energy performance with a macroeconomic approach. This means that the refurbishment scenario should correspond to the configuration that leads to the lowest global cost over the estimated economic life cycle, taking into account not only the investment cost but also the operational costs linked to energy consumption and polluting emissions. The definition of the reference building has been supported by various in-situ surveys, investigations, and evaluations of indoor comfort. Data collection can be divided into five categories: 1) geometrical features; 2) building envelope audit; 3) technical system and equipment characterization; 4) building use and thermal zone definition; 5) building energy data. For each category, the required measurements have been indicated, with some suggestions for identifying the spatial distribution and timing of the measurements. With reference to the case study, the collected data, together with a comparison with energy bills, allowed a proper calibration of a numerical model suitable for hourly energy simulation by means of EnergyPlus. Around 30 energy efficiency measures/packages have been taken into account, concerning both the envelope and the plant systems. Starting from the results, two points are examined exhaustively: (i) the importance of using validated models to simulate the present performance of the building under investigation; (ii) the environmental benefits and the economic implications of a deep energy refurbishment of educational buildings in cold climates.
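
The cost-optimal comparison rests on a global cost over the calculation period: investment cost plus discounted annual running costs, minus any discounted residual value. The sketch below implements that accumulation with illustrative numbers; replacement costs and the detailed cost categories of the EU framework are omitted, and all figures are assumptions rather than values from the case study.

```python
# Sketch of the global-cost accumulation behind the cost-optimal comparison:
# investment cost plus discounted annual running costs minus the discounted
# residual value over the calculation period. Numbers are illustrative and
# replacement costs are omitted for brevity.

def global_cost(investment, annual_cost, years=30, discount_rate=0.04,
                residual_value=0.0):
    discounted_running = sum(annual_cost / (1.0 + discount_rate) ** y
                             for y in range(1, years + 1))
    discounted_residual = residual_value / (1.0 + discount_rate) ** years
    return investment + discounted_running - discounted_residual

# Compare two hypothetical retrofit scenarios for the same building
baseline = global_cost(investment=0.0, annual_cost=120_000.0)
package = global_cost(investment=850_000.0, annual_cost=55_000.0,
                      residual_value=100_000.0)
print(f"baseline: {baseline:,.0f} EUR  retrofit package: {package:,.0f} EUR")
```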

Keywords: energy simulation, modelling calibration, cost-optimal retrofit, university building

Procedia PDF Downloads 178
323 Need for Elucidation of Palaeoclimatic Variability in the High Himalayan Mountains: A Multiproxy Approach

Authors: Sheikh Nawaz Ali, Pratima Pandey, P. Morthekai, Jyotsna Dubey, Md. Firoze Quamar

Abstract:

High mountain glaciers are among the most sensitive recorders of climate change because they respond to the combined effect of snowfall and temperature. The Himalayan glaciers have been studied at a good pace during the last decade. However, owing to its large ecological diversity and geographical vastness, a major part of the Indian Himalaya remains uninvestigated, and hence the palaeoclimatic patterns, and the chronology of past glaciations in particular, remain controversial for the entire Indian Himalayan transect. Although the Himalayan glaciers are nourished by two important climatic systems, viz. the southwest summer monsoon and the mid-latitude westerlies, the influence of these systems is yet to be understood. Nevertheless, the existing chronologies (mostly exposure ages) indicate that, irrespective of geographical position, glaciers seem to have grown during phases of enhanced Indian summer monsoon (ISM). The Himalayan mountain glaciers are referred to as the third pole or the water tower of Asia, as they form a huge reservoir of fresh water supplies for the Asian countries. Mountain glaciers are sensitive probes of the local climate and thus present both an opportunity and a challenge to interpret climates of the past as well as to predict future changes. The principal objective of all palaeoclimatic studies is to develop models/scenarios for the future. However, it has been found that the glacial chronologies bracket only the major phases of climatic events, and other climatic proxies are sparse in the Himalaya. This is the reason that compilations of data on rapid climatic change during the Holocene show major gaps in this region. Sedimentation in proglacial lakes, conversely, is more continuous and hence can be used to reconstruct a more complete record of past climatic variability as modulated by the changing ice volume of the valley glacier. The Himalayan region has numerous proglacial lacustrine deposits formed during the late Quaternary period; however, only a few such deposits have been studied so far. Therefore, it is high time that efforts are made to systematically map the moraines located in different climatic zones, reconstruct the local and regional moraine stratigraphy, and use multiple dating techniques to bracket the events of glaciation. Besides this, emphasis must be placed on carrying out multiproxy studies of lacustrine sediments, which will provide high-resolution palaeoclimatic data from the alpine region of the Himalaya. Although the Himalayan glaciers fluctuated in accordance with changing climatic conditions (natural forcing), it is too early to arrive at any conclusion. It is crucial to generate multiproxy data sets covering wider geographical and ecological domains, taking into consideration the multiple parameters that directly or indirectly influence glacier mass balance as well as the local climate of a region.

Keywords: glacial chronology, palaeoclimate, multiproxy, Himalaya

Procedia PDF Downloads 263