Search results for: wave function

1363 The Prodomain-Bound Form of Bone Morphogenetic Protein 10 is Biologically Active on Endothelial Cells

Authors: Austin Jiang, Richard M. Salmon, Nicholas W. Morrell, Wei Li

Abstract:

BMP10 is highly expressed in the developing heart and plays essential roles in cardiogenesis. BMP10 deletion in mice results in embryonic lethality due to impaired cardiac development. In adults, BMP10 expression is restricted to the right atrium, though ventricular hypertrophy is accompanied by increased BMP10 expression in a rat hypertension model. However, reports of BMP10 activity in the circulation are inconclusive. In particular, it is not known whether BMP10 secreted in vivo is active or whether additional factors are required to achieve its bioactivity. It has been shown that high-affinity binding of the BMP10 prodomain to the mature ligand inhibits BMP10 signaling activity in C2C12 cells, and it was proposed that the prodomain-bound BMP10 (pBMP10) complex is latent. In this study, we demonstrated that the BMP10 prodomain did not inhibit BMP10 signaling activity in multiple endothelial cells, and that recombinant human pBMP10 complex, expressed in mammalian cells and purified under native conditions, was fully active. In addition, both BMP10 in human plasma and BMP10 secreted from the mouse right atrium were fully active. Finally, we confirmed that active BMP10 secreted from the mouse right atrium was in the prodomain-bound form. Our data suggest that circulating BMP10 in adults is fully active and that the reported vascular quiescence function of BMP10 in vivo is due to the direct activity of pBMP10 and does not require an additional activation step. Moreover, being an active ligand, recombinant pBMP10 may have therapeutic potential as an endothelial-selective BMP ligand in conditions characterized by loss of BMP9/10 signaling.

Keywords: bone morphogenetic protein 10 (BMP10), endothelial cell, signal transduction, transforming growth factor beta (TGF-β)

Procedia PDF Downloads 252
1362 Cement Bond Characteristics of Artificially Fabricated Sandstones

Authors: Ashirgul Kozhagulova, Ainash Shabdirova, Galym Tokazhanov, Minh Nguyen

Abstract:

Synthetic rocks are advantageous over natural rocks in terms of availability and the consistency with which the impact of a particular parameter can be studied. Artificial rocks can be fabricated using a variety of techniques, such as mixing sand with Portland cement or gypsum, firing a mixture of sand and fine borosilicate glass powder, or in-situ precipitation of a calcite solution. In this study, sodium silicate solution was used as the cementing agent for quartz sand. The molded soft cylindrical sandstone samples are placed in a gas-tight pressure vessel, where the material hardens as the chemical reaction between carbon dioxide and the silicate solution progresses. The vessel allows uniform dispersion of carbon dioxide and control over the ambient gas pressure. This paper shows how the bonding material is initially distributed in the intergranular space and on the surface of the sand particles, using electron microscopy and energy dispersive spectroscopy. The strength of the cement bond is observed as a function of temperature, and the impact of the cementing agent dosage on the micro- and macro-characteristics of the sandstone is investigated. Analysis of the cement bond at the micro level helps to trace changes in particle bonding after potential yielding. Shearing behavior and compressional response have been examined, resulting in estimates of the shearing resistance and cohesion force of the sandstone. These are the main input values for mathematical models predicting sand production from weak clastic oil reservoir formations.

Keywords: artificial sandstone, cement bond, microstructure, SEM, triaxial shearing

Procedia PDF Downloads 143
1361 Typology of Fake News Dissemination Strategies in Social Networks in Social Events

Authors: Mohadese Oghbaee, Borna Firouzi

Abstract:

The emergence of the Internet, and more specifically the formation of social media, has provided the ground for new types of content dissemination. In recent years, social media users have shared information, communicated with others, and exchanged opinions on social events in this space. Much of the information published there is suspicious and produced with the intention of deceiving others; such content is often called "fake news". By mixing with correct information and misleading public opinion, fake news can endanger the security of countries and deprive audiences of the basic right of free access to real information. Competing governments, opposition elements, profit-seeking individuals, and even rival organizations, aware of this capacity, act on a large scale to distort and overturn facts in the virtual space of target countries and communities and to steer public opinion towards their goals. This extensive de-truthing of the information space has created a wave of harm and worry all over the world, and these concerns have opened a new path of research into the timely containment and reduction of the destructive effects of fake news on public opinion. The expansion of this phenomenon has the potential to create serious problems for societies, and its impact on events such as the 2016 American elections, Brexit, the 2017 French elections, and the 2019 Indian elections has prompted the adoption of countermeasures. A simple look at the growth trend of research in Scopus shows a sharp increase in studies with the keyword "false information", from only 30 scientific-research publications in 2015 to a peak of 524 in 2020. Considering that one of the capabilities of social media is to provide a context for disseminating news and information, both true and false, this article investigates the classification of strategies for spreading fake news in social networks during social events. To achieve this goal, the thematic analysis research method was chosen. First, an extensive library study was conducted on global sources. Then, in-depth interviews were conducted with 18 well-known specialists and experts in the field of news and media in Iran, selected by purposive sampling. Analyzing the data by thematic analysis yielded the following strategies (the research is in progress): unrealistically amplifying or downplaying the speed and content of the event, stimulating psycho-media movements, targeting emotionally receptive audiences such as women, teenagers and young people, fueling public hatred, framing reactions to events as legitimate or illegitimate, inciting physical conflict, oversimplifying violent protests, and the targeted publication of images and interviews.

Keywords: fake news, social network, social events, thematic analysis

Procedia PDF Downloads 36
1360 In vitro Analysis of the Effect of Oil Supplementation on Conjugated Linoleic Acid Production by Butyrivibrio fibrisolvens

Authors: B. D. Ravindra, A. K. Tyagi, C. Kathirvelan

Abstract:

Some micronutrients in food (milk and meat), called 'functional food components', exert beneficial effects beyond their routine nutrient function, and conjugated linoleic acid (CLA), an unsaturated fatty acid of ruminant origin, is an example of this category. Recently, however, fear of hypercholesterolemia due to saturated fats has led to the avoidance of dietary fat, especially of animal origin, despite advantages such as blood cholesterol lowering, immuno-modulation, and anticarcinogenic properties attributable to the presence of CLA. Increasing dietary linoleic acid (LA) and linolenic acid (LNA) is one feeding strategy for increasing the CLA concentration in milk. Butyrivibrio fibrisolvens is a potential rumen bacterium with a high capacity to isomerize LA to CLA. This study screened different oils, selected on the basis of their LA concentration, for CLA production. Butyrivibrio fibrisolvens cultures (strains 49, MZ3, and 30/10) were isolated from the rumen liquor of a fistulated buffalo (age ≈ 3 years; weight ≈ 250 kg) and used in in-vitro experiments; further work was carried out with three oils, viz. sunflower, mustard and soybean oil, at different concentrations (0.05, 0.1, 0.15, 0.2, 0.25 and 0.3 g/L of media) to study bacterial growth and CLA production at different incubation periods (0, 8, 12, 18, 24, 48, 72 h). In the present study, bacterial growth decreased linearly with increasing concentration of all three oils, with the largest decrease recorded at 0.30 g of oil per litre of media. The highest CLA production was 51.96, 42.08 and 25.60 µg/ml at 0.25 g, decreasing to 48.19, 39.35 and 23.41 µg/ml at 0.3 g supplementation of sunflower, soybean and mustard oil per litre of media, respectively, at an 18 h incubation period. The present study indicates that Butyrivibrio fibrisolvens is involved in the biohydrogenation process and that LA-rich sunflower meal can be used to improve CLA production in the rumen and thereby increase the CLA concentration of milk.

Keywords: Butyrivibrio fibrisolvens, CLA, fatty acids, sunflower oil

Procedia PDF Downloads 350
1359 An Integrated CFD and Experimental Analysis on Double-Skin Window

Authors: Sheam-Chyun Lin, Wei-Kai Chen, Hung-Cheng Yen, Yung-Jen Cheng, Yu-Cheng Chen

Abstract:

As a result of the constant dwindling of natural resources, alternative ways to reduce the costs of our daily life will urgently need to be found in the near future. Based on the solar chimney principle, known since Roman times, a double-skin façade is simply composed of two large glass panels intended for daylighting and also natural ventilation in the daytime. A double-skin façade is generally installed on the exterior side of a building and functions as a window, so the façade receives a large amount of passive solar energy to induce airflow on every sunny day. Therefore, this article proposes a domestic double-skin window for residential usage and attempts to improve the volume flow rate inside the cavity between the panels through the frame geometry design, the installation of an outlet guide plate, and the solar energy collection system. Note that numerical analyses are applied to investigate the characteristics of the flow field, and the boundary conditions in the simulation are based entirely on practical experiments with the original prototype. We then redesign the prototype using the numerical results and fluid dynamic theory, and experiments on the modified prototype are subsequently conducted to verify the simulation results. The velocities at the inlet of each case are increased by 5%, 45% and 15% relative to the experimental data, and the numerical simulation results also report a 20% improvement in volume flow rate both for the frame geometry design and for the installation of the outlet guide plate.

Keywords: solar energy, double-skin façades, thermal buoyancy, fluid machinery

Procedia PDF Downloads 467
1358 Pulmonary Embolism Indicative of Myxoma of the Right Atrium

Authors: A. Kherraf, M. Bouziane, A. Drighil, L. Azzouzi, R. Habbal

Abstract:

Objective: Myxomas are rare heart tumors most commonly found in the left atrium. The purpose of this observation is to report a rare case of myxoma of the right atrium revealed by pulmonary embolism. Observation: A 34-year-old patient with no medical history presented to the emergency room with sudden-onset dyspnea. Clinical examination showed arterial pressure of 110/70 mmHg, tachycardia at 110 bpm, and 90% oxygen saturation. The ECG showed an incomplete right bundle branch block. The chest X-ray was normal. Echocardiography revealed the presence of a large homogeneous right atrial mass, contiguous with the inter-atrial septum, prolapsing through the tricuspid valve and causing mild tricuspid insufficiency, with dilation of the right ventricle and preserved systolic function; systolic pulmonary artery pressure was estimated at 45 mmHg. A chest CT scan was performed, revealing a right segmental pulmonary embolism. The patient was put on anticoagulants and underwent surgical resection of the mass; pathological examination confirmed a myxoma. The postoperative course was uneventful, without recurrence of the mass after one year of follow-up. Discussion: Myxomas represent 50% of heart tumors. Most often, they originate in the left atrium, and more rarely in the right atrium or the ventricles. Myxoma of the right atrium can be responsible for life-threatening pulmonary embolism. The most predictive factor for embolization remains the morphology of the myxoma; papillary or villous myxomas are the most friable. Surgery is the standard treatment, with regular postoperative follow-up to detect recurrence. Conclusion: The right atrium is a rare location for these tumors. Pulmonary embolism is the main complication and should routinely prompt careful study of the right chambers on echocardiography.

Keywords: pulmonary embolism, myxoma, right atrium, heart tumors

Procedia PDF Downloads 116
1357 Forecast of Small Wind Turbine Sales with Replacement Purchases, with or without Account of Price Changes

Authors: V. Churkin, M. Lopatin

Abstract:

The purpose of this paper is to estimate the market potential of small wind turbines in the US and to forecast their sales. The forecasting method is based on the Bass model and the generalized Bass model of innovation diffusion under replacement purchases. An exponential distribution is used for modeling replacement purchases; its single parameter is determined by the average lifetime of small wind turbines. The model parameters are identified by nonlinear regression analysis on the basis of the annual sales statistics published by the American Wind Energy Association (AWEA) from 2001 to 2012. The estimated US average market potential of small wind turbines (for adoption purchases), without account of price changes, is 57080 (confidence interval from 49294 to 64866 at P = 0.95) for an average turbine lifetime of 15 years, and 62402 (confidence interval from 54154 to 70648 at P = 0.95) for an average lifetime of 20 years. In the first case the explained variance is 90.7%, while in the second it is 91.8%. The effect of wind turbine price changes on sales was estimated using the generalized Bass model, which required a price forecast; for this, a polynomial regression function based on Berkeley Lab statistics was used. The estimated US average market potential of small wind turbines (for adoption purchases) in that case is 42542 (confidence interval from 32863 to 52221 at P = 0.95) for an average lifetime of 15 years, and 47426 (confidence interval from 36092 to 58760 at P = 0.95) for an average lifetime of 20 years. In both cases the explained variance is 95.3%.
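To make the fitting procedure concrete, the sketch below estimates the Bass model parameters by nonlinear regression, as described above. The AWEA series is not reproduced in the abstract, so the `sales` array is placeholder data shaped like a diffusion curve; the replacement-purchase extension with an exponential lifetime distribution is omitted for brevity, so this is a simplified sketch rather than the authors' full model.

```python
import numpy as np
from scipy.optimize import curve_fit

def bass_annual_sales(t, m, p, q):
    """Annual adoptions implied by the Bass model: m * [F(t) - F(t-1)],
    where F(t) is the cumulative adoption fraction, m the market potential,
    p the innovation coefficient, and q the imitation coefficient."""
    def F(t):
        e = np.exp(-(p + q) * t)
        return (1.0 - e) / (1.0 + (q / p) * e)
    return m * (F(t) - F(t - 1))

# Placeholder annual sales (units/year), standing in for the AWEA 2001-2012 series.
years = np.arange(1, 13)
sales = np.array([540, 740, 990, 1320, 1740, 2240,
                  2810, 3400, 4000, 4480, 4820, 4870], dtype=float)

# Nonlinear least-squares identification of (m, p, q).
(m, p, q), cov = curve_fit(bass_annual_sales, years, sales,
                           p0=(50000, 0.01, 0.3), maxfev=10000)
stderr = np.sqrt(np.diag(cov))
print(f"market potential m = {m:.0f} +/- {1.96 * stderr[0]:.0f} (95% CI)")
print(f"p = {p:.4f}, q = {q:.4f}")
```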

Keywords: Bass model, generalized Bass model, replacement purchases, sales forecasting of innovations, statistics of sales of small wind turbines in the United States

Procedia PDF Downloads 329
1356 Splinting in Plastic Surgery Hand Trauma Setting

Authors: Samar Mousa, Rebecca Shirley

Abstract:

Injuries to the hand account for 20% of all emergency department attendances, with an estimated annual treatment cost of over £100 million in the UK. Functional impairments as a result of hand injuries often necessitate absence from employment, and the resulting reduced productivity is estimated to incur an additional £600m loss to the UK economy. Appropriate and early management is vital to preserve anatomy, prevent stiffness and allow function. The initial assessment and management of hand injuries are usually undertaken by junior staff, many of whom have little or no training or experience in splinting hand fractures. In our plastic surgery department at Stoke Mandeville Hospital, Buckinghamshire Trust, we carried out an audit project to detect errors in hand splinting between April 2022 and July 2022 and to identify measures to support junior doctors, nurses and hand therapists in providing the best possible care for hand trauma patients. Our standards were the British Society for Surgery of the Hand (BSSH) standard of care in hand trauma, the AO surgery reference, and the Stoke Mandeville Hospital hand therapy mini protocol (Feb 2022). During the 4-month period, 5 cases were identified: two cases of wrong splint choice, two cases of early removal of the splint, and one tight splint that required changing. In order to avoid these mistakes, a training program was delivered to junior doctors and nurses, in collaboration with the hand therapy team, on ways of splinting the hand for different injuries such as fractures, tendon injuries, muscle injuries and ligament injuries. In addition, a poster was hung in the examination rooms and theatres to help junior doctors reach the correct decision.

Keywords: splinting, hand trauma, plastic surgery, tendon injury, hand fracture

Procedia PDF Downloads 64
1355 Application of Non-Smoking Areas in Hospitals

Authors: Nur Inayah Ismaniar, Sukri Palutturi, Ansariadi, Atjo Wahyu

Abstract:

Background: In various countries around the world, smoking is now considered a serious problem because its effects not only lead to addiction but also have the potential to harm health. Public health authorities have concluded that one solution for protecting the public from active smokers is to issue a policy requiring public facilities to be completely smoke-free. The hospital is one of the public facilities designated as a smoke-free area. However, the implementation and maintenance of a successful smoke-free hospital program are still considered an ongoing challenge worldwide due to very low levels of adherence. Low compliance with smoke-free policies is also seen in other public facilities. The purpose of this literature review is to examine the level of compliance with Non-Smoking Area policies, how these policies have succeeded in reducing smoking activity in hospitals, and what factors lead to compliance in countries around the world. Methods: A literature review of articles was carried out covering all types of research methods, both qualitative and quantitative. The samples comprise all subjects present at the research locations, including patients, staff and hospital visitors. Results: Compliance levels varied widely across the literature, the highest reported level being 88.4%. Furthermore, several determinants known to affect compliance with Non-Smoking Area policies in hospitals include communication, information, knowledge, perceptions, interventions, attitudes and support. Obstacles to enforcement are the absence of sanctions against violators of the Non-Smoking Area policy, the ineffectiveness of policymakers in hospitals, and negative perceptions of smoking related to mental health. Conclusion: Violations of the Non-Smoking Area policy are often committed by hospital staff themselves, which makes it difficult for the policy to be fully enforced at various points in the hospital.

Keywords: health policy, non-smoking area, hospital, implementation

Procedia PDF Downloads 65
1354 Benchmarking Machine Learning Approaches for Forecasting Hotel Revenue

Authors: Rachel Y. Zhang, Christopher K. Anderson

Abstract:

A critical aspect of revenue management is a firm's ability to predict demand as a function of price. Historically, hotels have used simple time series models (regression and/or pick-up based models) owing to the complexity of trying to build causal models of demand. Machine learning approaches are slowly attracting attention owing to their flexibility in modeling relationships. This study provides an overview of approaches to forecasting hospitality demand, focusing on the opportunities created by machine learning approaches, including K-Nearest-Neighbors, Support Vector Machine, Regression Tree, and Artificial Neural Network algorithms. The out-of-sample performances of the above approaches to forecasting hotel demand are illustrated using a proprietary sample of market-level (24 properties) transactional data for Las Vegas, NV. Causal predictive models can be built and evaluated owing to the availability of market-level (versus firm-level) data. This research also compares and contrasts the accuracy of firm-level models (i.e., predictive models for hotel A using only hotel A's data) with models using market-level data (prices, review scores, location, chain scale, etc. for all hotels within the market). The proposed models will be valuable for predicting hotel revenue given the basic characteristics of a property, and can be applied to performance evaluation of an existing hotel. The findings will unveil the features that play key roles in a hotel's revenue performance, with considerable potential usefulness in both revenue prediction and evaluation.
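A minimal sketch of this kind of benchmarking setup, using scikit-learn. The proprietary Las Vegas dataset is not available, so the feature matrix and demand vector below are synthetic placeholders; the four model families named in the abstract are compared on out-of-sample error, which is the essence of the comparison described above.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic stand-in for market-level data: price, review score, location, ...
X = rng.normal(size=(2000, 5))
demand = 100 - 20 * X[:, 0] + 5 * X[:, 1] + rng.normal(scale=5, size=2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, demand, random_state=0)

models = {
    "KNN": KNeighborsRegressor(n_neighbors=10),
    "SVM": SVR(C=10.0),
    "RegressionTree": DecisionTreeRegressor(max_depth=6),
    "NeuralNet": MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000),
}
for name, model in models.items():
    pipe = make_pipeline(StandardScaler(), model)   # scale features, then fit
    pipe.fit(X_tr, y_tr)
    err = mean_absolute_percentage_error(y_te, pipe.predict(X_te))
    print(f"{name}: out-of-sample MAPE = {err:.3f}")
```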

Keywords: hotel revenue, k-nearest-neighbors, machine learning, neural network, prediction model, regression tree, support vector machine

Procedia PDF Downloads 108
1353 Ultrasound-Assisted Sol-Gel Synthesis of Nano-Boehmite for Biomedical Purposes

Authors: Olga Shapovalova, Vladimir Vinogradov

Abstract:

Among the many different sol-gel matrices, only alumina can be successfully injected parenterally into the human body. This is not surprising, because boehmite (aluminium oxyhydroxide) is a metal oxide approved by the FDA and EMA for intravenous and intramuscular administration, and it has long been used as an adjuvant in the production of many modern vaccines. In an earlier study, it was shown that the denaturation temperature of enzymes entrapped in a sol-gel boehmite matrix increases by 30-60 °C while the initial activity is preserved. This makes such matrices attractive for the long-term storage of unstable drugs. In the current work, we present an ultrasound-assisted sol-gel synthesis of nano-boehmite. This method provides a bio-friendly, very stable, highly homogeneous alumina sol using only water and aluminium isopropoxide as the precursor. Many parameters of the synthesis were studied in detail: ultrasound treatment time, ultrasound frequency, surface area, pore and nanoparticle size, zeta potential and others. Here we investigated the stability of the colloidal sols and the textural properties of the final composites as a function of ultrasonic treatment time, which ranged from 30 to 180 minutes. The surface area, average pore diameter and total pore volume of the final composites were measured with a Quantachrome Nova 1200 surface area and pore size analyzer. Matrices with an ultrasonic treatment time of 90 minutes had the largest surface area, 431 ± 24 m²/g. On the other hand, such matrices were less stable than samples with an ultrasonic treatment time of 120 minutes, which had a surface area of 390 ± 21 m²/g. Stable sols could be formed only after 120 minutes of ultrasonic treatment; otherwise, a white precipitate of boehmite formed. We conclude that the optimal ultrasonic treatment time is 120 minutes.

Keywords: boehmite matrix, stabilisation, ultrasound-assisted sol-gel synthesis

Procedia PDF Downloads 241
1352 Cognitive Dysfunction and the Fronto-Limbic Network in Bipolar Disorder Patients: An fMRI Meta-Analysis

Authors: Rahele Mesbah, Nic Van Der Wee, Manja Koenders, Erik Giltay, Albert Van Hemert, Max De Leeuw

Abstract:

Introduction: Patients with bipolar disorder (BD), characterized by depressive and manic episodes, often suffer from cognitive dysfunction. An up-to-date meta-analysis of functional Magnetic Resonance Imaging (fMRI) studies examining cognitive function in BD is lacking. Objective: The aim of the current fMRI meta-analysis is to investigate the brain functioning of bipolar patients compared with healthy controls (HCs) within three domains: emotion processing, reward processing, and working memory. Method: Differences in brain region activation were tested in a whole-brain analysis using the activation likelihood estimation (ALE) method. Separate analyses were performed for each cognitive domain. Results: A total of 50 fMRI studies were included: 20 studies used an emotion processing task (316 BD and 369 HC), 9 studies a reward processing task (215 BD and 213 HC), and 21 studies a working memory task (503 BD and 445 HC). During emotion processing, BD patients hyperactivated parts of the left amygdala and hippocampus as compared to HCs, but showed hypoactivation in the inferior frontal gyrus (IFG). Regarding reward processing, BD patients showed hyperactivation in part of the orbitofrontal cortex (OFC). During working memory, BD patients showed increased activity in the prefrontal cortex (PFC) and anterior cingulate cortex (ACC). Conclusions: This meta-analysis revealed evidence for activity disturbances in several brain areas involved in the cognitive functioning of BD patients. Furthermore, most of the identified regions are part of the so-called fronto-limbic network, which is hypothesized to be affected by the expression of BD candidate genes.

Keywords: cognitive functioning, fMRI analysis, bipolar disorder, fronto-limbic network

Procedia PDF Downloads 427
1351 The Effects of Acute Physical Activity on Measures of Inhibition in Pre-School Children

Authors: Antonia Stergiou

Abstract:

Background: Due to the developmental trajectory of executive function in the preschool years, the majority of existing studies investigating the association between acute physical activity and cognitive control have focused on adolescents and adult populations. Aim: The aim of this study was to investigate the possible effects of physical activity on the inhibitory control of pre-school children. Methods: This prospectively designed study was conducted in a primary school in Bristol in June 2015. The total number of subjects was n=61, and 20 trials of a modified Eriksen Flanker Task were completed before and after a 30-minute session of moderate exercise (including 5 minutes each of warm-up and cool-down). Each pre- and post-test assessment included both congruent and incongruent trials; the congruent trials were considered the control condition and the incongruent trials the measure of inhibitory control (experimental condition). At the end of the assessment, participants were instructed to choose the face that described their current feelings from three options (happy, neutral, sad). Results: There was a trend towards increased accuracy following moderate exercise, but it did not reach statistical significance (p > .05). However, there was a statistically significant improvement in reaction time following the same type of exercise (p = .005). The face board assessment revealed positive emotions after 30 minutes of moderate exercise. Conclusions: The current study supports findings from previous studies on the benefits of physical activity for children's inhibitory control and provides evidence of those benefits at even younger ages. Further research should consider each child individually. Implementation of these findings could result in an improved school curriculum with additional time spent on physical education.

Keywords: cognitive control, inhibition, physical activity, pre-school children

Procedia PDF Downloads 233
1350 Towards a Scientific Interpretation of the Theory of Rasa in Indian Classical Music

Authors: Ajmal Hussain

Abstract:

In Indian music parlance, Rasa denotes a distinct aesthetic experience that builds up in the mind of listeners while listening to a piece of Indian classical music. The distinction of the experience is rooted in the concept that it gives rise to an enhanced awareness of the Self or God and creates a mental state detached from the mundane issues of everyday life. The theory of Rasa was initially proposed in the context of theatre but became a part of Indian musicological discourse roughly two thousand years ago; however, to this day, it remains shrouded in mystery due to its religious associations and connotations. This paper attempts to demystify the theory of Rasa in the light of available scientific knowledge, particularly in the brain and mind sciences. The paper initially describes the religious context of the theory of Rasa and then discusses its classical formulations by Bharata and Abhinavagupta, including the steps and stages laid down by the latter to explain the creation of musical experience. The classical formulations are then interpreted with reference to scientific knowledge about the human mind and the mechanics of perception. The study uses the model of the human mind proposed by the Portuguese-American neuroscientist Antonio Damasio in his theory of the 'nesting principle'. On the basis of Damasio's findings, the paper interprets the experience of Rasa from a scientific perspective and clarifies the sequence of steps and stages involved in the making of musical experience. The study concludes that although the classical formulations of Rasa identify key aspects of musical experience, the association of Rasa with religion is misleading: the association depends not on the musical stimulus but on the intellectual orientation of the listener. It further establishes that the function of Rasa is more profound, as, from an evolutionary perspective, it can be seen as a catalyst for higher consciousness.

Keywords: aesthetic, consciousness, music, Rasa

Procedia PDF Downloads 106
1349 Optimization of Solar Rankine Cycle by Exergy Analysis and Genetic Algorithm

Authors: R. Akbari, M. A. Ehyaei, R. Shahi Shavvon

Abstract:

Nowadays, solar energy is used both as thermal energy for domestic, industrial and power applications and for the conversion of sunlight into electricity by photovoltaic cells. In this study, a thermodynamic simulation of the solar Rankine cycle with a phase change material (paraffin) was first carried out, followed by energy and exergy analyses. For optimization, single- and multi-objective genetic algorithms were used to maximize thermal and exergy efficiency. The parameters discussed in this paper include the effects of turbine inlet pressure, turbine inlet mass flow, converter surface area and collector angle on thermal and exergy efficiency. In the organic Rankine cycle, where solar energy is used as the input energy, fluid selection is a key factor in achieving reliable and efficient operation; therefore, silicone oil is selected as the working fluid for the high-temperature cycle and water for the low-temperature cycle. The results showed that increasing the mass flow to turbines 1 and 2 increases thermal efficiency, while it decreases the exergy efficiency of turbine 1 and increases that of turbine 2. Increasing the inlet pressure of turbine 1 decreases both thermal and exergy efficiency, while increasing the inlet pressure of turbine 2 increases both. Increasing the collector angle also increased thermal and exergy efficiency. The thermal efficiency of the system was 22.3%, which improved to 33.2% and 27.2% under single-objective and multi-objective optimization, respectively. The exergy efficiency of the system was 1.33%, which improved to 1.719% and 1.529% under single-objective and multi-objective optimization, respectively. These results show that the thermal and exergy efficiencies obtained by single-objective optimization are greater than those obtained by multi-objective optimization.
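A minimal sketch of the single-objective genetic-algorithm loop described above. The real objective is the thermodynamic simulation of the cycle; here a stand-in function of the four decision variables (the two turbine inlet pressures, the mass flow, and the collector angle) is used, and the variable bounds are illustrative assumptions rather than the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(1)

def thermal_efficiency(x):
    # Placeholder for the cycle simulation: maps decision variables to a
    # thermal-efficiency estimate. Not the authors' thermodynamic model.
    p1, p2, mdot, angle = x
    return 0.3 - 0.01 * (p1 - 3.0)**2 - 0.005 * (p2 - 1.0)**2 \
           - 0.02 * (mdot - 0.5)**2 - 0.0001 * (angle - 30.0)**2

bounds = np.array([[1.0, 6.0],    # turbine 1 inlet pressure (MPa), assumed
                   [0.2, 2.0],    # turbine 2 inlet pressure (MPa), assumed
                   [0.1, 1.0],    # inlet mass flow (kg/s), assumed
                   [0.0, 60.0]])  # collector angle (deg), assumed

pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(40, 4))
for gen in range(100):
    fitness = np.apply_along_axis(thermal_efficiency, 1, pop)
    # Tournament selection: keep the fitter of two random candidates
    i, j = rng.integers(0, 40, (2, 40))
    parents = pop[np.where(fitness[i] > fitness[j], i, j)]
    # Uniform crossover plus Gaussian mutation, clipped back to the bounds
    mask = rng.random((40, 4)) < 0.5
    children = np.where(mask, parents, parents[rng.permutation(40)])
    children += rng.normal(scale=0.05 * (bounds[:, 1] - bounds[:, 0]), size=(40, 4))
    pop = np.clip(children, bounds[:, 0], bounds[:, 1])

best = pop[np.argmax(np.apply_along_axis(thermal_efficiency, 1, pop))]
print("best decision vector:", np.round(best, 3))
```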

Keywords: exergy analysis, genetic algorithm, Rankine cycle, single and multi-objective function

Procedia PDF Downloads 118
1348 The Determination of Phosphorus Solubility in Iron as a Function of the Other Components

Authors: Andras Dezső, Peter Baumli, George Kaptay

Abstract:

Phosphorus is an important component in steels because it changes the mechanical properties and can modify the structure. Phosphorus can form the Fe3P compound, which segregates at ferrite grain boundaries at the nano- to microscale. This intermetallic compound degrades the mechanical properties; for example, it causes blue brittleness, i.e., embrittlement by the segregated particles at 200-300 °C. This work describes the effect of other components on phosphide solubility. We performed calculations for Ni, Mo, Cu, S, V, C, Si, Mn and Cr with the Thermo-Calc software and approximated the effects with fitted functions. The binary Fe-P system has a solubility line described by the equation ln w0 = -3.439 - 1.903/T, where w0 is the maximum dissolved phosphorus concentration in weight percent and T is the temperature in Kelvin. The equation shows that phosphorus becomes more soluble as the temperature increases. Nickel, molybdenum, vanadium, silicon, manganese and chromium affect the maximum dissolved concentration; the solubility decreases as the concentration of these elements in the steel increases. Copper, sulphur and carbon have no effect on phosphorus solubility. In all cases, the maximum solubility increases as the temperature rises. Between 473 K and 673 K, the phase diagrams of these systems contain mostly two- or three-phase eutectoid regions and single-phase ferritic intervals. In the eutectoid regions, ferrite, iron phosphide and metal(III) phosphide are in equilibrium. This modelling predicts which elements help to avoid phosphide segregation. These data are important when producing or selecting steels for which phosphide segregation limits the possible applications.
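A short numeric illustration of the solubility line quoted above, reading the coefficients as -3.439 and -1.903 and w0 in weight percent. The typesetting of the constants in the original is ambiguous, so these values are indicative only, not vetted thermodynamic data.

```python
import math

def max_phosphorus_solubility(T_kelvin, a=-3.439, b=-1.903):
    """Maximum dissolved P concentration (wt%) on the Fe-P solubility line,
    ln(w0) = a + b / T. Coefficients as quoted in the abstract; their
    typesetting is ambiguous, so treat them as placeholders."""
    return math.exp(a + b / T_kelvin)

# Evaluate over the 473-673 K interval discussed in the abstract
for T in (473, 573, 673):
    print(f"T = {T} K: w0 = {max_phosphorus_solubility(T):.4f} wt%")
```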

Keywords: phosphorus, steel, segregation, Thermo-Calc software

Procedia PDF Downloads 603
1347 Application of Remote Sensing and In-Situ Measurements for Discharge Monitoring in Large Rivers: Case of Pool Malebo in the Congo River Basin

Authors: Kechnit Djamel, Ammarri Abdelhadi, Raphael Tshimang, Mark Trrig

Abstract:

One of the most important applications of river monitoring is navigation. Variations in discharge generally change the draft available to a vessel, particularly in the low-flow season, and can affect the navigable waterway whenever the water depth falls below the level that allows safe navigation. The water depth is related to the bathymetry of the channel as well as the discharge, so a daily discharge value is required for seasonal updates of navigation maps. Many novel approaches based on earth observation and remote sensing have been investigated for large rivers; however, most of these approaches are not currently able to estimate river discharge directly. This paper discusses the application of remote sensing tools, based on analysis of the reflectance values of MODIS imagery combined with field measurements, for the estimation of discharge. The approach is applied in the lower reach of the Congo River (Pool Malebo) for the period between 2019 and 2021. The correlation between the discharge observed at the gauging station and the reflectance-ratio time series is 0.81. In this context, a Discharge Reflectance Model (DRM) was developed to express discharge as a function of reflectance, introducing a non-contact method for discharge monitoring using earth observation. The DRM was validated against ADCP field measurements in different sections of the Pool Malebo, over two different periods (dry and wet seasons), as well as against the discharge observed at the gauging station. The error between estimated and measured discharge values ranges from 1% to 8% for the ADCP and from 1% to 11% for the gauging station. A study of the uncertainties will make it possible to judge the robustness of the DRM.
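A sketch of how a discharge-reflectance rating of this kind can be calibrated. The abstract does not give the DRM's functional form, so a simple linear fit between the MODIS reflectance ratio and gauged discharge is assumed here purely for illustration, and the arrays are placeholders rather than the Pool Malebo series.

```python
import numpy as np

# Placeholder calibration data: MODIS band reflectance ratio and coincident
# gauged discharge (m^3/s). Not the actual 2019-2021 Pool Malebo series.
ratio = np.array([1.10, 1.18, 1.25, 1.31, 1.40, 1.52, 1.60])
discharge = np.array([30500, 33000, 36000, 38500, 41500, 45500, 48000])

# Least-squares fit of an assumed linear DRM: Q = a * ratio + b
a, b = np.polyfit(ratio, discharge, deg=1)
predicted = a * ratio + b

r = np.corrcoef(discharge, predicted)[0, 1]
rel_err = np.abs(predicted - discharge) / discharge
print(f"Q = {a:.0f} * ratio + {b:.0f}, r = {r:.2f}")
print("relative errors (%):", np.round(100 * rel_err, 1))
```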

Keywords: discharge monitoring, navigation, MODIS, empiric, ADCP, Congo River

Procedia PDF Downloads 62
1346 A Comprehensive Theory of Communication with Biological and Non-Biological Intelligence for a 21st Century Curriculum

Authors: Thomas Schalow

Abstract:

It is commonly recognized that our present curriculum is not preparing students to function in the 21st century. This is particularly true with regard to communication needs across cultures, both human and non-human. In this paper, a comprehensive theory of communication, based on communication with non-human cultures and intelligences, is presented to meet three imminent contingencies: communicating with sentient biological intelligences, communicating with extraterrestrial intelligences, and communicating with artificial super-intelligences. The paper begins with the argument that we need to become much more serious about communicating with the non-human, intelligent life forms that already exist around us here on Earth. We need to broaden our definition of communication and reach out to other sentient life forms in order to give humanity a better perspective of its place within our ecosystem. The paper next examines the science and philosophy behind CETI (communication with extraterrestrial intelligences) and how it could prove useful even in the absence of contact with alien life. However, CETI's assumptions and methodology need to be revised in accordance with the communication theory proposed in this paper if we are truly serious about finding and communicating with life beyond Earth. The final theme explored in this paper is communication with non-biological super-intelligences. Humanity has never been truly compelled to converse with other species, and our failure to seriously consider such intercourse has left us largely unprepared to deal with communication in a future that will be mediated and controlled by computer algorithms. Fortunately, our experience dealing with other cultures can provide us with a framework for this communication. The basic concepts behind intercultural communication can be applied to the three types of communication envisioned in this paper if we are willing to recognize that we are in fact dealing with other cultures when we interact with other species, alien life, and artificial super-intelligence. The ideas considered in this paper will require a new mindset for humanity, but a new disposition will yield substantial gains. A curriculum that is truly ready for the 21st century needs to be aligned with this new theory of communication.

Keywords: artificial intelligence, CETI, communication, language

Procedia PDF Downloads 335
1345 Recognizing Human Actions by Multi-Layer Growing Grid Architecture

Authors: Z. Gharaee

Abstract:

Recognizing actions performed by others is important in our daily lives, since it is necessary for communicating with others properly. We perceive an action by observing the kinematics of the motions involved in its performance, and we use our experience and concepts to recognize it correctly. Although building action concepts is a life-long process, repeated throughout life, we are very efficient at applying our learned concepts to analyzing motions and recognizing actions. Experiments in which subjects observe actions performed by an actor show that an action is recognized after only about two hundred milliseconds of observation. In this study, a hierarchical action recognition architecture based on growing grid layers is proposed. The first-layer growing grid receives pre-processed data of consecutive 3D postures of joint positions and applies heuristics during the growth phase to allocate areas of the map by inserting new neurons. As a result of training the first-layer growing grid, action pattern vectors are generated by connecting the elicited activations of the learned map. The ordered vector representation layer receives the action pattern vectors and creates time-invariant vectors of key elicited activations. The time-invariant vectors are sent to the second-layer growing grid for categorization; this grid creates the clusters representing the actions. Finally, a one-layer neural network trained by the delta rule labels the action categories in the last layer. System performance was evaluated in an experiment with the publicly available MSR-Action3D dataset, which contains actions performed using different parts of the human body: Hand Clap, Two Hands Wave, Side Boxing, Bend, Forward Kick, Side Kick, Jogging, Tennis Serve, Golf Swing, Pick Up and Throw. The growing grid architecture was trained on several random selections of generalization test data fed to the system, over on average 100 epochs for each training of the first-layer growing grid and around 75 epochs for each training of the second-layer growing grid. The average generalization test accuracy is 92.6%. A comparison between the growing grid architecture and a self-organizing map (SOM) architecture in terms of accuracy and learning speed shows that the growing grid architecture is superior in the action recognition task. The SOM architecture learns the same dataset of actions in around 150 epochs for each training of the first-layer SOM, while it takes 1200 epochs for each training of the second-layer SOM, and it achieves an average recognition accuracy of 90% on generalization test data. In summary, the growing grid network preserves the fundamental features of SOMs, such as the topographic organization of neurons, lateral interactions, unsupervised learning, and the representation of a high-dimensional input space in lower-dimensional maps. The architecture also benefits from an automatic size-setting mechanism, resulting in higher flexibility and robustness. Moreover, by utilizing growing grids, the system automatically obtains prior knowledge of the input space during the growth phase and uses this information to expand the map by inserting new neurons wherever there is high representational demand.
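A minimal sketch of a single growing-grid layer of the kind used in the first stage above, following the general Fritzke growing-grid scheme: SOM-style neighborhood updates plus insertion of a new row or column next to the neuron with the highest accumulated error. Grid size, learning rate, and growth schedule are illustrative assumptions, not the authors' settings or heuristics.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_growing_grid(data, max_neurons=36, epochs=10, lr=0.1, sigma=1.0):
    """Train a small growing grid: SOM updates plus one row/column
    insertion per epoch beside the neuron with the highest accumulated error."""
    rows, cols = 2, 2
    W = rng.normal(size=(rows, cols, data.shape[1]))        # neuron weights
    for _ in range(epochs):
        err = np.zeros((rows, cols))                        # accumulated error
        for x in data[rng.permutation(len(data))]:
            d = np.linalg.norm(W - x, axis=2)
            r, c = np.unravel_index(np.argmin(d), d.shape)  # best-matching unit
            err[r, c] += d[r, c] ** 2
            # Gaussian neighborhood update around the BMU
            gr, gc = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
            h = np.exp(-((gr - r) ** 2 + (gc - c) ** 2) / (2 * sigma ** 2))
            W = W + lr * h[:, :, None] * (x - W)
        # Growth phase: insert a row or column beside the max-error neuron
        if rows * cols < max_neurons:
            r, c = np.unravel_index(np.argmax(err), err.shape)
            if rows <= cols:                                # keep the grid roughly square
                i = min(r, rows - 2)
                W = np.insert(W, i + 1, (W[i] + W[i + 1]) / 2, axis=0)
                rows += 1
            else:
                j = min(c, cols - 2)
                W = np.insert(W, j + 1, (W[:, j] + W[:, j + 1]) / 2, axis=1)
                cols += 1
    return W

# Toy usage: 200 pre-processed posture vectors (e.g., 20 joints x 3 coordinates)
postures = rng.normal(size=(200, 60))
grid = train_growing_grid(postures)
print("final grid shape:", grid.shape[:2])
```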

Keywords: action recognition, growing grid, hierarchical architecture, neural networks, system performance

Procedia PDF Downloads 136
1344 Parameters Identification and Sensitivity Study for Abrasive WaterJet Milling Model

Authors: Didier Auroux, Vladimir Groza

Abstract:

This work is part of the STEEP Marie-Curie ITN project and focuses on the identification of unknown parameters of the proposed generic Abrasive WaterJet Milling (AWJM) PDE model, which appears as an ill-posed inverse problem. The necessity of studying this problem comes from industrial milling applications, where the ability to predict and model the final surface with high accuracy is one of the primary tasks in the absence of any knowledge of the model parameters that should be used. In this framework, we propose to identify the model parameters by minimizing a cost function measuring the difference between experimental and numerical solutions. The adjoint approach, based on the corresponding Lagrangian, gives the opportunity to find the unknowns of the AWJM model and their optimal values, which could be used to reproduce the required trench profile. Due to the complexity of the nonlinear problem and the large number of model parameters, we use an automatic differentiation software tool (TAPENADE) for the adjoint computations. By adding noise to the artificial data, we show that the parameter identification problem is in fact highly unstable and strongly depends on the input measurements. Regularization terms can be used effectively to deal with the presence of data noise and to improve the correctness of the identification. Based on this approach, we present 2D and 3D results for the identification of the model parameters and for the surface prediction, both with self-generated data and with measurements obtained from real production. Considering different types of model and measurement errors allows us to obtain acceptable results for manufacturing and to expect the proper identification of the unknowns. This approach also gives us the ability to extend the research to more complex cases, including a 3D time-dependent model with variations of the jet feed speed.
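A much-simplified sketch of the minimization step described above: identifying model parameters by minimizing a data-misfit cost with a Tikhonov regularization term to stabilize the ill-posed problem under data noise. The forward model here is a toy stand-in for the AWJM PDE (whose gradients the authors obtain via the adjoint and TAPENADE); all names and values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
x = np.linspace(-1.0, 1.0, 200)           # position across the trench (mm)

def forward(theta, x):
    """Toy stand-in for the AWJM forward model: a Gaussian trench profile
    with depth a, width w, and offset c. Not the actual PDE model."""
    a, w, c = theta
    return -a * np.exp(-(x / w) ** 2) + c

# Synthetic "experimental" trench profile with measurement noise
theta_true = np.array([0.8, 0.3, 0.05])
measured = forward(theta_true, x) + rng.normal(scale=0.02, size=x.size)

def cost(theta, alpha=1e-3):
    misfit = forward(theta, x) - measured
    # Tikhonov term: penalizes large parameters, stabilizing the inversion
    return 0.5 * np.sum(misfit ** 2) + 0.5 * alpha * np.sum(theta ** 2)

res = minimize(cost, x0=np.array([0.5, 0.5, 0.0]), method="L-BFGS-B")
print("identified parameters:", np.round(res.x, 3))
print("true parameters:      ", theta_true)
```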

Keywords: Abrasive Waterjet Milling, inverse problem, model parameters identification, regularization

Procedia PDF Downloads 289
1343 Assessment of Efficiency of Underwater Undulatory Swimming Strategies Using a Two-Dimensional CFD Method

Authors: Dorian Audot, Isobel Margaret Thompson, Dominic Hudson, Joseph Banks, Martin Warner

Abstract:

In competitive swimming, after dives and turns, athletes perform underwater undulatory swimming (UUS), copying marine mammals' method of locomotion. The body, performing this wave-like motion, accelerates the fluid downstream in its vicinity, generating propulsion with minimal resistance. Through this technique, swimmers can maintain greater speeds than surface swimming and take advantage of the overspeed granted by the dive (or push-off). Almost all previous work has considered UUS when performed at maximum effort. Critical parameters to maximize UUS speed are frequently discussed; however, this does not apply to most races. In only 3 out of the 16 individual competitive swimming events are athletes likely to attempt to perform UUS with the greatest speed, without thinking of the cost of locomotion. In the other cases, athletes will want to control the speed of their underwater swimming, attempting to maximise speed whilst considering energy expenditure appropriate to the duration of the event. Hence, there is a need to understand how swimmers adapt their underwater strategies to optimize speed within the allocated energetic cost. This paper develops a consistent methodology that enables different sets of UUS kinematics to be investigated. These may have different propulsive efficiencies and force generation mechanisms (e.g., force distribution along the body and force magnitude). The developed methodology therefore needs to: (i) provide an understanding of the UUS propulsive mechanisms at different speeds; (ii) investigate the key performance parameters when UUS is not performed solely for maximizing speed; (iii) consistently determine the propulsive efficiency of a UUS technique. The methodology is separated into two distinct parts: kinematic data acquisition and computational fluid dynamics (CFD) analysis. For the kinematic acquisition, the positions of several joints along the body and their sequencing were obtained either by video digitization or by underwater motion capture (Qualisys system). During data acquisition, the swimmers were asked to perform UUS at a constant depth in a prone position (facing the bottom of the pool) at different speeds: maximum effort, 100m pace, 200m pace and 400m pace. The kinematic data were input to a CFD algorithm employing a two-dimensional Large Eddy Simulation (LES). The algorithm adopted was specifically developed to perform quick unsteady simulations of deforming bodies and is therefore suitable for swimmers performing UUS. Despite its approximations, the algorithm is applied such that simulations are performed with the inflow velocity updated at every time step. It also enables calculation of the resistive forces (total and per segment) and the power input of the modeled swimmer. Validation of the methodology is achieved by comparing the data obtained from the computations with the original data (e.g., sustained swimming speed). This method is applied to the different kinematic datasets and provides data on swimmers' natural responses to pacing instructions. The results show how kinematics affect force generation mechanisms and hence how the propulsive efficiency of UUS varies for different race strategies.

Keywords: CFD, efficiency, human swimming, hydrodynamics, underwater undulatory swimming

Procedia PDF Downloads 191
1342 The Next Generation’s Learning Ability, Memory, as Well as Cognitive Skills Is under the Influence of Paternal Physical Activity (An Intergenerational and Trans-Generational Effect): A Systematic Review and Meta-Analysis

Authors: Parvin Goli, Amirhosein Kefayat, Rezvan Goli

Abstract:

Background: It is well established that parents can influence their offspring's neurodevelopment. It has been shown that the paternal environment and lifestyle are beneficial for the progeny's fitness and might affect their metabolic mechanisms; however, the effects of paternal exercise on the offspring's brain have not been explored in detail. Objective: This study aims to review the impact of paternal physical exercise on memory and learning, neuroplasticity, and DNA methylation levels in the offspring's hippocampus. Study design: In this systematic review and meta-analysis, an electronic literature search was conducted in databases including PubMed, Scopus, and Web of Science. Eligible studies were those with an experimental design, including an exercise intervention arm, with the assessment of any type of memory function, learning ability, or any type of brain plasticity as the outcome measures. Standardized mean differences (SMD) and 95% confidence intervals (CI) were computed as effect sizes. Results: The systematic review revealed the important role of environmental enrichment in the behavioral development of the next generation. Offspring of exercised fathers displayed higher levels of memory ability and lower levels of brain-derived neurotrophic factor. A significant effect of paternal exercise on hippocampal volume was also reported in the few available studies. Conclusion: These results suggest an intergenerational effect of paternal physical activity on cognitive benefit, which may be associated with hippocampal epigenetic programming in offspring. However, the biological mechanisms of this modulation remain to be determined.
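For reference, a small sketch of the effect-size computation named above: a standardized mean difference with a 95% CI, here in the Hedges' g form with the usual small-sample correction. The group statistics are placeholders, not values from the included studies.

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference (Hedges' g) with a 95% confidence interval."""
    # Pooled standard deviation of the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    j = 1 - 3 / (4 * (n1 + n2) - 9)          # small-sample correction factor
    g = j * d
    se = math.sqrt((n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2)))
    return g, (g - 1.96 * se, g + 1.96 * se)

# Placeholder example: offspring memory scores, exercised vs sedentary fathers
g, ci = hedges_g(m1=24.1, sd1=3.2, n1=12, m2=21.5, sd2=3.5, n2=12)
print(f"Hedges' g = {g:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```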

Keywords: hippocampal plasticity, learning ability, memory, parental exercise

Procedia PDF Downloads 190
1341 Multi-Impairment Compensation Based Deep Neural Networks for 16-QAM Coherent Optical Orthogonal Frequency Division Multiplexing System

Authors: Ying Han, Yuanxiang Chen, Yongtao Huang, Jia Fu, Kaile Li, Shangjing Lin, Jianguo Yu

Abstract:

In long-haul, high-speed optical transmission systems, the orthogonal frequency division multiplexing (OFDM) signal suffers from various linear and non-linear impairments. In recent years, researchers have proposed compensation schemes for specific impairments, with remarkable results; however, running several different impairment compensation algorithms increases transmission delay. With the widespread application of deep neural networks (DNN) in communication, DNN-based multi-impairment compensation is a promising scheme. In this paper, we propose and apply a DNN to compensate for multiple impairments of a 16-QAM coherent optical OFDM signal, thereby improving the performance of the transmission system. The trained DNN models are applied in the offline digital signal processing (DSP) module of the transmission system. The models optimize the constellation mapping signals at the transmitter and compensate for multiple impairments of the decoded OFDM signal at the receiver. Furthermore, the models reduce the peak-to-average power ratio (PAPR) of the transmitted OFDM signal and the bit error rate (BER) of the received signal. We verify the effectiveness of the proposed scheme for a 16-QAM coherent optical OFDM signal and demonstrate and analyze the transmission performance in different transmission scenarios. The experimental results show that the PAPR and BER of the transmission system are significantly reduced after using the trained DNN, demonstrating that a DNN with a suitable loss function and network structure can optimize the transmitted signal, learn the channel features, and effectively compensate for multiple impairments in fiber transmission.
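A minimal sketch of a DNN equalizer of the kind described, trained offline to map received (impaired) OFDM symbols back to the transmitted 16-QAM constellation points. The network size, loss function, and the synthetic channel (rotation, cubic nonlinearity, noise) are assumptions for illustration, not the authors' setup.

```python
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)

# Synthetic 16-QAM symbols and a toy impairment: rotation + cubic nonlinearity + noise
levels = np.array([-3, -1, 1, 3], dtype=np.float32)
tx = rng.choice(levels, size=(20000, 2))                  # (I, Q) per symbol
rot = np.array([[np.cos(0.1), -np.sin(0.1)],
                [np.sin(0.1),  np.cos(0.1)]], dtype=np.float32)
z = tx @ rot
rx = z + 0.02 * z**3 + rng.normal(scale=0.15, size=z.shape)

X = torch.tensor(rx, dtype=torch.float32)
Y = torch.tensor(tx, dtype=torch.float32)

# Small MLP mapping impaired (I, Q) samples back to ideal constellation points
net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(net(X), Y)
    loss.backward()
    opt.step()

# Hard-decide the equalized symbols and estimate the symbol error rate
eq = net(X).detach().numpy()
decided = levels[np.abs(eq[:, :, None] - levels).argmin(axis=2)]
ser = np.mean(np.any(decided != tx, axis=1))
print(f"symbol error rate after DNN equalization: {ser:.4f}")
```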

Keywords: coherent optical OFDM, deep neural network, multi-impairment compensation, optical transmission

Procedia PDF Downloads 114
1340 Geometrical Analysis of an Atheroma Plaque in Left Anterior Descending Coronary Artery

Authors: Sohrab Jafarpour, Hamed Farokhi, Mohammad Rahmati, Alireza Gholipour

Abstract:

In the current study, a nonlinear fluid-structure interaction (FSI) biomechanical model of atherosclerosis in the left anterior descending (LAD) coronary artery is developed to perform a detailed sensitivity analysis of the geometrical features of an atheroma plaque. In the development of the numerical model, a 3D geometry of the diseased artery is first developed based on patient-specific dimensions obtained from experimental studies. The geometry includes four influential geometric characteristics: stenosis ratio, plaque shoulder length, fibrous cap thickness, and eccentricity intensity. Then, a suitable strain energy density function (SEDF) is proposed based on a detailed material stability analysis to accurately model the hyperelasticity of the arterial walls. The time-varying inlet velocity and outlet pressure profiles are adopted from experimental measurements to incorporate the pulsatile nature of the blood flow. In addition, a computationally efficient type of structural boundary condition is imposed on the arterial walls. Finally, a non-Newtonian viscosity model is implemented to capture the shear-thinning behaviour of the blood flow. According to the results, the structural responses, in terms of the maximum principal stress (MPS), are affected more than the fluid responses, in terms of wall shear stress (WSS), as the geometrical characteristics vary. The extent of these changes is critical in the vulnerability assessment of an atheroma plaque.

Keywords: atherosclerosis, fluid-structure interaction modeling, material stability analysis, nonlinear biomechanics

Procedia PDF Downloads 64
1339 Ferromagnetic Potts Models with Multi Site Interaction

Authors: Nir Schreiber, Reuven Cohen, Simi Haber

Abstract:

The Potts model has been widely explored in the literature over the last few decades. While many analytical and numerical results concern the traditional two-site interaction model in various geometries and dimensions, little is yet known about models where more than two spins interact simultaneously. We consider a ferromagnetic four-site interaction Potts model on the square lattice (FFPS), where the four spins reside at the corners of an elementary square. Each spin can take an integer value 1,2,...,q. We write the partition function as a sum over clusters consisting of monochromatic faces. When the number of faces becomes large, tracing out spin configurations is equivalent to enumerating large lattice animals. It is known that the asymptotic number of animals with k faces is governed by λᵏ, with λ ≈ 4.0626. Based on this observation, systems with q < 4 and q > 4 exhibit second- and first-order phase transitions, respectively, while the nature of the transition in the q = 4 case is borderline. For any q, a critical giant component (GC) is formed. In the first-order case, the GC is simple, while it is fractal when the transition is continuous. Using simple equilibrium arguments, we obtain a zeroth-order bound on the transition point, and we claim that this bound should apply to other lattices as well. Next, taking into account the contributions of higher-order sites, the critical bound becomes tighter. Moreover, for q > 4, if corrections due to contributions from small clusters are negligible in the thermodynamic limit, the improved bound should be exact. The improved bound is used to relate the critical point to the finite correlation length. Our analytical predictions are confirmed by an extensive numerical study of the FFPS using the Wang-Landau method. In particular, the q = 4 marginal case is supported by a very ambiguous pseudo-critical finite-size behavior.
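For concreteness, the four-site interaction described above can be written as a plaquette Hamiltonian. The form below is a standard way to define such a model; the precise normalization used by the authors is not given in the abstract, so it should be read as an illustrative assumption.

```latex
% Plaquette (four-site) ferromagnetic Potts Hamiltonian on the square lattice:
% the sum runs over elementary squares [ijkl], and the product of Kronecker
% deltas equals 1 only when all four corner spins share the same color.
\[
  H = -J \sum_{[ijkl]} \delta_{s_i s_j}\,\delta_{s_j s_k}\,\delta_{s_k s_l},
  \qquad s_i \in \{1,\dots,q\}, \quad J > 0,
\]
\[
  Z = \sum_{\{s\}} e^{-\beta H}.
\]
```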

Keywords: entropic sampling, lattice animals, phase transitions, Potts model

Procedia PDF Downloads 140
1338 Transmission Line Congestion Management Using Hybrid Fish-Bee Algorithm with Unified Power Flow Controller

Authors: P. Valsalal, S. Thangalakshmi

Abstract:

The electrical power industry worldwide is undergoing a changeover from the traditional monopolistic structure towards a horizontally distributed competitive one to meet the demands of rising consumption. When the transmission lines of a deregulated system cannot accommodate all service requirements, they become overloaded, or congested. An Independent System Operator (ISO) is appointed as intermediary between customers and power producers to relieve congestion without violating transmission line limits. Among the existing approaches to congestion management, the most frequently used are generation rescheduling and load curtailment. However, there is a limit to generation rescheduling, and additional load cannot be served with the prevailing resources unless more private power producers are added to the system at considerably higher cost. Hence, congestion is relieved by appropriate Flexible AC Transmission System (FACTS) devices, which boost the existing transfer capacity of transmission lines. The Unified Power Flow Controller (UPFC) is the preferred FACTS device, and its correct placement is vital: it should be positioned in the most congested line. The weak line is therefore identified using a power flow performance index within a new objective function solved by the proposed hybrid Fish-Bee algorithm. Placing the UPFC in the appropriate line reduces branch loading and minimizes voltage deviation. The power transfer capacity of the lines is determined with and without the UPFC in the identified congested line of the IEEE 30-bus system, and the simulated results are compared with those of existing algorithms. The transfer capacity of the existing line is increased with the presented algorithm, thus alleviating the congestion.
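
As a pointer to how a congested line can be ranked, the sketch below evaluates the standard real-power line-loading performance index, PI = Σₘ (wₘ/2n)(Pₘ/Pₘᵐᵃˣ)²ⁿ, often used in FACTS placement studies. The paper's exact objective function and the hybrid Fish-Bee search are not reproduced here, and the line flows below are invented illustrative numbers, not IEEE 30-bus results.

```python
import numpy as np

# Standard real-power performance index for ranking line congestion.
# Flows and limits below are hypothetical, purely for illustration.
def performance_index(p_flow, p_max, weights=None, n=2):
    p_flow = np.asarray(p_flow, dtype=float)
    p_max = np.asarray(p_max, dtype=float)
    w = np.ones_like(p_flow) if weights is None else np.asarray(weights, dtype=float)
    return np.sum((w / (2 * n)) * (p_flow / p_max) ** (2 * n))

p = [95.0, 40.0, 130.0, 60.0, 20.0]        # MW, line active-power flows
pmax = [100.0, 80.0, 125.0, 90.0, 65.0]    # MW, thermal limits
loading = [pf / pm for pf, pm in zip(p, pmax)]
worst = max(range(len(p)), key=lambda k: loading[k])
print(f"system PI = {performance_index(p, pmax):.4f}")
print(f"most congested line: {worst} ({loading[worst]:.0%} loaded) -> candidate UPFC site")
```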

Keywords: available line transfer capability, congestion management, FACTS device, hybrid Fish-Bee algorithm, ISO, UPFC

Procedia PDF Downloads 357
1337 The Implantable MEMS Blood Pressure Sensor Model with Wireless Powering and Data Transmission

Authors: Vitaliy Petrov, Natalia Shusharina, Vitaliy Kasymov, Maksim Patrushev, Evgeny Bogdanov

Abstract:

The leading causes of death worldwide are ischemic heart disease and other cardiovascular illnesses, and high blood pressure is their common symptom. Long-term blood pressure monitoring is essential for prophylaxis, correct diagnosis, and timely therapy. Non-invasive methods based on Korotkoff sounds cannot be applied frequently or over long periods. Implantable devices can combine long-term monitoring with high measurement accuracy. The main purpose of this work is to create a real-time monitoring system to decrease the death rate from cardiovascular diseases. Implantable electronic devices have begun to play an important role in medicine. They usually consist of a transmitter, a power source (which can be wireless or a purpose-made battery), and a measurement circuit. Common problems in making implantable devices are short battery lifetime, large size, and biocompatibility. In this work, blood pressure measurement is the focus, as high blood pressure is one of the main symptoms of cardiovascular disease. Our system consists of three parts: the implantable pressure sensor, an external transmitter, and an automated workstation in a hospital. The implantable pressure sensor can be based on piezoresistive or capacitive technologies, each with its own advantages and limitations. The developed circuit is based on a small capacitive sensor fabricated using microelectromechanical systems (MEMS) technology. Compared to a piezoresistive sensor, the capacitive sensor provides high sensitivity, low power consumption, and minimal hysteresis. An oscillator-based circuit was selected, in which the oscillation frequency depends on the sensor capacitance; pressure can therefore be calculated from the capacitance. The external device (transmitter) is used for wireless charging and signal transmission. Some implants for such applications are passive: the external device sends a radio-wave signal to the internal LC circuit, receives the reflected signal, and calculates the change in capacitance, and hence blood pressure, from the frequency shift. However, this method has disadvantages, such as dependence on patient position and static operation. The developed implantable device does not have these disadvantages and sends blood pressure data to the external part in real time. The external device continuously forwards the blood pressure data to a hospital cloud service for analysis by a physician. The physician's automated workstation acts as a dashboard that displays current medical data for patients requiring attention and stores it in the cloud service. Critical heart conditions usually develop a few hours before a heart attack, and the device can send an alarm signal to the hospital for early action by the medical service. The system was tested with wireless charging and data transmission. These results can inform ASIC design for the MEMS pressure sensor.
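
To make the readout chain concrete, here is a minimal sketch of how pressure could be recovered from the oscillator frequency for an LC tank (f = 1/(2π√(LC))) with a linearized capacitance-pressure curve C(P) = C0 + S·P. The inductance, base capacitance, and sensitivity values are hypothetical, not the parameters of the device described above.

```python
import math

# Hypothetical LC-tank readout: frequency -> capacitance -> pressure.
L_COIL = 10e-6   # H, tank inductance (assumed)
C0 = 10e-12      # F, sensor capacitance at zero gauge pressure (assumed)
S = 5e-15        # F/mmHg, linearized pressure sensitivity (assumed)

def capacitance_from_frequency(f_hz):
    """Invert f = 1 / (2*pi*sqrt(L*C)) for the tank capacitance."""
    return 1.0 / (L_COIL * (2.0 * math.pi * f_hz) ** 2)

def pressure_from_capacitance(c):
    """Invert the linearized sensor curve C = C0 + S*P."""
    return (c - C0) / S

f_measured = 15.53e6  # Hz, example oscillator reading
c = capacitance_from_frequency(f_measured)
print(f"C = {c * 1e12:.2f} pF -> P = {pressure_from_capacitance(c):.1f} mmHg")
```

With these assumed values, a frequency shift of a few hundred kilohertz from the zero-pressure frequency (about 15.9 MHz) corresponds to a pressure swing of roughly 100 mmHg, which is why a small capacitance change is resolvable from the frequency.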

Keywords: MEMS sensor, RF power, wireless data, oscillator-based circuit

Procedia PDF Downloads 562
1336 A Multicriteria Framework for Assessing Energy Audit Software for Low-Income Households

Authors: Charles Amoo, Joshua New, Bill Eckman

Abstract:

Buildings in the United States account for a significant proportion of energy consumption and greenhouse gas (GHG) emissions, and this share is expected to keep rising in the near future. Low-income households, in particular, bear a disproportionate burden of building energy consumption and spending due to high energy costs. Energy efficiency improvements need to reach an average of 4% per year in this decade in order to meet the global net-zero emissions target by 2050, yet less than 1% of U.S. buildings are improved each year. The government has recognized the importance of technology in addressing this issue, and energy efficiency programs have been developed to tackle the problem. The Weatherization Assistance Program (WAP), the largest residential whole-house energy efficiency program in the U.S., is specifically designed to reduce energy costs for low-income households. Under the WAP, energy auditors must follow specific audit procedures and use Department of Energy (DOE) approved energy audit tools or software. This article proposes an expanded framework of factors that should be considered in energy audit software approved for use in energy efficiency programs, particularly for low-income households. The framework includes more than 50 factors organized under 14 assessment criteria and can be used to qualitatively and quantitatively score different energy audit software to determine their suitability for specific energy efficiency programs. While the framework can be useful for developers building new tools and improving existing software, as well as for energy efficiency program administrators approving or certifying tools for use, the model has limitations, such as the lack of a flexible, continuous scoring scheme that accommodates variability and subjectivity. These limitations can be addressed by using aggregate scores of each criterion as weights, combined with value function and direct rating scores in a multicriteria decision analysis, for more flexible scoring.
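
To illustrate the multicriteria scoring the framework envisions, here is a minimal weighted-sum sketch in which factor ratings are aggregated per criterion and combined with criterion weights. The criterion names, weights, and ratings are invented placeholders, not the paper's 14 actual assessment criteria or 50+ factors.

```python
from statistics import mean

# Placeholder criteria and weights (summing to 1.0); the real framework
# uses 14 criteria covering 50+ factors.
CRITERION_WEIGHTS = {
    "usability": 0.25,
    "modeling_accuracy": 0.40,
    "interoperability": 0.15,
    "reporting": 0.20,
}

def score_tool(ratings):
    """ratings: criterion -> list of 0-10 factor ratings for one tool.
    Returns the weighted aggregate score on the same 0-10 scale."""
    return sum(w * mean(ratings[c]) for c, w in CRITERION_WEIGHTS.items())

tools = {
    "tool_a": {"usability": [7, 8], "modeling_accuracy": [6, 7, 5],
               "interoperability": [4], "reporting": [8, 6]},
    "tool_b": {"usability": [5, 6], "modeling_accuracy": [9, 8, 8],
               "interoperability": [7], "reporting": [5, 7]},
}
for name, r in sorted(tools.items(), key=lambda kv: -score_tool(kv[1])):
    print(f"{name}: weighted score {score_tool(r):.2f} / 10")
```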

Keywords: buildings, energy efficiency, energy audit, software

Procedia PDF Downloads 52
1335 Receptor-Independent Effects of Endocannabinoid Anandamide on Contractility and Electrophysiological Properties of Rat Ventricular Myocytes

Authors: Lina T. Al Kury, Oleg I. Voitychuk, Ramiz M. Ali, Sehamuddin Galadari, Keun-Hang Susan Yang, Frank Christopher Howarth, Yaroslav M. Shuba, Murat Oz

Abstract:

A role for anandamide (N-arachidonoyl ethanolamide; AEA), a major endocannabinoid, in the cardiovascular system under various pathological conditions has been reported in earlier studies. In the present work, we hypothesized that the antiarrhythmic effects reported for AEA are due to its negative inotropic effect and altered action potential (AP) characteristics. We therefore tested the effects of AEA on the contractility and electrophysiological properties of rat ventricular myocytes. Video edge detection was used to measure myocyte shortening. Intracellular Ca2+ was measured in cells loaded with the fluorescent indicator fura-2 AM. The whole-cell patch-clamp technique was employed to investigate the effect of AEA on AP characteristics. AEA (1 μM) caused a significant decrease in the amplitudes of electrically-evoked myocyte shortening and Ca2+ transients and significantly decreased the duration of the AP. The effect of AEA on myocyte shortening and AP characteristics was not altered in the presence of pertussis toxin (PTX, 2 µg/ml for 4 h), AM251 and SR141716 (cannabinoid type 1 receptor antagonists), or AM630 and SR144528 (cannabinoid type 2 receptor antagonists). Furthermore, AEA inhibited the voltage-activated inward Na+ (INa) and Ca2+ (IL,Ca) currents, the major ionic currents shaping the APs in ventricular myocytes, in a voltage- and PTX-independent manner. Collectively, the results suggest that AEA depresses ventricular myocyte contractility by decreasing the action potential duration (APD) and inhibits the function of voltage-dependent Na+ and L-type Ca2+ channels in a manner independent of cannabinoid receptors. This mechanism may be importantly involved in the antiarrhythmic effects of anandamide.

Keywords: action potential, anandamide, cannabinoid receptor, endocannabinoid, ventricular myocytes

Procedia PDF Downloads 329
1334 Analyzing Apposition and the Typology of Specific Reference in Newspaper Discourse in Nigeria

Authors: Monday Agbonica Bello Eje

Abstract:

The language of the print media is characterized by the use of apposition. This linguistic element functions strategically in journalistic discourse, where it is communicatively necessary to name individuals and provide information about them. Linguistic studies of the language of the print media with a bias for apposition have largely dwelt on areas other than the typology of appositive reference in newspaper discourse. Yet this typology is capable of revealing how writers communicate and provide the information readers need to follow and understand the message. The study therefore analyses the patterns of appositional occurrence and the typology of reference in newspaper articles. The data were obtained from The Punch and Daily Trust newspapers: a total of six editions, collected at random over three months. News and feature articles were used in the analysis. Guided by the referential theory of meaning in discourse, the appositions identified were subjected to analysis. The findings show that the semantic relations of coreference and speaker coreference have the highest frequency of occurrence in the data. This is because the subject matter of news reports and feature articles focuses on humans and the events around them; as a result, readers need some detail and background information in order to identify referents and follow the discourse. The non-referential relations of absolute synonymy and speaker synonymy, by contrast, occur less frequently. This is tied to a major feature of the language of the media: simplicity. The paper concludes that apposition is mainly used to provide the reader with detail: it helps the writer give detailed yet concise descriptions while also helping the reader follow the discourse.

Keywords: apposition, discourse, newspaper, Nigeria, reference

Procedia PDF Downloads 134