Search results for: pairwise comparison
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5073

843 The Textual Criticism on the Age of ‘Wanli’ Shipwreck Porcelain and Its Comparison with ‘Witte Leeuw’ and Hatcher Shipwreck Porcelain

Authors: Yang Liu, Dongliang Lyu

Abstract:

After the Wanli shipwreck was discovered 60 miles off the east coast of Tanjong Jara in Malaysia, numerous marvelous ceramic shards were salvaged from the seabed. Remarkable pieces of Jingdezhen blue-and-white porcelain recovered from the site represent the essential part of this fascinating research. The porcelain cargo of the Wanli shipwreck is significant for studies of exported porcelains and of the Jingdezhen porcelain manufacturing industry of the late Ming dynasty. Using ceramic shard categorization and the study of Chinese and Western historical documents as a research strategy, the paper aims to shed new light on the classification of the Wanli shipwreck wares, with Jingdezhen kiln ceramics as its main focus. The article also discusses Jingdezhen blue-and-white porcelains from the perspective of domestic versus export markets and then proceeds to the systematization and analysis of the Wanli shipwreck porcelain, which bears witness to the forms, styles, and types of decoration that were being traded in this period. The porcelain data from two other shipwreck projects - the Witte Leeuw and the Hatcher - were chosen as comparative case studies, and the Wanli shipwreck Jingdezhen blue-and-white porcelain is reinterpreted in the context of the art history and archeology of the region. The marine archaeologist Sten Sjostrand named the ship the ‘Wanli shipwreck’ because its porcelain cargoes are typical of those made during the reign of the Wanli Emperor of the Ming dynasty. Though some scholars question the appropriateness of the name, a final verdict has yet to be reached. Building on previous historical argumentation, the article uses a comparative approach to review the Wanli shipwreck blue-and-white porcelains against porcelains unearthed from tombs or abandoned in towns that carry time-specific reign marks. All these materials provide strong evidence that the porcelain recovered from the Wanli ship can be dated to as early as the second year of the Tianqi era (1622) and the early Chongzhen reign. Lastly, some blue-and-white porcelain intended for the domestic market and some blue-and-white bowls from the Jingdezhen kilns recovered from the Wanli shipwreck all carry at the bottom a specific residue from the firing process. The author analyses these two interesting phenomena accordingly.

Keywords: blue-and-white porcelain, Ming dynasty, Jingdezhen kiln, Wanli shipwreck

Procedia PDF Downloads 154
842 Creatine Associated with Resistance Training Increases Muscle Mass in the Elderly

Authors: Camila Lemos Pinto, Juliana Alves Carneiro, Patrícia Borges Botelho, João Felipe Mota

Abstract:

Sarcopenia, a syndrome characterized by progressive and generalized loss of skeletal muscle mass and strength, currently affects over 50 million people and increases the risk of adverse outcomes such as physical disability, poor quality of life and death. The aim of this study was to examine the efficacy of creatine supplementation associated with resistance training on muscle mass in the elderly. A 12-week, double-blind, randomized, parallel-group, placebo-controlled trial was conducted. Participants were randomly allocated into one of the following groups: placebo with resistance training (PL+RT, n=14) and creatine supplementation with resistance training (CR+RT, n=13). The subjects from the CR+RT group received 5 g/day of creatine monohydrate and the subjects from the PL+RT group were given the same dose of maltodextrin. Participants were instructed to ingest the supplement, dissolved in a lemon-flavored beverage comprising 100 g of maltodextrin, on non-training days immediately after lunch and on training days immediately after the resistance training sessions. Participants of both groups undertook a supervised exercise training program for 12 weeks (3 times per week). The subjects were assessed at baseline and after 12 weeks. The primary outcome was muscle mass, assessed by dual energy X-ray absorptiometry (DXA). The secondary outcome was the diagnosis of participants with one of the three stages of sarcopenia (presarcopenia, sarcopenia and severe sarcopenia) by skeletal muscle mass index (SMI), handgrip strength and gait speed. The CR+RT group had a significant increase in SMI and muscle mass (p<0.0001), a significant decrease in android and gynoid fat (p=0.028 and p=0.035, respectively) and a tendency toward decreased body fat (p=0.053) after the intervention. The PL+RT group only had a significant increase in SMI (p=0.007). The main finding of this clinical trial indicated that creatine supplementation combined with resistance training was capable of increasing muscle mass in our elderly cohort (p=0.02). In addition, the number of subjects diagnosed with one of the three stages of sarcopenia at baseline decreased in the creatine-supplemented group in comparison with the placebo group (CR+RT, n=-3; PL+RT, n=0). In summary, 12 weeks of creatine supplementation associated with resistance training resulted in increases in muscle mass. This is the first study in elderly participants of both sexes to show such an increase in muscle mass with a smaller dose of creatine supplementation over a short period. Future long-term research should investigate the effects of these interventions in the sarcopenic elderly.

Keywords: creatine, dietetic supplement, elderly, resistance training

Procedia PDF Downloads 447
841 Comparison of EMG Normalization Techniques Recommended for Back Muscles Used in Ergonomics Research

Authors: Saif Al-Qaisi, Alif Saba

Abstract:

Normalization of electromyography (EMG) data in ergonomics research is a prerequisite for interpreting the data. Normalizing accounts for variability in the data due to differences in participants’ physical characteristics, electrode placement protocols, time of day, and other nuisance factors. Typically, normalized data are reported as a percentage of the muscle’s isometric maximum voluntary contraction (%MVC). Various MVC techniques have been recommended in the literature for normalizing EMG activity of back muscles. This research tests and compares the MVC techniques recommended in the literature for three back muscles commonly used in ergonomics research: the lumbar erector spinae (LES), latissimus dorsi (LD), and thoracic erector spinae (TES). Six healthy males from a university population participated in this research. Five different MVC exercises were compared for each muscle using the Trigno wireless EMG system (Delsys Inc.). Since the LES and TES share similar functions in controlling trunk movements, their MVC exercises were the same, which included trunk extension at -60°, trunk extension at 0°, trunk extension while standing, hip extension, and the arch test. The MVC exercises identified in the literature for the LD were chest-supported shoulder extension, prone shoulder extension, lat pull-down, internal shoulder rotation, and abducted shoulder flexion. The maximum EMG signal was recorded during each MVC trial, and then the averages were computed across participants. A one-way analysis of variance (ANOVA) was utilized to determine the effect of MVC technique on muscle activity. Post-hoc analyses were performed using the Tukey test. The MVC technique effect was statistically significant for each of the muscles (p < 0.05); however, a larger sample of participants would be needed to detect significant differences in the Tukey tests. The arch test was associated with the highest EMG average at the LES, and it also resulted in the maximum EMG activity more often than the other techniques (three out of six participants). For the TES, trunk extension at 0° was associated with the largest EMG average, and it resulted in the maximum EMG activity the most often (three out of six participants). For the LD, participants obtained their maximum EMG either from chest-supported shoulder extension (three out of six participants) or prone shoulder extension (three out of six participants). Chest-supported shoulder extension, however, had a larger average than prone shoulder extension (0.263 and 0.240, respectively). Although the aforementioned techniques had the highest averages, they did not always elicit the maximum EMG activity. If an accurate estimate of the true MVC is desired, more than one technique may have to be performed. This research provides additional MVC techniques for each muscle that may elicit the maximum EMG activity.
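As an aside for readers unfamiliar with the normalization step, the short sketch below shows how a task EMG envelope can be expressed as %MVC against the single highest value recorded across several MVC exercises; the function name and the numbers are illustrative, not data from this study.

```python
import numpy as np

def normalize_to_percent_mvc(task_emg_envelope, mvc_trials):
    """Express a task EMG envelope as a percentage of the maximum voluntary
    contraction (MVC) observed across several MVC exercises.

    task_emg_envelope : 1-D array of rectified/smoothed EMG (e.g. mV)
    mvc_trials        : list of 1-D arrays, one per MVC exercise
    """
    # Take the single highest EMG value over all MVC exercises, mirroring the
    # point above that no single technique always elicits the true maximum.
    mvc_reference = max(np.max(trial) for trial in mvc_trials)
    return 100.0 * task_emg_envelope / mvc_reference

# Hypothetical example: three MVC exercises for the lumbar erector spinae
mvc_trials = [np.array([0.18, 0.22, 0.21]),   # arch test
              np.array([0.15, 0.19, 0.17]),   # trunk extension at 0 deg
              np.array([0.12, 0.16, 0.14])]   # hip extension
task = np.array([0.05, 0.11, 0.08])
print(normalize_to_percent_mvc(task, mvc_trials))  # values in %MVC
```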

Keywords: electromyography, maximum voluntary contraction, normalization, physical ergonomics

Procedia PDF Downloads 168
840 Comparison of Cardiovascular and Metabolic Responses Following In-Water and On-Land Jump in Postmenopausal Women

Authors: Kuei-Yu Chien, Nai-Wen Kan, Wan-Chun Wu, Guo-Dong Ma, Shu-Chen Chen

Abstract:

Purpose: The purpose of this study was to investigate the responses of systolic blood pressure (SBP), diastolic blood pressure (DBP), heart rate (HR), rating of perceived exertion (RPE) and lactate following continued high-intensity interval exercise in water and on land. The results can serve as a reference for exercise program design for health care and fitness professionals. Method: A total of 20 volunteer postmenopausal women were included in this study. The inclusion criteria were: duration of menopause > 1 year; and sedentary lifestyle, defined as engaging in moderate-intensity exercise less than three times per week, or less than 20 minutes per day. Participants visited the experimental site three times. On the first visit, body composition was measured and participants filled out the questionnaire. Participants were randomly assigned to the exercise environment (water or land) for the second and third visits. Water exercise testing was performed in water at trochanter level. In the continuous jump testing, each bout consisted of two sets of 10-second maximal voluntary jumping. One minute of dynamic active rest (walking or running) at 50% heart rate reserve was included within each set. SBP, DBP, HR, RPE of the whole body/thigh (RPEW/RPET) and lactate were measured before and after testing. HR, RPEW, and RPET were monitored after 1, 2, and 10 min of exercise testing. SBP and DBP were measured after 10 and 30 min of exercise testing. Results: The SBP and DBP responses after exercise testing in water were higher than those on land. Lactate levels after exercise testing in water were lower than those on land. RPET responses in water were lower than those on land at 1 and 2 minutes post-exercise. Heart rate recovery in water was faster than on land at 5 minutes post-exercise. Conclusion: This study showed that water interval jump exercise induces higher cardiovascular responses with lower RPE responses and lactate levels than on-land jump exercise in postmenopausal women. Fatigue is one of the major barriers to exercise behavior. Jump exercise can enhance cardiorespiratory fitness, lower-extremity power, strength, and bone mass, offering several health benefits to middle-aged and older adults. This study showed that water interval jumping could feel more relaxed without needing to reach the same intensity as land-based cardiorespiratory exercise.
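For reference, the 50% heart rate reserve target used for the dynamic rest periods is conventionally computed with a Karvonen-style formula; the sketch below uses hypothetical heart rates, not values from this study.

```python
def target_hr_from_reserve(hr_rest, hr_max, fraction):
    """Karvonen-style target heart rate: resting HR plus a fraction of the
    heart rate reserve (HRmax - HRrest)."""
    return hr_rest + fraction * (hr_max - hr_rest)

# Hypothetical participant: resting HR 72 bpm, maximal HR 165 bpm
print(target_hr_from_reserve(72, 165, 0.5))  # ~118.5 bpm at 50% HRR
```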

Keywords: interval exercise, power, recovery, fatigue

Procedia PDF Downloads 382
839 The Automatisation of Dictionary-Based Annotation in a Parallel Corpus of Old English

Authors: Ana Elvira Ojanguren Lopez, Javier Martin Arista

Abstract:

The aims of this paper are to present the automatisation procedure adopted in the implementation of a parallel corpus of Old English, as well as to assess the progress of automatisation with respect to tagging, annotation, and lemmatisation. The corpus consists of an aligned parallel text with a word-for-word Old English-English comparison that provides the Old English segment with inflectional form tagging (gloss, lemma, category, and inflection) and lemma annotation (spelling, meaning, inflectional class, paradigm, word-formation and secondary sources). This parallel corpus is intended to fill a gap in the field of Old English, in which no parallel and/or lemmatised corpora are available, while the average amount of corpus annotation is low. With this background, this presentation has two main parts. The first part, which focuses on tagging and annotation, selects the layouts and fields of lexical databases that are relevant for these tasks. Most information used for the annotation of the corpus can be retrieved from the lexical and morphological database Nerthus and the database of secondary sources Freya. These are the sources of linguistic and metalinguistic information that will be used for the annotation of the lemmas of the corpus, including morphological and semantic aspects as well as the references to the secondary sources that deal with the lemmas in question. Although substantially adapted and re-interpreted, the lemmatised part of these databases draws on the standard dictionaries of Old English, including The Student's Dictionary of Anglo-Saxon, An Anglo-Saxon Dictionary, and A Concise Anglo-Saxon Dictionary. The second part of this paper deals with lemmatisation. It presents the lemmatiser Norna, which has been implemented on Filemaker software. It is based on a concordance and an index to the Dictionary of Old English Corpus, which comprises around three thousand texts and three million words. In its present state, the lemmatiser Norna can assign a lemma to around 80% of textual forms on an automatic basis, by searching the index and the concordance for prefixes, stems and inflectional endings. The conclusions of this presentation stress the limits of the automatisation of dictionary-based annotation in a parallel corpus. While tagging and annotation are largely automatic even at the present stage, the automatisation of alignment is pending for future research. Lemmatisation and morphological tagging are expected to be fully automatic in the near future, once the database of secondary sources Freya and the lemmatiser Norna have been completed.
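As a rough illustration of the dictionary-based strategy described above (stripping prefixes and inflectional endings and looking the remaining stem up in a lemma index), the toy sketch below is our own; the affix lists and index entries are invented and are not Norna's actual data.

```python
def guess_lemma(form, lemma_index,
                prefixes=("ge",),
                endings=("as", "es", "um", "ode", "an", "a", "e")):
    """Toy dictionary-based lemmatisation: generate candidate stems by
    stripping known prefixes and inflectional endings, then look them up."""
    candidates = {form}
    for p in prefixes:
        if form.startswith(p):
            candidates.add(form[len(p):])
    for base in list(candidates):
        for e in endings:
            if base.endswith(e) and len(base) > len(e) + 1:
                candidates.add(base[:-len(e)])
    for c in candidates:
        if c in lemma_index:
            return lemma_index[c]
    return None  # no lemma found automatically

toy_index = {"lufod": "lufian"}          # invented index entry
print(guess_lemma("gelufode", toy_index))  # -> 'lufian' in this toy example
```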

Keywords: corpus linguistics, historical linguistics, old English, parallel corpus

Procedia PDF Downloads 171
838 Resonant Fluorescence in a Two-Level Atom and the Terahertz Gap

Authors: Nikolai N. Bogolubov, Andrey V. Soldatov

Abstract:

Terahertz radiation occupies a range of frequencies from roughly 100 GHz to approximately 10 THz, just between microwaves and infrared waves. This range of frequencies holds promise for many useful applications in experimental applied physics and technology. At the same time, reliable, simple techniques for the generation, amplification, and modulation of electromagnetic radiation in this range are far from being developed enough to meet the requirements of its practical usage, especially in comparison to the level of technological ability already achieved for other domains of the electromagnetic spectrum. This state of relative underdevelopment of this potentially very important range of the electromagnetic spectrum is known as the 'terahertz gap.' Among other things, technological progress in the terahertz area has been impeded by the lack of compact, low-energy-consumption, easily controlled and continuously radiating terahertz radiation sources. Therefore, the development of new techniques serving this purpose, as well as various devices based on them, is an obvious necessity. No doubt, it would be highly advantageous to employ the simplest suitable physical systems as major critical components in these techniques and devices. The purpose of the present research was to show, by means of conventional methods of non-equilibrium statistical mechanics and the theory of open quantum systems, that a thoroughly studied two-level quantum system, also known as a one-electron two-level 'atom', driven by an external classical monochromatic high-frequency (e.g. laser) field, can radiate continuously at a much lower (e.g. terahertz) frequency in the fluorescent regime if the transition dipole moment operator of this 'atom' possesses permanent, non-equal diagonal matrix elements. This contradicts the conventional assumption, routinely made in quantum optics, that only the non-diagonal matrix elements persist. The conventional assumption is pertinent to natural atoms and molecules and stems from the spatial inversion symmetry of their eigenstates. At the same time, such an assumption is no longer justified for artificially manufactured quantum systems of reduced dimensionality, such as, for example, quantum dots, which are often nicknamed 'artificial atoms' due to the striking similarity of their optical properties to those of real atoms. Possible routes to experimental observation and practical implementation of the predicted effect are discussed too.
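For readers who want the assumption in symbols, a standard two-level-atom sketch (our notation, not taken from the paper) writes the dipole operator and the driven Hamiltonian as

\[
\hat{d} = d_{11}\,|1\rangle\langle 1| + d_{22}\,|2\rangle\langle 2| + d_{12}\bigl(|1\rangle\langle 2| + |2\rangle\langle 1|\bigr), \qquad d_{11} \neq d_{22},
\]
\[
\hat{H}(t) = \tfrac{1}{2}\hbar\omega_{0}\,\hat{\sigma}_{z} - \hat{d}\,E_{0}\cos(\omega t).
\]

For natural atoms, the spatial inversion symmetry mentioned above forces the permanent elements to vanish, d_{11} = d_{22} = 0, leaving only the transition element d_{12}; relaxing that constraint is what opens the possibility of low-frequency (terahertz) components in the fluorescence spectrum.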

Keywords: terahertz gap, two-level atom, resonant fluorescence, quantum dot

Procedia PDF Downloads 239
837 Impact of Urban Densification on Travel Behaviour: Case of Surat and Udaipur, India

Authors: Darshini Mahadevia, Kanika Gounder, Saumya Lathia

Abstract:

Cities, an outcome of natural growth and migration, are ever-expanding due to urban sprawl. In the Global South, urban areas are experiencing a switch from public transport to private vehicles, coupled with intensified urban agglomeration, leading to frequent, longer commutes by automobile. This increase in travel distance and motorized vehicle kilometres leads to unsustainable cities. To achieve the nationally pledged GHG emission mitigation goal, the government is prioritizing a modal shift to low-carbon transport modes like mass transit and paratransit. Mixed land use and urban densification are crucial for the economic viability of these projects. Informed by a desktop assessment of mobility plans and in-person primary surveys, the paper explores the challenges around urban densification and travel patterns in two Indian cities of contrasting nature: Surat, a metropolitan industrial city with a population of 5.9 million and a very compact urban form, and Udaipur, a heritage city attracting a large international tourist footfall, with limited scope for further densification. Dense, mixed-use urban areas often improve access to basic services and economic opportunities by reducing distances and enabling people who do not own personal vehicles to reach them on foot or by cycle. Yet residents travelling by different modes end up with similar trip lengths, highlighting the non-uniform distribution of land uses and the lack of planned transport infrastructure in the city and in the urban-peri-urban networks. Additionally, it is imperative to manage these densities to reduce negative externalities like congestion, air/noise pollution, lack of public spaces, loss of livelihood, etc. The study presents a comparison of the relationship between transport systems and the built form in both cities. The paper concludes with recommendations for managing densities in urban areas, along with promoting low-carbon transport choices like improved non-motorized transport and public transport infrastructure and minimizing personal vehicle usage in the Global South.

Keywords: India, low-carbon transport, travel behaviour, trip length, urban densification

Procedia PDF Downloads 181
836 Effects of Lipoic Acid Supplementation on Activities of Cyclooxygenases and Levels of Prostaglandins E2 and F2 Alpha Metabolites in the Offspring of Rats with Streptozocin-Induced Diabetes

Authors: H. Y. Al-Matubsi, G. A. Oriquat, M. Abu-Samak, O. A. Al Hanbali, M. Salim

Abstract:

Background: Uncontrolled diabetes mellitus (DM) is an etiological factor for recurrent pregnancy loss and major congenital malformations in the offspring. Antioxidant therapy has been advocated to overcome the oxidant-antioxidant disequilibrium inherent in diabetes. The aims of this study were to evaluate the protective effect of lipoic acid (LA) on fetal outcome and to elucidate changes that may be involved in the mechanism(s) underlying diabetic fetopathy. Methods: Female rats were rendered hyperglycemic using streptozocin and then mated with normal male rats. Pregnant non-diabetic (group 1, n=9; and group 2, n=7) or pregnant diabetic (group 3, n=10; and group 4, n=8) rats were treated daily with either lipoic acid (LA) (30 mg/kg body weight; groups 2 and 4) or vehicle (groups 1 and 3) between gestational days 0 and 15. On day 15 of gestation, the rats were sacrificed, and the fetuses, placentas and membranes were dissected out of the uterine horns. Following morphological examination, the fetuses, placentas and membranes were homogenized and used to measure cyclooxygenase (COX) activities and the levels of the prostaglandin (PG) E2 metabolite (PGEM) and PGF2α metabolite (PGFM). Maternal liver and plasma total glutathione levels were also determined. Results: Supplementation of diabetic rats with LA was found to significantly (P<0.05) reduce resorption rates and increase mean fetal weight compared with the untreated diabetic group. Treatment of diabetic rats with LA led to a significant (P<0.05) increase in liver and plasma total glutathione in comparison with diabetic rats. Decreased levels of PGEM and elevated levels of PGFM in the fetuses, placentas and membranes were characteristic of experimental diabetic gestation associated with malformation. LA treatment of diabetic mothers failed to normalize levels of PGEM to those of the non-diabetic control rats. However, the levels of PGEM in malformed fetuses from LA-treated diabetic mothers were significantly (P < 0.05) higher than those in malformed fetuses from diabetic rats. Conclusions: We conclude that LA can reduce congenital malformations in the offspring of diabetic rats at day 15 of gestation. However, since LA treatment did not completely prevent the occurrence of malformations, other factors, such as arachidonic acid deficiency and altered prostaglandin metabolism, may be involved in the pathogenesis of diabetes-induced congenital malformations.

Keywords: diabetes, lipoic acid, pregnancy, prostaglandins

Procedia PDF Downloads 233
835 Applying Organic Natural Fertilizer to 'Orange Rubis' and 'Farbaly' Apricot Growth, Yield and Fruit Quality

Authors: A. Tarantino, F. Lops, G. Lopriore, G. Disciglio

Abstract:

Biostimulants are organic fertilizers that can be applied in agriculture in order to increase nutrient uptake, growth and development of plants and to improve quality and productivity with positive environmental impacts. The aim of this study was to test the effects of some commercial biostimulant products (Bion® 50 WG, Hendophyt® PS, Ergostim® XL and Radicon®) on the vegetative and productive behavior and the qualitative fruit characteristics of two emerging apricot cultivars (Orange Rubis® and Farbaly®). The study was conducted during the spring-summer season of 2015, in a commercial orchard located in the agricultural area of Cerignola (Foggia district, Apulian region, Southern Italy). Eight-year-old apricot trees, cv. ‘Orange Rubis’ and ‘Farbaly’, were used. The data recorded during the trial were: shoot length, total number of flower buds, flower bud drop, and time of flowering and fruit set. Total fruit yield per tree and quality parameters were determined. The experimental data showed some specific differences among the biostimulant treatments. Concerning the yield of ‘Orange Rubis’, except for the Bion treatment, the other three biostimulant treatments showed tendentially lower values than the control. The yield of ‘Farbaly’ was lower for the Bion and Hendophyt treatments and higher for the Ergostim treatment when compared with the yield of the untreated control. Concerning the soluble solids content, the juice of ‘Farbaly’ fruits always had a higher content than that of ‘Orange Rubis’. In particular, the Bion and Hendophyt treatments showed values tendentially higher than the control at both harvests. In contrast, the four biostimulant treatments did not significantly affect this parameter in ‘Orange Rubis’. With regard to fruit firmness, some differences were observed between the two harvest dates and among the four biostimulant treatments. At the first harvest date, ‘Orange Rubis’ treated with the Bion and Hendophyt biostimulants showed texture values tendentially lower than the control, whereas in ‘Farbaly’ all the biostimulant treatments showed fruit firmness values significantly lower than the control. At the second harvest, almost all the biostimulant treatments in both the ‘Orange Rubis’ and ‘Farbaly’ cultivars showed values lower than the control. Only ‘Farbaly’ treated with Radicon showed a higher value in comparison to the control.

Keywords: apricot, fruit quality, growth, organic natural fertilizer

Procedia PDF Downloads 307
834 Land Cover Mapping Using Sentinel-2, Landsat-8 Satellite Images, and Google Earth Engine: A Study Case of the Beterou Catchment

Authors: Ella Sèdé Maforikan

Abstract:

Accurate land cover mapping is essential for effective environmental monitoring and natural resources management. This study focuses on assessing the classification performance of two satellite datasets and evaluating the impact of different input feature combinations on classification accuracy in the Beterou catchment, situated in the northern part of Benin. Landsat-8 and Sentinel-2 images from June 1, 2020, to March 31, 2021, were utilized. Employing the Random Forest (RF) algorithm on Google Earth Engine (GEE), a supervised classification categorized the land into five classes: forest, savannas, cropland, settlement, and water bodies. GEE was chosen due to its high-performance computing capabilities, mitigating computational burdens associated with traditional land cover classification methods. By eliminating the need for individual satellite image downloads and providing access to an extensive archive of remote sensing data, GEE facilitated efficient model training on remote sensing data. The study achieved commendable overall accuracy (OA), ranging from 84% to 85%, even without incorporating spectral indices and terrain metrics into the model. Notably, the inclusion of additional input sources, specifically terrain features like slope and elevation, enhanced classification accuracy. The highest accuracy was achieved with Sentinel-2 (OA = 91%, Kappa = 0.88), slightly surpassing Landsat-8 (OA = 90%, Kappa = 0.87). This underscores the significance of combining diverse input sources for optimal accuracy in land cover mapping. The methodology presented herein not only enables the creation of precise, expeditious land cover maps but also demonstrates the prowess of cloud computing through GEE for large-scale land cover mapping with remarkable accuracy. The study emphasizes the synergy of different input sources to achieve superior accuracy. As a future recommendation, the application of Light Detection and Ranging (LiDAR) technology is proposed to enhance vegetation type differentiation in the Beterou catchment. Additionally, a cross-comparison between Sentinel-2 and Landsat-8 for assessing long-term land cover changes is suggested.
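A minimal sketch of this kind of workflow with the Earth Engine Python API is shown below; the catchment rectangle, the training-point asset, the band selection, the cloud filter and the class property name are placeholders rather than the study's actual inputs.

```python
import ee
ee.Initialize()

# Placeholder geometry and training samples (hypothetical asset ID)
catchment = ee.Geometry.Rectangle([2.0, 9.0, 3.0, 10.0])
training_points = ee.FeatureCollection('users/example/beterou_training')

# Sentinel-2 surface reflectance composite for the study period
s2 = (ee.ImageCollection('COPERNICUS/S2_SR')
        .filterBounds(catchment)
        .filterDate('2020-06-01', '2021-03-31')
        .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 20))
        .median())

bands = ['B2', 'B3', 'B4', 'B8', 'B11', 'B12']
# Optional terrain features (elevation and slope), which the study found helpful
dem = ee.Image('USGS/SRTMGL1_003')
stack = s2.select(bands).addBands(dem).addBands(ee.Terrain.slope(dem))

# Sample the stack at labelled points and train a Random Forest classifier
samples = stack.sampleRegions(collection=training_points,
                              properties=['landcover'], scale=10)
classifier = (ee.Classifier.smileRandomForest(numberOfTrees=100)
                .train(features=samples,
                       classProperty='landcover',
                       inputProperties=stack.bandNames()))
classified = stack.classify(classifier)
```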

Keywords: land cover mapping, Google Earth Engine, random forest, Beterou catchment

Procedia PDF Downloads 29
833 A Study on the Effect of Design Factors of Slim Keyboard’s Tactile Feedback

Authors: Kai-Chieh Lin, Chih-Fu Wu, Hsiang Ling Hsu, Yung-Hsiang Tu, Chia-Chen Wu

Abstract:

With the rapid development of computer technology, the design of computers and keyboards is moving towards a trend of slimness. The change in mobile input devices directly influences users’ behavior. Although multi-touch applications allow entering text through a virtual keyboard, the performance, feedback, and comfort of the technology are inferior to a traditional keyboard, and while manufacturers have launched mobile touch keyboards and projection keyboards, their performance has not been satisfactory. Therefore, this study investigated the design factors of slim pressure-sensitive keyboards. The factors were evaluated with an objective evaluation (accuracy and speed) and a subjective evaluation (operability, recognition, feedback, and difficulty) depending on the shape (circle, rectangle, and L-shaped), thickness (flat, 3 mm, and 6 mm), and force (35±10 g, 60±10 g, and 85±10 g) of the keyboard. Moreover, MANOVA and Taguchi methods (regarding signal-to-noise ratios) were used to find the optimal level of each design factor. The research participants were divided into two groups by their typing speed (30 words/minute). Considering the multitude of variables and levels, the experiments were implemented using a fractional factorial design, and a representative model of the research samples was established for input task testing. The findings of this study showed that participants with low typing speed primarily relied on vision to recognize the keys, while those with high typing speed relied on tactile feedback, which was affected by the thickness and force of the keys. In the objective and subjective evaluations, a combination of keyboard design factors that might result in higher performance and satisfaction was identified (L-shaped, 3 mm, and 60±10 g) as the optimal combination. The learning curve was analyzed to make a comparison with a traditional standard keyboard and to investigate the influence of user experience on keyboard operation. The results indicated that the optimal combination still provided input performance inferior to that of a standard keyboard. The results could serve as a reference for the development of related products in industry and can be applied broadly to touch devices and input interfaces with which people interact.
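For context, the Taguchi signal-to-noise ratio for a 'larger-is-better' response such as typing accuracy is conventionally computed as in the sketch below; the accuracy values are made up for illustration.

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi signal-to-noise ratio (dB) for a 'larger-is-better' response:
    SN = -10 * log10( mean(1 / y^2) )."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# Hypothetical accuracy scores (%) for one factor-level combination
print(sn_larger_is_better([92.0, 88.5, 95.1]))
```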

Keywords: input performance, mobile device, slim keyboard, tactile feedback

Procedia PDF Downloads 273
832 The Effect of Filter Design and Face Velocity on Air Filter Performance

Authors: Iyad Al-Attar

Abstract:

Air filters installed in HVAC equipment and gas turbines for power generation confront several atmospheric contaminants at various concentrations while operating in different environments (tropical, coastal, hot). This leads to engine performance degradation, as contaminants are capable of deteriorating components and fouling the compressor assembly. Compressor fouling is responsible for 70 to 85% of gas turbine performance degradation, leading to a reduction in power output and availability and an increase in heat rate and fuel consumption. Therefore, filter design must take into account face velocities, pleat count and the corresponding surface area in order to verify filter performance characteristics (efficiency and pressure drop). The experimental work undertaken in the current study examined two groups of four filters with different pleating densities, which were investigated for their initial pressure drop response and fractional efficiencies. The pleating densities used for this study were 28, 30, 32 and 34 pleats per 100 mm for each pleated panel, measured at ten different flow rates ranging from 500 to 5000 m3/h in increments of 500 m3/h. This experimental work has highlighted the underlying reasons behind the reduction in filter permeability due to the increase in face velocity and pleat density. The surface area losses of the filtration media are due to one or a combination of the following effects: pleat crowding, deflection of the entire pleated panel, pleat distortion at the corner of the pleat, and/or filtration medium compression. It is evident from the entire array of experiments that as the particle size increases, the efficiency decreases until the most penetrating particle size (MPPS) is reached. Beyond the MPPS, the efficiency increases with increasing particle size. The MPPS shifts to a smaller particle size as the face velocity increases, while the pleating density and orientation did not have a pronounced effect on the MPPS. Throughout the study, an optimal pleat count which satisfies both initial pressure drop and efficiency requirements did not necessarily exist. The work also suggests that a valid comparison of pleat densities should be based on the effective surface area that participates in the filtration action and not on the total surface area the pleat density provides.
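To make the geometry concrete, the sketch below shows how pleat count translates into a nominal media area and how face velocity follows from flow rate and frontal area; the panel dimensions are illustrative, and the nominal area ignores the effective-area losses (pleat crowding, deflection, compression) discussed above.

```python
def nominal_media_area(pleat_depth_m, pleats_per_100mm, panel_width_m, panel_length_m):
    """Nominal (geometric) pleated-media area: each pleat contributes two
    faces of height equal to the pleat depth across the panel width."""
    n_pleats = pleats_per_100mm * (panel_length_m * 1000.0 / 100.0)
    return 2.0 * pleat_depth_m * panel_width_m * n_pleats

def face_velocity(flow_m3_per_h, face_area_m2):
    """Face velocity (m/s) through the filter's frontal area."""
    return flow_m3_per_h / 3600.0 / face_area_m2

# Illustrative 610 x 610 mm panel, 50 mm deep pleats, 30 pleats per 100 mm
area = nominal_media_area(0.05, 30, 0.61, 0.61)
print(round(area, 2), 'm2 of media')
print(round(face_velocity(3000, 0.61 * 0.61), 2), 'm/s at 3000 m3/h')
```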

Keywords: air filters, fractional efficiency, gas cleaning, glass fibre, HEPA filter, permeability, pressure drop

Procedia PDF Downloads 113
831 Leadership and Corporate Social Responsibility: The Role of Spiritual Intelligence

Authors: Meghan E. Murray, Carri R. Tolmie

Abstract:

This study aims to identify potential factors and widely applicable best practices that can contribute to improving corporate social responsibility (CSR) and corporate performance for firms by exploring the relationship between transformational leadership, spiritual intelligence, and emotional intelligence. Corporate social responsibility means that companies are cognizant of the impact of their actions on the economy, their communities, the environment, and the world as a whole, and execute their business practices accordingly. The prevalence of CSR has continuously strengthened over the past few years and is now a common practice in the business world, with such efforts coinciding with what stakeholders and the public now expect from corporations. Because of this, it is extremely important to be able to pinpoint factors and best practices that can improve CSR within corporations. One potential factor that may lead to improved CSR is spiritual intelligence (SQ), or the ability to recognize and live with a purpose larger than oneself. Spiritual intelligence is a measurable skill, just like emotional intelligence (EQ), and can be improved through purposeful and targeted coaching. This research project consists of two studies. Study 1 is a case study comparison of a benefit corporation and a non-benefit corporation. This study will examine the role of SQ and EQ as moderators in the relationship between the transformational leadership of employees within each company and the perception of each firm’s CSR and corporate performance. The project methodology includes creating and administering a survey comprising multiple pre-established scales on transformational leadership, spiritual intelligence, emotional intelligence, CSR, and corporate performance. Multiple regression analysis will be used to extract significant findings from the collected data. Study 2 will dive deeper into spiritual intelligence itself by analyzing pre-existing data and identifying key relationships that may provide value to companies and their stakeholders. This will be done by performing multiple regression analysis on anonymized data provided by Deep Change, a company that has created an advanced, proprietary system to measure spiritual intelligence. Based on the results of both studies, this research aims to uncover best practices, including the unique contribution of spiritual intelligence, that can be utilized by organizations to help enhance their corporate social responsibility. If it is found that high spiritual and emotional intelligence can positively impact CSR efforts, then corporations will have a tangible way to enhance their CSR: providing targeted employees with training and coaching to increase their SQ and EQ.

Keywords: corporate social responsibility, CSR, corporate performance, emotional intelligence, EQ, spiritual intelligence, SQ, transformational leadership

Procedia PDF Downloads 98
830 Overview of Environmental and Economic Theories of the Impact of Dams in Different Regions

Authors: Ariadne Katsouras, Andrea Chareunsy

Abstract:

The number of large hydroelectric dams in the world has increased from almost 6,000 in the 1950s to over 45,000 in 2000. Dams are often built to increase the economic development of a country. This can occur in several ways. Large dams take many years to build, so the construction process employs many people for a long time, and the increased production and income can flow on into other sectors of the economy. Additionally, the provision of electricity can help raise people’s living standards, and if the electricity is sold to another country, then the money can be used to provide other public goods for the residents of the country that owns the dam. Dams are also built to control flooding and provide irrigation water; most dams are of these types. This paper gives an overview of the environmental and economic theories of the impact of dams in different regions of the world. There is a difference in the degree of environmental and economic impacts due to the varying climates and varying social and political factors of the regions. Production of greenhouse gases from a dam’s reservoir, for instance, tends to be higher in tropical areas than in Nordic environments. However, there are also common impacts due to the construction of the dam itself, such as flooding of land for the creation of the reservoir and displacement of local populations. Economically, the local population tends to benefit least from the construction of the dam. Additionally, if a foreign company owns the dam or the government subsidises the cost of electricity to businesses, then the funds from electricity production do not benefit the residents of the country the dam is built in. So, in the end, dams can benefit a country economically, but the varying factors related to their construction, and how these are dealt with, determine the level of benefit, if any, of the dam. Some of the theories or practices used to evaluate the potential value of a dam include cost-benefit analysis, environmental impact assessments and regressions. Systems analysis is also a useful method. While these theories have value, there are also possible shortcomings. Cost-benefit analysis converts all the costs and benefits to dollar values, which can be problematic. Environmental impact assessments, likewise, can be incomplete, especially if the assessment does not include feedback effects, that is, if it only considers the initial impact. Finally, regression analysis is dependent on the available data and again would not necessarily include feedbacks. Systems analysis is a method that allows more complex modelling of the environment and the economic system. It would allow a clearer picture of the impacts to emerge and can include a long time frame.

Keywords: comparison, economics, environment, hydroelectric dams

Procedia PDF Downloads 166
829 Spatio-Temporal Dynamics of Snow Cover and Melt/Freeze Conditions in Indian Himalayas

Authors: Rajashree Bothale, Venkateswara Rao

Abstract:

The Indian Himalayas, also known as the Third Pole, with an area of 0.9 million sq km, contain the largest reserve of ice and snow outside the poles and affect the global climate and water availability in the perennial rivers. Variations in the extent of snow are indicative of climate change. Snow melt is sensitive to climate change (warming) and is also an influencing factor in climate change. A study of the spatio-temporal dynamics of snow cover and melt/freeze conditions was carried out using space-based observations in the visible and microwave bands. An analysis period of 2003 to 2015 was selected to identify and map the changes and trends in snow cover using Indian Remote Sensing (IRS) Advanced Wide Field Sensor (AWiFS) and Moderate Resolution Imaging Spectroradiometer (MODIS) data. For mapping of wet snow, microwave data were used, as they are sensitive to the presence of liquid water in the snow. The present study uses Ku-band scatterometer data from the QuikSCAT and Oceansat satellites. The enhanced-resolution images at 2.25 km from the 13.6 GHz sensor were used to analyze the backscatter response to dry and wet snow for the period 2000-2013 using a threshold method. The study area was divided into three major river basins, namely the Brahmaputra, Ganges and Indus, which also represent the division of the Himalayas into the Eastern, Central and Western Himalayas. Topographic variations across the different zones show that a majority of the study area lies in the 4000-5500 m elevation range and that the maximum percentage of high-elevation areas (>5500 m) lies in the Western Himalayas. The effect of climate change can be seen in the extent of snow cover and also in the melt/freeze status in different parts of the Himalayas. The melt onset day becomes later from east (March 11 ± 11 days) to west (May 12 ± 15 days), with large variation in the number of melt days. The Western Himalayas have a shorter melt duration (120 ± 15 days) in comparison to the Eastern Himalayas (150 ± 16 days), providing less time for melt. Eastern Himalayan glaciers are prone to enhanced melt due to the long melt duration. The extent of snow cover, coupled with the melt/freeze status as an indicator of solar radiation, can be used as a precursor for monsoon prediction.
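A generic threshold-style melt-onset detector on a Ku-band backscatter time series might look like the sketch below; the 3 dB drop and 3-day persistence criterion are illustrative defaults, not the calibration used in this study.

```python
import numpy as np

def detect_melt_onset(sigma0_db, day_of_year, winter_days=60,
                      drop_db=3.0, persistence=3):
    """Flag melt onset when Ku-band backscatter drops below the winter mean
    by `drop_db` dB for `persistence` consecutive days.

    sigma0_db   : daily backscatter time series (dB)
    day_of_year : matching day-of-year array
    """
    sigma0_db = np.asarray(sigma0_db, dtype=float)
    winter_mean = sigma0_db[:winter_days].mean()    # dry-snow reference level
    below = sigma0_db < (winter_mean - drop_db)     # wet-snow signature
    run = 0
    for i, flag in enumerate(below):
        run = run + 1 if flag else 0
        if run >= persistence:
            return int(day_of_year[i - persistence + 1])  # first day of the run
    return None  # no melt detected
```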

Keywords: Indian Himalaya, scatterometer, snow melt/freeze, AWiFS, cryosphere

Procedia PDF Downloads 229
828 Effects of Group Cognitive Restructuring and Rational Emotive Behavioral Therapy on Psychological Distress of Awaiting-Trial Inmates in Correctional Centers in North- West, Nigeria

Authors: Muhammad Shafi'u Adamu

Abstract:

This study examined the effects of two group cognitive behavioural therapies (cognitive restructuring and rational emotive behavioural therapy) on the psychological distress of awaiting-trial inmates in correctional centres in North-West, Nigeria. The study had four specific objectives, four research questions, and four null hypotheses. The study used a quasi-experimental design involving a pre-test and post-test. The population comprised all 7,962 awaiting-trial inmates in correctional centres in North-West, Nigeria. 131 awaiting-trial inmates from three intact correctional centres were randomly selected using the census technique. The respondents were then randomly assigned to three groups (CR, REBT and control). The Kessler Psychological Distress Scale (K10) was adapted for data collection in the study. The instrument was validated by experts and subjected to a pilot study, yielding a Cronbach's alpha reliability coefficient of 0.772. Each group received treatment for 8 consecutive weeks (60 minutes/week). Data collected from the field were subjected to descriptive statistics (mean, standard deviation and mean difference) to answer the research questions. Inferential statistics (ANOVA and the independent-samples t-test) were used to test the null hypotheses at the P≤0.05 level of significance. Results revealed that there was no significant difference among the pre-treatment mean scores of the experimental and control groups. Statistical evidence also showed a significant difference among the mean scores of the three groups, and the results of the post-hoc multiple-comparison test indicated a post-treatment reduction of psychological distress in the awaiting-trial inmates. The output also showed a significant difference between the post-treatment psychological distress mean scores of male and female awaiting-trial inmates, but there was no such difference among those exposed to REBT. The research recommends that a standardized, structured CBT counselling treatment be designed for correctional centres across Nigeria, and that CBT counselling techniques be used in the treatment of psychological distress in both correctional and clinical settings.

Keywords: awaiting-trial inmates, cognitive restructuring, correctional centres, group cognitive behavioural therapies, rational emotive behavioural therapy

Procedia PDF Downloads 43
827 An Improvement of ComiR Algorithm for MicroRNA Target Prediction by Exploiting Coding Region Sequences of mRNAs

Authors: Giorgio Bertolazzi, Panayiotis Benos, Michele Tumminello, Claudia Coronnello

Abstract:

MicroRNAs are small non-coding RNAs that post-transcriptionally regulate the expression levels of messenger RNAs. MicroRNA regulatory activity depends on the recognition of binding sites located on mRNA molecules. ComiR (Combinatorial miRNA targeting) is a user-friendly web tool developed to predict the targets of a set of microRNAs, starting from their expression profile. ComiR incorporates miRNA expression in a thermodynamic binding model, and it associates each gene with the probability of being a target of a set of miRNAs. The ComiR algorithms were trained with information regarding binding sites in the 3’UTR region, using a reliable dataset containing the targets of endogenously expressed microRNAs in D. melanogaster S2 cells. This dataset was obtained by comparing the results from two different experimental approaches, i.e., inhibition and immunoprecipitation of the AGO1 protein; this protein is a component of the microRNA-induced silencing complex. In this work, we tested whether including coding region binding sites in the ComiR algorithm improves the performance of the tool in predicting microRNA targets. We focused the analysis on the D. melanogaster species and updated the ComiR underlying database with the currently available releases of mRNA and microRNA sequences. As a result, we find that the ComiR algorithm trained with the information related to the coding regions is more efficient in predicting the microRNA targets than the algorithm trained with 3’UTR information. On the other hand, we show that 3’UTR-based predictions can be seen as complementary to the coding-region-based predictions, which suggests that both predictions, from the 3’UTR and the coding regions, should be considered in a comprehensive analysis. Furthermore, we observed that the lists of targets obtained by analyzing data from one experimental approach only, that is, inhibition or immunoprecipitation of AGO1, are not reliable enough to test the performance of our microRNA target prediction algorithm. Further analysis will be conducted to investigate the effectiveness of the tool with data from other species, provided that validated datasets, as obtained from the comparison of RISC protein inhibition and immunoprecipitation experiments, become available for the same samples. Finally, we propose to upgrade the existing ComiR web tool by including the coding-region-based trained model, made available together with the 3’UTR-based one.
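Purely to illustrate the combinatorial idea of scoring a gene against a whole set of expressed miRNAs (a generic noisy-OR style combination, not ComiR's published thermodynamic model), one could write:

```python
def combined_target_probability(per_mirna_probs, expression):
    """Combine per-miRNA probabilities that a gene is targeted into a single
    gene-level score, weighting each miRNA by its relative expression.
    Illustrative combination rule only; not ComiR's actual model."""
    total = sum(expression.values())
    p_not_targeted = 1.0
    for mirna, p in per_mirna_probs.items():
        w = expression.get(mirna, 0.0) / total if total else 0.0
        p_not_targeted *= (1.0 - w * p)
    return 1.0 - p_not_targeted

# Hypothetical per-miRNA probabilities and expression levels
print(combined_target_probability({'miR-1': 0.8, 'miR-184': 0.4},
                                  {'miR-1': 300.0, 'miR-184': 100.0}))
```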

Keywords: AGO1, coding region, Drosophila melanogaster, microRNA target prediction

Procedia PDF Downloads 412
826 Quantifying Automation in the Architectural Design Process via a Framework Based on Task Breakdown Systems and Recursive Analysis: An Exploratory Study

Authors: D. M. Samartsev, A. G. Copping

Abstract:

As with all industries, architects are using increasing amounts of automation within practice, with approaches such as generative design and the use of AI becoming more commonplace. However, the discourse on the rate at which the architectural design process is being automated is often subjective and lacking in objective figures and measurements. This results in confusion and barriers to effective discourse on the subject, in turn limiting the ability of architects, policy makers, and members of the public to make informed decisions in the area of design automation. This paper proposes the use of a framework to quantify the progress of automation within the design process. The use of a reductionist analysis of the design process allows it to be quantified in a manner that enables direct comparison across different times, as well as locations and projects. The methodology is informed by the design of this framework, taking on the aspects of a systematic review but compressed in time to allow an initial set of data to verify the validity of the framework. The use of such a framework of quantification enables various practical uses, such as predicting the future of the architectural industry with regard to which tasks will be automated, as well as making more informed decisions on the subject of automation on multiple levels, ranging from individual decisions to policy making by governing bodies such as the RIBA. This is achieved by analyzing the design process as a generic task that needs to be performed, then using principles of work breakdown systems to split the task of designing an entire building into smaller tasks, which can then be recursively split further as required. Each task is then assigned a series of milestones that allow for the objective analysis of its automation progress. By combining these two approaches, it is possible to create a data structure that describes how much of the architectural design process is automated. The data gathered in the paper serves the dual purposes of providing the framework with validation and giving insights into the current situation of automation within the architectural design process. The framework can be interrogated in many ways, and preliminary analysis shows that almost 40% of the architectural design process has been automated in some practical fashion at the time of writing, with the rate of progress slowly increasing over the years and the majority of tasks in the design process reaching a new milestone in automation in less than 6 years. Additionally, a further 15% of the design process is currently being automated in some way, with various products in development but not yet released to the industry. Lastly, various limitations of the framework are examined in this paper, as well as further areas of study.
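A minimal sketch of the kind of recursive task structure the framework implies is given below; the task names, weights and automation scores are invented for illustration only.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    """A design task that is either a leaf with an automation score
    (0.0 = fully manual, 1.0 = fully automated) or a parent whose score
    is the weighted average of its sub-tasks."""
    name: str
    weight: float = 1.0              # relative effort within the parent task
    automation: float = 0.0          # used only for leaf tasks
    subtasks: List["Task"] = field(default_factory=list)

    def automation_fraction(self) -> float:
        if not self.subtasks:
            return self.automation
        total_weight = sum(t.weight for t in self.subtasks)
        return sum(t.weight * t.automation_fraction() for t in self.subtasks) / total_weight

# Invented example breakdown of 'design a building'
design = Task("Design building", subtasks=[
    Task("Brief and feasibility", weight=1, automation=0.2),
    Task("Concept design", weight=2, subtasks=[
        Task("Massing studies", weight=1, automation=0.6),   # e.g. generative tools
        Task("Layout drafting", weight=1, automation=0.3),
    ]),
    Task("Technical design", weight=3, automation=0.5),
])
print(round(design.automation_fraction(), 2))  # weighted automation fraction
```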

Keywords: analysis, architecture, automation, design process, technology

Procedia PDF Downloads 74
825 Using Optimal Cultivation Strategies for Enhanced Biomass and Lipid Production of an Indigenous Thraustochytrium sp. BM2

Authors: Hsin-Yueh Chang, Pin-Chen Liao, Jo-Shu Chang, Chun-Yen Chen

Abstract:

Biofuel has drawn much attention as a potential substitute for fossil fuels. However, biodiesel from waste oil, oil crops or other oil sources can only satisfy part of the existing demand for transportation. Owing to its clean, green nature and viability for mass production, using microalgae as a feedstock for biodiesel is regarded as a possible solution for a low-carbon and sustainable society. In particular, Thraustochytrium sp. BM2, an indigenous heterotrophic microalga, possesses the potential for metabolizing glycerol to produce lipids. Hence, it is being considered as a promising microalgae-based oil source for biodiesel production and other applications. This study aimed to optimize the culture pH, scale up the process, assess the feasibility of producing microalgal lipid from crude glycerol, and apply operation strategies following the optimal results from the shake-flask system in a 5 L stirred-tank fermenter to further enhance lipid productivity. Cultivation of Thraustochytrium sp. BM2 without pH control resulted in the highest lipid production of 3944 mg/L and biomass production of 4.85 g/L. Next, when the initial glycerol and corn steep liquor (CSL) concentrations were increased five times (50 g and 62.5 g, respectively), the overall lipid productivity reached 124 mg/L/h. However, when crude glycerol was used as the sole carbon source, its direct addition inhibited culture growth. Therefore, acid and metal salt pretreatment methods were utilized to purify the crude glycerol. Crude glycerol pretreated with acid and CaCl₂ gave the greatest overall lipid productivity of 131 mg/L/h when used as a carbon source and proved to be a better substitute for pure glycerol as the carbon source in the Thraustochytrium sp. BM2 cultivation medium. Engineering operation strategies such as fed-batch and semi-batch operation were applied in the cultivation of Thraustochytrium sp. BM2 to improve lipid production. In the fed-batch cultivation, 132.60 g of biomass and 69.15 g of lipid were harvested. The lipid yield of 0.20 g/g glycerol was the same as in batch cultivation, although the overall lipid productivity was poorer at 107 mg/L/h. In the semi-batch cultivation, the overall lipid productivity reached 158 mg/L/h due to the shorter cultivation time, and the harvested biomass and lipid reached 232.62 g and 126.61 g, respectively. The lipid yield was improved from 0.20 to 0.24 g/g glycerol. In addition, the product costs of the three operation strategies were calculated. The lowest product cost, 12.42 NTD/g lipid, was obtained with the semi-batch operation strategy, a 33% reduction in comparison with the batch operation strategy.
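For clarity, productivity and yield figures of the kind quoted above are normally derived as in the sketch below; the lipid amount, working volume, cultivation time and glycerol consumption used here are placeholders, since they are not stated in the abstract.

```python
def overall_lipid_productivity(lipid_g, working_volume_l, time_h):
    """Overall lipid productivity in mg/L/h."""
    return lipid_g * 1000.0 / working_volume_l / time_h

def lipid_yield_on_glycerol(lipid_g, glycerol_consumed_g):
    """Lipid yield coefficient (g lipid per g glycerol consumed)."""
    return lipid_g / glycerol_consumed_g

# Placeholder numbers for illustration only
print(overall_lipid_productivity(10.0, 2.0, 96.0))  # mg/L/h
print(lipid_yield_on_glycerol(10.0, 41.0))          # g lipid per g glycerol
```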

Keywords: heterotrophic microalga Thraustochytrium sp. BM2, microalgal lipid, crude glycerol, fermentation strategy, biodiesel

Procedia PDF Downloads 120
824 Design and Assessment of Base Isolated Structures under Spectrum-Compatible Bidirectional Earthquakes

Authors: Marco Furinghetti, Alberto Pavese, Michele Rinaldi

Abstract:

Concave Surface Slider devices have been used more and more in real applications for the seismic protection of both bridge and building structures. Several research activities have been carried out in order to investigate the lateral response of this typology of devices, and a reasonably high level of knowledge has been reached. If radial analysis is performed, the frictional force is always aligned with the restoring force, whereas under bidirectional seismic events a bi-axial interaction of the directions of motion occurs, due to the step-wise projection of the main frictional force, which is assumed to be aligned with the trajectory of the isolator. Nonetheless, if non-linear time history analyses have to be performed, standard codes provide precise rules for the definition of an averagely spectrum-compatible set of accelerograms in radial conditions, whereas for bidirectional motions different combinations of the single-component spectra can be found. Moreover, software for the adjustment of natural accelerograms is nowadays available, which leads to a higher quality of spectrum-compatibility and to a smaller dispersion of results for radial motions. In this work, a simplified design procedure is defined for building structures base-isolated by means of Concave Surface Slider devices. Different case study structures have been analyzed. In a first stage, the capacity curve was computed by means of non-linear static analyses on the fixed-base structures: inelastic fiber elements were adopted, and different direction angles of the lateral forces were studied. Based on these results, a linear elastic Finite Element Model was defined, characterized by the same global stiffness as the linear elastic branch of the non-linear capacity curve. Then, non-linear time history analyses were performed on the base-isolated structures by applying seven bidirectional seismic events. The spectrum-compatibility of the bidirectional earthquakes was studied by considering different combinations of the single components and by adjusting single records: with the proposed procedure, the results show a small dispersion and a good agreement with the assumed design values.
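A simplified sketch of the bidirectional force model commonly adopted for Concave Surface Sliders, with the pendular restoring force following the displacement vector and the frictional force projected step-wise along the instantaneous sliding direction, is given below; the load, effective radius and friction coefficient are illustrative, not values from the study.

```python
import numpy as np

def css_horizontal_force(displacement, velocity, axial_load,
                         effective_radius, friction_coeff):
    """Horizontal force of a Concave Surface Slider under bidirectional motion.

    displacement, velocity : 2-D vectors [x, y] (m, m/s)
    axial_load             : vertical load N on the device (N)
    effective_radius       : effective radius of curvature R_eff (m)
    friction_coeff         : sliding friction coefficient mu
    """
    displacement = np.asarray(displacement, dtype=float)
    velocity = np.asarray(velocity, dtype=float)
    # Pendular restoring force, aligned with the displacement vector
    restoring = (axial_load / effective_radius) * displacement
    # Frictional force, projected along the instantaneous trajectory direction
    speed = np.linalg.norm(velocity)
    friction = friction_coeff * axial_load * velocity / speed if speed > 0 else np.zeros(2)
    return restoring + friction

# Illustrative values: 2000 kN load, R_eff = 3.0 m, mu = 0.05
print(css_horizontal_force([0.10, 0.05], [0.3, -0.1], 2000e3, 3.0, 0.05))
```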

Keywords: concave surface slider, spectrum-compatibility, bidirectional earthquake, base isolation

Procedia PDF Downloads 262
823 Comparison of Two Strategies in Thoracoscopic Ablation of Atrial Fibrillation

Authors: Alexander Zotov, Ilkin Osmanov, Emil Sakharov, Oleg Shelest, Aleksander Troitskiy, Robert Khabazov

Abstract:

Objective: Thoracoscopic surgical ablation of atrial fibrillation (AF) can be performed with two technologies. The first strategy uses the AtriCure device (bipolar, non-irrigated, non-clamping); the second uses the Medtronic device (bipolar, irrigated, clamping). The study presents a comparative analysis of the clinical outcomes of the two strategies in thoracoscopic ablation of AF using the AtriCure vs. Medtronic devices. Methods: In a two-centre study, 123 patients underwent thoracoscopic ablation of AF in the period from 2016 to 2020. Patients were divided into two groups. The first group comprised patients treated with the AtriCure device (N=63) and the second group those treated with the Medtronic device (N=60). Patients were comparable in age, gender, and initial severity of their condition. Group 1 was 65% male with a median age of 57 years, while group 2 was 75% male with a median age of 60 years. Group 1 included patients with the paroxysmal form (14.3%), persistent form (68.3%) and long-standing persistent form (17.5%); in group 2 the figures were 13.3%, 13.3% and 73.3%, respectively. The median ejection fraction and indexed left atrial volume were 63% and 40.6 ml/m2 in group 1, and 56% and 40.5 ml/m2 in group 2. In addition, group 1 included 39.7% of patients with chronic heart failure (NYHA Class II) and 4.8% with chronic heart failure (NYHA Class III), compared with 45% and 6.7%, respectively, in group 2. Follow-up consisted of laboratory tests, chest X-ray, ECG, 24-hour Holter monitoring, and cardiopulmonary exercise testing. Duration of freedom from AF, the late mortality rate, and the prevalence of cerebrovascular events were compared between the two groups. Results: Exit block was achieved in all patients. According to the Clavien-Dindo classification of surgical complications, the fraction of adverse events was 14.3% and 16.7% in the 1st and 2nd groups, respectively. The mean follow-up period was 50.4 (31.8; 64.8) months in the 1st group and 30.5 (14.1; 37.5) months in the 2nd group (P=0.0001). In group 1, total freedom from AF was achieved in 73.3% of patients, of whom 25% had additional antiarrhythmic drug (AAD) therapy or catheter ablation (CA); in group 2 the figures were 90% and 18.3%, respectively (for total freedom from AF, P<0.02). At follow-up, the late mortality rate in the 1st group was 4.8%, while in the 2nd group there were no fatal events. The prevalence of cerebrovascular events was higher in the 1st group than in the 2nd (6.7% vs. 1.7%, respectively). Conclusions: Despite the relatively shorter follow-up of the 2nd group, the strategy using the Medtronic device showed quite encouraging results. Further research is needed to evaluate the effectiveness of this strategy over the long term.

Keywords: atrial fibrillation, clamping, ablation, thoracoscopic surgery

Procedia PDF Downloads 80
822 Forecasting Regional Data Using Spatial VARs

Authors: Taisiia Gorshkova

Abstract:

Since the 1980s, spatial correlation models have been used increasingly often to model regional indicators. An increasingly popular method for studying regional indicators is modeling that takes into account spatial relationships between objects belonging to the same economic zone. In the 2000s, a new class of models – spatial vector autoregressions (SpVARs) – was developed. The main difference between standard and spatial vector autoregressions is that in the SpVAR the values of indicators at time t may depend on the values of explanatory variables at the same time t in neighboring regions and on the values of explanatory variables at time t-k in neighboring regions. Thus, the VAR is a special case of the SpVAR in the absence of spatial lags, and the spatial panel data model is a special case of the SpVAR in the absence of time lags. Two specifications of the SpVAR were applied to Russian regional data for 2000-2017. The values of GRP and the regional CPI are used as endogenous variables. The lags of GRP, CPI, and the unemployment rate were used as explanatory variables. For comparison purposes, a standard VAR without spatial correlation was used as a “naïve” model. In the first specification of the SpVAR, the unemployment rate and the values of the dependent variables, GRP and CPI, in neighboring regions at the same moment of time t were included in the equations for GRP and CPI, respectively. To account for the values of indicators in neighboring regions, an adjacency weight matrix is used, in which regions with a common sea or land border are assigned a value of 1 and the rest a value of 0. In the second specification, the values of the dependent variables in neighboring regions at time t were replaced by their values at the previous time t-1. According to the results obtained, when the inflation and GRP of neighbors are added to the model, both inflation and GRP are significantly affected by their previous values; inflation is also positively affected by an increase in unemployment in the previous period and negatively affected by an increase in GRP in the previous period, which corresponds to economic theory. GRP is affected by neither the inflation lag nor the unemployment lag. When the model takes into account lagged values of GRP and inflation in neighboring regions, the results of the inflation modeling are practically unchanged: all indicators except the unemployment lag are significant at the 5% significance level. For GRP, in turn, the GRP lags in neighboring regions also become significant at the 5% significance level. The RMSE was calculated for both the spatial and the “naïve” VARs. The minimum RMSE is obtained with the SpVAR with lagged explanatory variables. Thus, according to the results of the study, it can be concluded that SpVARs can accurately model both the actual values of macro indicators (particularly CPI and GRP) and the general situation in the regions.
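
A minimal sketch (not the author's code) of the second specification described above: each equation regresses the indicator on its own lag, the spatially lagged value of that indicator in neighboring regions at t-1 through a row-normalized adjacency matrix W, and a lagged exogenous regressor. All data, dimensions, and names are hypothetical placeholders for the GRP, CPI, and unemployment series.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical panel: T periods, N regions, endogenous GRP and CPI series
T, N = 18, 10
grp = rng.normal(size=(T, N))
cpi = rng.normal(size=(T, N))
unemp = rng.normal(size=(T, N))          # exogenous lag regressor

# Binary adjacency matrix (1 = common border), row-normalized
W = (rng.random((N, N)) < 0.3).astype(float)
np.fill_diagonal(W, 0.0)
W = W / np.maximum(W.sum(axis=1, keepdims=True), 1)

def spvar_ols(y, exog):
    """Equation-by-equation OLS for one SpVAR equation (per region):
    y_t = c + a*y_{t-1} + b*(W y)_{t-1} + g*x_{t-1} + e_t."""
    Y = y[1:].ravel()
    X = np.column_stack([
        np.ones_like(Y),
        y[:-1].ravel(),              # own lag at t-1
        (y[:-1] @ W.T).ravel(),      # neighbors' values at t-1 (spatial lag)
        exog[:-1].ravel(),           # exogenous regressor at t-1
    ])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    rmse = np.sqrt(np.mean((Y - X @ beta) ** 2))
    return beta, rmse

beta_grp, rmse_grp = spvar_ols(grp, unemp)
beta_cpi, rmse_cpi = spvar_ols(cpi, unemp)
print("GRP equation coefficients:", beta_grp, "RMSE:", rmse_grp)
print("CPI equation coefficients:", beta_cpi, "RMSE:", rmse_cpi)
```

Setting the spatial-lag column to zero recovers the “naïve” VAR used as the benchmark, so the same routine can be reused to compare RMSEs across specifications.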

Keywords: forecasting, regional data, spatial econometrics, vector autoregression

Procedia PDF Downloads 107
821 In vivo Antidiabetic and in vitro Antioxidant Activity of Myrica salicifolia Hochst. ex A. Rich. (Myricaceae) Root Extract in Streptozotocin-Induced Diabetic Mice

Authors: Yohannes Kelifa, Gomathi Periasamy, Aman Karim

Abstract:

Introduction: Diabetes mellitus has become a major public health and economic problem across the globe. Modern antidiabetic drugs have a number of limitations, and scientific investigation of traditional herbal remedies used for diabetes may provide novel leads for the development of new antidiabetic drugs that can be used as alternatives or complements to available antidiabetic allopathic medications. Although Myrica salicifolia Hochst. ex A. Rich. is used for the management of diabetes in Ethiopian traditional medicine, there was no previous scientific evidence of its antidiabetic effect to the authors’ knowledge. This study was undertaken to evaluate the antidiabetic activity of the root extracts of Myrica salicifolia in streptozotocin (STZ)-induced diabetic mice. Methods: Experimental diabetes was induced by intraperitoneal administration of STZ (150 mg/kg) in male mice. Diabetic mice were treated with oral doses of M. salicifolia root extract at 200, 400 and 600 mg/kg, and with its fractions (chloroform, ethyl acetate, n-butanol and aqueous) at a dose of 400 mg/kg, daily for 15 days. The fasting blood glucose level (BGL) was measured on days 0, 5, 10, and 15. The free radical scavenging activity of the crude extract was determined in vitro by the DPPH assay. Statistical significance was assessed by one-way ANOVA, followed by Tukey’s multiple comparison test. Results were considered significant when p < 0.05. Results: Daily administration of the M. salicifolia 80% methanol root extract at the three doses (200, 400 and 600 mg/kg) significantly (p < 0.05, p < 0.01 and p < 0.001) reduced the fasting BGL compared with the diabetic control. The aqueous and n-butanol fractions at a dose of 400 mg/kg produced maximum reductions of the fasting BGL of 42.39% and 52.13%, respectively, on day 15 in STZ-induced diabetic mice. The free radical scavenging activity of the 80% methanol extract of M. salicifolia was comparable to that of ascorbic acid. The IC50 values of the crude extract and ascorbic acid (the reference compound) were found to be 4.54 μg/ml and 4.39 μg/ml, respectively. Conclusion: These findings demonstrate that the methanolic extract of M. salicifolia root and its fractions (n-butanol and aqueous) exhibit significant antihyperglycemic activity in STZ-induced diabetic mice. Furthermore, the results of the present study indicate that M. salicifolia root extract is a potential source of natural antioxidants.
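
For readers unfamiliar with how DPPH-based IC50 values such as those quoted above are typically obtained, the sketch below shows one common calculation route: percent inhibition from control and sample absorbances, followed by linear interpolation to the 50% point. The absorbance readings and concentrations are hypothetical and are not the paper's data.

```python
import numpy as np

def percent_inhibition(a_control, a_sample):
    """DPPH radical scavenging: % inhibition = (Ac - As) / Ac * 100."""
    return (a_control - a_sample) / a_control * 100.0

# Hypothetical absorbance readings at increasing extract concentrations
conc = np.array([1.0, 2.0, 4.0, 8.0, 16.0])         # ug/mL
a_control = 0.90
a_sample = np.array([0.80, 0.68, 0.47, 0.22, 0.08])

inhib = percent_inhibition(a_control, a_sample)

# IC50 estimated by linear interpolation of % inhibition vs. concentration
ic50 = np.interp(50.0, inhib, conc)
print("Inhibition (%):", np.round(inhib, 1))
print("Estimated IC50 (ug/mL):", round(float(ic50), 2))
```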

Keywords: antidiabetic, diabetes mellitus, DPPH, mice, Myrica salicifolia, streptozotocin

Procedia PDF Downloads 166
820 An Investigation into the Influence of Compression on 3D Woven Preform Thickness and Architecture

Authors: Calvin Ralph, Edward Archer, Alistair McIlhagger

Abstract:

3D woven textile composites continue to emerge as an advanced material for structural applications and composite manufacture due to their bespoke nature, through-thickness reinforcement, and near-net-shape capabilities. When 3D woven preforms are produced, they are in their optimal physical state. As 3D weaving is a dry preforming technology, it relies on compression of the preform to achieve the desired composite thickness, fibre volume fraction (Vf), and consolidation. This compression of the preform during manufacture results in changes to its thickness and architecture, which can often lead to under-performance or unintended changes in the 3D woven composite. Unlike traditional 2D fabrics, the bespoke nature and variability of 3D woven architectures make it difficult to know exactly how each 3D preform will behave during processing. Therefore, the focus of this study is to investigate the effect of compression on differing 3D woven architectures in terms of structure, crimp or fibre waviness, and thickness, as well as to analyse the accuracy of available software in predicting how 3D woven preforms behave under compression. To achieve this, 3D preforms were modelled and their compression simulated in WiseTex with varying architectures of binder style, pick density, thickness, and tow size. These architectures were then woven, and samples were dry compression tested to determine the compressibility of the preforms under various pressures. Additional preform samples were manufactured using Resin Transfer Moulding (RTM) with varying compressive force. Composite samples were cross-sectioned, polished, and analysed using microscopy to investigate changes in architecture and crimp. Data from the dry fabric compression and composite samples were then compared alongside the WiseTex models to determine the accuracy of the predictions and to identify the architecture parameters that affect preform compressibility and stability. Results indicate that binder style/pick density, tow size, and thickness have a significant effect on the compressibility of 3D woven preforms, with lower pick density allowing greater compression and distortion of the architecture. It was further highlighted that binder style combined with pressure had a significant effect on changes to the preform architecture: orthogonal binders experienced the highest level of deformation but the highest overall stability under compression, while layer-to-layer binders showed a reduction in binder fibre crimp. In general, the simulations compared reasonably well with the experimental results; however, deviations are evident due to assumptions present within the models.
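
To illustrate why compressed thickness drives the fibre volume fraction mentioned above, the short sketch below (not from the paper) evaluates the standard relationship Vf = fibre areal mass / (fibre density x compacted thickness) for a hypothetical preform; the areal weight, fibre density, and thicknesses are assumed values for illustration only.

```python
def fibre_volume_fraction(n_layers, areal_weight, fibre_density, thickness):
    """Vf = total fibre areal mass / (fibre density * compacted thickness).

    n_layers      : number of fabric layers (1 for a single 3D preform)
    areal_weight  : fabric areal weight [g/m^2]
    fibre_density : fibre density [g/cm^3]
    thickness     : compacted preform thickness [mm]
    """
    mass_per_area = n_layers * areal_weight / 1e6   # g/mm^2
    rho = fibre_density / 1e3                       # g/mm^3
    return mass_per_area / (rho * thickness)

# Hypothetical glass-fibre 3D preform: 4500 g/m^2, rho = 2.55 g/cm^3
for t in (3.5, 3.0, 2.5):   # increasing compression -> lower thickness
    vf = fibre_volume_fraction(1, 4500.0, 2.55, t)
    print(f"thickness {t} mm -> Vf = {vf:.2f}")
```

The same relation explains why over-compression distorts the architecture: once the target Vf forces the thickness below what the weave can accommodate, the binder and tow paths must deform.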

Keywords: 3D woven composites, compression, preforms, textile composites

Procedia PDF Downloads 109
819 The Missing Link in Holistic Health Care: Value-Based Medicine in Entrustable Professional Activities for Doctor-Patient Relationship

Authors: Ling-Lang Huang

Abstract:

Background: Holistic health care should ideally cover the physical, mental, spiritual, and social aspects of a patient. With the very constrained time available in the current clinical practice system, medical decisions often tip the balance in favor of evidence-based medicine (EBM) over the patient's personal values. Even in the era of competence-based medical education (CBME), when scrutinizing the items of entrustable professional activities (EPAs), we found that EPAs for establishing the doctor-patient relationship remain incomplete or even missing. This phenomenon prompted us to launch this project advocating value-based medicine (VBM), which emphasizes the importance of the patient's values in medical decisions. A true and effective doctor-patient communication and relationship should be a well-balanced harmony of EBM and VBM. By building VBM into current EPAs, we can further promote genuine shared decision making (SDM) and fix the missing link in holistic health care. Methods: In this project, we will identify the EPA elements crucial for establishing an ideal doctor-patient relationship through three distinct pairs of doctor-patient relationships: patients with pulmonary arterial hypertension (relatively young but with grave disease), patients undergoing surgery (facing critical medical decisions), and patients with terminal diseases (facing forthcoming death). We will search for important EPA elements through the following steps: 1. Narrative approach to delineate patients' values among the three distinct groups. 2. Hermeneutics-based interviews: semi-structured interviews will be conducted with both patients and physicians, followed by qualitative analysis of the collected information by compiling, disassembling, reassembling, interpreting, and concluding. 3. Preliminary construction of those VBM elements into EPAs for the doctor-patient relationship in the three groups. Expected Outcomes: The results of this project will provide invaluable information regarding the impact of patients' values, in different medical situations, on the final medical decision. The competence to blend and balance patients' values with evidence from the clinical sciences is the missing link in holistic health care and should be established in future EPAs to enhance effective SDM.

Keywords: value-based medicine, shared decision making, entrustable professional activities, holistic health care

Procedia PDF Downloads 89
818 Spark Plasma Sintering/Synthesis of Alumina-Graphene Composites

Authors: Nikoloz Jalabadze, Roin Chedia, Lili Nadaraia, Levan Khundadze

Abstract:

Nanocrystalline materials in powder form can be manufactured by a number of different methods; however, manufacturing composite material products in the same nanocrystalline state is still a problem, because the compaction and synthesis of nanocrystalline powders are accompanied by intensive particle growth – a process that promotes the formation of parts in an ordinary crystalline state instead of the desirable nanocrystalline state. To date, spark plasma sintering (SPS) has been considered the most promising and energy-efficient method for producing dense bodies of composite materials. The main advantage of the SPS method in comparison with other methods is the low temperature and short duration of the sintering procedure, which ultimately makes it possible to obtain dense material with a nanocrystalline structure. Graphene has recently garnered significant interest as a reinforcing phase in composite materials because of its excellent electrical, thermal, and mechanical properties. Graphene nanoplatelets (GNPs) in particular have attracted much interest as reinforcements for ceramic matrix composites (mostly in Al2O3, Si3N4, TiO2, ZrB2, etc.). SPS has been shown to effectively and fully densify a variety of ceramic systems, including Al2O3, often with improvements in mechanical and functional behavior. Alumina consolidated by SPS has been shown to have superior hardness, fracture toughness, plasticity, and optical translucency compared to conventionally processed alumina. Knowledge of how GNPs influence sintering behavior is important for effective processing and manufacture. In this study, the effects of GNPs on the SPS processing of Al2O3 are investigated by systematically varying the sintering temperature, holding time, and pressure. Our experiments showed that the SPS process is also appropriate for the synthesis of nanocrystalline powders of alumina-graphene composites. Depending on the size of the molds, it is possible to obtain different amounts of nanopowder. The structure and the physico-chemical, mechanical, and performance properties of the elaborated composite materials were investigated. The results of this study provide a fundamental understanding of the effects of GNPs on sintering behavior, thereby providing a foundation for future optimization of the processing of these promising nanocomposite systems.

Keywords: alumina oxide, ceramic matrix composites, graphene nanoplatelets, spark-plasma sintering

Procedia PDF Downloads 343
817 A Modified QuEChERS Method Using Activated Carbon Fibers as r-DSPE Sorbent for Sample Cleanup: Application to Pesticides Residues Analysis in Food Commodities Using GC-MS/MS

Authors: Anshuman Srivastava, Shiv Singh, Sheelendra Pratap Singh

Abstract:

A simple, sensitive, and effective gas chromatography tandem mass spectrometry (GC-MS/MS) method was developed for the simultaneous analysis of multiple pesticide residues (organophosphates, organochlorines, synthetic pyrethroids, and herbicides) in food commodities using phenolic resin based activated carbon fibers (ACFs) as the reversed-dispersive solid phase extraction (r-DSPE) sorbent in a modified QuEChERS (Quick Easy Cheap Effective Rugged Safe) method. The acetonitrile-based QuEChERS technique was used for the extraction of the analytes from the food matrices, followed by sample cleanup with ACFs instead of the traditionally used primary secondary amine (PSA). Different physico-chemical characterization techniques such as Fourier transform infrared spectroscopy, scanning electron microscopy, X-ray diffraction, and Brunauer-Emmett-Teller surface area analysis were employed to investigate the engineering and structural properties of the ACFs. The recovery of pesticides and herbicides was tested at concentration levels of 0.02 and 0.2 mg/kg in different commodities such as cauliflower, cucumber, banana, apple, wheat, and black gram. The recoveries of all twenty-six pesticides and herbicides were within the acceptable range (70-120%) according to the SANCO guideline, with relative standard deviation values < 15%. The limit of detection and limit of quantification of the method were in the ranges of 0.38-3.69 ng/mL and 1.26-12.19 ng/mL, respectively. In the traditional QuEChERS method, PSA used as the r-DSPE sorbent plays a vital role in the sample cleanup process and demonstrates good recoveries for multiclass pesticides. This study reports that ACFs are better at removing co-extractives than PSA, without compromising the recoveries of the pesticides from the food matrices. Further, ACFs remove the need for the charcoal that is added alongside PSA in the traditional QuEChERS method to remove pigments. The developed method will be cost-effective because ACFs are significantly cheaper than PSA. The proposed modified QuEChERS method is thus more robust and effective and has better sample cleanup efficiency for multiclass, multi-pesticide residue analysis in different food matrices such as vegetables, grains, and fruits.
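
As a hedged illustration of how the validation figures quoted above (recovery, RSD, LOD, LOQ) are commonly derived, the sketch below computes spike recovery and RSD from replicate measurements and calibration-based LOD/LOQ using the usual 3.3σ/slope and 10σ/slope rules; the abstract does not state which LOD/LOQ approach was used, and all numbers here are hypothetical.

```python
import numpy as np

def recovery_stats(measured, spiked):
    """Percent recovery and relative standard deviation for replicate spikes."""
    rec = np.asarray(measured) / spiked * 100.0
    return rec.mean(), rec.std(ddof=1) / rec.mean() * 100.0

def lod_loq(conc, response):
    """LOD = 3.3*s/slope, LOQ = 10*s/slope from a linear calibration curve,
    where s is the standard deviation of the regression residuals."""
    slope, intercept = np.polyfit(conc, response, 1)
    resid = np.asarray(response) - (slope * np.asarray(conc) + intercept)
    s = resid.std(ddof=2)
    return 3.3 * s / slope, 10.0 * s / slope

# Hypothetical replicate recoveries at a 0.02 mg/kg spike level
mean_rec, rsd = recovery_stats([0.0185, 0.0201, 0.0192, 0.0178, 0.0196], 0.02)
print(f"Recovery {mean_rec:.1f}%, RSD {rsd:.1f}%")

# Hypothetical calibration (ng/mL vs. detector response)
conc = np.array([1, 5, 10, 25, 50, 100], dtype=float)
resp = np.array([980, 5100, 10150, 24800, 50900, 101200], dtype=float)
print("LOD, LOQ (ng/mL):", tuple(round(v, 2) for v in lod_loq(conc, resp)))
```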

Keywords: QuEChERS, activated carbon fibers, primary secondary amine, pesticides, sample preparation, carbon nanomaterials

Procedia PDF Downloads 239
816 Usage of Cyanobacteria in Battery: Saving Money, Enhancing the Storage Capacity, Making Portable, and Supporting the Ecology

Authors: Saddam Husain Dhobi, Bikrant Karki

Abstract:

The main objective of this paper is to save money, help balance the ecosystem of terrestrial organisms, control global warming, and enhance the storage capacity of a battery of the required weight and thinness by using cyanobacteria in the battery. To fulfill this purpose, different approaches are combined: analytical, biological, chemical, theoretical, and physical, together with some engineering design. Using these methods, a special type of battery can be produced that has long life and high storage capacity, is environmentally clean, and saves money by using the byproduct of cyanobacteria, i.e., glucose. Cyanobacteria are a special type of bacteria that produce different extracellular glucoses and oxygen with the help of a little sunlight, water, and carbon dioxide, and they can survive in freshwater, marine, and terrestrial environments. In this process, more O₂ is produced than by plants, due to the rapid growth rate of cyanobacteria. The materials required to produce glucose with the help of cyanobacteria are easily available. Since CO₂ is a greenhouse gas that causes global warming, utilizing this gas helps preserve the ecological balance; the byproduct glucose (C₆H₁₂O₆) can be used as a raw material for the battery, while the escaping O₂ is used by living organisms. The glucose produced by cyanobacteria enters the Krebs cycle (citric acid cycle), in which it is completely oxidized and all the available energy of the glucose molecule is released in the form of electrons and protons. With suitable anodes and cathodes, these electrons and protons can be captured to produce the required electric current with the help of the cyanobacterial byproduct. According to the "Virginia Tech bio-battery" and "Sony" work, 13 enzymes and air are used to extract nearly 24 electrons from a single glucose unit, with an output power of 0.8 mW/cm², a current density of 6 mA/cm², and an energy storage density of 596 Ah/kg. This last figure is impressive, at roughly 10 times the energy density of the lithium-ion batteries in mobile devices. By using cyanobacteria in a battery, we can consume carbon dioxide, help mitigate global warming, and enhance the storage capacity of the battery to more than 10 times that of a lithium battery, while saving money and balancing the ecology. In this way, energy can be produced from cyanobacteria and used in a battery for different benefits. In addition, due to their mass, size, and easy cultivation, they are well suited to maintaining the size of the battery. Hence, cyanobacteria can be used to make a battery of suitable size, with enhanced storage capacity, environmental benefits, portability, and so on.
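
A back-of-the-envelope Faraday-law check (not taken from the cited bio-battery work) of the 24-electron figure quoted above is sketched below; it gives the theoretical specific charge of the glucose fuel alone, which is larger than the 596 Ah/kg reported figure, presumably because the latter refers to the complete cell rather than to the fuel by itself.

```python
# Hypothetical illustration of Faraday-law arithmetic for glucose oxidation.
F = 96485.0          # Faraday constant [C per mol of electrons]
M_GLUCOSE = 180.16   # molar mass of glucose [g/mol]
N_ELECTRONS = 24     # electrons harvested per glucose unit (as cited above)

charge_per_gram = N_ELECTRONS * F / M_GLUCOSE          # [C/g]
specific_charge = charge_per_gram * 1000.0 / 3600.0    # [Ah/kg of glucose]
print(f"Theoretical specific charge: {specific_charge:.0f} Ah/kg of glucose")
# ~3570 Ah/kg for the glucose alone; device-level figures are lower because
# they include the mass of electrodes, enzymes/catalysts, and packaging.
```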

Keywords: anode, byproduct, cathode, cyanobacteria, glucose, storage capacity

Procedia PDF Downloads 313
815 Influence of the Location of Flood Embankments on the Condition of Oxbow Lakes and Riparian Forests: A Case Study of the Middle Odra River Beds on the Example of Dragonflies (Odonata), Ground Beetles (Coleoptera: Carabidae) and Plant Communities

Authors: Magda Gorczyca, Zofia Nocoń

Abstract:

Past and current studies from different countries have shown that river engineering leads to environmental degradation and the extinction of many species - often those protected by local and international wildlife conservation laws. Over the years, the main focus of river utilization has shifted from industrial applications to recreation and wildlife preservation, with an emphasis on maintaining biodiversity, which plays a significant role in mitigating climate change. Thus, an opportunity has appeared to recreate flooding areas and natural habitats, which are very rare on a European scale. Additionally, river restoration helps to avoid floods and periodic droughts, which are usually very damaging to the economy. In this research, the biodiversity of dragonflies and ground beetles was analyzed in the context of plant communities and forest stand structure. The results were enriched with data from past and current literature. A comparison was made between two parts of the Odra river: a part where the oxbow lake and riparian forest were separated from the river bed by an embankment, and a part of the river with the floodplain left intact. An assessment of the validity of relocating the embankments was made based on the research results. In the period between May and September, insects were collected, phytosociological analyses were carried out, and forest stand structure properties were determined. In the part of the river not separated by the embankments, rare and protected plant species were recorded (e.g., Trapa natans, Salvinia natans), as well as a greater species and quantitative diversity of dragonflies. The ground beetle fauna, though, was richer in the area separated by the embankment. Even though the research was carried out during only one season and in a limited area, the results can be a starting point for further extended research and may contribute to securing legal wildlife protection and restoration of the studied area. During the research, the presence of the invasive species Impatiens parviflora, Echinocystis lobata, and Procyon lotor was observed, which may lead to a loss of the natural values of the studied areas.

Keywords: carabidae, floodplains, middle Odra river, Odonata, oxbow lakes, riparian forests

Procedia PDF Downloads 122
814 The Analysis of the Influence of Islamic Religiosity on Tax Morale among Self-Employed Taxpayers in Indonesia

Authors: Nurul Hidayat

Abstract:

Based on data from the Indonesian Tax Authority, the contribution of self-employed taxpayers in Indonesia was just approximately 1-2 percent of total tax revenues during 2013-2015. This phenomenon requires greater attention to understand what factors may affect it. The fact that Indonesia has the largest Muslim population in the world makes it important to analyze whether there potentially exists a correlation between Islamic religiosity and low tax contribution. The low level of tax contribution may provide an initial indication of low tax morale and tax compliance. This study will extend the existing literature by investigating the influence of Islamic religiosity as a moderating effect on the relationship between perceptions of government legitimacy and tax morale among self-employed taxpayers. There are some factors to consider when taking into account the issue of Islamic religiosity and its relationship with tax morale in this study. Firstly, in Islam, there is a debate surrounding the lawfulness of tax. Some argue that Muslims should not have to pay tax, while others argue that the imposition of tax is legitimate in certain circumstances. These views may have an impact on government legitimacy and tax morale. Secondly, according to Islamic sharia, Islam recognizes another compulsory payment, i.e., zakat, which to some extent has characteristics similar to tax. Under Indonesian Income Tax Law, zakat payments are accommodated only as a deduction from taxable income; as a comparison, Malaysia treats zakat as a tax rebate. The treatment of zakat only as a taxable income deduction may also raise a conflicting issue regarding the perception of tax fairness that could possibly erode the perception of government legitimacy and tax morale. Based on the considerations above, perceptions of government legitimacy become important in influencing people's willingness to pay tax, while the level of Islamic religiosity is a potential moderator of that correlation. In terms of measuring the relationship among the variables, this study utilizes mixed quantitative and qualitative methods. The quantitative methods use surveys of approximately 400 targeted taxpayers, while the qualitative methods employ in-depth interviews with 12 people, consisting of experts, Islamic leaders, and selected taxpayers. In particular, the research is being conducted in Indonesia, the country with the largest Muslim population in the world, which has not fully implemented Islamic law as state law. The results indicate that Islamic religiosity has a moderating effect on the way taxpayers perceive government legitimacy, which in turn influences tax morale. The findings of this study support the improvement of tax regulations by specifically considering tax deductions for zakat.
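
A minimal sketch (not the study's analysis) of how such a moderating effect is typically tested quantitatively: tax morale is regressed on perceived legitimacy, religiosity, and their interaction term, and a significant interaction coefficient indicates moderation. The data below are simulated, and all variable names and coefficients are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 400  # roughly the survey size mentioned above

# Simulated standardized scores (hypothetical, not the study's survey data)
legitimacy = rng.normal(size=n)          # perceived government legitimacy
religiosity = rng.normal(size=n)         # Islamic religiosity
tax_morale = (0.4 * legitimacy
              + 0.1 * religiosity
              + 0.3 * legitimacy * religiosity   # built-in moderation effect
              + rng.normal(scale=1.0, size=n))

X = sm.add_constant(np.column_stack([
    legitimacy,
    religiosity,
    legitimacy * religiosity,   # interaction term carries the moderation
]))
model = sm.OLS(tax_morale, X).fit()
print(model.summary(xname=["const", "legitimacy", "religiosity", "interaction"]))
# A significant interaction coefficient indicates that religiosity moderates
# the effect of perceived legitimacy on tax morale.
```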

Keywords: Islamic religiosity, tax morale, government legitimacy, zakat

Procedia PDF Downloads 203