Search results for: cosmopolitan city comparison
1048 Evaluation of Teaching Team Stress Factors in Two Engineering Education Programs
Authors: Kari Bjorn
Abstract:
Team learning has been studied and modeled as a double-loop model and its variations. Metacognition has also been suggested as a concept to describe how team learning is more than a simple sum of the individual learning of the team members. Team learning correlates positively with both the individual motivation of the members and the collective factors within the team. The team learning of previously very independent members of two teaching teams is analyzed. Applied science universities are training future professionals with ever more diversified and multidisciplinary skills. The units of teaching and learning are increasingly larger for several reasons. First, multidisciplinary skill development requires more active learning and richer learning environments and experiences; this occurs in student teams. Secondly, teaching multidisciplinary skills requires multidisciplinary, team-based teaching from the teachers as well. Team formation phases have been identified and are widely accepted. Team role stress has been analyzed in project teams. Projects typically have a well-defined goal and organization. This paper explores the team stress of two teacher teams running two course units in parallel in engineering education. The first is Industrial Automation Technology and the second is Development of Medical Devices. The courses have separate student groups and are taught on different campuses. Both run in parallel within an 8-week period. Each is taught by a group of four teachers with several years of teaching experience, gained individually rather than in teams. A survey based on the team role stress scale items is administered to both teaching groups at the beginning of the course and at the end of the course. The inventory of questions covers the factors of ambiguity, conflict, quantitative role overload and qualitative role overload. Some comparison to the study on project teams can be drawn. The team development stage of the two teaching groups is different. Relating the team role stress factors to the development stage of the group can reveal the potential of management actions to promote team building and to understand the maturity of functional and well-established teams. Mature teams indicate higher job satisfaction and deliver higher performance. In particular, teaching teams, which deliver the highly intangible results of learning outcomes, are sensitive to issues in job satisfaction and team conflicts. Because team teaching is increasing, the paper provides a review of the relevant theories and initial comparative and longitudinal results of the team role stress factors applied to teaching teams.
Keywords: engineering education, stress, team role, team teaching
Procedia PDF Downloads 225
1047 The Textual Criticism on the Age of ‘Wan Li’ Shipwreck Porcelain and Its Comparison with ‘Witte Leeuw’ and Hatcher Shipwreck Porcelain
Authors: Yang Liu, Dongliang Lyu
Abstract:
After the Wanli shipwreck was discovered 60 miles off the east coast of Tanjong Jara in Malaysia, numerous marvelous ceramic shards have been salvaged from the seabed. Remarkable pieces of Jingdezhen blue-and-white porcelain recovered from the site represent the essential part of the fascinating research. The porcelain cargo of the Wanli shipwreck is significant to studies on exported porcelains and the Jingdezhen porcelain manufacturing industry of the late Ming dynasty. Using ceramic shard categorization and the study of Chinese and Western historical documents as a research strategy, the paper aims to shed new light on the classification of the Wanli shipwreck wares, with Jingdezhen kiln ceramics as its main focus. The article also discusses Jingdezhen blue-and-white porcelains from the perspective of domestic versus export markets and further proceeds to the systematization and analysis of the Wanli shipwreck porcelain, which bears witness to the forms, styles, and types of decoration that were being traded in this period. The porcelain data from two other shipwreck projects, Witte Leeuw and Hatcher, were chosen as comparative case studies, and the Wanli shipwreck Jingdezhen blue-and-white porcelain is being reinterpreted in the context of the art history and archeology of the region. The marine archaeologist Sten Sjostrand named the ship the ‘Wanli shipwreck’ because its porcelain cargoes are typical of those made during the reign of Emperor Wanli of the Ming dynasty. Though some scholars question the appropriateness of the name, history's final verdict is still to be made. Based on previous historical argumentation, the article uses a comparative approach to review the Wanli shipwreck blue-and-white porcelains against porcelains unearthed from tombs or abandoned in towns and carrying time-specific reign marks. All these materials provide very strong evidence suggesting that the porcelain recovered from the Wanli ship can be dated to as early as the second year of the Tianqi era (1622) and the early Chongzhen reign. Lastly, some blue-and-white porcelain intended for the domestic market and some bowls of blue-and-white porcelain from Jingdezhen kilns recovered from the Wanli shipwreck all carry at the bottom a specific residue from the firing process. The author provides a corresponding analysis of these two interesting phenomena.
Keywords: blue-and-white porcelain, Ming dynasty, Jingdezhen kiln, Wanli shipwreck
Procedia PDF Downloads 189
1046 Creatine Associated with Resistance Training Increases Muscle Mass in the Elderly
Authors: Camila Lemos Pinto, Juliana Alves Carneiro, Patrícia Borges Botelho, João Felipe Mota
Abstract:
Sarcopenia, a syndrome characterized by progressive and generalized loss of skeletal muscle mass and strength, currently affects over 50 million people and increases the risk of adverse outcomes such as physical disability, poor quality of life and death. The aim of this study was to examine the efficacy of creatine supplementation associated with resistance training on muscle mass in the elderly. A 12-week, double-blind, randomized, parallel-group, placebo-controlled trial was conducted. Participants were randomly allocated into one of the following groups: placebo with resistance training (PL+RT, n=14) and creatine supplementation with resistance training (CR+RT, n=13). The subjects from the CR+RT group received 5 g/day of creatine monohydrate and the subjects from the PL+RT group were given the same dose of maltodextrin. Participants were instructed to ingest the supplement on non-training days immediately after lunch and on training days immediately after resistance training sessions, dissolved in a lemon-flavored beverage containing 100 g of maltodextrin. Participants of both groups undertook a supervised exercise training program for 12 weeks (3 times per week). The subjects were assessed at baseline and after 12 weeks. The primary outcome was muscle mass, assessed by dual energy X-ray absorptiometry (DXA). The secondary outcome was the diagnosis of participants with one of the three stages of sarcopenia (presarcopenia, sarcopenia and severe sarcopenia) by skeletal muscle mass index (SMI), handgrip strength and gait speed. The CR+RT group had a significant increase in SMI and muscle mass (p<0.0001), a significant decrease in android and gynoid fat (p=0.028 and p=0.035, respectively) and a tendency towards decreasing body fat (p=0.053) after the intervention. The PL+RT group only had a significant increase in SMI (p=0.007). The main finding of this clinical trial indicated that creatine supplementation combined with resistance training was capable of increasing muscle mass in our elderly cohort (p=0.02). In addition, the number of subjects diagnosed with one of the three stages of sarcopenia at baseline decreased in the creatine-supplemented group in comparison with the placebo group (CR+RT, n=-3; PL+RT, n=0). In summary, 12 weeks of creatine supplementation associated with resistance training resulted in increases in muscle mass. This is the first study with elderly participants of both sexes to show a similar increase in muscle mass with a smaller quantity of creatine supplementation over a short period. Future long-term research should investigate the effects of these interventions in sarcopenic elderly.
Keywords: creatine, dietetic supplement, elderly, resistance training
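To make the staging criterion concrete, the short Python sketch below applies an EWGSOP-style rule (low SMI alone = presarcopenia; plus low strength or low gait speed = sarcopenia; all three = severe). The cut-off values are illustrative placeholders, not the thresholds used in the trial.

```python
# Illustrative sketch (not the authors' code): staging sarcopenia from SMI,
# handgrip strength and gait speed. Cut-off values are hypothetical placeholders.
SMI_CUTOFF = 7.26      # kg/m^2, assumed threshold for low muscle mass
GRIP_CUTOFF = 30.0     # kg, assumed threshold for low strength
SPEED_CUTOFF = 0.8     # m/s, assumed threshold for low gait speed

def skeletal_muscle_index(appendicular_lean_mass_kg: float, height_m: float) -> float:
    """SMI = appendicular lean mass (kg) divided by height squared (m^2)."""
    return appendicular_lean_mass_kg / height_m ** 2

def sarcopenia_stage(smi: float, grip_kg: float, gait_speed_ms: float) -> str:
    low_mass = smi < SMI_CUTOFF
    low_strength = grip_kg < GRIP_CUTOFF
    low_performance = gait_speed_ms < SPEED_CUTOFF
    if not low_mass:
        return "no sarcopenia"
    if low_strength and low_performance:
        return "severe sarcopenia"
    if low_strength or low_performance:
        return "sarcopenia"
    return "presarcopenia"

print(sarcopenia_stage(skeletal_muscle_index(19.5, 1.70), 28.0, 0.9))
```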
Procedia PDF Downloads 474
1045 Comparison of EMG Normalization Techniques Recommended for Back Muscles Used in Ergonomics Research
Authors: Saif Al-Qaisi, Alif Saba
Abstract:
Normalization of electromyography (EMG) data in ergonomics research is a prerequisite for interpreting the data. Normalizing accounts for variability in the data due to differences in participants’ physical characteristics, electrode placement protocols, time of day, and other nuisance factors. Typically, normalized data is reported as a percentage of the muscle’s isometric maximum voluntary contraction (%MVC). Various MVC techniques have been recommended in the literature for normalizing EMG activity of back muscles. This research tests and compares the MVC techniques recommended in the literature for three back muscles commonly used in ergonomics research: the lumbar erector spinae (LES), latissimus dorsi (LD), and thoracic erector spinae (TES). Six healthy males from a university population participated in this research. Five different MVC exercises were compared for each muscle using the Trigno wireless EMG system (Delsys Inc.). Since the LES and TES share similar functions in controlling trunk movements, their MVC exercises were the same, which included trunk extension at -60°, trunk extension at 0°, trunk extension while standing, hip extension, and the arch test. The MVC exercises identified in the literature for the LD were chest-supported shoulder extension, prone shoulder extension, lat pull-down, internal shoulder rotation, and abducted shoulder flexion. The maximum EMG signal was recorded during each MVC trial, and then the averages were computed across participants. A one-way analysis of variance (ANOVA) was utilized to determine the effect of MVC technique on muscle activity. Post-hoc analyses were performed using the Tukey test. The MVC technique effect was statistically significant for each of the muscles (p < 0.05); however, a larger sample of participants was needed to detect significant differences in the Tukey tests. The arch test was associated with the highest EMG average at the LES, and it also resulted in the maximum EMG activity more often than the other techniques (three out of six participants). For the TES, trunk extension at 0° was associated with the largest EMG average, and it resulted in the maximum EMG activity the most often (three out of six participants). For the LD, participants obtained their maximum EMG either from chest-supported shoulder extension (three out of six participants) or prone shoulder extension (three out of six participants). Chest-supported shoulder extension, however, had a larger average than prone shoulder extension (0.263 and 0.240, respectively). Although the aforementioned techniques yielded the highest averages, they did not always result in the maximum EMG activity. If an accurate estimate of the true MVC is desired, more than one technique may have to be performed. This research provides additional MVC techniques for each muscle that may elicit the maximum EMG activity.
Keywords: electromyography, maximum voluntary contraction, normalization, physical ergonomics
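As an illustration of the normalization step, the following Python sketch expresses a task trial’s EMG as %MVC using the largest peak across a participant’s MVC trials; it is a minimal example with placeholder data, not the authors’ processing pipeline.

```python
# Minimal sketch (assumed workflow): normalize a trial's EMG to the maximum
# value recorded across a participant's MVC trials for the same muscle.
import numpy as np

def normalize_to_mvc(trial_emg: np.ndarray, mvc_trials: list[np.ndarray]) -> np.ndarray:
    """Express rectified EMG as a percentage of the largest MVC peak."""
    mvc_peak = max(np.max(np.abs(trial)) for trial in mvc_trials)
    return 100.0 * np.abs(trial_emg) / mvc_peak

# Example: three hypothetical MVC exercises for one muscle; the largest peak
# across them serves as the 100 %MVC reference for the work-task trial.
task = np.random.rand(2000)                      # placeholder task-trial envelope
mvc_set = [np.random.rand(2000) for _ in range(3)]
task_percent_mvc = normalize_to_mvc(task, mvc_set)
```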
Procedia PDF Downloads 193
1044 Civic E-Participation in Central and Eastern Europe: A Comparative Analysis
Authors: Izabela Kapsa
Abstract:
Civic participation is an important aspect of democracy. The contemporary model of democracy is based on citizens' participation in political decision-making (deliberative democracy, participatory democracy). This participation takes many forms of activity, such as the display of slogans and symbols, voting, social consultations, political demonstrations, membership in political parties or organizing civil disobedience. The countries of Central and Eastern Europe after 1989 are characterized by great social, economic and political diversity. Civil society is also part of the process of democratization. Civil society, founded on the rule of law, civil rights such as freedom of speech and association, and private ownership, was to play a central role in the development of liberal democracy. Among the many interpretations of the concepts defining contemporary democracy, one can assume that the terms civil society and democracy, although different in meaning, nowadays overlap. In the post-communist countries, the process of shaping and maturing societies took place in the context of a struggle with a state governed by undemocratic power. Distrust of, or withdrawal from, the institutions of the representative state, which in the past was the only way to manifest and defend one's identity, became after the breakthrough one of the main obstacles to the development of civil society. In Central and Eastern Europe, the development of civil society faces many challenges, for example, the elimination of economic poverty, the need for educational campaigns, consciousness-related obstacles, the formation of social capital and the deficit of social activity. Obviously, civil society entails not only electoral turnout but also broader participation in the decision-making process, which is impossible without direct and participative democratic institutions. This article considers such broad forms of civic participation and their characteristics in Central and Eastern Europe. The paper attempts to analyze the functioning of electronic forms of civic participation in Central and Eastern European states. These are not limited to referendums or referendum initiatives but also cover other forms of political participation, such as public consultations, participative budgets, or e-government. In particular, this paper will broadly present electronic administration tools, the application of which results from both legal regulations and increasingly common practice in state and city management. In the comparative analysis, the experiences of post-communist bloc countries will be summed up to indicate the challenges and possible goals for further development of this form of citizen participation in the political process. The author argues that, to function efficiently and effectively, states need to involve their citizens in the political decision-making process, especially with the use of electronic tools.
Keywords: Central and Eastern Europe, e-participation, e-government, post-communism
Procedia PDF Downloads 193
1043 Comparison of Cardiovascular and Metabolic Responses Following In-Water and On-Land Jump in Postmenopausal Women
Authors: Kuei-Yu Chien, Nai-Wen Kan, Wan-Chun Wu, Guo-Dong Ma, Shu-Chen Chen
Abstract:
Purpose: The purpose of this study was to investigate the responses of systolic blood pressure (SBP), diastolic blood pressure (DBP), heart rate (HR), rating of perceived exertion (RPE) and lactate following continued high-intensity interval exercise in water and on land. The results can serve as an exercise program design reference for health care and fitness professionals. Method: A total of 20 volunteer postmenopausal women were included in this study. The inclusion criteria were: duration of menopause > 1 year and a sedentary lifestyle, defined as engaging in moderate-intensity exercise less than three times per week or less than 20 minutes per day. Participants visited the experimental site three times. On the first visit, body composition was measured and each participant filled out the questionnaire. Participants were randomly assigned to the exercise environment (water or land) on the second and third visits. Water exercise testing was performed in water at trochanter level. In the continued jump testing, each set consisted of 10 seconds of maximal voluntary jumping, performed for two sets; one minute of dynamic rest (walking or running) at 50% heart rate reserve was included within each set. SBP, DBP, HR, RPE of the whole body/thigh (RPEW/RPET) and lactate were measured before and after testing. HR, RPEW, and RPET were monitored after 1, 2, and 10 min of exercise testing. SBP and DBP were measured after 10 and 30 min of exercise testing. Results: The responses of SBP and DBP after exercise testing in water were higher than those on land. Lactate levels after exercise testing in water were lower than those on land. The responses of RPET were lower than those on land at 1 and 2 minutes post exercise. Heart rate recovery in water was faster than on land at 5 minutes post exercise. Conclusion: This study showed that water interval jump exercise induces higher cardiovascular responses with lower RPE responses and lactate levels than on-land jump exercise in postmenopausal women. Fatigue is one of the major reasons that obstruct exercise behavior. Jump exercise could enhance cardiorespiratory fitness, lower-extremity power, strength, and bone mass, offering several health benefits for middle-aged and older adults. This study showed that water interval jumping could feel more relaxed even without reaching the same intensity as land-based cardiorespiratory exercise.
Keywords: interval exercise, power, recovery, fatigue
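For readers unfamiliar with prescribing intensity as a percentage of heart rate reserve, the sketch below applies the Karvonen formula; whether the authors computed the 50% HRR target exactly this way is an assumption.

```python
# Hedged sketch: turning %HRR into a target heart rate with the Karvonen formula.
def target_hr_from_hrr(hr_rest: float, hr_max: float, intensity: float = 0.5) -> float:
    """Target HR = HRrest + intensity * (HRmax - HRrest)."""
    return hr_rest + intensity * (hr_max - hr_rest)

# Example for a hypothetical 60-year-old participant (HRmax estimated as 220 - age):
print(target_hr_from_hrr(hr_rest=72, hr_max=220 - 60, intensity=0.5))  # 116 bpm
```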
Procedia PDF Downloads 408
1042 The Automatisation of Dictionary-Based Annotation in a Parallel Corpus of Old English
Authors: Ana Elvira Ojanguren Lopez, Javier Martin Arista
Abstract:
The aims of this paper are to present the automatisation procedure adopted in the implementation of a parallel corpus of Old English, as well as to assess the progress of automatisation with respect to tagging, annotation, and lemmatisation. The corpus consists of an aligned parallel text with word-for-word Old English–English comparison that provides the Old English segment with inflectional form tagging (gloss, lemma, category, and inflection) and lemma annotation (spelling, meaning, inflectional class, paradigm, word-formation and secondary sources). This parallel corpus is intended to fill a gap in the field of Old English, in which no parallel and/or lemmatised corpora are available, while the average amount of corpus annotation is low. With this background, this presentation has two main parts. The first part, which focuses on tagging and annotation, selects the layouts and fields of lexical databases that are relevant for these tasks. Most information used for the annotation of the corpus can be retrieved from the lexical and morphological database Nerthus and the database of secondary sources Freya. These are the sources of linguistic and metalinguistic information that will be used for the annotation of the lemmas of the corpus, including morphological and semantic aspects as well as the references to the secondary sources that deal with the lemmas in question. Although substantially adapted and re-interpreted, the lemmatised part of these databases draws on the standard dictionaries of Old English, including The Student's Dictionary of Anglo-Saxon, An Anglo-Saxon Dictionary, and A Concise Anglo-Saxon Dictionary. The second part of this paper deals with lemmatisation. It presents the lemmatiser Norna, which has been implemented on Filemaker software. It is based on a concordance and an index to the Dictionary of Old English Corpus, which comprises around three thousand texts and three million words. In its present state, the lemmatiser Norna can assign a lemma to around 80% of textual forms on an automatic basis, by searching the index and the concordance for prefixes, stems and inflectional endings. The conclusions of this presentation stress the limits of the automatisation of dictionary-based annotation in a parallel corpus. While the tagging and annotation are largely automatic even at the present stage, the automatisation of alignment is pending for future research. Lemmatisation and morphological tagging are expected to be fully automatic in the near future, once the database of secondary sources Freya and the lemmatiser Norna have been completed.
Keywords: corpus linguistics, historical linguistics, Old English, parallel corpus
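The following toy Python sketch mirrors the general search logic of dictionary-based lemmatisation (exact match first, then stripping inflectional endings and looking up the stem); Norna itself is implemented in Filemaker, so this is only an illustration with an invented mini-lexicon, not its actual implementation.

```python
# Hypothetical, simplified illustration of dictionary-based lemmatisation:
# exact lookup, then ending-stripping and stem lookup against a toy index.
LEXICON = {"cyning": "cyning", "stan": "stān", "luf": "lufian"}  # toy stem -> lemma index
ENDINGS = ["es", "as", "um", "en", "ode", "e", "a"]              # toy inflectional endings

def lemmatise(textual_form: str) -> str | None:
    form = textual_form.lower()
    if form in LEXICON:                       # exact match first
        return LEXICON[form]
    for ending in sorted(ENDINGS, key=len, reverse=True):
        if form.endswith(ending):
            stem = form[: -len(ending)]
            if stem in LEXICON:
                return LEXICON[stem]
    return None                               # left for manual lemmatisation

print(lemmatise("cyninges"))  # -> 'cyning'
```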
Procedia PDF Downloads 212
1041 Coupling Random Demand and Route Selection in the Transportation Network Design Problem
Authors: Shabnam Najafi, Metin Turkay
Abstract:
The network design problem (NDP) is used to determine the set of optimal values for certain pre-specified decision variables, such as capacity expansion of nodes and links, by optimizing various system performance measures including safety, congestion, and accessibility. The designed transportation network should improve the objective functions defined for the system while considering the route choice behaviors of network users at the same time. NDP studies have mostly investigated the random demand and route selection constraints separately due to computational challenges. In this work, we consider both random demand and route selection constraints simultaneously. This work presents a nonlinear stochastic model for the land use and road network design problem to address the development of different functional zones in urban areas by considering both a cost function and air pollution. The model minimizes the cost function and air pollution simultaneously, with random demand and a stochastic route selection constraint, and aims to optimize network performance via road capacity expansion. The Bureau of Public Roads (BPR) link impedance function is used to determine the travel time on each link. We consider a city with origin and destination nodes which can be residential or employment zones or both. There is a set of existing paths between origin-destination (O-D) pairs. The case of an increasing employed population is analyzed to determine road capacities and origin zones simultaneously. Minimizing the travel and expansion cost of routes and origin zones on one side and minimizing CO emission on the other side are considered in this analysis at the same time. In this work, demand between O-D pairs is random, and the network flow pattern is subject to stochastic user equilibrium, specifically a logit route choice model. Considering both demand and route choice as random is more applicable to the design of urban network programs. The epsilon-constraint method is one of the methods used to solve both linear and nonlinear multi-objective problems, and it is used to solve the problem in this work. The problem was solved by keeping the first objective (the cost function) as the objective function of the problem and the second objective as a constraint that should be less than an epsilon, where epsilon is an upper bound on the emission function. The value of epsilon should change from the worst to the best value of the emission function to generate the family of solutions representing the Pareto set. A numerical example with 2 origin zones, 2 destination zones and 7 links is solved in GAMS, and the set of Pareto points is obtained. There are 15 efficient solutions. According to these solutions, as the cost function value increases, the emission function value decreases, and vice versa.
Keywords: epsilon-constraint, multi-objective, network design, stochastic
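As a reference for the travel time computation, the sketch below implements the BPR link impedance function; the parameter values alpha = 0.15 and beta = 4 are the customary defaults and an assumption here, not values reported in the paper.

```python
# Sketch of the Bureau of Public Roads (BPR) link impedance function:
# t = t0 * (1 + alpha * (v/c)^beta). Default alpha/beta are assumed, not the paper's.
def bpr_travel_time(free_flow_time: float, flow: float, capacity: float,
                    alpha: float = 0.15, beta: float = 4.0) -> float:
    return free_flow_time * (1.0 + alpha * (flow / capacity) ** beta)

# Example: a link with 10 min free-flow time loaded at 90% of its capacity.
print(bpr_travel_time(free_flow_time=10.0, flow=900, capacity=1000))  # ~10.98 min
```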
Procedia PDF Downloads 647
1040 Resonant Fluorescence in a Two-Level Atom and the Terahertz Gap
Authors: Nikolai N. Bogolubov, Andrey V. Soldatov
Abstract:
Terahertz radiation occupies a range of frequencies from roughly 100 GHz to approximately 10 THz, just between microwaves and infrared waves. This range of frequencies holds promise for many useful applications in experimental applied physics and technology. At the same time, reliable, simple techniques for generation, amplification, and modulation of electromagnetic radiation in this range are far from being developed enough to meet the requirements of practical usage, especially in comparison to the level of technological ability already achieved for other domains of the electromagnetic spectrum. This situation of relative underdevelopment of this potentially very important range of the electromagnetic spectrum is known under the name of the 'terahertz gap.' Among other things, technological progress in the terahertz area has been impeded by the lack of compact, low-energy-consumption, easily controlled and continuously radiating terahertz radiation sources. Therefore, the development of new techniques serving this purpose, as well as various devices based on them, is of obvious necessity. No doubt, it would be highly advantageous to employ the simplest of suitable physical systems as major critical components in these techniques and devices. The purpose of the present research was to show, by means of conventional methods of non-equilibrium statistical mechanics and the theory of open quantum systems, that a thoroughly studied two-level quantum system, also known as a one-electron two-level 'atom', driven by an external classical monochromatic high-frequency (e.g., laser) field, can radiate continuously at a much lower (e.g., terahertz) frequency in the fluorescent regime if the transition dipole moment operator of this 'atom' possesses permanent non-equal diagonal matrix elements. This assumption contradicts the conventional assumption, routinely made in quantum optics, that only the non-diagonal matrix elements persist. The conventional assumption is pertinent to natural atoms and molecules and stems from the spatial inversion symmetry of their eigenstates. At the same time, such an assumption is no longer justified for artificially manufactured quantum systems of reduced dimensionality, such as quantum dots, which are often nicknamed 'artificial atoms' due to the striking similarity of their optical properties to those of real atoms. Possible ways towards experimental observation and practical implementation of the predicted effect are discussed as well.
Keywords: terahertz gap, two-level atom, resonant fluorescence, quantum dot
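A minimal, textbook-style sketch of the model class the abstract refers to is the semiclassical Hamiltonian of a two-level system driven by a monochromatic field, with a transition dipole operator carrying permanent, unequal diagonal elements; the notation below is generic and not necessarily the authors' own.

```latex
% Driven two-level system with permanent, unequal diagonal dipole moments
H(t) = \frac{\hbar\omega_0}{2}\,\sigma_z - \hat{d}\,E_0\cos(\omega t),
\qquad
\hat{d} =
\begin{pmatrix}
d_{11} & d_{12}\\
d_{12}^{*} & d_{22}
\end{pmatrix},
\qquad d_{11} \neq d_{22}.
```

In the conventional quantum-optics treatment d₁₁ = d₂₂ = 0; the abstract's central assumption is that these diagonal elements are nonzero and unequal, which, according to the abstract, is what permits continuous low-frequency (terahertz) emission in the fluorescent regime.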
Procedia PDF Downloads 271
1039 Effects of Lipoic Acid Supplementation on Activities of Cyclooxygenases and Levels of Prostaglandins E2 and F2 Alpha Metabolites in the Offspring of Rats with Streptozocin-Induced Diabetes
Authors: H. Y. Al-Matubsi, G. A. Oriquat, M. Abu-Samak, O. A. Al Hanbali, M. Salim
Abstract:
Background: Uncontrolled diabetes mellitus (DM) is an etiological factor for recurrent pregnancy loss and major congenital malformations in the offspring. Antioxidant therapy has been advocated to overcome the oxidant-antioxidant disequilibrium inherent in diabetes. The aims of this study were to evaluate the protective effect of lipoic acid (LA) on fetal outcome and to elucidate changes that may be involved in the mechanism(s) underlying diabetic fetopathy. Methods: Female rats were rendered hyperglycemic using streptozocin and then mated with normal male rats. Pregnant non-diabetic (group 1, n=9; and group 2, n=7) or pregnant diabetic (group 3, n=10; and group 4, n=8) rats were treated daily with either lipoic acid (LA) (30 mg/kg body weight; groups 2 and 4) or vehicle (groups 1 and 3) between gestational days 0 and 15. On day 15 of gestation, the rats were sacrificed, and the fetuses, placentas and membranes were dissected out of the uterine horns. Following morphological examination, the fetuses, placentas and membranes were homogenized and used to measure cyclooxygenase (COX) activities and the levels of prostaglandin (PG) E2 and PGF2α metabolites (PGEM and PGFM). Maternal liver and plasma total glutathione levels were also determined. Results: Supplementation of diabetic rats with LA was found to significantly (P<0.05) reduce resorption rates and increase mean fetal weight compared with the untreated diabetic group. Treatment of diabetic rats with LA led to a significant (P<0.05) increase in liver and plasma total glutathione in comparison with untreated diabetic rats. Decreased levels of PGEM and elevated levels of PGFM in the fetuses, placentas and membranes were characteristic of experimental diabetic gestation associated with malformation. LA treatment of diabetic mothers failed to normalize PGEM to the levels of the non-diabetic control rats. However, the levels of PGEM in malformed fetuses from LA-treated diabetic mothers were significantly (P < 0.05) higher than those in malformed fetuses from untreated diabetic rats. Conclusions: We conclude that LA can reduce congenital malformations in the offspring of diabetic rats at day 15 of gestation. However, LA treatment did not completely prevent the occurrence of malformations; other factors, such as arachidonic acid deficiency and altered prostaglandin metabolism, may be involved in the pathogenesis of diabetes-induced congenital malformations.
Keywords: diabetes, lipoic acid, pregnancy, prostaglandins
Procedia PDF Downloads 262
1038 Applying Organic Natural Fertilizer to 'Orange Rubis' and 'Farbaly' Apricot Growth, Yield and Fruit Quality
Authors: A. Tarantino, F. Lops, G. Lopriore, G. Disciglio
Abstract:
Biostimulants are organic fertilizers that can be applied in agriculture in order to increase nutrient uptake, growth and development of plants, improve quality and productivity, and contribute positive environmental impacts. The aim of this study was to test the effects of some commercial biostimulant products (Bion® 50 WG, Hendophyt® PS, Ergostim® XL and Radicon®) on the vegetative-productive behavior and the qualitative characteristics of the fruits of two emerging apricot cultivars (Orange Rubis® and Farbaly®). The study was conducted during the spring-summer season of 2015, in a commercial orchard located in the agricultural area of Cerignola (Foggia district, Apulian region, Southern Italy). Eight-year-old apricot trees, cv. ‘Orange Rubis’ and ‘Farbaly’, were used. The data recorded during the experimental trial were: shoot length, total number of flower buds, flower bud drop, and time of flowering and fruit set. Total yield of fruits per tree and quality parameters were determined. Experimental data showed some specific differences among the biostimulant treatments. Concerning the yield of ‘Orange Rubis’, except for the Bion treatment, the other three biostimulant treatments showed tendentially lower values than the control. The yield of ‘Farbaly’ was lower for the Bion and Hendophyt treatments and higher for the Ergostim treatment when compared with the untreated control. Concerning the soluble solids content, the juice of ‘Farbaly’ fruits always had a higher content than that of ‘Orange Rubis’. In particular, the Bion and Hendophyt treatments showed, in both harvests, values tendentially higher than the control. By contrast, the four biostimulant treatments did not significantly affect this parameter in ‘Orange Rubis’. With regard to fruit firmness, some differences were observed between the two harvest dates and among the four biostimulant treatments. At the first harvest date, ‘Orange Rubis’ treated with the Bion and Hendophyt biostimulants showed texture values tendentially lower than the control, whereas ‘Farbaly’ showed, for all the biostimulant treatments, fruit firmness values significantly lower than the control. At the second harvest, almost all the biostimulant treatments in both the ‘Orange Rubis’ and ‘Farbaly’ cultivars showed values lower than the control. Only ‘Farbaly’ treated with Radicon showed a higher value in comparison to the control.
Keywords: apricot, fruit quality, growth, organic natural fertilizer
Procedia PDF Downloads 326
1037 Land Cover Mapping Using Sentinel-2, Landsat-8 Satellite Images, and Google Earth Engine: A Study Case of the Beterou Catchment
Authors: Ella Sèdé Maforikan
Abstract:
Accurate land cover mapping is essential for effective environmental monitoring and natural resources management. This study focuses on assessing the classification performance of two satellite datasets and evaluating the impact of different input feature combinations on classification accuracy in the Beterou catchment, situated in the northern part of Benin. Landsat-8 and Sentinel-2 images from June 1, 2020, to March 31, 2021, were utilized. Employing the Random Forest (RF) algorithm on Google Earth Engine (GEE), a supervised classification categorized the land into five classes: forest, savannas, cropland, settlement, and water bodies. GEE was chosen due to its high-performance computing capabilities, mitigating computational burdens associated with traditional land cover classification methods. By eliminating the need for individual satellite image downloads and providing access to an extensive archive of remote sensing data, GEE facilitated efficient model training on remote sensing data. The study achieved commendable overall accuracy (OA), ranging from 84% to 85%, even without incorporating spectral indices and terrain metrics into the model. Notably, the inclusion of additional input sources, specifically terrain features like slope and elevation, enhanced classification accuracy. The highest accuracy was achieved with Sentinel-2 (OA = 91%, Kappa = 0.88), slightly surpassing Landsat-8 (OA = 90%, Kappa = 0.87). This underscores the significance of combining diverse input sources for optimal accuracy in land cover mapping. The methodology presented herein not only enables the creation of precise, expeditious land cover maps but also demonstrates the prowess of cloud computing through GEE for large-scale land cover mapping with remarkable accuracy. The study emphasizes the synergy of different input sources to achieve superior accuracy. As a future recommendation, the application of Light Detection and Ranging (LiDAR) technology is proposed to enhance vegetation type differentiation in the Beterou catchment. Additionally, a cross-comparison between Sentinel-2 and Landsat-8 for assessing long-term land cover changes is suggested.Keywords: land cover mapping, Google Earth Engine, random forest, Beterou catchment
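A hedged sketch of this kind of GEE workflow, using the Python earthengine-api, is given below; the training-sample asset, geometry, band list and class property are assumptions for illustration and are not taken from the study.

```python
# Hedged sketch of a GEE random forest land cover classification (Python earthengine-api).
# Asset names, geometry, bands and the 'landcover' property are placeholders.
import ee
ee.Initialize()

aoi = ee.Geometry.Rectangle([2.0, 9.0, 2.8, 9.8])               # placeholder for the Beterou catchment
points = ee.FeatureCollection('users/example/beterou_samples')  # hypothetical labelled samples (classes 0-4)

s2 = (ee.ImageCollection('COPERNICUS/S2_SR')
      .filterBounds(aoi)
      .filterDate('2020-06-01', '2021-03-31')
      .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 20))
      .median())

dem = ee.Image('USGS/SRTMGL1_003')
stack = (s2.select(['B2', 'B3', 'B4', 'B8', 'B11', 'B12'])
           .addBands(ee.Terrain.slope(dem))
           .addBands(dem.rename('elevation')))

bands = stack.bandNames()
training = stack.sampleRegions(collection=points, properties=['landcover'], scale=10)
classifier = ee.Classifier.smileRandomForest(100).train(
    features=training, classProperty='landcover', inputProperties=bands)
classified = stack.classify(classifier)

# Resubstitution accuracy as a quick sanity check (independent validation is preferable).
print(classifier.confusionMatrix().accuracy().getInfo())
```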
Procedia PDF Downloads 63
1036 A Study on the Effect of Design Factors of Slim Keyboard’s Tactile Feedback
Authors: Kai-Chieh Lin, Chih-Fu Wu, Hsiang Ling Hsu, Yung-Hsiang Tu, Chia-Chen Wu
Abstract:
With the rapid development of computer technology, the design of computers and keyboards moves towards a trend of slimness. The change in mobile input devices directly influences users’ behavior. Although multi-touch applications allow entering text through a virtual keyboard, the performance, feedback, and comfortableness of the technology are inferior to a traditional keyboard, and while manufacturers launch mobile touch keyboards and projection keyboards, the performance has not been satisfying. Therefore, this study discussed the design factors of slim pressure-sensitive keyboards. The factors were evaluated with an objective evaluation (accuracy and speed) and a subjective evaluation (operability, recognition, feedback, and difficulty) depending on the shape (circle, rectangle, and L-shaped), thickness (flat, 3 mm, and 6 mm), and force (35±10 g, 60±10 g, and 85±10 g) of the keys. Moreover, MANOVA and Taguchi methods (regarding signal-to-noise ratios) were applied to find the optimal level of each design factor. The research participants were divided into two groups by their typing speed (30 words/minute). Considering the multitude of variables and levels, the experiments were implemented using a fractional factorial design. A representative model of the research samples was established for input task testing. The findings of this study showed that participants with low typing speed primarily relied on vision to recognize the keys, while those with high typing speed relied on tactile feedback, which was affected by the thickness and force of the keys. In the objective and subjective evaluations, a combination of keyboard design factors that might result in higher performance and satisfaction was identified (L-shaped, 3 mm, and 60±10 g) as the optimal combination. The learning curve was analyzed to make a comparison with a traditional standard keyboard in order to investigate the influence of user experience on keyboard operation. The research results indicated that the optimal combination provided input performance inferior to a standard keyboard. The results could serve as a reference for the development of related products in industry and for comprehensive application to touch devices and input interfaces that interact with people.
Keywords: input performance, mobile device, slim keyboard, tactile feedback
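For reference, the larger-the-better signal-to-noise ratio used in Taguchi analysis can be computed as in the sketch below; the response values are invented, not data from the study.

```python
# Sketch of the larger-the-better Taguchi signal-to-noise ratio:
# S/N = -10 * log10( (1/n) * sum(1/y_i^2) ). Example responses are placeholders.
import math

def sn_larger_is_better(responses: list[float]) -> float:
    n = len(responses)
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in responses) / n)

# Hypothetical accuracy scores for one keyboard configuration across participants.
print(round(sn_larger_is_better([0.92, 0.88, 0.95]), 2))
```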
Procedia PDF Downloads 299
1035 The Effect of Filter Design and Face Velocity on Air Filter Performance
Authors: Iyad Al-Attar
Abstract:
Air filters installed in HVAC equipment and gas turbines for power generation confront several atmospheric contaminants with various concentrations while operating in different environments (tropical, coastal, hot). This leads to engine performance degradation, as contaminants are capable of deteriorating components and fouling the compressor assembly. Compressor fouling is responsible for 70 to 85% of gas turbine performance degradation, leading to a reduction in power output and availability and an increase in heat rate and fuel consumption. Therefore, filter design must take into account face velocities, pleat count and its corresponding surface area in order to verify filter performance characteristics (efficiency and pressure drop). The experimental work undertaken in the current study examined two groups of four filters with different pleating densities, which were investigated for their initial pressure drop response and fractional efficiencies. The pleating densities used for this study are 28, 30, 32 and 34 pleats per 100 mm for each pleated panel, measured at ten different flow rates ranging from 500 to 5000 m³/h in increments of 500 m³/h. The current work has highlighted the underlying reasons behind the reduction in filter permeability due to the increase in face velocity and pleat density. The losses of filtration media surface area are due to one or a combination of the following effects: pleat crowding, deflection of the entire pleated panel, pleat distortion at the corner of the pleat, and/or filtration medium compression. It is evident from the entire array of experiments that as the particle size increases, the efficiency decreases until the most penetrating particle size (MPPS) is reached. Beyond the MPPS, the efficiency increases with increasing particle size. The MPPS shifts to a smaller particle size as the face velocity increases, while the pleating density and orientation did not have a pronounced effect on the MPPS. Throughout the study, an optimal pleat count satisfying both initial pressure drop and efficiency requirements did not necessarily exist. The work has also suggested that a valid comparison of the pleat densities should be based on the effective surface area that participates in the filtration action and not the total surface area the pleat density provides.
Keywords: air filters, fractional efficiency, gas cleaning, glass fibre, HEPA filter, permeability, pressure drop
Procedia PDF Downloads 135
1034 The Church of San Paolo in Ferrara, Restoration and Accessibility
Authors: Benedetta Caglioti
Abstract:
The ecclesiastical complex of San Paolo in Ferrara represents a monument of great historical, religious and architectural importance. Its long and articulated history is already evident from a mere reading of its planimetric and altimetric configuration, apparently unitary but, in reality, marked by repeated modifications and additions, often of high quality. From this follows, in terms of protection, restoration and enhancement, a commitment of due respect for how the ancient building was built and enriched over its centuries of life. Hence a rigorous methodological approach, aware that every monument, in order to live and receive the indispensable maintenance, must always be enjoyed and visited; it must therefore allow, in the right measure and compatibly with its nature, improvements and adjustments that are functional, distributive, technological and related to the safety of people and things. The methodological approach substantiates the different elements of the project (such as distribution functionality, safety, structural solidity, environmental comfort, the character of the site, building and urban planning regulations, financial resources and materials, and the organization of the construction site itself) through the long-established guiding principles of restoration: 'minimum intervention,' the 'recognisability' or 'distinguishability' of old and new, physico-chemical and figurative 'compatibility,' 'durability' and the, at least potential, 'reversibility' of what is done, leading to the definition of appropriate 'critical choices.' The project tackles, together with the strictly functional issues, the directly conservative and restoration issues of a static, structural and material-technology nature, with special attention to precious architectural surfaces. In order to ensure the best architectural quality through conscious enhancement, the project involves a redistribution of the interior and service spaces, an accurate lighting system inside and outside the church and a reorganization of the adjacent urban space. The reorganization of the interior is designed with particular attention to the issue of accessibility for people with disabilities. To help the community regain possession of the church's space already during the construction phase, the project proposal envisages a permeability and flexibility in the management of the works, allowing the perception of the rediscovered monument to gradually become more and more familiar to the citizens. Once the interventions have been completed, it is expected that the Church of San Paolo, second in importance only to the Cathedral, from which it is a few steps away, will be inserted into an already existing circuit of city use, which over the years has brought together culture, the environment and tourism to create greater awareness of what Ferrara can offer in cultural terms.
Keywords: conservation, accessibility, regeneration, urban space
Procedia PDF Downloads 108
1033 Leadership and Corporate Social Responsibility: The Role of Spiritual Intelligence
Authors: Meghan E. Murray, Carri R. Tolmie
Abstract:
This study aims to identify potential factors and widely applicable best practices that can contribute to improving corporate social responsibility (CSR) and corporate performance for firms by exploring the relationship between transformational leadership, spiritual intelligence, and emotional intelligence. Corporate social responsibility is when companies are cognizant of the impact of their actions on the economy, their communities, the environment, and the world as a whole while executing business practices accordingly. The prevalence of CSR has continuously strengthened over the past few years and is now a common practice in the business world, with such efforts coinciding with what stakeholders and the public now expect from corporations. Because of this, it is extremely important to be able to pinpoint factors and best practices that can improve CSR within corporations. One potential factor that may lead to improved CSR is spiritual intelligence (SQ), or the ability to recognize and live with a purpose larger than oneself. Spiritual intelligence is a measurable skill, just like emotional intelligence (EQ), and can be improved through purposeful and targeted coaching. This research project consists of two studies. Study 1 is a case study comparison of a benefit corporation and a non-benefit corporation. This study will examine the role of SQ and EQ as moderators in the relationship between the transformational leadership of employees within each company and the perception of each firm’s CSR and corporate performance. Project methodology includes creating and administering a survey comprised of multiple pre-established scales on transformational leadership, spiritual intelligence, emotional intelligence, CSR, and corporate performance. Multiple regression analysis will be used to extract significant findings from the collected data. Study 2 will dive deeper into spiritual intelligence itself by analyzing pre-existing data and identifying key relationships that may provide value to companies and their stakeholders. This will be done by performing multiple regression analysis on anonymized data provided by Deep Change, a company that has created an advanced, proprietary system to measure spiritual intelligence. Based on the results of both studies, this research aims to uncover best practices, including the unique contribution of spiritual intelligence, that can be utilized by organizations to help enhance their corporate social responsibility. If it is found that high spiritual and emotional intelligence can positively impact CSR effort, then corporations will have a tangible way to enhance their CSR: providing targeted employees with training and coaching to increase their SQ and EQ.Keywords: corporate social responsibility, CSR, corporate performance, emotional intelligence, EQ, spiritual intelligence, SQ, transformational leadership
Procedia PDF Downloads 127
1032 Overview of Environmental and Economic Theories of the Impact of Dams in Different Regions
Authors: Ariadne Katsouras, Andrea Chareunsy
Abstract:
The number of large hydroelectric dams in the world has increased from almost 6,000 in the 1950s to over 45,000 in 2000. Dams are often built to increase the economic development of a country. This can occur in several ways. Large dams take many years to build so the construction process employs many people for a long time and that increased production and income can flow on into other sectors of the economy. Additionally, the provision of electricity can help raise people’s living standards and if the electricity is sold to another country then the money can be used to provide other public goods for the residents of the country that own the dam. Dams are also built to control flooding and provide irrigation water. Most dams are of these types. This paper will give an overview of the environmental and economic theories of the impact of dams in different regions of the world. There is a difference in the degree of environmental and economic impacts due to the varying climates and varying social and political factors of the regions. Production of greenhouse gases from the dam’s reservoir, for instance, tends to be higher in tropical areas as opposed to Nordic environments. However, there are also common impacts due to construction of the dam itself, such as, flooding of land for the creation of the reservoir and displacement of local populations. Economically, the local population tends to benefit least from the construction of the dam. Additionally, if a foreign company owns the dam or the government subsidises the cost of electricity to businesses, then the funds from electricity production do not benefit the residents of the country the dam is built in. So, in the end, the dams can benefit a country economically, but the varying factors related to its construction and how these are dealt with, determine the level of benefit, if any, of the dam. Some of the theories or practices used to evaluate the potential value of a dam include cost-benefit analysis, environmental impacts assessments and regressions. Systems analysis is also a useful method. While these theories have value, there are also possible shortcomings. Cost-benefit analysis converts all the costs and benefits to dollar values, which can be problematic. Environmental impact assessments, likewise, can be incomplete, especially if the assessment does not include feedback effects, that is, they only consider the initial impact. Finally, regression analysis is dependent on the available data and again would not necessarily include feedbacks. Systems analysis is a method that can allow more complex modelling of the environment and the economic system. It would allow a clearer picture to emerge of the impacts and can include a long time frame.Keywords: comparison, economics, environment, hydroelectric dams
Procedia PDF Downloads 197
1031 Spatio-Temporal Dynamics of Snow Cover and Melt/Freeze Conditions in Indian Himalayas
Authors: Rajashree Bothale, Venkateswara Rao
Abstract:
The Indian Himalayas, also known as the third pole, with an area of 0.9 million sq km, contain the largest reserve of ice and snow outside the poles and affect global climate and water availability in the perennial rivers. The variations in the extent of snow are indicative of climate change. Snow melt is sensitive to climate change (warming) and is also an influencing factor on climate change. A study of the spatio-temporal dynamics of snow cover and melt/freeze conditions is carried out using space-based observations in visible and microwave bands. An analysis period of 2003 to 2015 is selected to identify and map the changes and trends in snow cover using Indian Remote Sensing (IRS) Advanced Wide Field Sensor (AWiFS) and Moderate Resolution Imaging Spectroradiometer (MODIS) data. For mapping of wet snow, microwave data is used, which is sensitive to the presence of liquid water in the snow. The present study uses Ku-band scatterometer data from the QuikSCAT and Oceansat satellites. The enhanced-resolution images at 2.25 km from the 13.6 GHz sensor are used to analyze the backscatter response to dry and wet snow for the period 2000-2013 using a threshold method. The study area is divided into three major river basins, namely Brahmaputra, Ganges and Indus, which also represent the diversity of the Himalayas as the Eastern, Central and Western Himalayas. Topographic variations across different zones show that a majority of the study area lies in the 4000–5500 m elevation range and the largest percentage of high-elevation areas (>5500 m) lies in the Western Himalayas. The effect of climate change could be seen in the extent of snow cover and also in the melt/freeze status in different parts of the Himalayas. The melt onset day increases from east (March 11 ± 11) to west (May 12 ± 15), with large variation in the number of melt days. The Western Himalayas have a shorter melt duration (120 ± 15 days) in comparison to the Eastern Himalayas (150 ± 16 days), providing less time for melt. Eastern Himalayan glaciers are prone to enhanced melt due to the long melt duration. The extent of snow cover, coupled with the melt/freeze status as an indicator of solar radiation, can be used as a precursor for monsoon prediction.
Keywords: Indian Himalaya, Scatterometer, Snow Melt/Freeze, AWiFS, Cryosphere
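A simplified sketch of a backscatter threshold test for wet-snow (melt) detection is shown below; the 3 dB drop below the dry-snow (winter) mean is a common choice in the literature and an assumption here, not the threshold reported by the study.

```python
# Hedged sketch of a threshold test on Ku-band backscatter for melt detection:
# flag melt where sigma-0 drops several dB below the winter (dry-snow) mean.
import numpy as np

def detect_melt(sigma0_db: np.ndarray, winter_mean_db: float, threshold_db: float = 3.0) -> np.ndarray:
    """Return a boolean series: True where backscatter indicates wet snow."""
    return sigma0_db < (winter_mean_db - threshold_db)

series = np.array([-8.1, -8.3, -12.5, -13.0, -9.0])   # toy daily sigma-0 values (dB)
print(detect_melt(series, winter_mean_db=-8.2))        # [False False  True  True False]
```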
Procedia PDF Downloads 260
1030 Effects of Group Cognitive Restructuring and Rational Emotive Behavioral Therapy on Psychological Distress of Awaiting-Trial Inmates in Correctional Centers in North-West, Nigeria
Authors: Muhammad Shafi'u Adamu
Abstract:
This study examined the effects of two group cognitive behavioural therapies (cognitive restructuring and rational emotive behavioural therapy) on the psychological distress of awaiting-trial inmates in correctional centres in North-West, Nigeria. The study had four specific objectives, four research questions, and four null hypotheses. The study used a quasi-experimental design that involved a pre-test and post-test. The population comprised all 7,962 awaiting-trial inmates in correctional centres in North-West, Nigeria. A total of 131 awaiting-trial inmates from three intact correctional centres were randomly selected using the census technique. The respondents were sampled and randomly put into 3 groups (CR, REBT and Control). The Kessler Psychological Distress Scale (K10) was adapted for data collection in the study. The instrument was validated by experts and subjected to a pilot study; its reliability, estimated using Cronbach's alpha, was 0.772. Each group received treatment for 8 consecutive weeks (60 minutes/week). Data collected from the field were subjected to descriptive statistics of mean, standard deviation and mean difference to answer the research questions. Inferential statistics of ANOVA and the independent-samples t-test were used to test the null hypotheses at the P ≤ 0.05 level of significance. Results in the study revealed that there was no significant difference among the pre-treatment mean scores of the experimental and control groups. Statistical evidence also showed a significant difference among the mean scores of the three groups, and the results of the post hoc multiple-comparison test indicated a post-treatment reduction of psychological distress in the awaiting-trial inmates. The documented output also showed a significant difference between the post-treatment psychological distress mean scores of male and female awaiting-trial inmates, but there was no difference among those exposed to REBT. The research recommends that a standardized, structured CBT counselling treatment be designed for correctional centres across Nigeria and that CBT counselling techniques be used in the treatment of psychological distress in both correctional and clinical settings.
Keywords: awaiting-trial inmates, cognitive restructuring, correctional centres, group cognitive behavioural therapies, rational emotive behavioural therapy
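The reliability coefficient reported above is Cronbach's alpha; a minimal sketch of the computation, on an invented response matrix rather than the pilot data, is given below.

```python
# Sketch of Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances)/variance of total score).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

toy = np.array([[1, 2, 2, 3], [2, 3, 3, 4], [4, 4, 5, 5], [2, 2, 3, 3]])  # invented responses
print(round(cronbach_alpha(toy), 3))
```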
Procedia PDF Downloads 88
1029 An Improvement of ComiR Algorithm for MicroRNA Target Prediction by Exploiting Coding Region Sequences of mRNAs
Authors: Giorgio Bertolazzi, Panayiotis Benos, Michele Tumminello, Claudia Coronnello
Abstract:
MicroRNAs are small non-coding RNAs that post-transcriptionally regulate the expression levels of messenger RNAs. MicroRNA regulation activity depends on the recognition of binding sites located on mRNA molecules. ComiR (Combinatorial miRNA targeting) is a user friendly web tool realized to predict the targets of a set of microRNAs, starting from their expression profile. ComiR incorporates miRNA expression in a thermodynamic binding model, and it associates each gene with the probability of being a target of a set of miRNAs. ComiR algorithms were trained with the information regarding binding sites in the 3’UTR region, by using a reliable dataset containing the targets of endogenously expressed microRNA in D. melanogaster S2 cells. This dataset was obtained by comparing the results from two different experimental approaches, i.e., inhibition, and immunoprecipitation of the AGO1 protein; this protein is a component of the microRNA induced silencing complex. In this work, we tested whether including coding region binding sites in the ComiR algorithm improves the performance of the tool in predicting microRNA targets. We focused the analysis on the D. melanogaster species and updated the ComiR underlying database with the currently available releases of mRNA and microRNA sequences. As a result, we find that the ComiR algorithm trained with the information related to the coding regions is more efficient in predicting the microRNA targets, with respect to the algorithm trained with 3’utr information. On the other hand, we show that 3’utr based predictions can be seen as complementary to the coding region based predictions, which suggests that both predictions, from 3'UTR and coding regions, should be considered in a comprehensive analysis. Furthermore, we observed that the lists of targets obtained by analyzing data from one experimental approach only, that is, inhibition or immunoprecipitation of AGO1, are not reliable enough to test the performance of our microRNA target prediction algorithm. Further analysis will be conducted to investigate the effectiveness of the tool with data from other species, provided that validated datasets, as obtained from the comparison of RISC proteins inhibition and immunoprecipitation experiments, will be available for the same samples. Finally, we propose to upgrade the existing ComiR web-tool by including the coding region based trained model, available together with the 3’UTR based one.Keywords: AGO1, coding region, Drosophila melanogaster, microRNA target prediction
Procedia PDF Downloads 451
1028 Quantifying Automation in the Architectural Design Process via a Framework Based on Task Breakdown Systems and Recursive Analysis: An Exploratory Study
Authors: D. M. Samartsev, A. G. Copping
Abstract:
As with all industries, architects are using increasing amounts of automation within practice, with approaches such as generative design and use of AI becoming more commonplace. However, the discourse on the rate at which the architectural design process is being automated is often personal and lacking in objective figures and measurements. This results in confusion between people and barriers to effective discourse on the subject, in turn limiting the ability of architects, policy makers, and members of the public in making informed decisions in the area of design automation. This paper proposes the use of a framework to quantify the progress of automation within the design process. The use of a reductionist analysis of the design process allows it to be quantified in a manner that enables direct comparison across different times, as well as locations and projects. The methodology is informed by the design of this framework – taking on the aspects of a systematic review but compressed in time to allow for an initial set of data to verify the validity of the framework. The use of such a framework of quantification enables various practical uses such as predicting the future of the architectural industry with regards to which tasks will be automated, as well as making more informed decisions on the subject of automation on multiple levels ranging from individual decisions to policy making from governing bodies such as the RIBA. This is achieved by analyzing the design process as a generic task that needs to be performed, then using principles of work breakdown systems to split the task of designing an entire building into smaller tasks, which can then be recursively split further as required. Each task is then assigned a series of milestones that allow for the objective analysis of its automation progress. By combining these two approaches it is possible to create a data structure that describes how much various parts of the architectural design process are automated. The data gathered in the paper serves the dual purposes of providing the framework with validation, as well as giving insights into the current situation of automation within the architectural design process. The framework can be interrogated in many ways and preliminary analysis shows that almost 40% of the architectural design process has been automated in some practical fashion at the time of writing, with the rate at which progress is made slowly increasing over the years, with the majority of tasks in the design process reaching a new milestone in automation in less than 6 years. Additionally, a further 15% of the design process is currently being automated in some way, with various products in development but not yet released to the industry. Lastly, various limitations of the framework are examined in this paper as well as further areas of study.Keywords: analysis, architecture, automation, design process, technology
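A toy Python sketch of the data structure described here, a work-breakdown tree whose leaves carry automation milestone scores and whose parents aggregate them, is given below; the task names, scores and unweighted averaging are hypothetical illustration choices, not the paper's framework.

```python
# Illustrative work-breakdown tree: each leaf task carries an automation milestone
# score; a parent's automation level is the (here unweighted) mean of its children.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    milestone: float = 0.0          # 0.0 = fully manual, 1.0 = fully automated (leaves)
    children: list["Task"] = field(default_factory=list)

    def automation_level(self) -> float:
        if not self.children:
            return self.milestone
        return sum(child.automation_level() for child in self.children) / len(self.children)

design = Task("Design a building", children=[
    Task("Concept design", children=[Task("Massing studies", 0.6), Task("Client brief", 0.1)]),
    Task("Technical design", children=[Task("Structural sizing", 0.7), Task("Detailing", 0.3)]),
])
print(f"{design.automation_level():.0%} of this (toy) design process is automated")
```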
Procedia PDF Downloads 104
1027 Using Optimal Cultivation Strategies for Enhanced Biomass and Lipid Production of an Indigenous Thraustochytrium sp. BM2
Authors: Hsin-Yueh Chang, Pin-Chen Liao, Jo-Shu Chang, Chun-Yen Chen
Abstract:
Biofuel has drawn much attention as a potential substitute for fossil fuels. However, biodiesel from waste oil, oil crops, or other oil sources can satisfy only part of the existing demand for transportation. Because it is clean, green, and viable for mass production, using microalgae as a feedstock for biodiesel is regarded as a possible route to a low-carbon and sustainable society. In particular, Thraustochytrium sp. BM2, an indigenous heterotrophic microalga, can metabolize glycerol to produce lipids. Hence, it is considered a promising microalgae-based oil source for biodiesel production and other applications. This study aimed to optimize the culture pH, scale up the process, assess the feasibility of producing microalgal lipid from crude glycerol, and apply operation strategies derived from the optimal shake-flask results in a 5 L stirred-tank fermenter to further enhance lipid productivity. Cultivation of Thraustochytrium sp. BM2 without pH control resulted in the highest lipid production of 3944 mg/L and biomass production of 4.85 g/L. Next, when the initial glycerol and corn steep liquor (CSL) amounts were increased five-fold (to 50 g and 62.5 g, respectively), the overall lipid productivity reached 124 mg/L/h. However, when crude glycerol was used as the sole carbon source, its direct addition inhibited culture growth. Therefore, acid and metal salt pretreatment methods were used to purify the crude glycerol. Crude glycerol pretreated with acid and CaCl₂ gave the highest overall lipid productivity, 131 mg/L/h, when used as a carbon source and proved to be a suitable substitute for pure glycerol in the Thraustochytrium sp. BM2 cultivation medium. Engineering operation strategies such as fed-batch and semi-batch operation were applied to the cultivation of Thraustochytrium sp. BM2 to improve lipid production. With the fed-batch operation strategy, 132.60 g of biomass and 69.15 g of lipid were harvested; the lipid yield of 0.20 g/g glycerol was the same as in batch cultivation, although the overall lipid productivity was lower, at 107 mg/L/h. With the semi-batch operation strategy, the overall lipid productivity reached 158 mg/L/h owing to the shorter cultivation time; harvested biomass and lipid reached 232.62 g and 126.61 g, respectively, and the lipid yield improved from 0.20 to 0.24 g/g glycerol. The product costs of the three operation strategies were also calculated: the lowest product cost, 12.42 NTD/g lipid, was obtained with the semi-batch operation strategy, a 33% reduction compared with the batch operation strategy.
Keywords: heterotrophic microalga Thraustochytrium sp. BM2, microalgal lipid, crude glycerol, fermentation strategy, biodiesel
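For readers unfamiliar with the two performance measures quoted throughout the abstract, the short Python sketch below encodes overall lipid productivity (mg/L/h) and lipid yield (g lipid per g glycerol). The culture volume, run time, and glycerol feed used in the example call are placeholders chosen only to show the arithmetic, not values reported by the study.

```python
# Minimal helpers for the two performance measures used in the abstract.

def overall_lipid_productivity(lipid_mg: float, volume_l: float, hours: float) -> float:
    """Overall lipid productivity in mg per litre per hour."""
    return lipid_mg / volume_l / hours

def lipid_yield(lipid_g: float, glycerol_g: float) -> float:
    """Lipid yield in g of lipid per g of glycerol fed."""
    return lipid_g / glycerol_g

# Hypothetical example: 126.61 g of lipid from a 5 L fermenter over an assumed 160 h run,
# fed an assumed 530 g of glycerol (run time and glycerol feed are placeholders).
print(overall_lipid_productivity(126.61e3, 5.0, 160.0))   # roughly 158 mg/L/h
print(lipid_yield(126.61, 530.0))                         # roughly 0.24 g/g
```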
Procedia PDF Downloads 148
1026 Cloud Based Supply Chain Traceability
Authors: Kedar J. Mahadeshwar
Abstract:
Concept introduction: This paper describes how an innovative cloud-based, analytics-enabled solution could address a major industry challenge that is approaching all of us globally faster than one would think. The world of the supply chain for drugs and devices is changing rapidly. In the US, the Drug Supply Chain Security Act (DSCSA) is a new law for tracing, verification, and serialization, phasing in from January 1, 2015 for manufacturers, repackagers, wholesalers, and pharmacies/clinics. Similarly, pressures are building up in Europe, China, and many other countries that would require absolute end-to-end traceability of every drug and device. Companies (both manufacturers and distributors) can use this opportunity not only to be compliant but to differentiate themselves from the competition. Moreover, a country such as the UAE can lead in developing a global solution that brings innovation to this industry. Problem definition and timing: The counterfeit drug market, recognized by the FDA, causes billions of dollars in losses every year. Even in the UAE, the prevalence of counterfeit drugs, which enter through ports such as Dubai, remains a major concern, as per the UAE pharma and healthcare report, Q1 2015. The distribution of drugs and devices involves multiple processes and systems that do not talk to each other. Consumer confidence is at risk due to this lack of traceability, and any leading provider risks losing its reputation. Globally, there is increasing pressure from governments and regulatory bodies to trace the serial numbers and lot numbers of every drug and medical device throughout the supply chain. Though many large corporations use some form of ERP (enterprise resource planning) software, such systems are far from being able to trace a lot and serial number beyond the enterprise and make this information easily available in real time. Solution: The proposed solution involves a service provider that allows all subscribers to take advantage of this service. It allows a service provider, regardless of its physical location, to host a cloud-based traceability and analytics solution covering millions of distribution transactions that capture the lots of each drug and device. The platform captures the movement of every medical device and drug end to end, from its manufacturer to a hospital or a doctor, through a series of distributor or retail networks. The platform also provides an advanced analytics solution for intelligent online reporting. Why Dubai? An opportunity exists given the huge investment made in Dubai Healthcare City, together with the technology and infrastructure available to attract more FDI to provide such a service. The UAE and similar countries will face this pressure from regulators globally in the near future. More interestingly, Dubai can attract such innovators/companies to run and host such a cloud-based solution and become a global hub of traceability.
Keywords: cloud, pharmaceutical, supply chain, tracking
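To make the idea of unit-level traceability concrete, here is a simplified sketch of the kind of record such a cloud platform might store for each custody transfer. The field names and values are assumptions made for illustration; they do not follow the DSCSA data model or any serialization standard verbatim.

```python
# A simplified, illustrative record of one custody-transfer event on a traceability platform.
# Field names and example values are assumptions, not an actual DSCSA/EPCIS schema.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class TraceEvent:
    serial_number: str     # unit-level serial of the drug or device
    lot_number: str
    product_code: str      # e.g. a GTIN-style product identifier
    sender_id: str         # shipping trading partner
    receiver_id: str       # receiving trading partner
    timestamp: datetime

event = TraceEvent(
    serial_number="SN-000123",
    lot_number="LOT-2015-07",
    product_code="GTIN-00312345678906",
    sender_id="Manufacturer-A",
    receiver_id="Wholesaler-B",
    timestamp=datetime.now(timezone.utc),
)
print(event)
```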
Procedia PDF Downloads 527
1025 Design and Assessment of Base Isolated Structures under Spectrum-Compatible Bidirectional Earthquakes
Authors: Marco Furinghetti, Alberto Pavese, Michele Rinaldi
Abstract:
Concave Surface Slider devices are increasingly used in real applications for the seismic protection of both bridge and building structures. Several research activities have been carried out to investigate the lateral response of this typology of devices, and a reasonably high level of knowledge has been reached. If a radial analysis is performed, the frictional force is always aligned with the restoring force, whereas under bidirectional seismic events a bi-axial interaction of the directions of motion occurs, due to the step-wise projection of the main frictional force, which is assumed to be aligned with the trajectory of the isolator. Nonetheless, if non-linear time history analyses have to be performed, standard codes provide precise rules for the definition of an averagely spectrum-compatible set of accelerograms in radial conditions, whereas for bidirectional motions different combinations of the single-component spectra can be found. Moreover, software for the adjustment of natural accelerograms is now available, which leads to higher-quality spectrum-compatibility and a smaller dispersion of results for radial motions. In this work, a simplified design procedure is defined for building structures base-isolated by means of Concave Surface Slider devices. Different case-study structures have been analyzed. In a first stage, the capacity curve was computed by means of non-linear static analyses on the fixed-base structures: inelastic fiber elements were adopted, and different direction angles of the lateral forces were studied. Based on these results, a linear elastic finite element model was defined, characterized by the same global stiffness as the linear elastic branch of the non-linear capacity curve. Then, non-linear time history analyses were performed on the base-isolated structures by applying seven bidirectional seismic events. The spectrum-compatibility of the bidirectional earthquakes was studied by considering different combinations of the single components and by adjusting single records: thanks to the proposed procedure, results have shown a small dispersion and good agreement with the assumed design values.
Keywords: concave surface slider, spectrum-compatibility, bidirectional earthquake, base isolation
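As a minimal illustration of the bi-axial interaction described above, the sketch below evaluates a simplified bidirectional force model for a Concave Surface Slider: the restoring component points towards the centre of the concave surface, while the frictional component is projected step-wise along the instantaneous sliding direction. The friction coefficient, effective radius, and vertical load are assumed values, and refinements such as velocity-dependent friction are deliberately omitted.

```python
# Simplified bidirectional force model for a concave surface slider (parameters are assumptions).

import numpy as np

mu = 0.06     # friction coefficient (assumed)
R = 3.0       # effective radius of curvature of the sliding surface, m (assumed)
W = 1000.0    # vertical load carried by the device, kN (assumed)

def slider_force(disp: np.ndarray, vel: np.ndarray) -> np.ndarray:
    """Horizontal force resisted by the device for a given displacement/velocity state (kN)."""
    restoring = (W / R) * disp                               # always directed towards the centre
    speed = np.linalg.norm(vel)
    # friction is projected step-wise along the current sliding trajectory
    friction = mu * W * vel / speed if speed > 0 else np.zeros(2)
    return restoring + friction

# Example state: 10 cm x / 5 cm y displacement, sliding mostly in the x direction.
print(slider_force(np.array([0.10, 0.05]), np.array([0.3, -0.1])))
```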
Procedia PDF Downloads 292
1024 Monitoring Soil Moisture Dynamic in Root Zone System of Argania spinosa Using Electrical Resistivity Imaging
Authors: F. Ainlhout, S. Boutaleb, M. C. Diaz-Barradas, M. Zunzunegui
Abstract:
Argania spinosa is an endemic tree of southwest Morocco, occupying 828,000 ha distributed mainly between Mediterranean vegetation and the desert. This tree can grow in extremely arid regions of Morocco, where annual rainfall ranges between 100 and 300 mm and where no other tree species can live. It has been designated a UNESCO Biosphere Reserve since 1998. The argan tree is of great importance for the human and animal feeding of the rural population as well as for oil production; it is considered a multi-usage tree. Admine forest, located in the suburbs of Agadir city, 5 km inland, was selected for this work. The aim of the study was to investigate the temporal variation in root-zone moisture dynamics in response to variation in climatic conditions and vegetation water uptake, using a geophysical technique called electrical resistivity imaging (ERI). This technique discriminates resistive woody roots from dry and moist soil. Time-dependent measurements (from April to July) of resistivity sections were performed along a surface transect (94 m in length) with a fixed electrode spacing of 2 m. The transect included eight argan trees. The interactions between the trees and soil moisture were estimated by following the variations in tree water status accompanying the soil moisture deficit. For that purpose, we measured midday leaf water potential and relative water content on each sampling day for the eight trees. The first results showed that ERI can be used to accurately quantify the spatiotemporal distribution of root-zone moisture content and woody roots. The section obtained shows three different layers: a conductive middle layer (moist soil); on top, a moderately resistive layer corresponding to relatively dry soil (calcareous formation with intercalated marly strata), interspersed with highly resistive zones corresponding to woody roots; and, below the conductive layer, another moderately resistive layer. Throughout the experiment, there was a continuous decrease in soil moisture in the different layers. With ERI, we can clearly estimate the depth of the woody roots, which does not exceed 4 meters. In previous work on the same species, analyzing the δ18O of xylem water and of the range of possible water sources, we argued that rain is the main water source in winter and spring but not in summer; contrary to the popular assumption, the trees do not exploit deep water from the aquifer but instead use soil water at a few meters' depth. The results of the present work confirm the idea that the roots of Argania spinosa do not grow very deep.
Keywords: Argania spinosa, electrical resistivity imaging, root system, soil moisture
Procedia PDF Downloads 328
1023 Comparison of Two Strategies in Thoracoscopic Ablation of Atrial Fibrillation
Authors: Alexander Zotov, Ilkin Osmanov, Emil Sakharov, Oleg Shelest, Aleksander Troitskiy, Robert Khabazov
Abstract:
Objective: Thoracoscopic surgical ablation of atrial fibrillation (AF) can be performed with two technologies. The first strategy uses the AtriCure device (bipolar, non-irrigated, non-clamping), and the second uses the Medtronic device (bipolar, irrigated, clamping). The study presents a comparative analysis of the clinical outcomes of the two strategies in thoracoscopic ablation of AF using the AtriCure vs. Medtronic devices. Methods: In a two-center study, 123 patients underwent thoracoscopic ablation of AF in the period from 2016 to 2020. Patients were divided into two groups: the first group comprised patients treated with the AtriCure device (N=63) and the second group those treated with the Medtronic device (N=60). Patients were comparable in age, gender, and initial severity of their condition. Group 1 was 65% male with a median age of 57 years, while group 2 was 75% male with a median age of 60 years. Group 1 included patients with paroxysmal AF (14.3%), persistent AF (68.3%), and long-standing persistent AF (17.5%); in group 2, the proportions were 13.3%, 13.3%, and 73.3%, respectively. Median ejection fraction and indexed left atrial volume were 63% and 40.6 ml/m² in group 1, and 56% and 40.5 ml/m² in group 2. In addition, group 1 consisted of 39.7% of patients with chronic heart failure (NYHA Class II) and 4.8% with chronic heart failure (NYHA Class III), versus 45% and 6.7% in group 2. Follow-up consisted of laboratory tests, chest X-ray, ECG, 24-hour Holter monitoring, and cardiopulmonary exercise testing. Duration of freedom from AF, long-term mortality rate, and prevalence of cerebrovascular events were compared between the two groups. Results: Exit block was achieved in all patients. According to the Clavien-Dindo classification of surgical complications, the fraction of adverse events was 14.3% and 16.7% in the 1st and 2nd groups, respectively. The mean follow-up period was 50.4 (31.8; 64.8) months in the 1st group and 30.5 (14.1; 37.5) months in the 2nd group (P=0.0001). In group 1, total freedom from AF was achieved in 73.3% of patients, of whom 25% required additional antiarrhythmic drug (AAD) therapy or catheter ablation (CA); in group 2, the figures were 90% and 18.3%, respectively (P<0.02 for total freedom from AF). At follow-up, the long-term mortality rate in the 1st group was 4.8%, while there were no fatal events in the 2nd group. The prevalence of cerebrovascular events was higher in the 1st group than in the 2nd (6.7% vs. 1.7%, respectively). Conclusions: Despite the relatively shorter follow-up of the 2nd group, the strategy using the Medtronic device showed quite encouraging results. Further research is needed to evaluate the effectiveness of this strategy in the long-term period.
Keywords: atrial fibrillation, clamping, ablation, thoracoscopic surgery
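The reported difference in total freedom from AF can be roughly re-examined with a simple two-by-two test. In the sketch below, the counts are reconstructed from the stated percentages (73.3% of 63 vs. 90% of 60) and are therefore approximations, not the original patient-level data, and the test chosen here (Fisher's exact test) is not necessarily the one used by the authors.

```python
# Approximate re-check of the freedom-from-AF comparison using counts reconstructed
# from the reported percentages; not the study's original analysis.

from scipy.stats import fisher_exact

group1 = [46, 63 - 46]   # AtriCure: free of AF (~73.3% of 63), not free
group2 = [54, 60 - 54]   # Medtronic: free of AF (90% of 60), not free

odds_ratio, p_value = fisher_exact([group1, group2])
print(f"Fisher exact test p = {p_value:.3f}")
```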
Procedia PDF Downloads 110
1022 Forecasting Regional Data Using Spatial VARs
Authors: Taisiia Gorshkova
Abstract:
Since the 1980s, spatial correlation models have been used increasingly often to model regional indicators. An increasingly popular method for studying regional indicators is modeling that takes into account spatial relationships between objects belonging to the same economic zone. In the 2000s, a new class of models, spatial vector autoregressions (SpVARs), was developed. The main difference between standard and spatial vector autoregressions is that in the SpVAR, the values of indicators at time t may depend on the values of explanatory variables at the same time t in neighboring regions and on the values of explanatory variables at time t-k in neighboring regions. Thus, the VAR is a special case of the SpVAR in the absence of spatial lags, and the spatial panel data model is a special case of the SpVAR in the absence of time lags. Two specifications of the SpVAR were applied to Russian regional data for 2000-2017. The values of GRP and regional CPI are used as endogenous variables; the lags of GRP, CPI, and the unemployment rate were used as explanatory variables. For comparison purposes, a standard VAR without spatial correlation was used as a “naïve” model. In the first specification of the SpVAR, the unemployment rate and the values of the dependent variables, GRP and CPI, in neighboring regions at the same time t were included in the equations for GRP and CPI, respectively. To account for the values of indicators in neighboring regions, an adjacency weight matrix is used, in which regions with a common sea or land border are assigned a value of 1 and the rest 0. In the second specification, the values of the dependent variables in neighboring regions at time t were replaced by their values at the previous time t-1. According to the results obtained, when the inflation and GRP of neighbors are added to the model, both inflation and GRP are significantly affected by their previous values; inflation is also positively affected by an increase in unemployment in the previous period and negatively affected by an increase in GRP in the previous period, which corresponds to economic theory. GRP is not affected by either the inflation lag or the unemployment lag. When the model takes into account lagged values of GRP and inflation in neighboring regions, the results of the inflation modeling are practically unchanged: all indicators except the unemployment lag are significant at the 5% significance level. For GRP, in turn, GRP lags in neighboring regions also become significant at the 5% significance level. The RMSEs were calculated for both the spatial and “naïve” VARs; the minimum RMSEs are obtained with the SpVAR with lagged explanatory variables. Thus, according to the results of the study, it can be concluded that SpVARs can accurately model both the actual values of macro indicators (particularly CPI and GRP) and the general situation in the regions.
Keywords: forecasting, regional data, spatial econometrics, vector autoregression
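In stylized form, and under notation assumptions not spelled out in the abstract (one equation per endogenous variable, p own time lags), the two specifications can be written as below, where y_{i,t} is GRP or CPI in region i, u is the unemployment rate, and w_{ij} are the elements of the binary adjacency weight matrix.

```latex
% Stylized SpVAR specifications (notation is an assumption, not taken from the paper)
\[
\begin{aligned}
\text{Spec. 1:}\quad y_{i,t} &= \alpha_i + \sum_{k=1}^{p} \beta_k\, y_{i,t-k}
  + \gamma \sum_{j \neq i} w_{ij}\, y_{j,t}
  + \delta\, u_{i,t-1} + \varepsilon_{i,t}, \\
\text{Spec. 2:}\quad y_{i,t} &= \alpha_i + \sum_{k=1}^{p} \beta_k\, y_{i,t-k}
  + \gamma \sum_{j \neq i} w_{ij}\, y_{j,t-1}
  + \delta\, u_{i,t-1} + \varepsilon_{i,t}.
\end{aligned}
\]
```

Setting γ = 0 recovers the standard “naïve” VAR, consistent with the statement that the VAR is a special case of the SpVAR without spatial lags.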
Procedia PDF Downloads 141
1021 In vivo Antidiabetic and in vitro Antioxidant Activity of Myrica salicifolia Hochst. ex A. Rich. (Myricaceae) Root Extract in Streptozotocin-Induced Diabetic Mice
Authors: Yohannes Kelifa, Gomathi Periasamy, Aman Karim
Abstract:
Introduction: Diabetes mellitus has become a major public health and economic problem across the globe. Modern antidiabetic drugs have a number of limitations, and scientific investigation of traditional herbal remedies used for diabetes may provide novel leads for the development of new antidiabetic drugs that can be used as alternatives or complements to the available antidiabetic allopathic medications. Though Myrica salicifolia Hochst. ex A. Rich. is used for the management of diabetes in Ethiopian traditional medicine, there was, to the authors' knowledge, no previous scientific evidence of its antidiabetic effect. This study was undertaken to evaluate the antidiabetic activity of the root extracts of Myrica salicifolia in streptozotocin (STZ)-induced diabetic mice. Methods: Experimental diabetes was induced by intraperitoneal administration of STZ (150 mg/kg) in male mice. Diabetic mice were treated with oral doses of M. salicifolia root extract at 200, 400, and 600 mg/kg, and with its fractions (chloroform, ethyl acetate, n-butanol, and aqueous) at a dose of 400 mg/kg, daily for 15 days. Fasting blood glucose level (BGL) was measured on days 0, 5, 10, and 15. The free radical scavenging activity of the crude extract was determined in vitro by DPPH assay. Statistical significance was assessed by one-way ANOVA followed by Tukey's multiple comparison test; results were considered significant when p < 0.05. Results: Daily administration of the 80% methanol root extract of M. salicifolia at three doses (200, 400, and 600 mg/kg) significantly (p < 0.05, p < 0.01, and p < 0.001) reduced fasting BGL compared with the diabetic control. The aqueous and n-butanol fractions at a dose of 400 mg/kg produced maximum reductions of fasting BGL of 42.39% and 52.13%, respectively, on day 15 in STZ-induced diabetic mice. The free radical scavenging activity of the 80% methanol extract of M. salicifolia was comparable to that of ascorbic acid: the IC50 values of the crude extract and of ascorbic acid (the reference compound) were found to be 4.54 μg/ml and 4.39 μg/ml, respectively. Conclusion: These findings demonstrate that the methanolic extract of M. salicifolia root and its fractions (n-butanol and aqueous) exhibit significant antihyperglycemic activity in STZ-induced diabetic mice. Furthermore, the results of the present study indicate that M. salicifolia root extract is a potential source of natural antioxidants.
Keywords: antidiabetic, diabetes mellitus, DPPH, mice, Myrica salicifolia, streptozotocin
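As an illustration of how an IC50 of this kind can be extracted from a DPPH dose-response curve, the sketch below interpolates the concentration at 50% radical scavenging. The concentrations and inhibition percentages are invented example data, not the study's measurements, and a log-scale or sigmoidal fit would be an equally reasonable choice.

```python
# Illustrative IC50 estimation from a DPPH scavenging curve (example data, not the study's).

import numpy as np

conc = np.array([1.0, 2.0, 4.0, 8.0, 16.0])            # extract concentration, µg/ml
inhibition = np.array([18.0, 31.0, 47.0, 66.0, 82.0])  # % DPPH radical scavenging

# Linear interpolation of the concentration giving 50% inhibition.
ic50 = np.interp(50.0, inhibition, conc)
print(f"Estimated IC50 ≈ {ic50:.2f} µg/ml")
```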
Procedia PDF Downloads 196
1020 An Investigation into the Influence of Compression on 3D Woven Preform Thickness and Architecture
Authors: Calvin Ralph, Edward Archer, Alistair McIlhagger
Abstract:
3D woven textile composites continue to emerge as advanced materials for structural applications and composite manufacture due to their bespoke nature, through-thickness reinforcement, and near-net-shape capabilities. When 3D woven preforms are produced, they are in their optimal physical state. As 3D weaving is a dry preforming technology, it relies on compression of the preform to achieve the desired composite thickness, fibre volume fraction (Vf), and consolidation. This compression of the preform during manufacture results in changes to its thickness and architecture, which can often lead to under-performance or unintended changes in the 3D woven composite. Unlike the case of traditional 2D fabrics, the bespoke nature and variability of 3D woven architectures make it difficult to know exactly how each 3D preform will behave during processing. Therefore, the focus of this study is to investigate the effect of compression on differing 3D woven architectures in terms of structure, crimp or fibre waviness, and thickness, as well as to analyse the accuracy of available software in predicting how 3D woven preforms behave under compression. To achieve this, 3D preforms were modelled and their compression simulated in Wisetex, with architectures varying in binder style, pick density, thickness, and tow size. These architectures were then woven, and the samples were dry compression tested to determine the compressibility of the preforms under various pressures. Additional preform samples were manufactured using resin transfer moulding (RTM) with varying compressive force. Composite samples were cross-sectioned, polished, and analysed using microscopy to investigate changes in architecture and crimp. Data from the dry fabric compression and composite samples were then compared with the Wisetex models to determine the accuracy of the prediction and to identify architecture parameters that affect preform compressibility and stability. Results indicate that binder style/pick density, tow size, and thickness have a significant effect on the compressibility of 3D woven preforms, with lower pick density allowing greater compression and distortion of the architecture. It was further highlighted that binder style combined with pressure had a significant effect on changes to the preform architecture: orthogonal binders experienced the highest level of deformation, but the highest overall stability, under compression, while layer-to-layer binders showed a reduction in binder fibre crimp. In general, the simulations compared reasonably with the experimental results; however, deviations are evident due to assumptions present within the models.
Keywords: 3D woven composites, compression, preforms, textile composites
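The link between preform compression and fibre volume fraction that motivates this study can be illustrated with the standard relation Vf = n·Aw / (ρf·t): for a fixed areal weight, Vf rises as the preform is compressed to a smaller thickness. In the sketch below, the areal weight, fibre density, and thicknesses are placeholder values, not measurements from the woven samples.

```python
# Minimal sketch: fibre volume fraction vs. compressed thickness (placeholder inputs).

def fibre_volume_fraction(n_layers: int, areal_weight_g_m2: float,
                          fibre_density_g_cm3: float, thickness_mm: float) -> float:
    """Vf = (n * Aw) / (rho_f * t), with explicit unit conversions to mm-based units."""
    areal_weight_g_mm2 = n_layers * areal_weight_g_m2 / 1e6      # g per mm^2 of fabric
    fibre_density_g_mm3 = fibre_density_g_cm3 / 1e3              # g per mm^3 of fibre
    return areal_weight_g_mm2 / (fibre_density_g_mm3 * thickness_mm)

for t in (6.0, 5.0, 4.0):   # composite thickness in mm after compression
    vf = fibre_volume_fraction(n_layers=1, areal_weight_g_m2=5000.0,
                               fibre_density_g_cm3=2.55, thickness_mm=t)
    print(f"t = {t} mm  ->  Vf = {vf:.2%}")
```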
Procedia PDF Downloads 135
1019 The Missing Link in Holistic Health Care: Value-Based Medicine in Entrustable Professional Activities for Doctor-Patient Relationship
Authors: Ling-Lang Huang
Abstract:
Background: Holistic health care should ideally cover the physical, mental, spiritual, and social aspects of a patient. With the very constrained time in the current clinical practice system, medical decisions often tip the balance in favor of evidence-based medicine (EBM) over the patient's personal values. Even in the era of competence-based medical education (CBME), when scrutinizing the items of entrustable professional activities (EPAs), we found that EPAs for establishing the doctor-patient relationship remain incomplete or even missing. This phenomenon prompted us to launch this project to advocate value-based medicine (VBM), which emphasizes the importance of the patient's values in medical decisions. A true and effective doctor-patient communication and relationship should be a well-balanced harmony of EBM and VBM. By building VBM into current EPAs, we can further promote genuine shared decision making (SDM) and fix the missing link in holistic health care. Methods: In this project, we will identify the EPA elements crucial for establishing an ideal doctor-patient relationship through three distinct groups of doctor-patient relationships: patients with pulmonary arterial hypertension (relatively young but with grave disease), patients undergoing surgery (facing critical medical decisions), and patients with terminal diseases (facing forthcoming death). We will search for important EPA elements through the following steps: 1. A narrative approach to delineate patients' values in the three distinct groups. 2. Hermeneutics-based interviews: semi-structured interviews will be conducted with both patients and physicians, followed by qualitative analysis of the collected information by compiling, disassembling, reassembling, interpreting, and concluding. 3. Preliminary construction of these VBM elements into EPAs for the doctor-patient relationship in the three groups. Expected Outcomes: The results of this project will provide invaluable information regarding the impact of patients' values, as they face different medical situations, on the final medical decision. The competence to blend and balance patients' values with evidence from clinical science is the missing link in holistic health care and should be established in future EPAs to enhance effective SDM.
Keywords: value-based medicine, shared decision making, entrustable professional activities, holistic health care
Procedia PDF Downloads 121