1774 Water Dumpflood into Multiple Low-Pressure Gas Reservoirs
Authors: S. Lertsakulpasuk, S. Athichanagorn
Abstract:
As depletion-drive gas reservoirs are abandoned when pressure depletion leaves an insufficient production rate, waterflooding has been proposed to increase the reservoir pressure in order to prolong gas production. Due to its high cost, water injection may not be economically feasible. Water dumpflood into gas reservoirs is a promising new approach to increase gas recovery by maintaining reservoir pressure at much lower cost than conventional waterflooding. Thus, a simulation study of water dumpflood into multiple nearly abandoned or already abandoned thin-bedded gas reservoirs commonly found in the Gulf of Thailand was conducted to demonstrate the advantage of the proposed method and to determine the most suitable operational parameters for reservoirs having different system parameters. A reservoir simulation model consisting of several thin-layered depletion-drive gas reservoirs and an overlying aquifer was constructed in order to investigate the performance of the proposed method. Two producers were initially used to produce gas from the reservoirs. One of them was later converted to a dumpflood well after the gas production rate started to decline due to continuous reduction in reservoir pressure. The dumpflood well was used to flow water from the aquifer to increase the pressure of the gas reservoirs and drive gas toward the producer. Two main operational parameters, the wellhead pressure of the producer and the time to start water dumpflood, were investigated to optimize gas recovery for various systems having different gas reservoir dip angles, well spacings, aquifer sizes, and aquifer depths. This simulation study found that water dumpflood can increase gas recovery by up to 12% of OGIP, depending on operational conditions and system parameters.
For systems having a large aquifer and a large distance between wells, it is best to start water dumpflood while the gas rate is still high, since the long distance between the gas producer and the dumpflood well helps delay water breakthrough at the producer. As long as there is no early water breakthrough, the earlier the energy is supplied to the gas reservoirs, the better the gas recovery. On the other hand, for systems having a small or moderate aquifer size and a short distance between the two wells, performing water dumpflood when the rate is close to the economic rate is better, because water is more likely to break through early when the distance is short. Water dumpflood into multiple nearly depleted or depleted gas reservoirs is a novel study: the idea of using water dumpflood to increase gas recovery has been mentioned in the literature but has never been investigated. This detailed study will help practicing engineers understand the benefits of the method and implement it with minimum cost and risk.
Keywords: dumpflood, increase gas recovery, low-pressure gas reservoir, multiple gas reservoirs
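The pressure-maintenance argument in this abstract can be illustrated with the standard p/z material balance for a depletion-drive gas reservoir. The sketch below is not from the study: the pressures and z-factors are hypothetical, and it ignores water-influx effects such as trapped gas; it only shows why a lower achievable abandonment pressure translates into a higher recovery factor.

```python
# Illustrative p/z material-balance sketch for a depletion-drive gas reservoir.
# All reservoir numbers are hypothetical, not taken from the study.

def recovery_factor(p_i, z_i, p_ab, z_ab):
    """Fraction of OGIP produced when pressure declines from p_i to p_ab.

    From the tank material balance p/z = (p_i/z_i) * (1 - Gp/G):
        Gp/G = 1 - (p_ab/z_ab) / (p_i/z_i)
    """
    return 1.0 - (p_ab / z_ab) / (p_i / z_i)

# Hypothetical case: initial 3000 psia (z = 0.90), abandonment 500 psia (z = 0.95)
rf = recovery_factor(3000, 0.90, 500, 0.95)
print(f"recovery factor at 500 psia abandonment: {rf:.3f}")
```

The same function shows the sensitivity: abandoning at 1000 psia instead of 500 psia drops the recovery factor markedly, which is the margin that pressure support from a dumpflood aims to capture.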
Procedia PDF Downloads 443
1773 New Advanced Medical Software Technology Challenges and Evolution of the Regulatory Framework in Expert Software, Artificial Intelligence, and Machine Learning
Authors: Umamaheswari Shanmugam, Silvia Ronchi, Radu Vornicu
Abstract:
Software, artificial intelligence, and machine learning can improve healthcare through innovative and advanced technologies that are able to use the large amount and variety of data generated during healthcare services every day. Over 500 machine learning or other artificial intelligence medical devices have now received FDA clearance or approval, the first ones even preceding the year 2000. One of the big advantages of these new technologies is the ability to gain experience and knowledge from real-world use and to continuously improve their performance. Healthcare systems and institutions can benefit greatly, because the use of advanced technologies improves both the efficiency and the efficacy of healthcare. Software defined as a medical device is stand-alone software intended to be used for one or more of the following medical purposes: diagnosis, prevention, monitoring, prediction, prognosis, treatment, or alleviation of a disease or other health conditions; replacing or modifying any part of a physiological or pathological process; or managing information received from in vitro specimens derived from the human body, without achieving its principal intended action by pharmacological, immunological, or metabolic means. Software qualified as a medical device must comply with the general safety and performance requirements applicable to medical devices. These requirements are necessary to ensure high performance and quality and also to protect patients’ safety. The evolution and continuous improvement of software used in healthcare must take into consideration the increase in regulatory requirements, which are becoming more complex in each market. The gap between these advanced technologies and the new regulations is the biggest challenge for medical device manufacturers.
Regulatory requirements can be considered a market barrier, as they can delay or obstruct device approval, but they are necessary to ensure performance, quality, and safety; at the same time, they can be a business opportunity if the manufacturer is able to define the appropriate regulatory strategy in advance. The abstract will provide an overview of the current regulatory framework, the evolution of the international requirements, and the standards applicable to medical device software in potential markets all over the world.
Keywords: artificial intelligence, machine learning, SaMD, regulatory, clinical evaluation, classification, international requirements, MDR, 510k, PMA, IMDRF, cyber security, health care systems
Procedia PDF Downloads 88
1772 Hydrothermal Aging Behavior of Continuous Carbon Fiber Reinforced Polyamide 6 Composites
Authors: Jifeng Zhang, Yongpeng Lei
Abstract:
Continuous carbon fiber reinforced polyamide 6 (CF/PA6) composites are promising for application in the automotive industry due to their high specific strength and stiffness. However, PA6 resin is sensitive to moisture in a hydrothermal environment, and CF/PA6 composites may undergo several physical and chemical changes, such as plasticization, swelling, and hydrolysis, which induce a reduction of mechanical properties. So far, little research has been reported assessing the effects of hydrothermal aging on the mechanical properties of continuous CF/PA6 composites. This study deals with the effects of hydrothermal aging on the moisture absorption and mechanical properties of polyamide 6 (PA6) and continuous carbon fiber reinforced polyamide 6 (CF/PA6) composites immersed in distilled water at 30 ℃, 50 ℃, 70 ℃, and 90 ℃. Degradation of mechanical performance has been monitored as a function of water absorption content and aging temperature. The experimental results reveal that under the same aging condition, the PA6 resin absorbs more water than the CF/PA6 composite, while the water diffusion coefficient of the CF/PA6 composite is higher than that of the PA6 resin because of interfacial diffusion channels. During mechanical degradation, an exponential reduction in tensile strength and elastic modulus is observed in the PA6 resin as aging temperature and water absorption content increase. The degradation trend of the flexural properties of CF/PA6 follows that of the tensile properties of the PA6 resin. Moreover, water content plays a decisive role in mechanical degradation compared with aging temperature. In contrast, the hydrothermal environment has a mild effect on the tensile properties of CF/PA6 composites. The elongation at break of the PA6 resin and CF/PA6 reaches its highest value when their water content reaches 6% and 4%, respectively.
Dynamic mechanical analysis (DMA) and scanning electron microscopy (SEM) were also used to explain the mechanism of the changes in mechanical properties. After exposure to the hydrothermal environment, the Tg (glass transition temperature) of the samples decreases dramatically as water content increases. This reduction can be ascribed to the plasticization effect of water. For the unaged specimens, the fiber surface is coated with resin and the main fracture mode is fiber breakage, indicating good adhesion between fiber and matrix. However, with increasing absorbed water content, the fracture mode transforms to fiber pullout. Finally, based on the Arrhenius methodology, a predictive model relating temperature and water content has been presented to estimate the retention of mechanical properties for PA6 and CF/PA6.
Keywords: continuous carbon fiber reinforced polyamide 6 composite, hydrothermal aging, Arrhenius methodology, interface
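An Arrhenius-type prediction of property retention, like the one the abstract describes, can be sketched as a temperature-dependent rate constant driving first-order decay. The pre-exponential factor, activation energy, and decay form below are illustrative assumptions, not the authors' fitted values.

```python
import math

# Hedged sketch of an Arrhenius-type degradation model. A (pre-exponential
# factor) and Ea (activation energy) are hypothetical, not fitted to CF/PA6 data.
R = 8.314  # universal gas constant, J/(mol*K)

def rate_constant(T_kelvin, A=1.0e6, Ea=60_000.0):
    """Degradation rate constant k(T) = A * exp(-Ea / (R*T))."""
    return A * math.exp(-Ea / (R * T_kelvin))

def retention(t_hours, T_celsius, A=1.0e6, Ea=60_000.0):
    """Fraction of initial strength retained after t hours at temperature T,
    assuming first-order decay P(t)/P0 = exp(-k(T) * t)."""
    k = rate_constant(T_celsius + 273.15, A, Ea)
    return math.exp(-k * t_hours)

# Higher aging temperature -> faster loss of strength, as in the immersion tests
for T in (30, 50, 70, 90):
    print(T, round(retention(1000, T), 3))
```

In practice the rate constant would be fitted from the 30-90 ℃ immersion data and then extrapolated to service temperatures; the code only shows the functional form.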
Procedia PDF Downloads 120
1771 In silico Statistical Prediction Models for Identifying the Microbial Diversity and Interactions Due to Fixed Periodontal Appliances
Authors: Suganya Chandrababu, Dhundy Bastola
Abstract:
Like in the gut, the subgingival microbiota plays a crucial role in oral hygiene, health, and cariogenic diseases. Human activities like diet, antibiotics, and periodontal treatments alter the bacterial communities, metabolism, and functions in the oral cavity, leading to a dysbiotic state and changes in the plaques of orthodontic patients. Fixed periodontal appliances hinder oral hygiene and cause changes in dental plaques, influencing the subgingival microbiota. However, the diversity and complexity of the microbial species pose a great challenge in understanding the community distribution patterns of the taxa and their role in oral health. In this research, we analyze subgingival microbial samples from individuals with fixed dental appliances (metal/clear) using an in silico approach. We employ exploratory hypothesis-driven multivariate and regression analysis to shed light on the microbial community and its functional fluctuations due to the dental appliances used and to identify risks associated with complex disease phenotypes. Our findings confirm the changes in oral microbiota composition due to the presence and type of fixed orthodontic devices. We identified seven main bacterial groups associated with periodontal disease, including Bacteroidetes, Actinobacteria, Proteobacteria, Fusobacteria, and Firmicutes, whose abundances were significantly altered due to the presence and type of fixed appliances used. In the case of metal braces, the abundances of Bacteroidetes, Proteobacteria, Fusobacteria, Candidatus Saccharibacteria, and Spirochaetes significantly increased, while the abundance of Firmicutes and Actinobacteria decreased. In individuals with clear braces, however, the abundance of Bacteroidetes and Candidatus Saccharibacteria increased. The highest abundance value (P-value = 0.004 < 0.05) was observed for Bacteroidetes in individuals with the metal appliance, which is associated with gingivitis, periodontitis, endodontic infections, and odontogenic abscesses.
Overall, bacterial abundances decreased with the clear type and increased with the metal type of braces. Regression analysis further validated the multivariate analysis of variance (MANOVA) results, supporting the hypothesis that the presence and type of fixed oral appliances significantly alter bacterial abundance and composition.
Keywords: oral microbiota, statistical analysis, fixed orthodontic appliances, bacterial abundance, multivariate analysis, regression analysis
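A univariate follow-up to the MANOVA described above can be sketched as a two-group comparison of one phylum's relative abundance between the metal-brace and clear-brace groups. The data below are synthetic (drawn to mimic the reported group sizes of 35 and 36), and the Welch t-statistic is our illustrative choice, not necessarily the authors' test.

```python
import math
import random

# Synthetic illustration: compare one phylum's relative abundance between the
# metal-brace (n=35) and clear-brace (n=36) groups. Abundance values are made up.
random.seed(0)
metal = [random.gauss(0.35, 0.05) for _ in range(35)]  # hypothetical Bacteroidetes share
clear = [random.gauss(0.28, 0.05) for _ in range(36)]

def welch_t(a, b):
    """Welch's t-statistic for two samples with unequal variances."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

t = welch_t(metal, clear)
print(f"Welch t-statistic: {t:.2f}")  # large |t| suggests the groups differ
```

The t-statistic would normally be converted to a p-value against a t-distribution with Welch-Satterthwaite degrees of freedom; that step is omitted here for brevity.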
Procedia PDF Downloads 192
1770 Investigation of Deep Eutectic Solvents for Microwave Assisted Extraction and Headspace Gas Chromatographic Determination of Hexanal in Fat-Rich Food
Authors: Birute Bugelyte, Ingrida Jurkute, Vida Vickackaite
Abstract:
The most complicated step in the determination of volatile compounds in complex matrices is the separation of analytes from the matrix. Traditional analyte separation methods (liquid extraction, Soxhlet extraction) require a lot of time and labour; moreover, there is a risk of losing the volatile analytes. In recent years, headspace gas chromatography has been used to determine volatile compounds. To date, traditional extraction solvents have been used in headspace gas chromatography. As a rule, such solvents are rather volatile; therefore, a large amount of solvent vapour enters the headspace together with the analyte. Because of that, the determination sensitivity for the analyte is reduced, and a huge solvent peak in the chromatogram can overlap with the peaks of the analytes. The sensitivity is also limited by the fact that the sample cannot be heated above the boiling point of the solvent. In 2018 it was suggested to replace traditional headspace gas chromatographic solvents with non-volatile, eco-friendly, biodegradable, inexpensive, and easy-to-prepare deep eutectic solvents (DESs). Generally, deep eutectic solvents have low vapour pressure, a relatively wide liquid range, and a much lower melting point than that of any of their individual components. These features make DESs very attractive as matrix media for application in headspace gas chromatography. DESs are also polar compounds, so they can be applied for microwave assisted extraction. The aim of this work was to investigate the possibility of applying deep eutectic solvents for microwave assisted extraction and headspace gas chromatographic determination of hexanal in fat-rich food. Hexanal is considered one of the most suitable indicators of the degree of lipid oxidation, as it is the main secondary oxidation product of linoleic acid, one of the principal fatty acids of many edible oils.
Eight hydrophilic and hydrophobic deep eutectic solvents have been synthesized, and the influence of temperature and microwaves on their headspace gas chromatographic behaviour has been investigated. Using the most suitable DES, microwave assisted extraction conditions and headspace gas chromatographic conditions have been optimized for the determination of hexanal in potato chips. Under the optimized conditions, the quality parameters of the prepared technique have been determined. The suggested technique was applied to the determination of hexanal in potato chips and other fat-rich food.
Keywords: deep eutectic solvents, headspace gas chromatography, hexanal, microwave assisted extraction
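Quantification in a technique like this typically rests on a calibration curve of detector peak area against spiked hexanal concentration, inverted to read off unknown samples. The sketch below uses hypothetical standards and areas, not the study's data; it only shows the ordinary least squares arithmetic.

```python
# Hedged calibration-curve sketch (all numbers hypothetical): quantify hexanal
# from headspace-GC peak areas by ordinary least squares.
conc = [0.0, 0.5, 1.0, 2.0, 4.0]             # hexanal standards, ug/g (assumed)
area = [12.0, 260.0, 515.0, 1010.0, 2025.0]  # detector peak areas (assumed)

n = len(conc)
mean_x = sum(conc) / n
mean_y = sum(area) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(conc, area)) / \
        sum((x - mean_x) ** 2 for x in conc)
intercept = mean_y - slope * mean_x

def hexanal_conc(peak_area):
    """Invert the calibration line to estimate concentration in a sample."""
    return (peak_area - intercept) / slope

print(f"calibration: area = {slope:.1f} * conc + {intercept:.1f}")
print(f"sample with area 750 -> {hexanal_conc(750.0):.2f} ug/g")
```

The quality parameters the abstract mentions (linearity, limits of detection and quantification, repeatability) would all be derived from curves like this one.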
Procedia PDF Downloads 192
1769 Numerical Board Game for Low-Income Preschoolers
Authors: Gozde Inal Kiziltepe, Ozgun Uyanik
Abstract:
There is growing evidence that socioeconomic status (SES)-related differences in mathematical knowledge begin primarily in the early childhood period. Preschoolers from low-income families are likely to perform substantially worse in mathematical knowledge than their counterparts from middle- and higher-income families. The differences are seen across a wide range of skills: recognizing written numerals, counting, adding and subtracting, and comparing numerical magnitudes. Early differences in numerical knowledge have a lasting effect on children’s mathematical knowledge in later grades. In this respect, analyzing the effect of a number board game on the number knowledge of 48-60-month-old children from disadvantaged low-income families constitutes the main objective of the study. Participants were 71 preschoolers from a childcare center which served low-income urban families. Children were randomly assigned to the number board condition or to the color board condition. The number board condition included 35 children and the color board condition included 36 children. Both board games were 50 cm long and 30 cm high; had ‘The Great Race’ written across the top; and included 11 horizontally arranged, different colored squares of equal size, with the leftmost square labeled ‘Start’. The numerical board had the numbers 1–10 in the rightmost 10 squares; the color board had different colors in those squares. Children selected a rabbit or a bear token and on each trial spun a spinner to determine whether the token would move one or two spaces. The number condition spinner had a ‘1’ half and a ‘2’ half; the color condition spinner had colors that matched the colors of the squares on the board. Children met one-on-one with an experimenter for four 15- to 20-min sessions within a 2-week period. In the first and fourth sessions, children were administered identical pretest and posttest measures of numerical knowledge.
All children were presented three numerical tasks and one subtest in the following order: counting, numerical magnitude comparison, numerical identification, and the Count Objects – Circle Number Probe subtest of the Early Numeracy Assessment. In addition, the same numerical tasks and subtest were given as a follow-up test four weeks after the post-test administration. Findings showed a meaningful difference between the scores of children who played the color board game and those who played the number board game, in favor of the latter.
Keywords: low income, numerical board game, numerical knowledge, preschool education
Procedia PDF Downloads 352
1768 The Creation of Calcium Phosphate Coating on Nitinol Substrate
Authors: Kirill M. Dubovikov, Ekaterina S. Marchenko, Gulsharat A. Baigonakova
Abstract:
NiTi alloys are widely used as implants in medicine due to their unique properties, such as superelasticity, the shape memory effect, and biocompatibility. However, despite these properties, one of the major problems is the release of nickel after prolonged use in the human body under dynamic stress. This occurs due to oxidation and cracking of NiTi implants, which provokes nickel segregation from the matrix to the surface and release into living tissues. Nickel is a toxic element and can cause cancer, allergies, etc. One of the most popular ways to solve this problem is to create a corrosion-resistant coating on NiTi. There are many coatings of this type, but not all of them have good biocompatibility, which is very important for medical implants. Coatings based on calcium phosphate phases have excellent biocompatibility because Ca and P are the main constituents of the mineral part of human bone. This suggests that a Ca-P coating on NiTi can enhance osteogenesis and accelerate the healing process. Therefore, the aim of this study is to investigate the structure of a Ca-P coating on a NiTi substrate. Plasma-assisted radio frequency (RF) sputtering was used to obtain this film. This method was chosen because it allows the crystallinity and morphology of the Ca-P coating to be controlled by the sputtering parameters. Three NiTi samples with different Ca-P coatings were obtained. XRD, AFM, SEM, and EDS were used to study the composition, structure, and morphology of the coating phase. Scratch tests were carried out to evaluate the adhesion of the coating to the substrate. Wettability tests were used to investigate the hydrophilicity of the different coatings and to suggest which of them had better biocompatibility. XRD showed that the coatings of all samples were hydroxyapatite, while the matrix was represented by TiNi intermetallic compounds such as B2, Ti2Ni, and Ni3Ti.
SEM showed that only one sample, sputtered for three hours, had a dense and defect-free coating. Wettability tests showed that the sample with the densest coating had the lowest contact angle of 40.2° and the highest surface free energy of 57.17 mJ/m², which is mostly dispersive. A scratch test was carried out to investigate the adhesion of the coating to the surface, and it showed that all coatings were removed by a cohesive mechanism. However, at a load of 30 N, the indenter reached the substrate in two out of three samples; the exception was the sample with the densest coating. It was concluded that the most promising sputtering mode was the third, which consisted of three hours of deposition. This mode produced a defect-free Ca-P coating with good wettability and adhesion.
Keywords: biocompatibility, calcium phosphate coating, NiTi alloy, radio frequency sputtering
Procedia PDF Downloads 70
1767 Second Time’s a Charm: The Intervention of the European Patent Office on the Strategic Use of Divisional Applications
Authors: Alissa Lefebre
Abstract:
It might seem intuitive to hope for a fast decision on a patent grant. After all, a granted patent provides a monopoly position, which allows the holder to obstruct others from using the technology. However, this does not take into account the strategic advantages that can be obtained from keeping patent applications pending. First, there is the financial advantage of postponing certain fees, although many applicants would probably agree that this is not the main benefit. As the scope of patent protection is only decided upon at grant, the pendency period introduces uncertainty amongst rivals. This uncertainty entails not knowing whether the patent will actually be granted and what the scope of protection will be. Consequently, rivals can rely only on limited and uncertain information when deciding which technology is worth pursuing. One way to keep patent applications pending is the use of divisional applications. These applications can be filed out of a parent application as long as that parent application is still pending. This allows the applicant to pursue (part of) the content of the parent application in another application, as the divisional application cannot exceed the scope of the parent application. In a fast-moving and complex market such as tele- and digital communications, this may allow applicants to obtain an actual monopoly position, as competitors are discouraged from pursuing a certain technology. Nevertheless, this practice also has downsides. First of all, it has an impact on the workload of the examiners at the patent office. As the number of patent filings has been increasing over the last decades, strategies that increase this number even more are not desirable from the patent examiner’s point of view. Secondly, a pending patent does not provide the protection of a granted patent, thus creating uncertainty not only for rivals but also for the applicant.
Consequently, the European Patent Office (EPO) has come up with a “raising the bar” initiative in which it decided to tackle the strategic use of divisional applications. Over the past years, two rules have been implemented. The first rule, introduced in 2010, set a time limit under which divisional applications could only be filed within 24 months of the first communication with the patent office. However, after carrying out a user feedback survey, the EPO abolished the rule again in 2014 and replaced it with a fee mechanism. The fee mechanism is still in place today, which might be an indication of a better result compared to the first rule change. This study tests the impact of these rules on the strategic use of divisional applications in the tele- and digital communication industry and provides empirical evidence on their success. Using three different survival models, we find overall evidence that divisional applications prolong the pendency time and that only the second rule is able to tackle the strategic patenting and thus decrease the pendency time.
Keywords: divisional applications, regulatory changes, strategic patenting, EPO
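In survival analysis of pendency, the "event" is the grant and applications still pending at observation are censored, so a survival curve that stays high longer means longer pendency. A minimal Kaplan-Meier estimator on synthetic pendency times is sketched below; the abstract's three models are not specified, so this is an illustration of the general approach, not the authors' method.

```python
# Minimal Kaplan-Meier sketch on synthetic patent-pendency data (months).
# "Survival" = still pending; events[i] = 1 means granted, 0 means censored.
def kaplan_meier(times, events):
    """Return [(t, S(t))] at each grant time, for distinct event times."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    surv = 1.0
    curve = []
    for t, granted in data:
        if granted:  # a grant occurred at time t
            surv *= (at_risk - 1) / at_risk
            curve.append((t, surv))
        at_risk -= 1
    return curve

# Hypothetical pendency times: families with divisionals vs. without
with_div = kaplan_meier([30, 42, 55, 61, 70], [1, 1, 1, 0, 1])
no_div = kaplan_meier([18, 22, 25, 31, 36], [1, 1, 1, 1, 1])
print(with_div)
print(no_div)
```

Comparing the two curves (e.g., with a log-rank test, or covariate-adjusted in a Cox model) is how one would test whether divisionals prolong pendency.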
Procedia PDF Downloads 125
1766 Infestation in Omani Date Palm Orchards by Dubas Bug Is Related to Tree Density
Authors: Lalit Kumar, Rashid Al Shidi
Abstract:
Phoenix dactylifera (date palm) is a major crop in many Middle Eastern countries, including Oman. The Dubas bug Ommatissus lybicus is the main pest that affects date palm crops. However, not all plantations are infested, and it is still uncertain why some plantations are infested while others are not. This research investigated whether tree density and the system of planting (random versus systematic) had any relationship with infestation and infestation levels. Remote sensing and geographic information systems were used to determine the density of trees (number of trees per unit area), while infestation levels were determined by manually counting insects on 40 leaflets from two fronds on each tree, with a total of 20-60 trees in each village. Infestation was recorded as the average number of insects per leaflet. For tree density estimation, WorldView-3 scenes, with eight bands and 2 m spatial resolution, were used. The local maxima method, which depends on locating the pixel of highest brightness inside a certain search window, was used to identify and delineate individual trees in the image. This information was then used to determine whether the plantation was random or systematic. Ordinary least squares (OLS) regression was used to test the global correlation between tree density and infestation level, and geographically weighted regression (GWR) was used to find the local spatial relationship. The accuracy of tree detection varied from 83-99% in agricultural lands with systematic planting patterns to 50-70% in natural forest areas. Results revealed that the density of trees in most of the villages was higher than the recommended planting density (120-125 trees/hectare). For infestation correlations, the GWR model showed a strong positive significant relationship between infestation and tree density in the spring season, with R² = 0.60, and a moderate positive significant relationship in the autumn season, with R² = 0.30.
In contrast, the OLS model showed a weaker positive significant relationship in the spring season, with R² = 0.02, p < 0.05, and an insignificant relationship in the autumn season, with R² = 0.01, p > 0.05. The results showed a positive correlation between infestation and tree density, which suggests that infestation severity increased as the density of date palm trees increased. The correlation results showed that density alone explained about 60% of the variation in infestation. This information can be used by the relevant authorities to better control infestations as well as to manage their pesticide spraying programs.
Keywords: dubas bug, date palm, tree density, infestation levels
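The global OLS step of this analysis can be sketched on synthetic data: regress infestation (insects per leaflet) on tree density (trees per hectare) and report the slope and R². A full GWR would repeat this fit at each location with distance-based weights; the data and coefficients below are illustrative, not the study's.

```python
import random

# Synthetic illustration of the global OLS step: infestation vs. tree density.
# Densities span the range reported in the abstract; the true slope is made up.
random.seed(1)
density = [random.uniform(100, 200) for _ in range(50)]   # trees/hectare
infest = [0.05 * d + random.gauss(0, 1) for d in density]  # insects/leaflet

def ols(x, y):
    """Simple linear regression: return (intercept, slope, R^2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1 - ss_res / ss_tot

a, b, r2 = ols(density, infest)
print(f"slope={b:.3f}, R^2={r2:.2f}")  # positive slope: denser orchards, more bugs
```

The contrast the abstract reports (global R² near zero, local GWR R² up to 0.60) is exactly what spatially varying relationships look like: a single global line fits poorly even where each local fit is strong.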
Procedia PDF Downloads 191
1765 The Textual Criticism on the Age of ‘Wanli’ Shipwreck Porcelain and Its Comparison with ‘Witte Leeuw’ and Hatcher Shipwreck Porcelain
Authors: Yang Liu, Dongliang Lyu
Abstract:
After the Wanli shipwreck was discovered 60 miles off the east coast of Tanjong Jara in Malaysia, numerous marvelous ceramic shards were salvaged from the seabed. Remarkable pieces of Jingdezhen blue-and-white porcelain recovered from the site represent the essential part of this fascinating research. The porcelain cargo of the Wanli shipwreck is significant to studies of exported porcelain and the Jingdezhen porcelain manufacturing industry of the late Ming dynasty. Using categorization of the ceramic shards and the study of Chinese and Western historical documents as a research strategy, the paper sheds new light on the classification of the Wanli shipwreck wares, with Jingdezhen kiln ceramics as its main focus. The article also discusses Jingdezhen blue-and-white porcelain from the perspective of domestic versus export markets and further proceeds to the systematization and analysis of the Wanli shipwreck porcelain, which bears witness to the forms, styles, and types of decoration that were being traded in this period. The porcelain data from two other shipwreck projects, the Witte Leeuw and the Hatcher, were chosen as comparative case studies, and the Wanli shipwreck Jingdezhen blue-and-white porcelain is reinterpreted in the context of the art history and archaeology of the region. The marine archaeologist Sten Sjostrand named the ship ‘Wanli shipwreck’ because its porcelain cargoes are typical of those made during the reign of Emperor Wanli of the Ming dynasty. Though some scholars question the appropriateness of the name, the final verdict of history is still to be made. Based on previous historical argumentation, the article uses a comparative approach to review the Wanli shipwreck blue-and-white porcelains on the grounds of porcelains unearthed from tombs or abandoned in towns and carrying time-specific reign marks.
All these materials provide strong evidence suggesting that the porcelain recovered from the Wanli ship can be dated to as early as the second year of the Tianqi era (1622) and the early Chongzhen reign. Lastly, some blue-and-white porcelain intended for the domestic market and some blue-and-white bowls from the Jingdezhen kilns recovered from the Wanli shipwreck all carry at the bottom a specific residue from the firing process. The author provides the corresponding analysis for these two interesting phenomena.
Keywords: blue-and-white porcelain, Ming dynasty, Jingdezhen kiln, Wanli shipwreck
Procedia PDF Downloads 187
1764 Creatine Associated with Resistance Training Increases Muscle Mass in the Elderly
Authors: Camila Lemos Pinto, Juliana Alves Carneiro, Patrícia Borges Botelho, João Felipe Mota
Abstract:
Sarcopenia, a syndrome characterized by progressive and generalized loss of skeletal muscle mass and strength, currently affects over 50 million people and increases the risk of adverse outcomes such as physical disability, poor quality of life, and death. The aim of this study was to examine the efficacy of creatine supplementation associated with resistance training on muscle mass in the elderly. A 12-week, double-blind, randomized, parallel-group, placebo-controlled trial was conducted. Participants were randomly allocated into one of the following groups: placebo with resistance training (PL+RT, n=14) and creatine supplementation with resistance training (CR+RT, n=13). The subjects in the CR+RT group received 5 g/day of creatine monohydrate, and the subjects in the PL+RT group were given the same dose of maltodextrin. Participants were instructed to ingest the supplement on non-training days immediately after lunch and on training days immediately after resistance training sessions, dissolved in a beverage comprising 100 g of lemon-flavored maltodextrin. Participants of both groups undertook a supervised exercise training program for 12 weeks (3 times per week). The subjects were assessed at baseline and after 12 weeks. The primary outcome was muscle mass, assessed by dual energy X-ray absorptiometry (DXA). The secondary outcome was the diagnosis of participants with one of the three stages of sarcopenia (presarcopenia, sarcopenia, and severe sarcopenia) based on the skeletal muscle mass index (SMI), handgrip strength, and gait speed. The CR+RT group had a significant increase in SMI and muscle mass (p<0.0001), a significant decrease in android and gynoid fat (p=0.028 and p=0.035, respectively), and a tendency toward decreasing body fat (p=0.053) after the intervention. The PL+RT group only had a significant increase in SMI (p=0.007).
The main finding of this clinical trial was that creatine supplementation combined with resistance training was capable of increasing muscle mass in our elderly cohort (p=0.02). In addition, the number of subjects diagnosed with one of the three stages of sarcopenia at baseline decreased in the creatine-supplemented group in comparison with the placebo group (CR+RT, n=-3; PL+RT, n=0). In summary, 12 weeks of creatine supplementation associated with resistance training resulted in increases in muscle mass. This is the first study with elderly participants of both sexes to show such an increase in muscle mass with a smaller dose of creatine supplementation over a short period. Future long-term research should investigate the effects of these interventions in sarcopenic elderly.
Keywords: creatine, dietetic supplement, elderly, resistance training
Procedia PDF Downloads 472
1763 Change Detection of Water Bodies in Dhaka City: An Analysis Using Geographic Information System and Remote Sensing
Authors: M. Humayun Kabir, Mahamuda Afroze, K. Maudood Elahi
Abstract:
Since the late 1900s, unplanned and rapid urbanization has drastically altered the land, reduced water bodies, and decreased vegetation cover in Dhaka, the capital city of Bangladesh. The capitalist modes of urbanization result in the encroachment of the city's surface water bodies. The main goal of this study is to detect changes in the water bodies of Dhaka city by analyzing their spatial distribution and calculating their rate of change. This effort aims to inform public policy for environmental justice initiatives around protecting water bodies to ensure the proper functioning of the urban ecosystem. The study accomplishes this goal by compiling satellite imagery in GIS software to understand the changes in the water bodies of Dhaka city. The work focuses on the late 20th and early 21st centuries in order to analyze the city before and after major infrastructural changes occurred in an unplanned manner. The land use of the study area was classified into four categories, and the areas of the different land uses were calculated using MS Excel and SPSS. The results reveal that urbanization expanded from the central to the northern part of the city and that major encroachment occurred in its western and eastern parts. It was also found that the total area of water bodies was 8935.38 hectares in 1988 and gradually decreased, reaching 6065.73, 4853.32, and 2077.56 hectares in 1998, 2008, and 2017, respectively. Rapid population growth, unplanned urbanization, and industrialization have generated pressure to change the land use pattern in Dhaka city. These expansion processes are engulfing wetlands, water bodies, and vegetation cover without consideration of environmental impact. In order to regain the wetlands and surface water bodies, the responsible authorities must enforce the relevant laws and regulations and take action against those who violate them.
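The reported decline can be expressed as a simple rate-of-change calculation; a minimal sketch using the hectare figures above (the study's own area calculations were done in MS Excel and SPSS):

```python
# Total area of water bodies in Dhaka city (hectares), as reported in the study.
areas = {1988: 8935.38, 1998: 6065.73, 2008: 4853.32, 2017: 2077.56}

first, last = min(areas), max(areas)
loss = areas[first] - areas[last]     # total loss over the study period
pct_loss = 100 * loss / areas[first]  # loss as a percentage of the 1988 area
annual = loss / (last - first)        # average loss per year

print(f"{loss:.2f} ha lost ({pct_loss:.1f}%), ~{annual:.1f} ha/year")
# → 6857.82 ha lost (76.7%), ~236.5 ha/year
```

That is, roughly three quarters of the 1988 water-body area was lost over 29 years.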
This research synthesizes time series data to provide a complete picture of the status of water bodies in Dhaka city, which might help in making plans and policies for water body conservation.
Keywords: ecosystem, GIS, industrialization, land use, remote sensing, urbanization
Procedia PDF Downloads 151
1762 Feminine Gender Identity in Nigerian Music Education: Trends, Challenges and Prospects
Authors: Julius Oluwayomi Oluwadamilare, Michael Olutayo Olatunji
Abstract:
In African traditional societies, women have always played the role of a teacher, albeit informally. This is evident in the upbringing of their babies. As mothers, they serve as the first teachers, instructing their wards through day-to-day activities. Furthermore, women play the role of musician during naming ceremonies, in the singing of lullabies, during initiation rites of adolescent boys and girls into adulthood, and in preparing their children, especially daughters (and sons), for marriage. They also perform this role during religious and cultural activities, chieftaincy title and coronation ceremonies, the singing of dirges during funeral ceremonies, and so forth. This traditional role puts African/Nigerian women at a vantage point to contribute maximally to the teaching and learning of music at every level of education. The need for more women in the field of music education in Nigeria cannot be overemphasized. Today, gender equality is a major discourse in most countries of the world, Nigeria included. Statistical data in the fields of education and music education reveal a high ratio of male teachers/lecturers to their female counterparts in Nigerian tertiary institutions: the split is put at 80% male to a distant 20% female. This paper therefore examines feminine gender in Nigerian music education by tracing the involvement of women in musical practice from the pre-colonial to the post-colonial periods. The study employed both primary and secondary sources of data collection. The primary sources included interviews conducted with 19 music lecturers from 8 purposively selected tertiary institutions across 4 geo-political zones of Nigeria. In addition, the observation method was employed in the selected institutions.
The results show, inter alia, that although there is a remarkable improvement in the rate of admission of female students into the music programmes of Nigerian tertiary institutions, there is still an imbalance in job placement in these institutions, especially in the Colleges of Education, which are the main focus of this research. Religious and socio-cultural factors are highly traceable to this situation. This paper recommends that more female music teachers be employed in Nigerian tertiary institutions, in line with the provisions stated in the Millennium Development Goals (MDGs) of the Federal Republic of Nigeria.
Keywords: gender, education, music, women
Procedia PDF Downloads 203
1761 A Multi-Criteria Decision Making Approach for Disassembly-To-Order Systems under Uncertainty
Authors: Ammar Y. Alqahtani
Abstract:
In order to minimize the negative impact on the environment, it is essential to properly manage the waste generated by the premature disposal of end-of-life (EOL) products. Consequently, governments and international organizations have introduced new policies and regulations to minimize the amount of waste being sent to landfills. Moreover, consumers' environmental awareness has forced original equipment manufacturers to become more environmentally conscious. Manufacturers have therefore considered different ways of dealing with EOL products, viz., remanufacturing, reusing, recycling, or disposal. By remanufacturing, reusing, or recycling EOL products, manufacturers can reduce both the rate of depletion of virgin natural resources and their dependency on those resources, while also cutting the amount of harmful waste sent to landfills. Disposal of EOL products, by contrast, contributes to the problem and is therefore used only as a last resort. The number of EOL products needed has to be estimated in order to fulfill the demand for components, and a disassembly process then has to be performed to extract individual components and subassemblies. Smart products, built with embedded sensors and network connectivity to enable the collection and exchange of data, utilize sensors implanted into products during production. These sensors allow remanufacturers to predict the optimal warranty policy and time period that should be offered to customers who purchase remanufactured components and products. Sensor-provided data can help to evaluate the overall condition of a product, as well as the remaining lives of product components, prior to performing a disassembly process. In this paper, a multi-period disassembly-to-order (DTO) model is developed that takes the different system uncertainties into consideration. The DTO model is solved using Nonlinear Programming (NLP) over multiple periods.
A DTO system is considered in which a variety of EOL products are purchased for disassembly. The model's main objective is to determine the best combination of EOL products to purchase from each supplier in each period so as to maximize the total profit of the system while satisfying the demand. The paper also addresses the impact of sensor-embedded products on the cost of warranties. Lastly, it presents and analyzes a case study involving various simulation conditions to illustrate the applicability of the model.
Keywords: closed-loop supply chains, environmentally conscious manufacturing, product recovery, reverse logistics
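The purchasing decision at the heart of a DTO system can be illustrated with a deliberately tiny, single-period example (all product types, yields, costs, prices, and demands below are hypothetical; the paper's actual model is a multi-period NLP under uncertainty):

```python
# Hypothetical data: two EOL product types, each disassembling into two components.
yields = {"A": {"c1": 2, "c2": 1}, "B": {"c1": 1, "c2": 3}}  # components per unit
cost = {"A": 5.0, "B": 4.0}        # purchase cost per EOL unit
price = {"c1": 3.0, "c2": 2.5}     # selling price per recovered component
demand = {"c1": 10, "c2": 9}       # component demand that must be satisfied

best = None  # (profit, units of A, units of B)
for a in range(21):
    for b in range(21):
        supply = {c: a * yields["A"][c] + b * yields["B"][c] for c in price}
        if any(supply[c] < demand[c] for c in demand):
            continue  # infeasible: demand must be fully satisfied
        revenue = sum(min(supply[c], demand[c]) * price[c] for c in price)
        profit = revenue - (a * cost["A"] + b * cost["B"])
        if best is None or profit > best[0]:
            best = (profit, a, b)

print(best)  # → (24.5, 4, 2): buy 4 units of A and 2 of B
```

A real DTO model replaces this brute-force search with an NLP solver and adds multiple periods, disposal and holding costs, and uncertain yields and demands.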
Procedia PDF Downloads 136
1760 Assessing Gender Mainstreaming Practices in the Philippine Basic Education System
Authors: Michelle Ablian Mejica
Abstract:
Female drop-outs due to teenage pregnancy and gender-based violence in schools are two of the most contentious current gender-related issues faced by the Department of Education (DepEd) in the Philippines. The country adopted gender mainstreaming in 1990 as the main strategy for eliminating gender inequalities in all aspects of society, including education. This research examines the extent and magnitude to which gender mainstreaming is implemented in basic education, from the national to the school level. It seeks to identify the challenges faced by the central and field offices, particularly by principals, who serve as decision-makers in the schools where teaching and learning take place and where opportunities exist that may aggravate, confirm, or transform gender inequalities and hierarchies. The author conducted surveys and interviews among 120 elementary and secondary principals in the Division of Zambales, as well as with selected division and regional gender focal persons within Region III (Central Luzon). The study argues that DepEd needs to review, strengthen, and revitalize its gender mainstreaming, because current efforts do not penetrate the schools and are not enough to lessen or eliminate gender inequalities within them. The study identified some of the major challenges in the implementation of gender mainstreaming: the absence of a national gender-responsive education policy framework, a lack of gender-responsive assessment and monitoring tools, the poor quality of gender and development (GAD) related training programs, and poor data collection and analysis mechanisms. Other constraints include poor coordination among implementing agencies, the lack of a clear implementation strategy, ineffective or poor utilization of the GAD budget, and a lack of teacher- and learner-centered GAD activities.
The paper recommends a review of the department's gender mainstreaming efforts to align them with the mandate of the agency and to provide a gender-responsive teaching and learning environment. It suggests that the focus be on formulating gender-responsive policies and programs, improving existing mechanisms, and conducting trainings focused on gender analysis, budgeting, and impact assessment, not only for principals and the GAD focal point system but also for parents and other school stakeholders.
Keywords: curriculum and instruction, gender analysis, gender budgeting, gender impact assessment
Procedia PDF Downloads 343
1759 An Interactive User-Oriented Approach to Optimizing Public Space Lighting
Authors: Tamar Trop, Boris Portnov
Abstract:
Public Space Lighting (PSL) of outdoor urban areas promotes comfort, defines spaces and neighborhood identities, enhances perceived safety and security, and contributes to residential satisfaction and wellbeing. However, if excessive or misdirected, PSL leads to unnecessary energy waste and increased greenhouse gas emissions, poses a non-negligible threat to the nocturnal environment, and may become a potential health hazard. At present, PSL is designed according to international, regional, and national standards, which consolidate best practice. Yet knowledge regarding the optimal light characteristics needed to create a perception of personal comfort and safety in densely populated residential areas, and the factors associated with this perception, is still scarce. The presented study suggests a paradigm shift in designing PSL towards a user-centered approach that incorporates pedestrians' perspectives into the process. The study is an ongoing joint research project between the Chinese and Israeli Ministries of Science and Technology. Its main objectives are to reveal inhabitants' perceptions of and preferences for PSL in different densely populated neighborhoods in China and Israel, and to develop a model that links instrumentally measured parameters of PSL (e.g., intensity, spectra, and glare) with its perceived comfort and quality, while controlling for three groups of attributes: locational, temporal, and individual. To investigate measured and perceived PSL, the study employs various research methods and data collection tools, including a purpose-built location-based mobile application, and uses multiple data sources, such as satellite multi-spectral night-time light imagery, census statistics, and detailed planning schemes. One of the study's preliminary findings is that a higher sense of safety in the investigated neighborhoods is not associated with higher levels of light intensity. This implies a potential for energy saving in brightly illuminated residential areas.
The study's findings might contribute to the design of a smart, adaptive PSL strategy that enhances pedestrians' perceived safety and comfort while reducing light pollution and energy consumption.
Keywords: energy efficiency, light pollution, public space lighting, PSL, safety perceptions
Procedia PDF Downloads 133
1758 The Automatisation of Dictionary-Based Annotation in a Parallel Corpus of Old English
Authors: Ana Elvira Ojanguren Lopez, Javier Martin Arista
Abstract:
The aims of this paper are to present the automatisation procedure adopted in the implementation of a parallel corpus of Old English and to assess the progress of automatisation with respect to tagging, annotation, and lemmatisation. The corpus consists of an aligned parallel text with a word-for-word Old English-English comparison that provides the Old English segment with inflectional form tagging (gloss, lemma, category, and inflection) and lemma annotation (spelling, meaning, inflectional class, paradigm, word-formation, and secondary sources). This parallel corpus is intended to fill a gap in the field of Old English, for which no parallel and/or lemmatised corpora are available and the average amount of corpus annotation is low. Against this background, this presentation has two main parts. The first part, which focuses on tagging and annotation, selects the layouts and fields of lexical databases that are relevant for these tasks. Most of the information used for the annotation of the corpus can be retrieved from the lexical and morphological database Nerthus and the database of secondary sources Freya. These are the sources of linguistic and metalinguistic information that will be used for the annotation of the lemmas of the corpus, covering morphological and semantic aspects as well as references to the secondary sources that deal with the lemmas in question. Although substantially adapted and re-interpreted, the lemmatised part of these databases draws on the standard dictionaries of Old English, including The Student's Dictionary of Anglo-Saxon, An Anglo-Saxon Dictionary, and A Concise Anglo-Saxon Dictionary. The second part of this paper deals with lemmatisation. It presents the lemmatiser Norna, which has been implemented in Filemaker software and is based on a concordance of, and an index to, the Dictionary of Old English Corpus, which comprises around three thousand texts and three million words.
In its present state, the lemmatiser Norna can assign a lemma to around 80% of textual forms automatically, by searching the index and the concordance for prefixes, stems, and inflectional endings. The conclusions of this presentation stress the limits of the automatisation of dictionary-based annotation in a parallel corpus. While tagging and annotation are largely automatic even at the present stage, the automatisation of alignment is left for future research. Lemmatisation and morphological tagging are expected to be fully automatic in the near future, once the database of secondary sources Freya and the lemmatiser Norna have been completed.
Keywords: corpus linguistics, historical linguistics, Old English, parallel corpus
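The affix-stripping lookup that such a lemmatiser performs can be sketched as follows (a minimal illustration with a hypothetical lemma index, prefix list, and ending list; the real Norna runs against the Nerthus database in Filemaker):

```python
# Hypothetical lemma index: attested stem spellings mapped to dictionary lemmas.
INDEX = {"stan": "stān", "cyning": "cyning", "bindan": "bindan"}
PREFIXES = ("ge",)                       # a common Old English prefix
ENDINGS = ("as", "es", "um", "on", "e")  # a handful of inflectional endings

def lemmatise(form):
    """Assign a lemma by looking up the form with prefixes/endings stripped."""
    candidates = [form]
    for pre in PREFIXES:                 # try stripping a prefix
        if form.startswith(pre):
            candidates.append(form[len(pre):])
    for cand in list(candidates):        # try stripping an inflectional ending
        for end in ENDINGS:
            if cand.endswith(end):
                candidates.append(cand[:-len(end)])
    for cand in candidates:
        if cand in INDEX:
            return INDEX[cand]
    return None                          # left for manual lemmatisation

print(lemmatise("stanas"))    # → stān
print(lemmatise("cyninges"))  # → cyning
```

Forms that no combination of stripping resolves (roughly the remaining 20% in Norna's case) fall through to manual lemmatisation.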
Procedia PDF Downloads 210
1757 Carotenoid Bioaccessibility: Effects of Food Matrix and Excipient Foods
Authors: Birgul Hizlar, Sibel Karakaya
Abstract:
Recently, increasing attention has been given to carotenoid bioaccessibility and bioavailability in the field of nutrition research. As a consequence of their lipophilic nature and their specific localization in plant tissues, carotenoid bioaccessibility and bioavailability are generally quite low in raw fruits and vegetables, since carotenoids need to be released from the cellular matrix and incorporated into the lipid fraction during digestion before being absorbed. One current approach to improving bioaccessibility is to design the food matrix. More recently, the newest approach, excipient foods, has been introduced to improve the bioavailability of orally administered bioactive compounds. The main idea is to combine a food with another food (the excipient food) whose composition and/or structure is specifically designed to improve health benefits. In this study, the effects of food processing, food matrix, and the addition of excipient foods on the carotenoid bioaccessibility of carrots were determined. Different excipient foods (olive oil, lemon juice, and whey curd) and different food matrices (grated, boiled, and mashed) were used. The total carotenoid contents of the grated, boiled, and mashed carrots were 57.23, 51.11, and 62.10 μg/g, respectively. The absence of significant differences among these values indicates that these treatments had no effect on the release of carotenoids from the food matrix. In contrast, changes in the food matrix, especially mashing, caused a significant increase in carotenoid bioaccessibility: although it was 10.76% in grated carrots, it reached 18.19% in mashed carrots (p<0.05). The addition of olive oil and lemon juice as excipients to the grated carrots caused 1.23-fold and 1.67-fold increases in carotenoid content and carotenoid bioaccessibility, respectively. However, the addition of the excipient foods to the boiled carrot samples did not influence the release of carotenoids from the food matrix.
Nevertheless, up to a 1.9-fold increase in carotenoid bioaccessibility was obtained by adding the excipient foods to the boiled carrots: bioaccessibility increased from 14.20% to 27.12% with the addition of olive oil, lemon juice, and whey curd. The highest carotenoid content among the mashed carrots was found in those incorporating olive oil and lemon juice; this combination also caused a significant increase in carotenoid bioaccessibility, from 18.19% to 29.94% (p<0.05). Comparing the effects of the treatments, mashed carrots containing olive oil, lemon juice, and whey curd had the highest carotenoid bioaccessibility, an increase of approximately 81% when compared with the grated samples containing olive oil, lemon juice, and whey curd. In conclusion, these results demonstrate that the food matrix and the addition of excipient foods have a significant effect on carotenoid content and carotenoid bioaccessibility.
Keywords: carrot, carotenoids, excipient foods, food matrix
Procedia PDF Downloads 454
1756 “I” on the Web: Social Penetration Theory Revised
Authors: Dionysis Panos, Department of Communication and Internet Studies, Cyprus University of Technology
Abstract:
The widespread use of new media, and particularly social media, through fixed or mobile devices has changed in a staggering way our perception of what is "intimate" and "safe" in interpersonal communication and social relationships, and what is not. The distribution of self- and identity-related information in communication now evolves under new and different conditions and contexts. Consequently, this new framework forces us to rethink processes and mechanisms, such as what "exposure" means in interpersonal communication contexts, how the distinction between the "private" and the "public" nature of information is negotiated online, and how the "audiences" we interact with are understood and constructed. Drawing on an interdisciplinary perspective that combines sociology, communication psychology, media theory, and new media and social networks research, as well as on the empirical findings of a longitudinal comparative study, this work proposes an integrative model for comprehending the mechanisms of personal information management in interpersonal communication, which can be applied to both online (computer-mediated) and offline (face-to-face) communication. The presentation is based on conclusions drawn from a longitudinal qualitative study of 458 new media users from 24 countries, spanning almost a decade. Its main conclusions include the following: (1) There is a clear, evidenced shift in users' perception of the degree of "security" and "familiarity" of the Web between the pre- and post-Web 2.0 eras; the role of social media in this shift was catalytic. (2) Basic Web 2.0 applications changed the nature of the Internet itself dramatically, transforming it from a place reserved for "elite users / technical knowledge keepers" into a place of "open sociability" for anyone. (3) Web 2.0 and social media brought about a significant change in the concept of the "audience" we address in interpersonal communication.
The previously "general and unknown" audience of personal home pages has been converted into an "individual and personal" audience chosen by the user according to various criteria. (4) The way we negotiate the private or public nature of personal information has changed in a fundamental way. (5) The distinctive features of the mediated environment of online communication, and the critical changes that have occurred since the advent of Web 2.0, call for reconsidering and updating the theoretical models and analytical tools we use in our effort to comprehend the mechanisms of interpersonal communication and personal information management. A new model for understanding the way interpersonal communication evolves, based on a revision of social penetration theory, is therefore proposed here.
Keywords: new media, interpersonal communication, social penetration theory, communication exposure, private information, public information
Procedia PDF Downloads 369
1755 Performance Evaluation of the CSAN Pronto Point-of-Care Whole Blood Analyzer for Regular Hematological Monitoring During Clozapine Treatment
Authors: Farzana Esmailkassam, Usakorn Kunanuvat, Zahraa Mohammed Ali
Abstract:
Objective: A key barrier to clozapine treatment of treatment-resistant schizophrenia (TRS) is the frequent blood draws required to monitor for neutropenia, the main side effect of the drug. WBC and ANC monitoring must occur throughout treatment, and accurate WBC and ANC counts are necessary for clinical decisions to halt, modify, or continue clozapine treatment. The CSAN Pronto point-of-care (POC) analyzer generates white blood cell (WBC) and absolute neutrophil counts (ANC) through image analysis of capillary blood, and POC monitoring offers significant advantages over central laboratory testing. This study evaluated the performance of the CSAN Pronto against the Beckman DxH900 laboratory hematology analyzer. Methods: Forty venous samples (EDTA whole blood) with varying concentrations of WBC and ANC, as established on the DxH900 analyzer, were tested in duplicate on three CSAN Pronto analyzers. Additionally, venous and capillary samples were concomitantly collected from 20 volunteers and assessed on the CSAN Pronto and the DxH900 analyzer. The analytical performance was also evaluated, including precision, using liquid quality controls (QCs) and patient samples near the medical decision points, and linearity, using mixtures of high and low patient samples to create five concentrations. Results: In the precision study of QCs and whole blood, WBC and ANC showed CVs within the limits established according to manufacturer and laboratory acceptability standards. WBC and ANC were found to be linear across the measurement range, with a correlation of 0.99. WBC and ANC from all three analyzers correlated well with the DxH900 in venous samples across the tested sample ranges, with a correlation of >0.95. The mean bias in ANC obtained on the CSAN Pronto versus the DxH900 was 0.07 × 10⁹ cells/L (95% LoA −0.25 to 0.49) for concentrations <4.0 × 10⁹ cells/L, a range that includes the decision-making cut-offs for continuing clozapine treatment.
The mean bias in WBC obtained on the CSAN Pronto versus the DxH900 was 0.34 × 10⁹ cells/L (95% LoA −0.13 to 0.72) for concentrations <5.0 × 10⁹ cells/L. The mean bias was higher at higher concentrations (−11% for ANC, 5% for WBC). The correlations between capillary and venous samples showed more variability, with a mean bias of 0.20 × 10⁹ cells/L for ANC. Conclusions: The CSAN Pronto showed acceptable performance in WBC and ANC measurements from venous and capillary samples and was approved for clinical use. This testing will facilitate treatment decisions and improve clozapine uptake and compliance.
Keywords: absolute neutrophil counts, clozapine, point of care, white blood cells
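The bias and 95% limits of agreement quoted above follow the standard Bland-Altman convention (mean difference ± 1.96 × SD of the differences); a minimal sketch with hypothetical paired ANC values (not the study's data):

```python
from statistics import mean, stdev

# Hypothetical paired ANC results (10^9 cells/L): POC device vs. lab analyzer.
poc = [1.60, 1.90, 2.25, 3.00, 3.55]
lab = [1.50, 2.00, 2.05, 3.00, 3.50]

diffs = [p - l for p, l in zip(poc, lab)]
bias = mean(diffs)                          # mean bias
sd = stdev(diffs)                           # SD of the differences
loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement

print(f"bias {bias:.2f}, 95% LoA {loa[0]:.2f} to {loa[1]:.2f}")
```

A method is judged acceptable when the bias and limits of agreement fall within the laboratory's predefined allowable difference at the medical decision points.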
Procedia PDF Downloads 93
1754 Intracranial Hypotension: A Brief Review of the Pathophysiology and Diagnostic Algorithm
Authors: Ana Bermudez de Castro Muela, Xiomara Santos Salas, Silvia Cayon Somacarrera
Abstract:
The aim of this review is to explain what intracranial hypotension is and what its main causes are, and also to set out the approach to diagnostic management in different clinical situations, with an understanding of the radiological findings and the pathophysiological substrate. An approach to diagnostic management is presented: which guidelines to follow, the different tests available, and the typical findings. We reviewed the myelo-CT and myelo-MR studies performed over the last 10 years in three centers on patients with suspected CSF fistula or hypotension of unknown cause. Signs of intracranial hypotension (subdural hygromas/hematomas, pachymeningeal enhancement, venous sinus engorgement, pituitary hyperemia, and sagging of the brain) evident on baseline CT and MRI were also sought. Intracranial hypotension is defined as an opening pressure lower than 6 cmH₂O. It is a relatively rare disorder, with an annual incidence of 5 per 100,000 and a female-to-male ratio of 2:1. The characteristic clinical feature is orthostatic headache, defined as a headache that develops or worsens when the patient moves from a supine to an upright position and typically resolves or improves on lying down. The etiology is a decrease in the amount of cerebrospinal fluid (CSF), usually through loss of it, either spontaneous or secondary (post-traumatic, post-surgical, systemic disease, post-lumbar puncture, etc.), and rhinorrhea and/or otorrhea may be present. The pathophysiological mechanisms of CSF hypotension and hypertension are interrelated, as a situation of hypertension may lead to hypotension secondary to spontaneous CSF leakage. The diagnostic management of intracranial hypotension in our center includes, in the case of spontaneous hypotension without rhinorrhea and/or otorrhea, and according to need, a range of tests performed from less to more complex: cerebral CT, cerebral and spinal MRI without contrast, and CT/MRI with intrathecal contrast.
If we are faced with intracranial hypotension in the presence of rhinorrhea/otorrhea, a sample can be obtained for the detection of β2-transferrin, which is physiologically present in CSF, and sinus CT and cerebral MRI can be performed, including constructive interference in steady state (CISS) sequences. If necessary, cisternography studies are performed to locate the exact point of leakage. It is important to emphasize the value of myelo-CT/MRI in establishing the diagnosis and locating the CSF leak, which is indispensable for therapeutic planning (whether surgical or not) in patients with more than one lesion or with inconclusive baseline tests.
Keywords: cerebrospinal fluid, neuroradiology brain, magnetic resonance imaging, fistula
Procedia PDF Downloads 125
1753 Urban Livelihoods and Climate Change: Adaptation Strategies for Urban Poor in Douala, Cameroon
Authors: Agbortoko Manyigbe Ayuk Nkem, Eno Cynthia Osuh
Abstract:
This paper sets out to examine the relationship between climate change and urban livelihoods through a vulnerability assessment of the urban poor in Douala. Urban development in Douala prioritizes industrial and city-centre development, with little focus on the urban poor in terms of housing units and areas of sustenance. With the high rate of urbanisation and increased land prices, the urban poor are forced to occupy marginal lands, mainly wetlands, wastelands, and abandoned neighbourhoods prone to natural hazards. Due to climate change and its effects, these wetlands are constantly flooded, destroying homes, property, and crops. Many of these urban dwellers have also found solace in urban agriculture as a means of survival. However, since agriculture in tropical regions like Cameroon depends largely on seasonal rainfall, changes in the rainfall pattern have led to mistimed crop planting and a huge wastage of resources, as rainfall becomes very unreliable with increased temperature levels. Data for the study were obtained from both primary and secondary sources. Secondary sources included published materials related to climate change and vulnerability. Primary data were obtained through focus-group discussions with urban farmers, together with a stratified sampling of residents of the marginal lands; each stratum was randomly sampled to obtain information on the different climate change related stressors and their effects on livelihoods. Findings show that the high rate of rural-urban migration into Douala has increased the prevalence of the urban poor and their vulnerability to climate change, as evident in their constant struggle against flooding from unexpected sea level rise and the irregular rainfall pattern affecting urban agriculture.
The study also found that women are the most vulnerable, as they depend solely on urban agriculture and related activities, such as retailing agricultural products in the city's markets, which serve as their main source of income for meeting the family's basic needs. Adaptation measures include the constant use of sandbags and raised makeshift structures, as well as cultivation along streams; planting only once consistent rainfall is evident has become paramount for sustainability.
Keywords: adaptation, Douala, Cameroon, climate change, development, livelihood, vulnerability
Procedia PDF Downloads 291
1752 The Response of Mammal Populations to Abrupt Changes in Fire Regimes in Montane Landscapes of South-Eastern Australia
Authors: Jeremy Johnson, Craig Nitschke, Luke Kelly
Abstract:
Fire regimes, climate, and topographic gradients interact to influence ecosystem structure and function across fire-prone montane landscapes worldwide. Biota have developed a range of adaptations to historic fire regime thresholds, which allow them to persist in these environments. In south-eastern Australia, a signal of fire regime change is emerging across these landscapes, and anthropogenic climate change is likely one of the main drivers of the increase in burnt area and more frequent wildfire over the last 25 years. This shift has the potential to modify vegetation structure and composition at broad scales, which may lead to landscape patterns to which biota are not adapted, increasing the likelihood of local extirpation of some mammal species. This study aimed to address concerns related to the influence of abrupt changes in fire regimes on mammal populations in montane landscapes. It first examined the impact of climate, topography, and vegetation on fire patterns and then explored the consequences of these changes for mammal populations and their habitats. Field studies were undertaken across diverse vegetation, fire severity, and fire frequency gradients, utilising camera trapping and passive acoustic monitoring methodologies and the collection of fine-scale vegetation data. Results show that drought is a primary contributor to fire regime shifts at the landscape scale, while topographic factors have a variable influence on wildfire occurrence at finer scales. Frequent, high-severity wildfire influenced forest structure and composition at broad spatial scales; at fine scales, it reduced the occurrence of hollow-bearing trees and promoted coarse woody debris. Mammals responded differently to shifts in forest structure and composition depending on their habitat requirements. This study highlights the complex interplay between fire regimes, environmental gradients, and biotic adaptations across temporal and spatial scales.
It emphasizes the importance of understanding complex interactions to effectively manage fire-prone ecosystems in the face of climate change.
Keywords: fire, ecology, biodiversity, landscape ecology
Procedia PDF Downloads 72
1751 The Relations Between Hans Kelsen’s Concept of Law and the Theory of Democracy
Authors: Monika Zalewska
Abstract:
Hans Kelsen was a versatile legal thinker whose achievements in the fields of legal theory, international law, and the theory of democracy are remarkable. All of the fields tackled by Kelsen are regarded as part of his “pure theory of law.” While the link between international law and Kelsen’s pure theory of law is apparent, the same cannot be said about the link between the theory of democracy and his pure theory of law. On the contrary, the general thinking concerning Kelsen’s thought is that it can be used to legitimize authoritarian regimes. The aim of this presentation is to address this concern by identifying the common ground between Kelsen’s pure theory of law and his theory of democracy and to show that they are compatible in a way that his pure theory of law and authoritarianism cannot be. The conceptual analysis of the purity of Kelsen’s theory and his goal of creating ideology-free legal science hints at how Kelsen’s pure theory of law and the theory of democracy are brought together. The presentation will first demonstrate that these two conceptions have common underlying values and meta-ethical convictions. Both are founded on relativism and a rational worldview, and the aim of both is peaceful co-existence. Second, it will be demonstrated that the separation of law and morality provides the maximum space for deliberation within democratic processes. The conclusion of this analysis is that striking similarities exist between Kelsen’s legal theory and his theory of democracy. These similarities are grounded in the Enlightenment tradition and its values, including rationality, a scientific worldview, tolerance, and equality. This observation supports the claim that, for Kelsen, legal positivism and the theory of democracy are not two separate theories but rather stem from the same set of values and from Kelsen’s relativistic worldview. Furthermore, three main issues determine Kelsen’s orientation toward a positivistic and democratic outlook. 
The first, associated with personality type, is the distinction between absolutism and relativism. The second, associated with the values that Kelsen favors in the social order, is peace. The third is legality, which creates the necessary condition for democracy to thrive and reveals that democracy is capable of fulfilling Kelsen’s ideal of law to its fullest. The first two categories exist in the background of Kelsen’s pure theory of law, while the latter is an inherent part of Kelsen’s concept of law. The analysis of the texts concerning natural law doctrine and democracy indicates that behind the technical language of Kelsen’s pure theory of law lies a strong concern with the trends that appeared after World War I. Despite his rigorous scientific mind, Kelsen was deeply humanistic. He tried to create a powerful intellectual weapon that would provide strong arguments for peaceful coexistence and a rational outlook in Europe. The analysis provided by this presentation facilitates a broad theoretical, philosophical, and political understanding of Kelsen’s perspectives and, consequently, urges a strong endorsement of Kelsen’s approach to constitutional democracy.
Keywords: hans kelsen, democracy, legal positivism, pure theory of law
Procedia PDF Downloads 105
1750 Use of Shipping Containers as Office Buildings in Brazil: Thermal and Energy Performance for Different Constructive Options and Climate Zones
Authors: Lucas Caldas, Pablo Paulse, Karla Hora
Abstract:
Shipping containers are present in different Brazilian cities, first used for transportation purposes but becoming waste materials and an environmental burden at the end of their life cycle. In the last decade, some buildings in Brazil made partly or totally from shipping containers have started to appear, most of them for commercial and office uses. Although reusing a container for buildings seems a sustainable solution, it is very important to assess the thermal and energy aspects of such use. In this context, this study aims to evaluate the thermal and energy performance of an office building made entirely from a 12-meter-long High Cube 40’ shipping container in different Brazilian Bioclimatic Zones. Four constructive solutions commonly used in Brazil were chosen: (1) container without any covering; (2) with internal insulated drywall; (3) with external fiber cement boards; (4) with both drywall and fiber cement boards. DesignBuilder with EnergyPlus was used for an annual (8,760-hour) computational simulation. EnergyPlus Weather File (EPW) data for six Brazilian capital cities were considered: Curitiba, Sao Paulo, Brasilia, Campo Grande, Teresina, and Rio de Janeiro. A split-type air-conditioning unit was adopted for the conditioned area, with the cooling setpoint fixed at 25°C and a coefficient of performance (CoP) of 3.3. Three solar absorptances of the exterior layer were tested: 0.3, 0.6, and 0.9. The building in Teresina presented the highest energy consumption, while the one in Curitiba presented the lowest, with a wide range of differences between the results. The constructive option combining external fiber cement boards and drywall presented the best results, although the differences were not significant compared to the solution using drywall alone. 
The choice of absorptance had a great impact on energy consumption, mainly for containers without any covering and in the hottest cities: Teresina, Rio de Janeiro, and Campo Grande. The main contribution of this study is a discussion of constructive aspects as design guidelines for more energy-efficient container buildings that account for local climate differences, helping to disseminate this cleaner constructive practice in the Brazilian building sector.
Keywords: bioclimatic zones, Brazil, shipping containers, thermal and energy performance
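The relation between the simulated cooling load and the resulting electricity use under the fixed CoP of 3.3 can be sketched as follows. This is a minimal illustration only: the hourly load profile and occupied-hours pattern below are hypothetical, not values from the study's simulations.

```python
def annual_cooling_electricity_kwh(hourly_loads_kwh, cop=3.3):
    """Electricity drawn by a split A/C unit delivering the given hourly
    sensible cooling loads, assuming a constant coefficient of performance."""
    return sum(hourly_loads_kwh) / cop

# hypothetical hourly cooling-load profile for one container office over
# 8,760 hours: higher load during occupied hours (08:00-18:00), a small
# residual load otherwise
loads = [1.2 if 8 <= h % 24 <= 18 else 0.2 for h in range(8760)]
electricity_kwh = annual_cooling_electricity_kwh(loads)
```

A higher exterior-layer absorptance raises the hourly loads fed into such a calculation, which is why the uncovered containers in the hottest cities showed the largest sensitivity.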
Procedia PDF Downloads 171
1749 Investigation of the EEG Signal Parameters during Epileptic Seizure Phases in Consequence to the Application of External Healing Therapy on Subjects
Authors: Karan Sharma, Ajay Kumar
Abstract:
Epileptic seizure is a disease in which electrical charge in the brain flows abruptly, resulting in abnormal activity by the subject. One percent of the world's population experiences epileptic seizure attacks. Due to the abrupt flow of charge, EEG (electroencephalogram) waveforms change, and many spikes and sharp waves appear in the EEG signals. Detection of epileptic seizure by conventional methods is time-consuming, so many methods have evolved to detect it automatically. The initial part of this paper reviews the techniques used to detect epileptic seizure automatically. Automatic detection is based on feature extraction and classification patterns; for better accuracy, decomposition of the signal is required before feature extraction. A number of parameters are calculated by researchers using different techniques, e.g., approximate entropy, sample entropy, fuzzy approximate entropy, intrinsic mode functions, and cross-correlation, to discriminate between a normal signal and an epileptic seizure signal. The main objective of this review paper is to present the variations in the EEG signals at both stages, (i) interictal (recording between the epileptic seizure attacks) and (ii) ictal (recording during the epileptic seizure), using the most appropriate methods of analysis to provide better healthcare diagnosis. This paper then investigates the effects of a noninvasive healing therapy on the subjects by studying the EEG signals using the latest signal processing techniques. The study has been conducted with Reiki as the healing technique, beneficial for restoring balance in cases of body-mind alterations associated with an epileptic seizure. Reiki is practiced around the world and is recommended by different health services as a treatment approach. Reiki is an energy medicine, specifically a biofield therapy, developed in Japan in the early 20th century. 
It is a system involving the laying on of hands to stimulate the body’s natural energetic system. Earlier studies have shown an apparent connection between Reiki and the autonomic nervous system. The Reiki sessions are applied by an experienced therapist. EEG signals are measured at baseline, during the session, and post-intervention to bring about effective epileptic seizure control or its elimination altogether.
Keywords: EEG signal, Reiki, time consuming, epileptic seizure
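Of the discriminating features listed above, sample entropy is representative: it quantifies signal irregularity, with more regular signals yielding lower values. A minimal sketch follows; the signals are synthetic stand-ins rather than EEG recordings, and the tolerance r is often chosen in practice as roughly 0.2 times the signal's standard deviation.

```python
import math
import random

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) = -ln(A/B), where B counts pairs of length-m templates
    and A pairs of length-(m+1) templates matching within Chebyshev
    tolerance r, self-matches excluded. Lower values = more regular signal."""
    n = len(x)

    def match_count(length):
        count = 0
        for i in range(n - length):
            for j in range(i + 1, n - length):
                if max(abs(x[i + k] - x[j + k]) for k in range(length)) <= r:
                    count += 1
        return count

    b = match_count(m)
    a = match_count(m + 1)
    if a == 0 or b == 0:
        return float("inf")  # too few matches to form an estimate
    return -math.log(a / b)

random.seed(1)
regular = [math.sin(0.2 * t) for t in range(200)]            # quasi-periodic
irregular = [random.uniform(-1.0, 1.0) for _ in range(200)]  # noise-like
en_regular = sample_entropy(regular, m=2, r=0.15)
en_irregular = sample_entropy(irregular, m=2, r=0.15)
```

The quasi-periodic signal scores lower than the noise-like one, which is the property such features exploit when separating interictal from ictal segments.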
Procedia PDF Downloads 405
1748 Data Collection in Protected Agriculture for Subsequent Big Data Analysis: Methodological Evaluation in Venezuela
Authors: Maria Antonieta Erna Castillo Holly
Abstract:
During the last decade, data analysis, strategic decision making, and the use of artificial intelligence (AI) tools in Latin American agriculture have been a challenge. In some countries, the availability, quality, and reliability of historical data, in addition to the current data recording methodology in the field, make it difficult to use information systems, perform complete data analysis, and support the right strategic decisions. This is essential in Agriculture 4.0, where the increase in global demand for fresh agricultural products of tropical origin during all seasons of the year requires a change in the production model and greater agility in responding to consumer market demands for quality, quantity, traceability, and sustainability, all of which require extensive data. Having quality information available and updated in real time on what, how much, how, when, where, at what cost, and in compliance with which production quality standards represents the greatest challenge for sustainable and profitable agriculture in the region. The objective of this work is to present a methodological proposal for the collection of georeferenced data from the protected agriculture sector, specifically in production units (UP) with tall structures (greenhouses), initially for Venezuela, taking the state of Mérida as the geographical framework and horticultural products as target crops. The document presents some background information and explains the methodology and tools used in the three phases of the work: diagnosis, data collection, and analysis. As a result, an evaluation of the process is carried out, relevant data and dashboards are displayed, and the first satellite maps integrated with layers of information in a geographic information system are presented. 
Finally, some improvement proposals and tentatively recommended applications are added to the process, with the aim of providing better qualified and traceable georeferenced data for subsequent analysis and for more agile and accurate strategic decision making. One of the main points of this study is the lack of quality data treatment in Latin America, and especially in the Caribbean basin; a central question is how to manage the lack of complete official data. The methodology has been tested with horticultural products, but it can be extended to other tropical crops.
Keywords: greenhouses, protected agriculture, data analysis, geographic information systems, Venezuela
Procedia PDF Downloads 130
1747 Evaluation of the Trauma System in a District Hospital Setting in Ireland
Authors: Ahmeda Ali, Mary Codd, Susan Brundage
Abstract:
Importance: This research focuses on devising and improving Health Service Executive (HSE) policy and legislation and thereby improving patient trauma care and outcomes in Ireland. Objectives: The study measures components of the trauma system in the district hospital setting of the Cavan/Monaghan Hospital Group (CMHG), HSE, Ireland, and uses the collected data to identify the strengths and weaknesses of the CMHG trauma system organisation, including governance, injury data, prevention and quality improvement, scene care and facility-based care, and rehabilitation. The information will be made available to local policy makers to provide an objective situational analysis to assist in future trauma service planning and provision. Design, setting and participants: From 28 April to 28 May 2016, a cross-sectional survey using the World Health Organisation (WHO) Trauma System Assessment Tool (TSAT) was conducted among healthcare professionals directly involved in the level III trauma system of CMHG. Main outcomes: Identification of the strengths and weaknesses of the trauma system of CMHG. Results: A high proportion of participants reported inadequate funding for pre-hospital (62.3%) and facility-based (52.5%) trauma care at CMHG. Thirty-four respondents (55.7%) reported that a national trauma registry (TARN) exists but that electronic health records are still not used in trauma care. Twenty-one respondents (34.4%) reported that there are system-wide protocols for determining patient destination and that adequate, comprehensive legislation governing the use of ambulances is enforced; however, there is a lack of a reliable advisory service. Over 40% of the respondents reported uncertainty about the injury prevention programmes available in Ireland, as well as about the government funding allocated for injury and violence prevention. Conclusions: The results of this study contributed to a comprehensive assessment of the trauma system organisation. 
The major findings of the study identified three fundamental areas: the inadequate funding at CMHG, the QI techniques and corrective strategies used, and the unfamiliarity with existing prevention strategies. The findings point to the need for further research to guide future development of the trauma system at CMHG (and in Ireland as a whole) in order to maximise best practice and to improve functional and life outcomes.
Keywords: trauma, education, management, system
Procedia PDF Downloads 242
1746 Occasional Word-Formation in Postfeminist Fiction: Cognitive Approach
Authors: Kateryna Nykytchenko
Abstract:
Modern fiction and non-fiction writers commonly use their own lexical and stylistic devices to capture the reader’s attention and convey certain thoughts and feelings. Among such devices is one of the neologic notions: individual authorial formations, known as occasionalisms or nonce words. A great many examples of such new words occur in the chick lit genre, which has experienced exponential growth in recent years. Chick lit is a new-millennial postfeminist fiction which focuses primarily on twenty- to thirtysomething middle-class women. It brings into focus the image of 'a new woman' of the 21st century, who is always fallible and funny. This paper aims to investigate the different types of occasional word-formation which reflect cognitive mechanisms of conveying women’s perception of the world. The chick lit novels of the Irish author Marian Keyes present a genuinely innovative mixture of forms, both literary and nonliterary, displayed in different types of occasional word-formation processes such as blending, compounding, and creative respelling. Crossing existing mental and linguistic boundaries and adapting herself to new and overlapping linguistic spaces, the chick lit author creates new words which demonstrate the development and progress of language and the relationship between language, thought, and new reality, ultimately resulting in hybrid word-formation (e.g., affixation or pseudoborrowing). Moreover, this article attempts to present the main characteristics of the chick lit genre through Marian Keyes’s novels and their influence on occasionalisms. There has been a lack of research concerning the cognitive nature of occasionalisms. The current paper intends to account for occasional word-formation as a set of interconnected cognitive mechanisms, operations, and procedures that meld together to create a new word. 
The results of the generalized analysis support the argument that the kind of new knowledge an occasionalism manifests is inextricably linked with the cognitive procedure underlying it, which results in the corresponding type of word-formation process. In addition, the findings of the study reveal that the necessity of creating occasionalisms in postmodern fiction arises from the need to write in a new way that keeps up with a perpetually developing world, and thus with the evolution of the speaker herself and her perception of the world.
Keywords: Chick Lit, occasionalism, occasional word-formation, cognitive linguistics
Procedia PDF Downloads 180
1745 Detailed Analysis of Multi-Mode Optical Fiber Infrastructures for Data Centers
Authors: Matej Komanec, Jan Bohata, Stanislav Zvanovec, Tomas Nemecek, Jan Broucek, Josef Beran
Abstract:
With the exponential growth of social networks, video streaming, and demands for higher data rates, the number of newly built data centers rises proportionately. The data centers, however, have to adjust to the rapidly increasing amount of data that has to be processed. For this purpose, multi-mode (MM) fiber based infrastructures are often employed. This stems from the fact that connections in data centers are typically realized over short distances, where the application of MM fibers and components considerably reduces costs. On the other hand, the use of MM components brings specific requirements for installation and service conditions. Moreover, it has to be taken into account that MM fiber components have higher production tolerances for parameters like core and cladding diameters, eccentricity, etc. Due to the high demands on the reliability of data center components, determining the properly excited optical field inside the MM fiber core is among the key tasks when designing such an MM optical system architecture. An appropriately excited mode field of the MM fiber provides an optimal power budget in connections, decreases insertion losses (IL), and achieves the effective modal bandwidth (EMB). The main parameter in this case is the encircled flux (EF), which should be properly defined for variable optical sources and the consequent differences in mode-field distribution. In this paper, we present a detailed investigation and measurements of the mode-field distribution for short MM links intended in particular for data centers, with an emphasis on reliability and safety. These measurements are essential for large MM network design. Various scenarios, containing different fibers and connectors, were tested in terms of IL and mode-field distribution to reveal potential challenges. 
Furthermore, we focused on particular defects and errors that can realistically occur, such as eccentricity, connector shifting, or dust; these were simulated and measured, and their influence on EF statistics and on the functionality of the data center infrastructure was evaluated. The experimental tests were performed at two wavelengths commonly used in MM networks, 850 nm and 1310 nm, to verify the EF statistics. Finally, we provide recommendations for data center systems and networks using OM3 and OM4 MM fiber connections.
Keywords: optical fiber, multi-mode, data centers, encircled flux
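Encircled flux is the cumulative fraction of optical power contained within a given radius of the fiber core. A minimal numerical sketch of how an EF curve is obtained from a radial near-field profile follows; the Gaussian-like profile and the 25 µm core radius are illustrative assumptions, not measured data from this work.

```python
import numpy as np

def encircled_flux(r, intensity):
    """Cumulative fraction of optical power within radius r, computed from a
    radial near-field intensity profile by integrating I(rho) * 2*pi*rho
    over annuli with the trapezoidal rule, then normalizing."""
    integrand = intensity * 2.0 * np.pi * r
    annuli = (integrand[1:] + integrand[:-1]) / 2.0 * np.diff(r)
    cumulative = np.concatenate(([0.0], np.cumsum(annuli)))
    return cumulative / cumulative[-1]

# hypothetical near-field intensity over a 25 um core radius (OM3/OM4-like)
r_um = np.linspace(0.0, 25.0, 501)
profile = np.exp(-((r_um / 12.0) ** 2))
ef = encircled_flux(r_um, profile)
```

An under-filled launch concentrates power near the axis, so its EF curve rises quickly at small radii, while an over-filled launch spreads power toward the cladding; comparing such curves against EF template limits is how launch conditions for MM links are qualified in practice.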
Procedia PDF Downloads 375