Search results for: black start
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1774

304 An Analysis of Emmanuel Macron's Campaign Discourse

Authors: Robin Turner

Abstract:

In the context of strengthening conservative movements such as “Brexit” and the election of US President Donald Trump, the global political stage was shaken up by the election of Emmanuel Macron to the French presidency, defeating the far-right candidate Marine Le Pen. The election itself was a first for the Fifth Republic in that neither final candidate came from the two traditional major political parties: the left Parti Socialiste (PS) and the right Les Républicains (LR). Macron, who served as Minister of the Economy under his predecessor, founded the centrist liberal political party En Marche! in April 2016 before resigning from his post in August to launch his bid for the presidency. Between the party’s creation and the first round of elections a year later, Emmanuel Macron and En Marche! garnered enough support to reach the run-off election, finishing far ahead of many seasoned national political figures. Now months into his presidency, the youngest President of the Republic shows no sign of losing momentum anytime soon. His unprecedented success raises many questions with respect to international relations, economics, and the evolving relationship between the French government and its citizens. The effectiveness of Macron’s campaign relies, of course, on many factors, one of which is his manner of communicating his platform to French voters. Using data from oral discourse and primary material from Macron and En Marche! in sources such as party publications and Twitter, the study categorizes linguistic instruments – address, lexicon, tone, register, and syntax – to identify prevailing patterns of speech and communication. The linguistic analysis in this project is two-fold. First, in addition to these findings’ stand-alone value, the discourse patterns are contextualized through comparable discourse of other 2017 presidential candidates, with particular emphasis on that of Marine Le Pen.
Secondly, to provide an alternative approach, the study contextualizes Macron’s discourse using that of two immediate predecessors representing the traditional stronghold parties, François Hollande (PS) and Nicolas Sarkozy (LR). These comparative methods produce an analysis that offers insight not only into a contributing factor in Macron’s successful 2017 campaign but also into how Macron’s platform presents itself differently from previous presidential platforms. Furthermore, the study supplies data that contributes to a wider analysis of the defeat of “traditional” French political parties by the “start-up” movement En Marche!.

Keywords: Emmanuel Macron, French, discourse analysis, political discourse

Procedia PDF Downloads 234
303 Relationship between Readability of Paper-Based Braille and Character Spacing

Authors: T. Nishimura, K. Doi, H. Fujimoto, T. Wada

Abstract:

The number of people with acquired visual impairments has increased in recent years. In specialized courses at schools for the blind and in Braille lessons offered by social welfare organizations, many people with acquired visual impairments cannot learn to read Braille adequately. One reason is that the standard Braille patterns, designed for readers who already have mature Braille reading skills, are difficult for beginners to read. In addition, Braille book manufacturers have scant knowledge of which Braille patterns are easy for beginners to read. It is therefore necessary to investigate Braille patterns that beginners can read easily. To obtain such knowledge, this study aimed to elucidate the relationship between the readability of paper-based Braille and its patterns. The study focused on character spacing, which readily affects Braille reading ability, to determine a suitable character spacing ratio (the ratio of character spacing to dot spacing) for beginners. Specifically, considering beginners with acquired visual impairments who are unfamiliar with reading Braille, we quantitatively evaluated the effect of the character spacing ratio on Braille readability through an experiment with sighted subjects who had no experience of reading Braille. In this experiment, ten blindfolded sighted adults were asked to read a test piece (three Braille characters); the Braille used as the test piece was composed of five dots. Subjects were asked to touch the Braille by sliding their forefinger along the test piece immediately after the examiner gave the signal to start, and to lift their forefinger from the test piece once they had perceived the Braille characters.
Seven conditions of character spacing ratio were tested (1.2, 1.4, 1.5, 1.6, 1.8, 2.0, and 2.2), along with four conditions of dot spacing (2.0, 2.5, 3.0, and 3.5 mm). Ten trials were conducted for each condition. The test pieces were produced with NISE Graphic, which can print Braille with arbitrary character spacing and dot spacing at high accuracy. We adopted correct rate, reading time, and subjective readability as evaluation indices to investigate how the character spacing ratio affects Braille readability. The results showed that Braille reading beginners could read Braille accurately and quickly when the character spacing ratio was at least 1.8 and the dot spacing at least 3.0 mm. Conversely, beginners found it difficult to read Braille accurately and quickly when both character spacing and dot spacing were small. This study thus reveals a character spacing ratio that makes reading easier for Braille beginners.
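The readability criterion reported above can be expressed as a simple check. This is only a sketch: the threshold values come from the abstract's findings, while the function names and interface are illustrative assumptions.

```python
# Sketch of the study's reported readability criterion for Braille beginners.
# Thresholds (ratio >= 1.8, dot spacing >= 3.0 mm) come from the abstract;
# the function names and interface are illustrative assumptions.

def character_spacing_ratio(character_spacing_mm: float, dot_spacing_mm: float) -> float:
    """Ratio of character spacing to dot spacing, as defined in the study."""
    return character_spacing_mm / dot_spacing_mm

def beginner_friendly(character_spacing_mm: float, dot_spacing_mm: float) -> bool:
    """True if the layout meets the thresholds the study found readable."""
    ratio = character_spacing_ratio(character_spacing_mm, dot_spacing_mm)
    return ratio >= 1.8 and dot_spacing_mm >= 3.0

# Example: 6.0 mm character spacing on a 3.0 mm dot grid gives ratio 2.0.
print(beginner_friendly(6.0, 3.0))  # True
print(beginner_friendly(3.6, 3.0))  # ratio 1.2 -> False
```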

Keywords: Braille, character spacing, people with visual impairments, readability

Procedia PDF Downloads 261
302 An Inexhaustible Will of Infinite, or the Creative Will in the Psychophysiological Artistic Practice: An Analysis through Nietzsche's Will to Power

Authors: Filipa Cruz, Grecia P. Matos

Abstract:

An Inexhaustible Will of Infinite is an ongoing practice-based research project focused on a psychophysiological conception of the body and on the creative will, which seeks to examine the possibility of art being simultaneously a pacifier and an intensifier in physiological artistic production. It is a study in which philosophy and art converge in a commentary on the influence of the concept of the will to power in the art world, through Nietzsche’s commentaries, the analysis of case studies, and a reflection arising from artistic practice. Through Nietzsche, the study compares concepts that speak to artistic practice, since creation is an intensification and engenders perspectives. It is also a practice deeply embedded in the body, in the non-verbal, in the physiology of art, and in the coexistence of the sensorial and the thought. The study asks whether the physiology of art could be thought of as a thinking-feeling with no primacy of thought over the sensorial. Art, as a manifestation of the will to power, participates in a comprehension of the world. In this article, art is taken as a privileged way of communicating – implicating the corporeal, the sensorial, and the conceptual – and of connecting humans. The dream and drunkenness are problematized as intensifications and expressions of life’s comprehension. Art is therefore perceived as suggestion and invention, where artistic intoxication breaks limits in the experience of life, and the artist, dominated by creative forces, claims, orders, obeys, and proclaims love for life. The intention is also to consider how one can start from pain to create, and how one can generate new and endless artistic forms through nightmares, daydreams, impulses, intoxication, enhancement, and intensification in a plurality of subjects and matters. Artistic creation is taken to be something intensified corporeally, expanded, continuously generated, and acting on bodies.
It is inextinguishable, a constant movement intertwining the Apollonian and Dionysian instincts of destruction and creation of new forms. The concept of love also appears, associated with conquest: in a process of intensification and drunkenness, it impels the artist to generate and transform matter. Like a love relationship, love in Nietzsche requires time, patience, effort, courage, conquest, seduction, obedience, and command, potentiating an amplified knowledge of the other and of the world. Interlacing Nietzsche's philosophy not with Modern Art but with Contemporary Art, it is argued that intoxication, the will to power (strongly connected with the creative will), and love still have a place in artistic production as creative agents.

Keywords: artistic creation, body, intensification, psychophysiology, will to power

Procedia PDF Downloads 96
301 Crafting of Paper Cutting Techniques for Embellishment of Fashion Textiles

Authors: A. Vaidya-Soocheta, K. M. Wong-Hon-Lang

Abstract:

Craft and fashion have always been interlinked, and the combination of the two often gives stunning results. The present study introduces paper cutting craft techniques – the Japanese Kirigami, the Mexican Papel Picado, the German Scherenschnitte, and the Polish Wycinanki – into textiles to develop innovative and novel design structures as embellishment and ornamentation. The project studies various ways of using these paper cutting techniques to obtain interesting features and delicate design patterns on fabrics. While paper has its advantages and related uses, it is fragile and rigid and thus not appropriate for clothing; fabric is sturdy, flexible, dimensionally stable, and washable. In the present study, cut-out techniques are used to develop creative design motifs and patterns that give an inventive and unique appeal to fabrics. The beauty and fascination of lace in garments have always lent them a nostalgic charm: laces, with their intricate and delicate complexity, in combination with other materials add a feminine touch to a garment and give it a romantic, mysterious appeal. Various textured and decorative effects through fabric manipulation are explored, along with the use of paper cutting craft skills as an innovative substitute for developing lace or “Broderie Anglaise” effects on textiles. A number of assorted fabric types with varied textures were selected for the study, and techniques to avoid fraying and unraveling of the design-cut fabrics were introduced. Fabrics were further manipulated by the use of interesting prints with embossed effects on the cut-outs. Fabric layering in combination with assorted techniques such as cutting of folded fabric, printing, appliqué, embroidery, crochet, braiding, and weaving added a novel exclusivity to the fabrics. The fabrics developed by these innovative methods were then tailored into garments.
The study thus tested the feasibility and practicability of using these fabrics by designing a collection of evening wear garments based on the theme ‘Nostalgia’. The prototypes developed were complemented by fashion accessories designed with the crafted fabrics, which add interesting features to the study. The adaptation and application of this novel paper cutting craft technique to textiles can be an innovative start for a new trend in the textile and fashion industry, and the study anticipates that the technique will open new avenues in the world of fashion for commercial use.

Keywords: collection, fabric cutouts, nostalgia, prototypes

Procedia PDF Downloads 332
300 Hybrid Model: An Integration of Machine Learning with Traditional Scorecards

Authors: Golnush Masghati-Amoli, Paul Chin

Abstract:

In recent years, with rapid increases in data availability and computing power, Machine Learning (ML) techniques have been called on in a range of industries for their strong predictive capability. However, the use of Machine Learning in commercial banking has been limited by a special challenge imposed by numerous regulations that require lenders to be able to explain their analytic models, not only to regulators but often to consumers. In other words, although Machine Learning techniques enable better prediction with a higher level of accuracy, they are adopted less frequently in commercial banking than in other industries, especially for scoring purposes. This is because Machine Learning techniques are often considered a black box and fail to provide information on why a certain risk score is given to a customer. In order to bridge this gap between the explainability and performance of Machine Learning techniques, a Hybrid Model was developed at Dun & Bradstreet that blends Machine Learning algorithms with traditional approaches such as scorecards. The Hybrid Model maximizes the efficiency of traditional scorecards by merging their practical benefits, such as explainability and the ability to input domain knowledge, with the deep insights of Machine Learning techniques, which can uncover patterns that scorecard approaches cannot. First, through the development of Machine Learning models, engineered features, latent variables, and feature interactions that demonstrate high information value in the prediction of customer risk are identified. Then, these features are employed to introduce observed non-linear relationships between the explanatory and dependent variables into traditional scorecards.
Moreover, instead of directly computing the Weight of Evidence (WoE) from good and bad data points, the Hybrid Model tries to match the score distribution generated by a Machine Learning algorithm, which yields an estimate of the WoE for each bin. This capability helps to build powerful scorecards in sparse-data cases that cannot be handled with traditional approaches. The proposed Hybrid Model is tested on different portfolios where a significant gap is observed between the performance of traditional scorecards and Machine Learning models. The analysis shows that the Hybrid Model can improve the performance of traditional scorecards by introducing non-linear relationships between explanatory and target variables from Machine Learning models into traditional scorecards. It is also observed that in some scenarios the Hybrid Model can be almost as predictive as the Machine Learning techniques while remaining as transparent as traditional scorecards. It is therefore concluded that, with the Hybrid Model, Machine Learning algorithms can be used in the commercial banking industry without concern about the difficulty of explaining the models for regulatory purposes.
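The traditional Weight of Evidence computation that the Hybrid Model contrasts with can be sketched as follows; this is a minimal illustration with invented bin labels and counts, not the authors' implementation.

```python
import math

# Illustrative sketch of the traditional Weight of Evidence (WoE) computation
# the abstract contrasts with the Hybrid Model's distribution-matching
# approach. Bin labels and counts below are made-up example data.

def weight_of_evidence(goods_by_bin, bads_by_bin):
    """WoE per bin: ln((good_i / total_goods) / (bad_i / total_bads))."""
    total_good = sum(goods_by_bin.values())
    total_bad = sum(bads_by_bin.values())
    woe = {}
    for bin_label in goods_by_bin:
        good_rate = goods_by_bin[bin_label] / total_good
        bad_rate = bads_by_bin[bin_label] / total_bad
        woe[bin_label] = math.log(good_rate / bad_rate)
    return woe

goods = {"low": 80, "mid": 60, "high": 20}  # non-defaulting customers per score bin
bads = {"low": 5, "mid": 15, "high": 20}    # defaulting customers per score bin
for label, value in weight_of_evidence(goods, bads).items():
    print(f"{label}: {value:+.3f}")
```

A bin with few good or bad observations makes these ratios unstable, which is exactly the sparse-data situation in which the abstract's distribution-matching estimate is said to help.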

Keywords: machine learning algorithms, scorecard, commercial banking, consumer risk, feature engineering

Procedia PDF Downloads 110
299 Characterization of Aerosol Particles in Ilorin, Nigeria: Ground-Based Measurement Approach

Authors: Razaq A. Olaitan, Ayansina Ayanlade

Abstract:

Understanding aerosol properties is a main goal of global research aimed at lowering the uncertainty that aerosol particles contribute to climate change trends and magnitudes. To identify aerosol particle types, optical properties, and the relationship between aerosol properties and particle concentration between 2019 and 2021, this study examined data from the AERONET (Aerosol Robotic Network) ground-based sun/sky scanning radiometer at Ilorin, Nigeria. The AERONET version 2 algorithm was utilized to retrieve monthly data on aerosol optical depth and the Angstrom exponent, while the version 3 algorithm, an almucantar level 2 inversion, was employed to retrieve daily data on single scattering albedo and aerosol size distribution. Excel 2016 was used to compute monthly, seasonal, and annual mean values. The distribution of different aerosol types was analyzed using scatterplots, the optical properties of the aerosol were investigated using pertinent mathematical relations, and correlation statistics were employed to understand the relationships between particle concentration and particle properties. Based on the premise that aerosol characteristics must remain consistent in both magnitude and trend across time and space, the study's findings indicate that the aerosol types identified between 2019 and 2021 are as follows: 29.22% urban industrial (UI), 37.08% desert (D), 10.67% biomass burning (BB), and 23.03% urban mix (Um). Peak columnar aerosol loadings were observed in August of each study year and are attributed to convective wind systems, which frequently carry particles over long distances in the atmosphere. The study has shown that while coarse-mode particles dominate, fine particles are increasing in seasonal and annual trends; these trends are linked to biomass burning and human activities in the city.
The study found that the majority of particles are highly absorbing black carbon, with the fine mode having a volume median radius of 0.08 to 0.12 µm. The investigation also revealed a positive correlation (r = 0.57) between changes in aerosol particle concentration and changes in aerosol properties. Human activity is increasing rapidly in Ilorin and is altering aerosol properties, indicating potential health risks from climate change and from human influence on geological and environmental systems.
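The correlation statistic reported above (r = 0.57) is a Pearson coefficient; the sketch below shows how such a coefficient is computed, using invented toy series rather than the study's AERONET data.

```python
import math

# Sketch of the Pearson correlation used above to relate changes in aerosol
# particle concentration to changes in aerosol properties (reported r = 0.57).
# The two series below are invented toy values, not AERONET retrievals.

def pearson_r(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / math.sqrt(var_x * var_y)

aod = [0.3, 0.5, 0.4, 0.7, 0.6]       # toy monthly aerosol optical depth
concentration = [20, 35, 30, 50, 38]  # toy particle concentration index
print(round(pearson_r(aod, concentration), 2))
```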

Keywords: aerosol loading, aerosol types, health risks, optical properties

Procedia PDF Downloads 31
298 [Keynote Talk]: Monitoring of Ultrafine Particle Number and Size Distribution at One Urban Background Site in Leicester

Authors: Sarkawt M. Hama, Paul S. Monks, Rebecca L. Cordell

Abstract:

Within the Joaquin project, ultrafine particles (UFP) are continuously measured at one urban background site in Leicester. The main aims are to examine the temporal and seasonal variations in UFP number concentration and size distribution in an urban environment, and to assess the added value of continuous UFP measurements. In addition, the relations of UFP with more commonly monitored pollutants such as black carbon (BC), nitrogen oxides (NOX), particulate matter (PM2.5), and the lung-deposited surface area (LDSA) were evaluated. The effects of meteorological conditions, particularly wind speed and direction, and of temperature on the observed distribution of ultrafine particles are detailed. The study presents results from an experimental investigation into the particle number concentration and size distribution of UFP, BC, and NOX, with measurements taken at the Automatic Urban and Rural Network (AURN) monitoring site in Leicester. The monitoring was performed as part of the EU project JOAQUIN (Joint Air Quality Initiative) supported by the INTERREG IVB NWE program. Between November 2013 and November 2015, total number concentrations (TNC) were measured by a water-based condensation particle counter (W-CPC, TSI model 3783); particle number concentrations (PNC) and size distributions were measured by an ultrafine particle monitor (UFP TSI model 3031); BC was measured by a MAAP (Thermo 5012); NOX by an NO-NO2-NOx monitor (Thermo Scientific 42i); and a Nanoparticle Surface Area Monitor (NSAM, TSI 3550) was used to measure the LDSA (reported in µm² cm⁻³) corresponding to the alveolar region of the lung. Lower average particle number concentrations were observed in summer than in winter, which might be related mainly to particles directly emitted by traffic and to the more favorable atmospheric dispersion conditions in summer.
Results showed a traffic-related diurnal variation of UFP, BC, NOX, and LDSA, with clear morning and evening rush-hour peaks on weekdays and only an evening peak at weekends. Correlation coefficients were calculated between UFP and the other pollutants (BC and NOX); the highest correlations were found in the winter months. Overall, the results support the notion that local traffic emissions were a major contributor to atmospheric particle pollution, and a clear seasonal pattern was found, with higher values during the cold season.
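The diurnal-variation analysis described above amounts to averaging a pollutant time series by hour of day to expose the rush-hour peaks; a minimal sketch with invented readings, not AURN data:

```python
from collections import defaultdict
from datetime import datetime

# Minimal sketch of a diurnal-variation analysis: hourly averaging of a
# pollutant time series to expose rush-hour peaks. The timestamped readings
# below are invented; real data would come from the AURN monitoring site.

def diurnal_profile(readings):
    """Average concentration per hour of day from (timestamp, value) pairs."""
    by_hour = defaultdict(list)
    for timestamp, value in readings:
        by_hour[timestamp.hour].append(value)
    return {hour: sum(vals) / len(vals) for hour, vals in sorted(by_hour.items())}

readings = [
    (datetime(2014, 3, 3, 8, 0), 24000),   # weekday morning rush hour
    (datetime(2014, 3, 4, 8, 30), 26000),
    (datetime(2014, 3, 3, 13, 0), 11000),  # midday lull
    (datetime(2014, 3, 3, 18, 0), 21000),  # evening rush hour
]
print(diurnal_profile(readings))  # hour 8 averages to 25000.0
```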

Keywords: size distribution, traffic emissions, UFP, urban area

Procedia PDF Downloads 309
297 Bringing German History to Tourists

Authors: Gudrun Görlitz, Christian Schölzel, Alexander Vollmar

Abstract:

“Sites of Jewish Life in Berlin 1933-1945: Between Persecution and Self-assertion” was realized in a project funded by the European Regional Development Fund. A smartphone app and an associated website enable tourists and other users of this educational offering to learn, in a serious way, more about the life of Jews in the German capital during the Nazi era. Texts, photos, and video and audio recordings communicate the historical content, and interactive maps (both current and historical) make it possible to follow predefined or self-combined routes. One of the manifold challenges was to create a broadly ranged guide in which all detailed information is well linked, enabling heterogeneous groups of potential users to find a wide range of specific information corresponding to their particular wishes and interests. The multitude of potential ways to navigate through the diversified information will (hopefully) lead users to return to the app and website a second or third time with continued interest. To this end, 90 locations, many of them situated in Berlin’s city centre, have been chosen; for all of them, text, picture, and/or audio/video material provides extensive information. Suggested combinations of several of these “site stories” lead to detailed excursion routes. Events and biographies are also presented, and a few of the implemented biographies are especially enriched with source material concerning the (forced) migration of these persons during the Nazi era. All this was done in close and fruitful interdisciplinary cooperation between computer scientists and historians. This paper aims to show the challenges of shaping complex source material for practical use by different user groups in a technically and didactically proper way.
Based on historical research in archives, museums, libraries, and digital resources, the paper focuses on the following historiographical and technical aspects:
- shaping the text material didactically for use in new media, especially a smartphone app running on differing platforms;
- geo-referencing the sites on historical and current map material;
- overlaying old and new maps to present and find the sites;
- using Augmented Reality technologies to re-visualize destroyed buildings;
- visualization of black-and-white picture material;
- presentation of historical footage and the resulting storage-space problems;
- financial and legal aspects of gaining the copyrights to present archival material.
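The geo-referencing aspect could, for instance, match a tourist's GPS position to the nearest of the 90 sites by great-circle distance. The sketch below is a hypothetical illustration: the site names and coordinates are invented for the example, not the project's actual data.

```python
import math

# Hypothetical sketch of a geo-referencing step: given the tourist's GPS
# position, find the nearest site by great-circle (haversine) distance.
# Site names and coordinates below are illustrative, not the project's data.

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

sites = {
    "Neue Synagoge": (52.5247, 13.3946),
    "Bayerisches Viertel": (52.4889, 13.3410),
}
visitor = (52.5200, 13.4050)  # somewhere in Berlin Mitte
nearest = min(sites, key=lambda name: haversine_km(*visitor, *sites[name]))
print(nearest)  # Neue Synagoge
```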

Keywords: smartphone app, history, tourists, German

Procedia PDF Downloads 346
296 Visual Representation and the De-Racialization of Public Spaces

Authors: Donna Banks

Abstract:

In 1998, Winston James called for more research on the Caribbean diaspora, and this ethnographic study, incorporating participant observation, interviews, and archival research, adds to the scholarship in this area. The research is grounded in the discipline of cultural studies but is cross-disciplinary in nature, engaging anthropology, psychology, and urban planning. This paper centers on community murals and their contribution to a more culturally diverse and representative community. While many museums are reassessing their collections, acquiring works, and developing programming to be more inclusive, and public art programs are investing millions of dollars in trying to fashion an identity in which all residents can feel included, local artists in neighborhoods in many countries have been using community murals to tell their stories. Community murals serve a historical, political, and social purpose and are an instrumental strategy in creative placemaking projects; they add to the livability of an area. Even though official measurements of livability do not include race, ethnicity, and gender, which are egregious omissions, murals are a way to integrate historically underrepresented people into the wider history of a country. This paper draws attention to a creative placemaking project in the port city of Bristol, England, a city that, like many others, has a history of spatializing race and racializing space. For this reason, Bristol’s Seven Saints of St. Pauls® Art & Heritage Trail, which memorializes seven Caribbean-born social and political change agents, is examined. The trail is crucial to the city, as well as to the country, in its contribution to the de-racialization of public spaces. Within British art history, with few exceptions, portraits of non-White people who are not depicted in a subordinate role have been absent.
The artist of the mural project, Michelle Curtis, has changed this long-lasting racist and hegemonic narrative. By creating seven large-scale portraits of individuals not typically represented visually, the artist has added them into Britain’s story. In these murals, however, we see more than just the likeness of a person; we are presented with a visual commentary that reflects each Saint’s hybrid identity of being both Black Caribbean and British, as well as their social and political involvement. Additionally, because the mural project is part of a heritage trail, the murals are therapeutic and contribute to improving the well-being of residents and strengthening their sense of belonging.

Keywords: belonging, murals, placemaking, representation

Procedia PDF Downloads 69
295 RA-Apriori: An Efficient and Faster MapReduce-Based Algorithm for Frequent Itemset Mining on Apache Flink

Authors: Sanjay Rathee, Arti Kashyap

Abstract:

Extraction of useful information from large datasets is one of the most important research problems, and association rule mining is one of the best methods for this purpose. Finding possible associations between items in large transaction-based datasets (finding frequent patterns) is the most important part of association rule mining. Many algorithms exist to find frequent patterns, but the Apriori algorithm remains a preferred choice due to its ease of implementation and natural tendency to be parallelized. Many single-machine Apriori variants exist, but the massive amounts of data available these days exceed the capacity of a single machine. Therefore, to meet the demands of this ever-growing data, a multi-machine Apriori algorithm is needed. For such distributed applications, MapReduce is a popular fault-tolerant framework, and Hadoop is one of the best open-source software frameworks using the MapReduce approach for distributed storage and processing of huge datasets on clusters built from commodity hardware. However, the heavy disk I/O at each iteration of a highly iterative algorithm like Apriori makes Hadoop inefficient. A number of MapReduce-based platforms have been developed for parallel computing in recent years; among them, Spark and Flink have attracted a lot of attention because of their built-in support for distributed computation. Earlier, we proposed a reduced-Apriori algorithm on the Spark platform which outperforms parallel Apriori, first because of the use of Spark and second because of the improvement we proposed to standard Apriori. This work is therefore a natural sequel: it targets implementing, testing, and benchmarking Apriori, Reduced-Apriori, and our new ReducedAll-Apriori algorithm on Apache Flink, and compares them with the Spark implementation.
Flink, a streaming dataflow engine, overcomes the disk I/O bottlenecks of MapReduce, providing an ideal platform for distributed Apriori. Flink's pipelined architecture allows the next iteration to start as soon as partial results of the previous iteration are available; there is no need to wait for all reducers' results before starting the next iteration. We conduct in-depth experiments to gain insight into the effectiveness, efficiency, and scalability of the Apriori and RA-Apriori algorithms on Flink.
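The standard Apriori iteration that the abstract parallelizes can be sketched on a single machine as follows. This is a simplified illustration with toy transactions: the candidate-join step omits full subset-based pruning, and the counting pass is what a Spark or Flink implementation would distribute.

```python
from itertools import combinations

# Single-machine sketch of the standard Apriori iteration: count candidate
# itemsets, prune by minimum support, then join survivors to form the next
# candidate level. Toy transactions only; on Flink/Spark the counting scan
# over the data would be the parallel map/reduce step.

def apriori(transactions, min_support):
    transactions = [frozenset(t) for t in transactions]
    items = {item for t in transactions for item in t}
    candidates = [frozenset([item]) for item in items]
    frequent = {}
    k = 1
    while candidates:
        # One pass over the data per level to count candidate support.
        counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
        survivors = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(survivors)
        # Join step: merge frequent k-itemsets into (k+1)-candidates.
        k += 1
        candidates = {a | b for a, b in combinations(survivors, 2) if len(a | b) == k}
    return frequent

transactions = [{"bread", "milk"}, {"bread", "butter"}, {"bread", "milk", "butter"}]
for itemset, count in apriori(transactions, min_support=2).items():
    print(set(itemset), count)
```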

Keywords: Apriori, Apache Flink, MapReduce, Spark, Hadoop, R-Apriori, frequent itemset mining

Procedia PDF Downloads 261
294 Utilizing Artificial Intelligence to Predict Post Operative Atrial Fibrillation in Non-Cardiac Transplant

Authors: Alexander Heckman, Rohan Goswami, Zachi Attia, Paul Friedman, Peter Noseworthy, Demilade Adedinsewo, Pablo Moreno-Franco, Rickey Carter, Tathagat Narula

Abstract:

Background: Postoperative atrial fibrillation (POAF) is associated with adverse health consequences, higher costs, and longer hospital stays. Utilizing existing predictive models that rely on clinical variables and circulating biomarkers, multiple societies have published recommendations on the treatment and prevention of POAF. Although reasonably practical, there is room for improvement and automation to help individualize treatment strategies and reduce associated complications. Methods and Results: In this retrospective cohort study of solid organ transplant recipients, we evaluated the diagnostic utility of a previously developed AI-based ECG prediction for silent AF on the development of POAF within 30 days of transplant. A total of 2261 non-cardiac transplant patients without a preexisting diagnosis of AF were found to have a 5.8% (133/2261) incidence of POAF. While there were no apparent sex differences in POAF incidence (5.8% males vs. 6.0% females, p=0.80), there were differences by race and ethnicity (p<0.001 and p=0.035, respectively). The incidence in white transplanted patients was 7.2% (117/1628), whereas the incidence in black patients was 1.4% (6/430). Lung transplant recipients had the highest incidence of postoperative AF (17.4%, 37/213), followed by liver (5.6%, 56/1002) and kidney (3.6%, 32/895) recipients. The AUROC in the sample was 0.62 (95% CI: 0.58-0.67). The relatively low discrimination may result from undiagnosed AF in the sample. In particular, 1,177 patients had at least one AI-ECG screen for AF pre-transplant above 0.10, a value slightly higher than the published threshold of 0.08. The incidence of POAF in the 1104 patients without an elevated prediction pre-transplant was lower (3.7% vs. 8.0%; p<0.001). While this supported the hypothesis that potentially undiagnosed AF may have contributed to the diagnosis of POAF, the utility of the existing AI-ECG screening algorithm remained modest.
When the prediction for POAF was made using the first postoperative ECG in the sample without an elevated screen pre-transplant (n=1084, on account of n=20 missing a postoperative ECG), the AUROC was 0.66 (95% CI: 0.57-0.75). While this discrimination is relatively low, at a threshold of 0.08 the AI-ECG algorithm had a 98% (95% CI: 97-99%) negative predictive value at a sensitivity of 66% (95% CI: 49-80%). Conclusions: This study's principal finding is that the incidence of POAF is rare and that a considerable fraction of POAF cases may be latent and undiagnosed. The high negative predictive value of AI-ECG screening suggests utility in prioritizing monitoring and evaluation of transplant patients with a positive AI-ECG screen. Further development and refinement of a post-transplant-specific algorithm may be warranted to further enhance the diagnostic yield of ECG-based screening.
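The operating-point metrics reported above (sensitivity and negative predictive value at a fixed score threshold) can be computed as in the sketch below, which uses synthetic scores and labels rather than the study cohort; only the 0.08 threshold is taken from the abstract.

```python
# Sketch of the operating-point metrics reported above: sensitivity and
# negative predictive value (NPV) of a risk score at a fixed threshold
# (the abstract uses 0.08). Scores and labels are synthetic examples.

def screen_metrics(scores, labels, threshold):
    """Return (sensitivity, negative predictive value) at the threshold."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 0)
    sensitivity = tp / (tp + fn)  # fraction of true POAF cases flagged
    npv = tn / (tn + fn)          # fraction of negative screens that are correct
    return sensitivity, npv

scores = [0.02, 0.03, 0.15, 0.40, 0.05, 0.09, 0.01, 0.30]
labels = [0, 0, 1, 1, 1, 0, 0, 1]  # 1 = developed POAF
sens, npv = screen_metrics(scores, labels, threshold=0.08)
print(f"sensitivity={sens:.2f}, NPV={npv:.2f}")  # 0.75 and 0.75 on this toy data
```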

Keywords: artificial intelligence, atrial fibrillation, cardiology, transplant, medicine, ECG, machine learning

Procedia PDF Downloads 102
293 Application of Neuroscience in Aligning Instructional Design to Student Learning Style

Authors: Jayati Bhattacharjee

Abstract:

Teaching is a very dynamic profession, and teaching science is at least as challenging as learning it, if not more so; consider, for instance, the teaching of chemistry. From the introductory concepts of subatomic particles to the atoms of elements and their symbols, and onward to the chemical equation, the challenge sits on both sides of the teaching-learning equation. This paper combines the neuroscience of learning and memory with knowledge of learning styles (VAK) and presents an effective tool for the teacher to authenticate learning. The model of 'working memory' (the visuospatial sketchpad, the central executive, and the phonological loop that transforms short-term memory into long-term memory) supports the psychological theory of learning styles, i.e., Visual-Auditory-Kinesthetic. A closer examination of David Kolb's learning model suggests that learning requires abilities that are polar opposites, and that the learner must continually choose which set of learning abilities he or she will use in a specific learning situation. In grasping experience, some of us perceive new information by experiencing the concrete, tangible, felt qualities of the world, relying on our senses and immersing ourselves in concrete reality. Others tend to perceive, grasp, or take hold of new information through symbolic representation or abstract conceptualization: thinking about, analyzing, or systematically planning rather than using sensation as a guide. Similarly, in transforming or processing experience, some of us tend to carefully watch others who are involved in the experience and reflect on what happens, while others choose to jump right in and start doing things. The watchers favor reflective observation, while the doers favor active experimentation. Any lesson plan can be based on the model of prescriptive design: C+O=M (C: instructional condition; O: instructional outcome; M: instructional method).
The desired outcome and conditions are independent variables, whereas the instructional method is a dependent variable and can therefore be planned and suited to maximize the learning outcome. Assessment for learning, rather than of learning, can encourage and build confidence and hope among learners, and goes a long way toward replacing, with a human touch, the anxiety and hopelessness a student experiences while learning science. Application of this model has been tried in teaching chemistry to high school students as well as in workshops with teachers. The responses received have demonstrated the desired results.

Keywords: working memory model, learning style, prescriptive design, assessment for learning

Procedia PDF Downloads 323
292 Conservation of an Ibis Statue Made of Composite Materials Dating to the Third Intermediate Period - Late Period

Authors: Badawi Mahmoud, Eid Mohamed, Salih Hytham, Tahoun Mamdouh

Abstract:

Cultural properties are made of many types of materials, which we can classify broadly into three categories: organic cultural properties, which have their origin in the animal and plant kingdoms; inorganic cultural properties, made of metal or stone; and those made of both organic and inorganic materials, such as metal with wood. Most cultural properties are made from several materials rather than from one single material. Cultural properties reveal a lot of information about the past and often have great artistic value, so it is important to extend their life and, where possible, preserve them for future generations. The study of metallic relics usually includes examining the techniques used to make them and the extent to which they have corroded. The conservation science of archaeological artifacts demands an accurate grasp of the interior of the article, which cannot be seen; this is essential to elucidate the method of manufacture and provides information that is important for cleaning, restoration, and other processes of conservation. Conservation treatment does not ensure the prevention of further degradation of the archaeological artifact; instead, it is an attempt to inhibit further degradation as much as possible. Ancient metallic artifacts are made of many materials: some are made of a single metal, such as iron, copper, or bronze, while others are composite relics made of several metals. Almost all metals (except gold) corrode while they rest underground. Corrosion is caused by the interaction of oxygen, water, and various ions, with chloride ions playing a major role in its advance. Excavated metallic relics are usually examined scientifically as to their structure and materials and treated for preservation before being displayed for exhibition or placed in storage. The bird statue studied here has a body made of wood, with legs and beak of bronze; the object was broken into three parts.
This statue came to the Grand Egyptian Museum - Conservation Centre (GEM-CC) Inorganic Lab. The statuette represents the god Djehuty (Thoth) in the shape of a bird (ibis) and is made of bronze and wood: the body is wooden, while the head and legs are bronze. Remains of black resin were found, suggesting the statue may have been found with a mummy. The wooden base bears ancient writings that provide dating evidence; on this basis, the archaeological unit dated the piece to the Third Intermediate Period - Late Period. This study aims to carry out the conservation process for this statue, to attempt to inhibit further degradation as much as possible, and to fill the fractures and cracks in the wooden part.

Keywords: inorganic materials, metal, wood, corrosion, ibis

Procedia PDF Downloads 228
291 Informed Urban Design: Minimizing Urban Heat Island Intensity via Stochastic Optimization

Authors: Luis Guilherme Resende Santos, Ido Nevat, Leslie Norford

Abstract:

The Urban Heat Island (UHI) is characterized by increased air temperatures in urban areas compared to the undeveloped rural surrounding environment. With urbanization and densification, the intensity of the UHI increases, bringing negative impacts on livability, health, and the economy. To reduce those effects, design factors must be taken into consideration when planning future developments. Given design constraints such as population size and availability of area for development, non-trivial decisions regarding the buildings' dimensions and their spatial distribution are required. We develop a framework for the optimization of urban design that jointly minimizes UHI intensity and buildings' energy consumption. First, the design constraints are defined according to spatial and population limits in order to establish realistic boundaries that would be applicable to real-life decisions. Second, the tools Urban Weather Generator (UWG) and EnergyPlus are used to generate outputs of UHI intensity and total buildings' energy consumption, respectively. Those outputs change based on a set of variable inputs related to urban morphology, such as building height, urban canyon width, and population density. Lastly, an optimization problem is cast in which a utility function quantifies the performance of each design candidate (e.g., minimizing a linear combination of UHI intensity and energy consumption) subject to a set of constraints. Solving this optimization problem is difficult, since there is no simple analytic form representing the UWG and EnergyPlus models. We therefore cannot use any direct optimization techniques but instead develop an indirect "black box" optimization algorithm. To this end, we develop a solution based on a stochastic optimization method known as the Cross-Entropy method (CEM).
The CEM translates the deterministic optimization problem into an associated stochastic optimization problem that is simple to solve analytically. We illustrate our model on a typical residential area in Singapore. Due to fast population growth, an expanding built area, and land availability generated by land reclamation, urban planning decisions are of the utmost importance for the country. Furthermore, the hot and humid climate raises concern about the impact of the UHI. The problem presented is highly relevant to early urban design stages, and the objective of the framework is to guide decision makers and assist them in including and evaluating urban microclimate and energy aspects in the process of urban planning.
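The Cross-Entropy loop described above can be sketched in a few lines: sample candidate designs from a Gaussian, keep the best-scoring "elite" fraction, and refit the sampling distribution to the elites. The toy cost function below stands in for the UWG/EnergyPlus black box; its form, the dimensions, and the hyperparameters are assumptions for demonstration, not the authors' actual setup.

```python
import numpy as np

def cross_entropy_minimize(f, mu, sigma, n_samples=100, n_elite=10, n_iter=50):
    """Minimize a black-box function f with the Cross-Entropy method (CEM).

    Candidates are drawn from an independent Gaussian per dimension; each
    iteration the Gaussian is refit to the lowest-cost (elite) candidates.
    """
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    for _ in range(n_iter):
        samples = np.random.normal(mu, sigma, size=(n_samples, mu.size))
        scores = np.array([f(x) for x in samples])
        elite = samples[np.argsort(scores)[:n_elite]]   # lowest cost = best
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mu

# Toy surrogate for the simulators: a quadratic "UHI + energy" cost with a
# known optimum at (3.0, 1.5), e.g. a fictitious (canyon width, height ratio).
cost = lambda x: (x[0] - 3.0) ** 2 + 2.0 * (x[1] - 1.5) ** 2
np.random.seed(0)
best = cross_entropy_minimize(cost, mu=[0.0, 0.0], sigma=[2.0, 2.0])
print(best)  # converges near [3.0, 1.5]
```

Because only function evaluations are needed, the same loop works unchanged when `cost` calls an external simulator instead of an analytic expression.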

Keywords: building energy consumption, stochastic optimization, urban design, urban heat island, urban weather generator

Procedia PDF Downloads 109
290 Standardizing and Achieving Protocol Objectives for the Chest Wall Radiotherapy Treatment Planning Process Using an O-Ring Linac in High-, Low- and Middle-Income Countries

Authors: Milton Ixquiac, Erick Montenegro, Francisco Reynoso, Matthew Schmidt, Thomas Mazur, Tianyu Zhao, Hiram Gay, Geoffrey Hugo, Lauren Henke, Jeff Michael Michalski, Angel Velarde, Vicky de Falla, Franky Reyes, Osmar Hernandez, Edgar Aparicio Ruiz, Baozhou Sun

Abstract:

Purpose: Radiotherapy departments in low- and middle-income countries (LMICs) like Guatemala have recently introduced intensity-modulated radiotherapy (IMRT). IMRT has become the standard of care in high-income countries (HICs) due to reduced toxicity and improved outcomes in some cancers. The purpose of this work is to show the agreement between the dosimetric results shown in the dose-volume histograms (DVHs) and the objectives proposed in the adopted protocol. This is our initial experience with an O-ring Linac. Methods and Materials: An O-ring Linac was installed at our clinic in Guatemala in 2019 and has been used to treat approximately 90 patients daily with IMRT. This Linac is a fully image-guided device, since each radiotherapy session requires a megavoltage cone-beam computed tomography (MVCBCT) scan before delivery. In each MVCBCT, the Linac delivers 9 MU, which is taken into account during planning. To start the standardization, the TG-263 nomenclature was employed, and a hypofractionated protocol was adopted to treat the chest wall, including the supraclavicular nodes, delivering 40.05 Gy in 15 fractions. Planning used four semi-arcs from 179 to 305 degrees. The planner must create optimization volumes for targets and organs at risk (OARs); the main difficulty for the planner was accounting for the base dose from the MVCBCT. To evaluate the planning modality, we used 30 chest wall cases. Results: The manually created plans achieve the protocol objectives. The protocol objectives are the same as those of RTOG 1005, and the DVH curves look clinically acceptable. Conclusions: Although the O-ring Linac cannot acquire kV images and the cone-beam CT is created using MV energy, the dose delivered by the daily imaging setup process does not affect the dosimetric quality of the plans, and the dose distribution is acceptable, achieving the protocol objectives.

Keywords: hypofractionation, VMAT, chest wall, radiotherapy planning

Procedia PDF Downloads 87
289 A Novel Nano-Chip Card Assay as Rapid Test for Diagnosis of Lymphatic Filariasis Compared to Nano-Based Enzyme Linked Immunosorbent Assay

Authors: Ibrahim Aly, Manal Ahmed, Mahmoud M. El-Shall

Abstract:

Filariasis is a parasitic disease caused by small roundworms. The filarial worms are transmitted and spread by blood-feeding black flies and mosquitoes. Lymphatic filariasis (elephantiasis) is caused by Wuchereria bancrofti, Brugia malayi, and Brugia timori. Elimination of lymphatic filariasis creates an increasing demand for valid, reliable, and rapid diagnostic kits. Nanodiagnostics involve the use of nanotechnology in clinical diagnosis to meet the demands for increased sensitivity, specificity, and earlier detection in less time. The aim of this study was to evaluate a nano-based enzyme-linked immunosorbent assay (ELISA) and a novel nano-chip card as rapid tests for the detection of filarial antigen in serum samples of human filariasis, in comparison with traditional ELISA. Serum samples were collected from humans infected with filariasis across Egypt's governorates. After receiving informed consent, a total of 45 blood samples were collected from infected individuals residing in different villages of the Gharbea governorate, a nonendemic region for bancroftian filariasis, together with samples from healthy persons living in nonendemic locations (20 persons) as well as sera from 20 patients affected by other parasites. Microfilariae were checked in thick smears of 20 µl night blood samples collected between 20:00 and 22:00. All of these individuals underwent the following procedures: history taking, clinical examination, and laboratory investigations, which included examination of blood samples for microfilariae using thick blood film and serological tests for the detection of circulating filarial antigen using polyclonal antibody ELISA, nano-based ELISA, and the nano-chip card. In the present study, a recently reported polyclonal antibody specific to tegumental filarial antigen was used to develop the nano-chip card and nano-ELISA, compared against traditional ELISA, for the detection of circulating filarial antigen in sera of patients with bancroftian filariasis.
The performance of the assays was evaluated using 45 serum samples. The traditional ELISA was positive with sera from microfilaremic bancroftian filariasis patients (n=36), a sensitivity of 80%. Circulating filarial antigen was detected in 39/45 patients using nano-ELISA, a sensitivity of 86.6%. The nano-chip card was positive in 42 out of 45 patients, a sensitivity of 93.3%. In conclusion, the novel nano-chip assay could potentially be a promising alternative antigen detection test for bancroftian filariasis.

Keywords: lymphatic filariasis, nanotechnology, rapid diagnosis, ELISA technique

Procedia PDF Downloads 95
288 Ottoman Archaeology in Kostence (Constanta, Romania): A Locality on the Periphery of the Ottoman World

Authors: Margareta Simina Stanc, Aurel Mototolea, Tiberiu Potarniche

Abstract:

The city of Constanta (former Köstence) is located in the Dobrogea region, on the west shore of the Black Sea. Between 1420 and 1878, Dobrogea was a possession of the Ottoman Empire. Archaeological research starting in the second half of the 20th century revealed various traces of the Ottoman period in this region. Between 2016 and 2018, preventive archaeological research conducted within the perimeter of the old Ottoman city of Köstence led to the discovery of habitation structures as well as numerous artifacts of the Ottoman period (pottery, coins, buckles, etc.). This study uses the analysis of these new discoveries to complete the picture of daily life in the Ottoman period. In 2017, in the peninsular area of Constanta, preventive archaeological research began at a point in the former Ottoman area. Between the current ground level and a depth of 1.5 m, Ottoman-period materials appeared constantly. Worth noting is the structure of a large building that had been repaired at least once but could not be fully investigated. Parallel to its wall ran a transversally arranged brick-lined drainage channel, which emptied into a tank (hazna) filled with various period materials, mainly gilded ceramics and iron objects. This type of hazna is commonly found in Constanta for the pre-modern and modern periods due to the lack of a sewage system in the peninsular area. A similar structure, probably a fountain, was discovered in 2016 in another part of the old city. Interesting pieces include a probably Persian cup and a bowl in the Kütahya style, both of the 17th century, proof of the commercial routes passing through Constanta during that period and indirect confirmation of the documentary testimonies of the time.
Also worth mentioning is the discovery in 2016, during underwater research carried out by specialists of the Constanta Museum at a depth of 15 meters, of a Turkish oil lamp (17th to early 18th century) among other objects of a sunken ship. The archaeological pieces, in fragmentary or complete state, found in the 2016-2018 research campaigns are undergoing processing or restoration, with the aim of drawing out all the available information and establishing exact analogies. These discoveries bring new data to the knowledge of daily life during the Ottoman administration in the former Köstence, a locality on the periphery of the Islamic world.

Keywords: habitation, material culture, Ottoman administration, Ottoman archaeology, periphery

Procedia PDF Downloads 112
287 Anthropometric Indices of Obesity and Coronary Artery Atherosclerosis: An Autopsy Study in a South Indian Population

Authors: Francis Nanda Prakash Monteiro, Shyna Quadras, Tanush Shetty

Abstract:

The association between human physique and the morbidity and mortality resulting from coronary artery disease has been studied extensively over several decades. Multiple studies have also examined the correlation between the grade of atherosclerosis, coronary artery disease, and anthropometric measurements; however, autopsy-based studies are far fewer in number. It has been suggested that while in living subjects it would be expensive, difficult, and even harmful to use imaging modalities such as CT scans and procedures involving contrast media to study mild atherosclerosis, no such harm is encountered in the study of autopsy cases. This autopsy-based study aimed to correlate anthropometric measurements and indices of obesity, such as waist circumference (WC), hip circumference (HC), body mass index (BMI), and waist-hip ratio (WHR), with the degree of atherosclerosis in the right coronary artery (RCA), the main branch of the left coronary artery (LCA), and the left anterior descending artery (LADA) in 95 victims of South Indian origin of both sexes between 18 and 75 years of age. The grading of atherosclerosis was done according to the criteria suggested by the American Heart Association. The study also analysed the correlation of the anthropometric measurements and indices of obesity with the number of coronaries affected by atherosclerosis in an individual. All the anthropometric measurements and derived indices were found to be significantly correlated with each other in both sexes, except for age, which had a significant correlation only with the WHR. In both sexes, a severe degree of atherosclerosis was most commonly observed in the LADA, followed by the LCA and RCA. The grade of atherosclerosis in the RCA was significantly related to the WHR in males, and the grade of atherosclerosis in the LCA and LADA was significantly related to the WHR in females.
A significant relation was observed between the grade of atherosclerosis in the RCA and both WC and WHR, and between the grade of atherosclerosis in the LADA and HC, in both males and females. Anthropometric measurements and indices of obesity can thus be an effective means of identifying high-risk cases of atherosclerosis at an early stage, which can help reduce the associated cardiac morbidity and mortality. A person with anthropometric measurements suggestive of mild atherosclerosis can be advised to modify his or her lifestyle and reduce exposure to the other risk factors, while those with measurements suggestive of a higher degree of atherosclerosis can undergo confirmatory procedures to start effective treatment.

Keywords: atherosclerosis, coronary artery disease, indices, obesity

Procedia PDF Downloads 41
286 Thorium Resources of Georgia: Is It Its Future Energy?

Authors: Avtandil Okrostsvaridze, Salome Gogoladze

Abstract:

In light of the exhaustion of hydrocarbon reserves, the search for new energy resources is a problem of vital importance for modern civilization. In this time of energy resource crisis, the radioactive element thorium (232Th) is considered a main energy resource for the future of our civilization. Modern industry uses thorium in high-temperature and high-tech tools, but the most important property of thorium is that, like uranium, it can be used as fuel in nuclear reactors. Thorium has a number of advantages compared to uranium: its concentration in the Earth's crust is 4-5 times higher; its extraction and enrichment are much cheaper; it is less radioactive; complete destruction of its waste products is possible; and it yields much more energy. Nowadays, developed countries, among them India and China, have started intensive work on the creation of thorium nuclear reactors and an intensive search for thorium reserves. It is not excluded that in the next 10 years these reactors will completely replace uranium reactors. Thorium ore mineralization is genetically related to alkaline-acidic magmatism, and thorium accumulations occur in both endogenous and exogenous conditions. Unfortunately, little is known about the reserves of this element in Georgia, as planned prospecting and exploration for thorium have never been carried out there.
However, three ore occurrences of this element have been detected: 1) in the Greater Caucasus Kakheti segment, in the hydrothermally altered rocks of the Lower Jurassic clay shales, where thorium concentrations vary between 51 and 3,882 g/t; 2) on the eastern periphery of the Dzirula massif, in the hydrothermally altered rocks of the Cambrian quartz-diorite gneisses, where thorium concentrations vary between 117 and 266 g/t; and 3) in the active contact zone of the Eocene volcanites and a syenitic intrusive in the Vakijvari ore field of the Guria region, where thorium concentrations vary between 185 and 428 g/t. In addition, the geological settings of the areas where thorium occurrences were found provide a theoretical basis for the possible accumulation of thorium ores of practical importance. Moreover, the magnetite sand of the Black Sea Guria region, which is transported from the Vakijvari ore field, should contain significant reserves of thorium; as research shows, monazite (a thorium-bearing mineral) occurs in the magnetite in the form of very thin inclusions. In world-class thorium deposits, concentrations of this element vary within the limits of 50-200 g/t. Accordingly, on the basis of these data, the thorium resources found in Georgia should be considered prospective ore deposits. Generally, we consider that a complex investigation of thorium should be included in the sphere of strategic interests of the state, because Georgia's future energy will probably be thorium.

Keywords: future energy, Georgia, ore field, thorium

Procedia PDF Downloads 464
285 Ytterbium Advantages for Brachytherapy

Authors: S. V. Akulinichev, S. A. Chaushansky, V. I. Derzhiev

Abstract:

High dose rate (HDR) brachytherapy is a method of contact radiotherapy in which a single sealed source with an activity of about 10 Ci is temporarily inserted in the tumor area. The isotopes Ir-192 and (much less often) Co-60 are used as the active material for such sources. The other type of brachytherapy, low dose rate (LDR) brachytherapy, implies the insertion of many permanent sources (up to 200) of lower activity. Pulse dose rate (PDR) brachytherapy can be considered a modification of HDR brachytherapy, in which the single source is repeatedly introduced into the tumor region in a pulsed regime over several hours. The PDR source activity is of the order of one Ci, and the isotope Ir-192 is currently used for these sources. PDR brachytherapy is well recommended for the treatment of several tumors since, according to oncologists, it combines the medical benefits of both the HDR and LDR types of brachytherapy. One of the main problems for the progress of PDR brachytherapy is the shielding of the treatment area, since a longer stay in a shielded canyon is not comfortable enough for patients. The use of Yb-169 as an active source material is a way to resolve the shielding problem for PDR, as well as for HDR, brachytherapy. The isotope Yb-169 has an average photon emission energy of 93 keV and a half-life of 32 days. Compared to iridium and cobalt, this isotope has a significantly lower emission energy and therefore requires much lighter shielding. Moreover, the absorption cross section of different materials has a strong Z-dependence in that photon energy range. For example, the dose distributions of iridium and ytterbium behave quite similarly in water or in the body, but a heavy material such as lead absorbs the ytterbium radiation much more strongly than the iridium or cobalt radiation. Only a 2 mm lead layer is enough to reduce the ytterbium radiation by orders of magnitude, but it is not enough to protect from iridium radiation.
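The shielding comparison above can be illustrated with the Beer-Lambert attenuation law, I/I0 = exp(-(mu/rho) * rho * x). The mass attenuation coefficients below are rough, order-of-magnitude assumptions for lead near the two average emission energies, chosen for illustration rather than taken from design tables:

```python
import math

def transmission(mu_rho, density_g_cm3, thickness_cm):
    """Fraction of narrow-beam photons transmitted through a shield (Beer-Lambert)."""
    return math.exp(-mu_rho * density_g_cm3 * thickness_cm)

# Assumed approximate mass attenuation coefficients of lead (cm^2/g):
mu_rho_93kev = 5.0    # near Yb-169's ~93 keV average emission (illustrative value)
mu_rho_380kev = 0.25  # near Ir-192's ~380 keV average emission (illustrative value)
rho_pb = 11.35        # lead density, g/cm^3

t_yb = transmission(mu_rho_93kev, rho_pb, 0.2)   # 2 mm lead layer
t_ir = transmission(mu_rho_380kev, rho_pb, 0.2)
print(f"Yb-169 transmission: {t_yb:.1e}, Ir-192 transmission: {t_ir:.2f}")
```

Even with these rough coefficients, 2 mm of lead cuts the ~93 keV ytterbium photons by several orders of magnitude while barely attenuating the harder iridium spectrum, which is the practical basis of the shielding advantage.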
We have created an original facility to produce the starting stable isotope Yb-168 using the AVLIS laser technology. This facility makes it possible to raise the Yb-168 concentration up to 50% and consumes much less electrical power than the alternative electromagnetic enrichment facilities. We have also developed, in cooperation with the Institute of High Pressure Physics of RAS, a new technology for manufacturing high-density ceramic cores of ytterbium oxide. The ceramic density reaches the theoretical limits: 9.1 g/cm3 for the cubic phase of ytterbium oxide and 10 g/cm3 for the monoclinic phase. Source cores made from this ceramic have high mechanical strength and a glassy surface. The use of the ceramic makes it possible to increase the source activity for fixed external source dimensions.

Keywords: brachytherapy, high and pulse dose rates, radionuclides for therapy, ytterbium sources

Procedia PDF Downloads 465
284 The Reflexive Interaction in Group Formal Practices: The Question of Criteria and Instruments for the Character-Skills Evaluation

Authors: Sara Nosari

Abstract:

In the research field of adult education, learning development projects have followed different itineraries; recently, adult transformation has been promoted through practices focused on reflexively oriented interaction. This perspective, which connects life stories and life-based methods, characterizes a transformative space between formal and informal education. Within this framework, in the Nursing Degree Courses of Turin University, a formal reflexive path on the professional identity of care work has been discussed and realized through group practices. This path confronted future care professionals with possible experiences staged by texts used as pre-tests: these texts, setting up real or believable professional situations, were intended to start reflection on the different 'elements' of professional care work (relationship, the educational character of the relationship, the relationship between different care roles; or even human identity, the aims and ultimate aim of care, ...). The transformative aspect of this kind of experience-test is that it is impossible to anticipate the process or the conclusion of reflection, because they depend on two main conditions: personal sensitivity and the specific situation. The narrated experience is not a device; it does not include any tricks to know the answer in advance. The text is not aimed at deepening knowledge but at being an active and creative force that leads the group to confront problematic figures. In fact, the experience-text does not have the purpose of explaining but of problematizing: it creates a space of suspension in which to question, discuss, research, and decide. It creates a space that is 'open' and 'in connection', where each participant, in comparison with the others, has the possibility to build his or her own position.
In this space, everyone has the possibility to present his or her own arguments and to become aware of the points of view that emerge from others, aiming to research and find a personal position. However, to define this position, it is necessary to learn to exercise character skills (conscientiousness, motivation, creativity, critical thinking, ...): while the importance of these non-cognitive skills is undisputed, how to evaluate them is less evident. The paper will reflect on the epistemological limits of, and possibilities for, 'measuring' character skills, suggesting some evaluation criteria.

Keywords: transformative learning, educational role, formal/informal education, character-skills

Procedia PDF Downloads 172
283 Heat Transfer and Trajectory Models for a Cloud of Spray over a Marine Vessel

Authors: S. R. Dehghani, G. F. Naterer, Y. S. Muzychka

Abstract:

Wave-impact sea spray creates many droplets, which form a spray cloud traveling over marine objects such as marine vessels and offshore structures. In cold climates such as Arctic regions, sea spray icing, which is ice accretion on cold substrates, is strongly dependent on wave-impact sea spray. The rate of cooling of droplets affects the icing process, which can yield dry or wet ice accretion, and the trajectories of droplets determine the potential places for ice accretion. Combining trajectory and heat transfer models for droplets can therefore reasonably predict the risk of ice accretion. The majority of droplet cooling is due to droplet evaporation. In this study, a combined trajectory and heat transfer model evaluates a cloud of spray from generation to impingement. The model uses known geometry and initial information from previous case studies. The 3D model is solved numerically using a standard numerical scheme. Droplets are generated in size ranges from 7 mm down to 0.07 mm, a suggested range for sea spray icing. The initial temperature of the droplets is taken as the sea water temperature, and wind velocities are assumed to match field observations. Evaluations are conducted for several important heading angles and wind velocities. A size-velocity dependence is used to establish a relation between the initial sizes and velocities of droplets. Time intervals are chosen to maintain a stable and fast numerical solution, and a statistical process is conducted to evaluate the probability of expected occurrences. Medium-size droplets reach the greatest heights, while very small and very large droplets are limited to lower heights. Results show that higher initial velocities create the most expanded spray cloud, and wind velocities affect its extent. The rate of droplet cooling at the start of spray formation is higher than during the rest of the process.
This is because of the higher relative velocities and also the higher temperature differences. The amount of water delivered and the overall temperature are calculated for some sample surfaces over a marine vessel. Comparison of the results with field observations shows that the model works accurately. This model is suggested as a primary model for ice accretion on marine vessels.
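The trajectory component of such a model can be sketched as a simple drag-plus-gravity integration per droplet. This is a minimal illustration only: the constant drag coefficient, release height, and velocities are assumed values, and the study's full model additionally couples evaporation and heat transfer to the motion.

```python
import math

def droplet_trajectory(d_mm, v0, wind, dt=1e-3, t_max=10.0):
    """Integrate a single droplet's 2D trajectory under gravity and air drag.

    Explicit Euler scheme with a constant drag coefficient (Cd = 0.5, assumed).
    Returns the horizontal landing distance and the flight time.
    """
    rho_w, rho_a, g, cd = 1000.0, 1.25, 9.81, 0.5
    d = d_mm / 1000.0                        # diameter in metres
    m = rho_w * math.pi * d ** 3 / 6.0       # droplet mass
    area = math.pi * d ** 2 / 4.0            # frontal area
    x, z, vx, vz = 0.0, 5.0, v0, 0.0         # released 5 m above the deck (assumed)
    t = 0.0
    while z > 0.0 and t < t_max:
        rvx, rvz = vx - wind, vz             # velocity relative to the air
        speed = math.hypot(rvx, rvz)
        k = 0.5 * rho_a * cd * area * speed  # drag force magnitude per unit speed
        vx += (-k * rvx / m) * dt
        vz += (-g - k * rvz / m) * dt
        x += vx * dt
        z += vz * dt
        t += dt
    return x, t

# The study's size range spans two orders of magnitude; small droplets fall
# slowly at their low terminal velocity, large ones are nearly ballistic.
for d in (0.07, 0.7, 7.0):                   # diameters in mm
    print(d, droplet_trajectory(d, v0=10.0, wind=15.0))
```

Plugging an evaporation-driven temperature equation into the same loop would give the coupled cooling history along each trajectory.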

Keywords: evaporation, sea spray, marine icing, numerical solution, trajectory

Procedia PDF Downloads 199
282 The Short-Term Stress Indicators in Home and Experimental Dogs

Authors: Madara Nikolajenko, Jevgenija Kondratjeva

Abstract:

Stress is a response of the body to physical or psychological environmental stressors. The cortisol level in blood serum is regarded as the main indicator of stress, but blood collection, animal preparation, and other handling can themselves cause unpleasant conditions and induce an increase in these hormones. Therefore, less invasive methods of determining stress hormone levels are sought, for example, measuring the cortisol level in saliva. The aim of the study is to determine the changes in stress hormones in the blood and saliva of home and experimental dogs under simulated short-term stress conditions. The study included clinically healthy experimental beagle dogs (n=6) and clinically healthy home American Staffordshire terriers (n=6). The animals were let into a fenced area to adapt. Loud drum sounds (in cooperation with 'Andžeja Grauda drum school') were used as the stressor. Blood serum samples were taken for sodium, potassium, glucose, and cortisol determination, and saliva samples for cortisol determination only. Control parameters were taken immediately before the start of the stressor, the next samples were taken immediately after the stress, and the last measurements were taken two hours after the stress. Electrolyte levels in blood serum were determined using the direct ion-selective electrode method (ILab Aries analyzer), cortisol in blood serum and saliva using the electrochemical luminescence method (Roche Diagnostics), and blood glucose with a glucometer (ACCU-CHEK Active test strips). The cortisol level in the blood increased immediately after the stress in all home dogs (P < 0.05), but in only 33% (P < 0.05) of the experimental dogs. After two hours, the measurement had decreased in 83% (P < 0.05) of home dogs (in 50%, returning to the control value) and in 83% (P < 0.05) of the experimental dogs. Cortisol in saliva immediately after the stress increased in 50% (P > 0.05) of home dogs and in 33% (P > 0.05) of the experimental dogs.
After two hours, the measurements decreased in 83% (P > 0.05) of the home animals; among the experimental dogs only 17% showed a decrease, while in 49% the measurement was undetectable due to lack of material. Blood sodium, potassium, and glucose measurements did not show any significant changes. The expected combination of short-term stress indicators, in which all indicators should increase immediately after the stressor and decrease after two hours, was confirmed in none of the animals. The authors therefore conclude that each animal responds to a stressful situation with different physiological mechanisms and hormonal activity. Cortisol is released into saliva and blood at different speeds and is not an objective indicator of acute stress.
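The acute-stress criterion described above, that every indicator should rise immediately after the stressor and fall again two hours later, can be sketched as a simple check. The indicator names and values below are illustrative, not study data.

```python
# Check whether a (baseline, post-stress, +2 h) triple matches the
# acute-stress pattern: immediate increase, then decrease after two hours.

def matches_acute_stress_pattern(baseline, post_stress, after_2h):
    """True if the value rises right after the stressor and then falls."""
    return post_stress > baseline and after_2h < post_stress

# Hypothetical measurements for one dog: (baseline, post-stress, +2 h)
indicators = {
    "cortisol_blood": (55.0, 80.0, 60.0),   # rises, then falls -> matches
    "cortisol_saliva": (4.0, 3.8, 3.5),     # no immediate rise -> fails
    "glucose": (5.1, 5.2, 5.2),             # no decline at 2 h -> fails
}

results = {name: matches_acute_stress_pattern(*vals)
           for name, vals in indicators.items()}
all_confirmed = all(results.values())  # the combined criterion of the study
```

As in the study's findings, the combined criterion fails as soon as any single indicator deviates from the pattern.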

Keywords: animal behavior, cortisol, short-term stress, stress indicators

Procedia PDF Downloads 245
281 Alkali Activation of Fly Ash, Metakaolin and Slag Blends: Fresh and Hardened Properties

Authors: Weiliang Gong, Lissa Gomes, Lucile Raymond, Hui Xu, Werner Lutze, Ian L. Pegg

Abstract:

Alkali-activated materials, particularly geopolymers, have attracted much interest in academia, and commercial applications are on the rise as well. Geopolymers are typically produced by reacting one or two aluminosilicates with an alkaline solution at room temperature. Fly ash is an important aluminosilicate source. However, low-Ca fly ash, the byproduct of burning hard (black) coal, reacts and sets slowly at room temperature, and the development of mechanical durability, e.g., compressive strength, is slow as well. The use of fly ashes with relatively high contents (> 6%) of unburned carbon, i.e., high loss on ignition (LOI), is particularly disadvantageous. This paper shows to what extent these impediments can be mitigated by mixing the fly ash with one or two additional aluminosilicate sources. The fly ash used here is generated at the Orlando power plant (Florida, USA); it is low in Ca (< 1.5% CaO) and has a high LOI of > 6%. The additional aluminosilicate sources are metakaolin and blast furnace slag. Binary fly ash-metakaolin and ternary fly ash-metakaolin-slag geopolymers were prepared, and properties of the geopolymer pastes were measured before and after setting. Fresh mixtures of aluminosilicates with an alkaline solution were studied by Vicat needle penetration, rheology, and isothermal calorimetry up to initial setting and beyond. The hardened geopolymers were investigated by SEM/EDS, and compressive strength was measured. Initial setting (the fluid-to-solid transition) was indicated by a rapid increase in yield stress and plastic viscosity. The rheological times of setting were always smaller than the Vicat times of setting. Both times of setting decreased with increasing replacement of fly ash by blast furnace slag in the ternary fly ash-metakaolin-slag system. As expected, setting with only Orlando fly ash was the slowest. Replacing 20% of the fly ash with metakaolin shortened the set time. 
Replacing increasing fractions of fly ash in the binary system by blast furnace slag (up to 30%) shortened the time of setting even further, and the 28-day compressive strength increased drastically from < 20 MPa to 90 MPa. The most interesting finding relates to the calorimetric measurements: the use of two or three aluminosilicates generated significantly more heat (20 to 65%) than calculated from the weighted sum of the individual aluminosilicates. This synergetic heat contributes to, and may be responsible for, most of the increase in compressive strength of the binary and ternary geopolymers. The synergetic heat effect may also be related to increased incorporation of calcium into sodium aluminosilicate hydrate to form a hybrid (N,C)-A-S-H gel. The time of setting will be correlated with heat release and maximum heat flow.
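The synergy comparison implied above, a measured blend heat versus the weighted sum of the heats of the individual sources, can be sketched as follows. All numbers are illustrative, not measured data from the study.

```python
# Compare the measured reaction heat of an aluminosilicate blend with the
# weighted sum of its components' individual heats; the excess is the
# "synergetic heat" (reported as 20-65% in the abstract above).

def expected_heat(fractions, heats):
    """Weighted-sum heat of a blend from its components (same units as heats)."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "mass fractions must sum to 1"
    return sum(f * h for f, h in zip(fractions, heats))

def synergy_percent(measured, expected):
    """Excess heat of the blend relative to the weighted sum, in percent."""
    return 100.0 * (measured - expected) / expected

# Hypothetical ternary blend: 50% fly ash, 20% metakaolin, 30% slag,
# with hypothetical per-source heats of 80, 250, and 180 J/g.
exp = expected_heat([0.5, 0.2, 0.3], [80.0, 250.0, 180.0])  # 144 J/g
syn = synergy_percent(measured=200.0, expected=exp)         # ~39% excess
```

A synergy value within the 20 to 65% range reported above would indicate the kind of extra heat release the authors attribute to interactions between the sources.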

Keywords: alkali-activated materials, binary and ternary geopolymers, blends of fly ash, metakaolin and blast furnace slag, rheology, synergetic heats

Procedia PDF Downloads 96
280 Influence of Titanium Oxide on Crystallization, Microstructure and Mechanical Behavior of Barium Fluormica Glass-Ceramics

Authors: Amit Mallik, Anil K. Barik, Biswajit Pal

Abstract:

The galloping advancement of research on glass-ceramics stems from their wide applications in the electronics industry and, to some extent, in medical dentistry. TiO2, even at low concentration, has been found to strongly influence the physical and mechanical properties of glasses. Glass-ceramics are polycrystalline ceramic materials produced through controlled crystallization of glasses. Crystallization is accomplished by subjecting suitable parent glasses to a regulated heat treatment involving the nucleation and growth of crystal phases in the glass. Mica glass-ceramics are a kind of glass-ceramic based on the system SiO2•MgO•K2O•F; the predominant crystalline phase is a synthetic fluormica named fluorophlogopite. Mica-containing glass-ceramics exhibit the exceptional feature of machinability in addition to their unique thermal and chemical properties. Machinability arises from randomly oriented mica crystals with a 'house of cards' microstructure, which allows cracks to propagate readily along the mica plane but hinders crack propagation across the layers. In the present study, we have systematically investigated the crystallization, microstructure, and mechanical behavior of barium fluorophlogopite mica-containing glass-ceramics of composition BaO•4MgO•Al2O3•6SiO2•2MgF2 nucleated by the addition of 2, 4, 6, and 8 wt% TiO2. The glass samples were prepared by the melting technique. After annealing, the glass batches were fired for nucleation at 730°C (2 wt% TiO2), 720°C (4 wt% TiO2), 710°C (6 wt% TiO2), and 700°C (8 wt% TiO2), respectively, for 2 h, and ultimately heated to the corresponding crystallization temperatures. The samples were analyzed by differential thermal analysis (DTA), X-ray diffraction (XRD), scanning electron microscopy (SEM), and microhardness indentation. The DTA study shows that the fluorophlogopite mica crystallization exotherm appears in the temperature range 886–903°C. 
The glass transition temperature (Tg) and crystallization peak temperature (Tp) increased with increasing TiO2 content up to 4 wt%; beyond this content, both Tg and Tp decreased with further TiO2 addition up to 8 wt%. Scanning electron microscopy confirms the development of an interconnected 'house of cards' microstructure promoted by TiO2 as a nucleating agent. Increasing the TiO2 content decreases the Vickers hardness of the glass-ceramics.

Keywords: crystallization, fluormica glass, ‘house of cards’ microstructure, hardness

Procedia PDF Downloads 221
279 The Impact of the Length of Time Spent on the Street on Adjustment to Homelessness

Authors: Jakub Marek, Marie Vagnerova, Ladislav Csemy

Abstract:

Background: The length of time spent on the street influences the degree of adjustment to homelessness. Over years of sleeping rough, homeless people gradually lose the ability to control their lives, and their return to mainstream society becomes less and less likely. Goals: The aim of the study was to discover whether and how men who have been sleeping rough for more than ten years differ from those who have been homeless for four years or less. Methods: The research was based on a narrative analysis of in-depth interviews covering the respondent's entire life story, i.e., their childhood, adolescence, and the period of adulthood preceding homelessness; it also asked the respondents how they envisaged the future. The group under examination comprised 51 homeless men aged 37–54. The first subgroup contained 29 men who had been sleeping rough for 10–21 years; the second contained 22 men who had been homeless for four years or less. Results: Men who have been sleeping rough for more than ten years had problems adapting as children. They grew up in a problematic family or in an institution and acquired only a rudimentary education. From the start they had problems at work, finding it difficult to apply themselves and to hold down a job. They tend to have high-risk personality traits and often a personality disorder. Early in life they had problems with alcohol or drugs, and their relationships were unsuccessful. If they have children, they do not look after them. They are reckless even in respect of the law and often commit crime. They usually ended up on the street in their thirties. Most of this subgroup lack the motivation and will to make any fundamental change to their lives; they identify with the homeless community and have no other contacts. Men who have been sleeping rough for four years or less form two subgroups. There are those who had a normal childhood, attended school, and found work. 
They started a family but began to drink and, as a consequence, lost their family and their job. Such men end up on the street between the ages of 35 and 40. Then there are men who become homeless after the age of 40 because of an inability to cope with a difficult situation, e.g., divorce or indebtedness. They are not substance abusers and do not have a criminal record. The social services can offer such people effective assistance in returning to mainstream society, because they have not yet fully identified with the homeless community and most have retained the necessary abilities and skills. Conclusion: The length of time a person has been homeless is an important factor in respect of social prevention. It is clear that the longer a person is homeless, the worse their chances of being reintegrated into mainstream society.

Keywords: risk factors, homelessness, chronicity, narrative analysis

Procedia PDF Downloads 135
278 The Study of Intangible Assets at Various Firm States

Authors: Gulnara Galeeva, Yulia Kasperskaya

Abstract:

The study deals with the problem of forming an efficient investment portfolio for an enterprise. The structure of the investment portfolio is connected to the degree of influence of intangible assets on the enterprise's income, which determines the importance of research on the content of intangible assets. However, studies of intangible assets do not take into consideration how the state of the enterprise can affect the content and the importance of intangible assets for the enterprise's income, which affects the accuracy of the calculations. To study this problem, the research was divided into several stages. In the first stage, intangible assets were classified, based on their synergies, as underlying intangibles and additional intangibles. In the second stage, this classification was applied. It showed that the lifecycle model and the theory of abrupt development of the enterprise, which are taken into account while designing investment projects, constitute limit cases of a more general theory of bifurcations. The research identified that the qualitative content of intangible assets depends significantly on how close the enterprise is to crisis. In the third stage, the authors developed and applied the Wide Pairwise Comparison Matrix method. This made it possible to establish that the ratio of the standard deviation to the mean value of the elements of the priority vector of intangible assets can be used to estimate the probability of a full-blown crisis of the enterprise. The authors identified a criterion that allows fundamental decisions to be made on investment feasibility. The study also developed a rapid supplementary method of assessing the enterprise's overall status, based on a questionnaire survey of its director consisting of only two questions. 
The research specifically focused on the fundamental role of stochastic resonance in the emergence of bifurcation (crisis) in the economic development of the enterprise. The synergetic approach made it possible to describe the mechanism of the onset of crisis in detail and to identify a range of universal ways of overcoming it. It was found that, under the impact of sporadic (white) noise, the structure of intangible assets transforms into a more organized state with strengthened synchronization of all processes. The results offer managers and business owners a simple and affordable method of investment portfolio optimization that takes into account how close the enterprise is to a full-blown crisis.
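The crisis indicator described above, the ratio of the standard deviation to the mean of the priority vector, can be sketched on a standard pairwise-comparison step as used in the Analytic Hierarchy Process (named in the keywords). The authors' Wide Pairwise Comparison Matrix method is not specified in the abstract, so this is a minimal sketch assuming a conventional reciprocal comparison matrix and the geometric-mean prioritization; the matrix values are illustrative.

```python
import math

def priority_vector(matrix):
    """Geometric-mean priorities of a reciprocal pairwise comparison
    matrix, normalized to sum to 1."""
    n = len(matrix)
    gm = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

def crisis_indicator(priorities):
    """Std-dev / mean of the priority vector (coefficient of variation):
    high dispersion means a few intangibles dominate the portfolio."""
    mean = sum(priorities) / len(priorities)
    var = sum((p - mean) ** 2 for p in priorities) / len(priorities)
    return math.sqrt(var) / mean

# Hypothetical pairwise comparison of three intangible assets
matrix = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
]
w = priority_vector(matrix)   # priorities, sum to 1
cv = crisis_indicator(w)      # the proposed crisis estimate
```

In this reading, a larger coefficient of variation of the priorities would correspond to a higher estimated probability of crisis, though the abstract does not give the authors' threshold.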

Keywords: analytic hierarchy process, bifurcation, investment portfolio, intangible assets, wide matrix

Procedia PDF Downloads 186
277 Reducing Flood Risk through Value Capture and Risk Communication: A Case Study in Cocody-Abidjan

Authors: Dedjo Yao Simon, Takahiro Saito, Norikazu Inuzuka, Ikuo Sugiyama

Abstract:

Abidjan city (Republic of Ivory Coast) is an emerging megacity and an urban coastal area where the number of reported floods is increasing rapidly due to climate change and unplanned urbanization. However, comprehensive disaster mitigation plans, policies, and financial resources are still lacking, and the population is unaware of the extent and location of the flood zones, leaving them unprepared to mitigate the damage. Given these conditions, this paper discusses an approach to flood risk reduction in Cocody Commune through a value capture strategy and flood risk communication. Using geospatial techniques and hydrological simulation, we start by delineating flood zones and depths under several return periods in the study area. Then a questionnaire-based field survey is conducted in order to validate the flood maps, to estimate the flood risk, and to sample residents' opinions on how disclosure of flood risk information could affect the values of property located inside and outside the flood zones. The results indicate that the study area is highly vulnerable to floods of 5-year return period and greater, which can cause serious harm to human lives and property, as demonstrated by the extent of the 5-year flood of 2014. It is also revealed that there is a high probability that the values of property located within flood zones could decline, while the values of surrounding property in the safe area could increase, once risk information disclosure commences. However, in order to raise public awareness of flood disaster and to prevent future housing development in prospective high-risk areas, flood risk information should be disseminated through the establishment of an early warning system. 
In order to reduce the effect of risk information disclosure and to protect the values of property within the high-risk zone, we propose that property tax increments in flood-free zones be captured and utilized for infrastructure development and for maintaining the early warning system that will benefit people living in flood-prone areas. Through this case study, it is shown that the combination of a value capture strategy and risk communication could be an effective tool to educate citizens and to invest in flood risk reduction in emerging countries.
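The value-capture mechanism proposed above can be illustrated with a back-of-the-envelope calculation: the increment in tax revenue attributable to the disclosure-driven value uplift in flood-free zones is earmarked for flood mitigation. All figures below are hypothetical, not from the case study.

```python
# Tax-increment capture: tax the post-disclosure increase in assessed
# property value and earmark the extra revenue for flood mitigation.

def tax_increment(base_value, new_value, tax_rate):
    """Extra annual tax revenue attributable to the value uplift."""
    return max(0.0, new_value - base_value) * tax_rate

# Hypothetical flood-free zone: assessed value rises 8% on risk disclosure
base = 500_000_000.0                # pre-disclosure assessed value
uplift = base * 1.08                # post-disclosure assessed value
captured = tax_increment(base, uplift, tax_rate=0.015)
# 8% of 500M is a 40M uplift; at a 1.5% rate, 600,000 per year is captured
```

The captured amount would then fund infrastructure and the upkeep of the early warning system serving flood-prone areas.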

Keywords: Cocody-Abidjan, flood, geospatial techniques, risk communication, value capture

Procedia PDF Downloads 243
276 Using Mathematical Models to Predict the Academic Performance of Students from Initial Courses in Engineering School

Authors: Martín Pratto Burgos

Abstract:

The Engineering School of the University of the Republic in Uruguay has offered an Introductory Mathematical Course since the second semester of 2019. This course was designed to help students prepare for the math courses that are essential for engineering degrees, referred to in this research as Math1, Math2, and Math3. The research proposes to build a model that can accurately predict students' activity and academic progress based on their performance in the three essential mathematical courses, as well as a model that can forecast the effect of the Introductory Mathematical Course on approval of the three essential courses during the first academic year. The techniques used are Principal Component Analysis and predictive modelling with the Generalised Linear Model. The dataset includes information from 5135 engineering students and 12 different characteristics based on activity and course performance. Two models are created, for data following a binomial distribution, using the R programming language: Model 1 retains variables whose p-values are less than 0.05, and Model 2 uses the stepAIC function to remove variables and obtain the lowest AIC score. After Principal Component Analysis, the main component represented on the y-axis is approval of the Introductory Mathematical Course, and the x-axis represents approval of the Math1 and Math2 courses as well as student activity three years after taking the Introductory Mathematical Course. For student activity, Model 2 performed best, with an AUC of 0.81 and an accuracy of 84%. According to Model 2, students remain engaged in school activities for three years after approval of the Introductory Mathematical Course when they have successfully completed the Math1 and Math2 courses; passing the Math3 course has no effect on student activity. Concerning academic progress, the best fit is Model 1. 
It has an AUC of 0.56 and an accuracy of 91%. The model indicates that if students pass the three first-year courses, they progress according to the timeline set by the curriculum. Both models show that the Introductory Mathematical Course does not directly affect students' activity and academic progress. The best model for explaining the impact of the Introductory Mathematical Course on the three first-year courses was Model 1, with an AUC of 0.76 and 98% accuracy; it shows that passing the Introductory Mathematical Course helps students to pass the Math1 and Math2 courses without affecting their performance in the Math3 course. Combining the three predictive models: if students pass the Math1 and Math2 courses, they stay active for three years after taking the Introductory Mathematical Course and continue to follow the recommended engineering curriculum; in addition, the Introductory Mathematical Course helps students to pass Math1 and Math2 when they start Engineering School. The models obtained in the research do not consider the time students took to pass the three math courses, but they can successfully assess courses in the university curriculum.
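The two evaluation metrics reported above, AUC and accuracy, can be computed from scratch for a binary outcome such as course approval. The abstract's models were fitted in R; this is a language-agnostic sketch of the metrics only, with synthetic labels and predicted probabilities, not the study's data.

```python
# AUC as the probability that a random positive case receives a higher
# predicted score than a random negative case (ties counting one half),
# and accuracy at a 0.5 probability threshold.

def auc(labels, scores):
    """Area under the ROC curve via the rank/probability interpretation."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def accuracy(labels, scores, threshold=0.5):
    """Fraction of correct predictions at a given probability threshold."""
    preds = [1 if s >= threshold else 0 for s in scores]
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

# Synthetic predicted probabilities from a hypothetical binomial GLM
labels = [1, 1, 1, 0, 0, 1, 0, 0]
scores = [0.9, 0.8, 0.45, 0.55, 0.4, 0.7, 0.3, 0.2]
a = auc(labels, scores)       # 0.9375 for these synthetic values
acc = accuracy(labels, scores)  # 0.75 for these synthetic values
```

As the abstract's results illustrate (Model 1 for progress: AUC 0.56 but 91% accuracy), the two metrics can diverge sharply when classes are imbalanced, which is why both are reported.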

Keywords: machine-learning, engineering, university, education, computational models

Procedia PDF Downloads 59
275 Music Education is Languishing in Rural South African Schools as Revealed Through Education Students

Authors: E. N. Jansen van Vuuren

Abstract:

When visiting Foundation Phase (FP) students during their teaching practice at schools in rural Mpumalanga, the lack of music education is evident from the absence of musical sounds, with the exception of a limited repertoire of songs that are sung by all classes everywhere you go. The absence of music teaching resources such as posters and musical instruments adds to the perception that generalist teachers in the FP are not teaching music. Pre-service students also acknowledge that they have never seen a music class being taught during their teaching practice visits. This lack of music mentoring affects the quality of teachers who are about to enter the workforce and ultimately perpetuates the absence of music education in many rural schools. The situation in more affluent schools presents a contrasting picture, with music education given high priority and generalist teachers often supported by music specialists paid for by the parents. When student teachers start their music course, they have limited knowledge to use as a foundation for their studies. The aim of the study was to ascertain the music knowledge that students gained throughout their school careers so that the curriculum could be adapted to suit their needs. By knowing exactly what pre-service teachers know about music, the limited tuition time at tertiary level can be used in the most suitable manner to fill the knowledge gaps. Many scholars write about the decline of music education in South African schools and mention reasons, but the exact music knowledge void amongst students does not feature in their studies. Knowing the parameters of students' music knowledge will empower lecturers to restructure their curricula to meet the needs of pre-service students. The research question asks, “What is the extent of the music void amongst rural pre-service teachers in a B.Ed. 
FP course at an African university?” This action research was done using a pragmatic paradigm and a mixed methodology. First-year students in the cohort studying for a B.Ed. in FP were requested to complete an online baseline assessment to determine the status quo. The assessment was compiled using the CAPS music content for Grades R to 9, and the data were sorted using the elements of music as a framework. Findings indicate that students do not have a suitable foundation in music education despite supposedly having had music tuition from Grade R to Grade 9. Knowing which content is needed to fill this gap provides academics with valuable information for amending their curricula and ensuring that future teachers will be able to give rural learners the same foundation in music as learners in more affluent schools receive. Only then will the rich music culture of the African continent thrive.

Keywords: generalist educators, music education, music curriculum, pre-service teachers

Procedia PDF Downloads 45