Search results for: object relational fact
658 Emerging VC Industry and the Important Role of Marketing Expectations in Project Selection: Evidence on Russian Data
Authors: I. Rodionov, A. Semenov, E. Gosteva, O. Sokolova
Abstract:
Venture capital is becoming an increasingly advanced and effective source of financing for high-risk innovation projects. In developed countries, it plays a key role in transforming innovation projects into successful businesses and in creating the prosperity of the modern economy. Russia has many of the preconditions needed for an effective venture investment system: a network of public institutes for innovation financing operates, and there is a significant number of small and medium-sized enterprises capable of selling products with good market potential. In practice, however, the current system does not reach the necessary level of efficiency, which can largely be explained by the absence of a clear plan of action for forming a national venture model and by the lack of experience with successful venture deals and profitable exits in the Russian economy. This paper studies the influence of various factors on the development of the venture industry using the example of the IT sector in Russia. The sector was chosen because it is the main driver of venture capital market growth in Russia and because the necessary data are available. The size of the second-round investment is used as the dependent variable. To analyse the influence of the previous round, the volume of the first-round investment is used as a determinant. A dummy variable is also included in the regression to examine whether the participation of an investor with a strong reputation and experience in the previous round influences the size of the next investment round. The regression analysis of short-term interrelations between the studied variables reveals that the volume of first-round investment has the prevailing influence on the volume of second-round venture investment. According to the results, the participation of investors with a first-class reputation has only a small impact on the value of the second-round investment. The expected positive dependence of second-round investment on the market growth rate forecast at the moment of the deal is also rejected. The most important determinant of the value of the second-round investment is therefore the value of the first-round investment, which means that the most competitive start-up teams on the Russian market are those able to attract more money at the start, while the target market growth rate is not a factor of crucial importance.
Keywords: venture industry, venture investment, determinants of the venture sector development, IT-sector
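A minimal sketch of the regression setup described in this abstract, assuming an OLS specification with the log of the first-round volume and a reputation dummy as regressors; the variable names and the synthetic data are illustrative placeholders, not the authors' dataset or exact model.

```python
# Illustrative sketch of the regression described above (assumed specification).
# Variable names, the log transform and the synthetic data are placeholders,
# not the authors' actual data or model.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 120  # hypothetical number of IT-sector deals

df = pd.DataFrame({
    "round1_usd": rng.lognormal(mean=13, sigma=1.0, size=n),   # first-round size
    "top_investor": rng.integers(0, 2, size=n),                # dummy: reputable investor in round 1
    "market_growth": rng.normal(0.15, 0.05, size=n),           # forecasted market growth rate
})
# Synthetic second-round size driven mainly by the first-round size,
# mimicking the relationship the study reports.
df["round2_usd"] = df["round1_usd"] * np.exp(0.1 * df["top_investor"] + rng.normal(0, 0.3, n))

X = sm.add_constant(
    df[["top_investor", "market_growth"]].assign(log_round1=np.log(df["round1_usd"]))
)
model = sm.OLS(np.log(df["round2_usd"]), X).fit()
print(model.summary())  # expect log_round1 to dominate; dummy and growth to be weak
```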
Procedia PDF Downloads 352
657 Influence of Various Disaster Scenarios Assumption to the Advance Creation of Wide-Area Evacuation Plan Confronting Natural Disasters
Authors: Nemat Mohammadi, Yuki Nakayama
Abstract:
The Great East Japan earthquake, and the extremely large tsunami that invaded the coast as a consequence, obliged many local governments to take these kinds of issues seriously into account. The poor preparation of local governments to deal with such disasters at that time, and the consequent lack of assistance for local residents, caused thousands of civilian casualties as well as billions of dollars of economic damage. Local governments responsible for coastal areas have to consider countermeasures against these natural disasters, prepare a comprehensive evacuation plan, and devise feasible emergency plans in order to reduce the number of victims as much as possible. Under such an evacuation plan, the local government should pay particular attention to traffic congestion during a wide-area evacuation operation and estimate the minimum time essential to evacuate the whole city. The challenge becomes more complicated when the people affected by the disaster are not only ordinary, well-informed citizens but also pregnant women, physically handicapped persons, elderly citizens, and foreigners or tourists who are unfamiliar with the local conditions and language. The first issue is how to inform these people so that they take proper action as soon as they notice that a tsunami is coming. Once this problem is overcome, the next challenge is even more considerable: evacuating all residents from the threatened area to safer shelters in a short period of time. In fact, most citizens will use their own vehicles to evacuate to the designated shelters, while some will use the shuttle buses provided by local governments. A problem arises when all residents try to escape from the threatened area simultaneously, creating traffic jams on the evacuation routes and prolonging the evacuation time. Hence, this research mainly aims to calculate the minimum time essential to evacuate each region inside the threatened area and to find the evacuation start point for each region separately. The results will help the local government to visualize the situation during disasters, reduce possible traffic jams on evacuation routes, and consequently propose a comprehensive wide-area evacuation plan for natural disasters.
Keywords: BPR formula, disaster scenarios, evacuation completion time, wide-area evacuation
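The keywords point to the BPR (Bureau of Public Roads) formula as the congestion model behind the evacuation-time estimates. Below is a minimal sketch of that travel-time function with the standard parameters alpha = 0.15 and beta = 4; the link volume, capacity and free-flow time are hypothetical, not values from the study.

```python
# Minimal sketch of the BPR (Bureau of Public Roads) travel-time function used to
# estimate congested evacuation times. Standard parameters alpha=0.15, beta=4 are
# assumed; the link data below are hypothetical, not values from the study.

def bpr_travel_time(t_free: float, volume: float, capacity: float,
                    alpha: float = 0.15, beta: float = 4.0) -> float:
    """Congested travel time on a link: t = t_free * (1 + alpha * (V/C)^beta)."""
    return t_free * (1.0 + alpha * (volume / capacity) ** beta)

# Hypothetical evacuation route: 2000 veh/h of demand on a 1500 veh/h link
free_flow_min = 12.0   # free-flow travel time in minutes
demand = 2000.0        # evacuation traffic volume (veh/h)
capacity = 1500.0      # link capacity (veh/h)

congested_min = bpr_travel_time(free_flow_min, demand, capacity)
print(f"Free-flow: {free_flow_min:.1f} min, congested: {congested_min:.1f} min")
# Staggering departure times lowers the simultaneous volume, reduces the V/C ratio
# and hence the congestion penalty, which is the idea behind region-specific
# evacuation start points.
```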
Procedia PDF Downloads 211
656 The Monitor for Neutron Dose in Hadrontherapy Project: Secondary Neutron Measurement in Particle Therapy
Authors: V. Giacometti, R. Mirabelli, V. Patera, D. Pinci, A. Sarti, A. Sciubba, G. Traini, M. Marafini
Abstract:
Particle therapy (PT) is a modern technique of non-invasive radiotherapy mainly devoted to the treatment of tumours that cannot be treated with surgery or conventional radiotherapy because they are localised close to organs at risk (OaR). PT is currently available in about 55 centres in the world, and only about 20% of them are able to treat with carbon ion beams. The efficiency of ion-beam treatments is so impressive, however, that many new centres are under construction. The interest in this powerful technology lies in the main characteristic of PT: the high irradiation precision and conformity of the dose released to the tumour, with simultaneous preservation of the adjacent healthy tissue. However, the interactions of the beam with the patient produce a large component of secondary particles whose additional dose has to be taken into account when defining the treatment plan. Although the largest fraction of the dose is released in the tumour volume, a non-negligible amount is deposited in other body regions, mainly due to the scattering and nuclear interactions of neutrons within the patient body. One of the main concerns in PT treatments is the possible occurrence of secondary malignant neoplasms (SMN). While SMNs can develop up to decades after treatment, their incidence directly impacts the quality of life of cancer survivors, in particular paediatric patients. Dedicated Treatment Planning Systems (TPS) are used to predict normal tissue toxicity, including the risk of late complications induced by the additional dose released by secondary neutrons. However, no precise measurement of the secondary neutron flux, or of its energy and angular distributions, is available: an accurate characterisation is needed in order to improve TPS and reduce safety margins. The MONDO project (MOnitor for Neutron Dose in hadrOntherapy) is devoted to the construction of a secondary neutron tracker tailored to the characterisation of this secondary neutron component. The detector, based on the tracking of recoil protons produced in double elastic scattering interactions, is a matrix of thin scintillating fibres arranged in x-y oriented layers. The final size of the device is 10 x 10 x 20 cm3 (square 250 µm scintillating fibres with double cladding). The readout of the fibres is carried out with a dedicated SPAD array sensor (SBAM) realised in CMOS technology by FBK (Fondazione Bruno Kessler). The detector and the SBAM sensor are under development, and full construction is expected by the end of the year. MONDO will carry out data-taking campaigns at the TIFPA Proton Therapy Center in Trento, at CNAO (Pavia) and at HIT (Heidelberg) with carbon ions, in order to characterise the neutron component, predict the additional dose delivered to patients with much greater precision, and drastically reduce the current safety margins. Preliminary measurements with charged particle beams and FLUKA Monte Carlo simulations will be presented.
Keywords: secondary neutrons, particle therapy, tracking detector, elastic scattering
Procedia PDF Downloads 223
655 The AU Culture Platform Approach to Measure the Impact of Cultural Participation on Individuals
Authors: Sendy Ghirardi, Pau Rausell Köster
Abstract:
The European Commission increasingly pushes cultural policies towards social outcomes, and local and regional authorities also call for culture-driven strategies for local development and prosperity; the measurement of cultural participation therefore becomes increasingly significant for evidence-based policy-making processes. Cultural participation involves various kinds of social and economic spillovers that combine social and economic objectives of value creation, including social sustainability and respect for human values. Traditionally, from the economic perspective, cultural consumption is measured by the value of financial transactions in purchasing, subscribing to, or renting cultural equipment and content, addressing the market value of cultural products and services. The main data sources are the household spending survey and the merchandise trade survey, among others. What characterises cultural consumption, however, is that it is linked to the hedonistic and affective dimension rather than the utilitarian one; indeed, more and more attention is now being paid to the social and psychological dimensions of culture. The aim of this work is to present a comprehensive approach to measuring the impacts of cultural participation and cultural users' behaviour, combining socio-psychological and economic approaches. The model combines contingent valuation techniques with the analysis of individual characteristics and perceptions of cultural experiences to evaluate the cognitive, aesthetic, emotive and social impacts of cultural participation. To investigate this comprehensive approach to measuring the impact of cultural events on individuals, the research was designed on the basis of prior theoretical development. An in-depth literature review was carried out to develop the theoretical model applied to the web platform that measures the impacts of cultural experience on individuals. The developed framework aims to become a democratic tool for evaluating services that cultural or policy institutions can adopt, through the use of an interactive platform that produces big data benefiting academia, cultural management and policy. AU Culture is a prototype based on an application that can be used on mobile phones or any other digital platform. The development of the AU Culture Platform has been funded by the Valencian Innovation Agency (Government of the Region of Valencia), and it is part of the Horizon 2020 project MESOC.
Keywords: comprehensive approach, cultural participation, economic dimension, socio-psychological dimension
Procedia PDF Downloads 115
654 Gains and Pitfalls of Participating on International Staff Exchange Programs: Individual Experiences of Academic Staff of Makerere University, Uganda
Authors: David Onen
Abstract:
Staff exchanges among different work organizations are a growing international phenomenon. In higher education in particular, it is not only staff who participate in international exchange programs, but their students as well. The practice of exchanging staff is premised on the belief that participating staff members not only get the chance to network with colleagues from partner institutions but also gain opportunities for knowledge sharing and skills development, so that the exchange benefits not only the individual staff member but the institutions too. In practice, however, staff exchange programs are not always 'a bed of roses'; some seem to be laden with unapparent sources of trouble or danger for the participating staff. This paper reports on an ongoing study investigating the experiences of members of academic staff of Makerere University in Uganda who have participated in international staff exchange programs. The study aims to document individual experiences in order to stimulate not only a debate but also practical ways of enriching the experiences of staff who engage in well-intended international staff exchange programs. The study employs an exploratory survey research design in which a self-administered questionnaire and an interview guide are used to collect data from university academic staff respondents selected through snowball and purposive sampling techniques. Data have been analysed using appropriate descriptive and inferential statistics as well as content analysis techniques. Preliminary findings reveal that the majority of respondents (95.5%) were, to a large extent, fully satisfied with their participation in the staff exchange programs. Many attested to gaining new experience (97%), networking (75%), gaining new knowledge (94%), and acquiring new skills (88%), and therefore to bringing something 'new' and 'beneficial' to their institutions. However, a reasonably large percentage (57%) of the participants also expressed dissatisfaction with the institutional support that Makerere University gave them during their participation in the exchange programs. Some respondents reported an 'unfriendly welcome' upon returning 'home' because colleagues resented how they had been chosen to participate in such programs. The researcher thus concluded that international staff exchange programs are truly beneficial to both the participating staff and their institutions, though with pitfalls, and recommended mutual and preferably equal engagement of the participating institutions in staff exchange programs if such programs are to benefit both the staff and the institutions. In addition, exchange programs require clear terms of cooperation, including how staff are selected and facilitated and what is expected of the sending and host institutions as well as the staff concerned.
Keywords: gains, exchange programs, higher education, pitfalls
Procedia PDF Downloads 344
653 Variable Mapping: From Bibliometrics to Implications
Authors: Przemysław Tomczyk, Dagmara Plata-Alf, Piotr Kwiatek
Abstract:
Literature review is indispensable in research, and one of the key techniques used in it is bibliometric analysis, of which science mapping is one method. The classic approach that dominates this area today consists of mapping areas, keywords, terms, authors, or citations, and it is also applied to literature reviews in the field of marketing. Technological development means that researchers and practitioners use the capabilities of commercially available software for this purpose. The use of science mapping software tools (e.g., VOSviewer, SciMAT, Pajek) in recent publications supports the implementation of literature reviews and is useful in areas with a relatively high number of publications. Although this well-grounded science mapping approach has been applied in literature reviews, performing such reviews is a painstaking task, especially if authors wish to draw precise conclusions about the studied literature and uncover potential research gaps. The aim of this article is to identify to what extent a new approach to science mapping, variable mapping, improves on the classic science mapping approach in terms of research problem formulation and content/thematic analysis for literature reviews. To perform the analysis, a set of 5 articles on customer ideation was chosen. Next, keyword mapping was performed on these articles in the VOSviewer science mapping software and compared with a variable map prepared manually from the same articles. Seven independent expert judges (management scientists at different levels of expertise) assessed the usability of both approaches for formulating the research problem and for content/thematic analysis. The results show the advantage of variable mapping in the formulation of the research problem and in thematic/content analysis. First, the ability to identify a research gap is clearly visible due to the transparent and comprehensive analysis of the relationships between variables, not only keywords. Second, the analysis of relationships between variables enables the creation of a narrative that indicates the directions of the relationships between variables. Demonstrating the advantage of the new approach over the classic one may be a significant step towards developing a new approach to the synthesis and review of literature. Variable mapping seems to allow scientists to build clear and effective models presenting the scientific achievements of a chosen research area in one simple map. Additionally, the development of software enabling the automation of the variable mapping process on large data sets may be a breakthrough in the field of literature research.
Keywords: bibliometrics, literature review, science mapping, variable mapping
Procedia PDF Downloads 120
652 An Inexhaustible Will of Infinite, or the Creative Will in the Psychophysiological Artistic Practice: An Analysis through Nietzsche's Will to Power
Authors: Filipa Cruz, Grecia P. Matos
Abstract:
An Inexhaustible Will of Infinite is an ongoing piece of practice-based research focused on a psychophysiological conception of the body and on the creative will, which seeks to examine the possibility of art being simultaneously a pacifier and an intensifier in a physiological artistic production. It is a study in which philosophy and art converge in a commentary on how the concept of the will to power affects the art world, developed through Nietzsche's commentaries, the analysis of case studies, and reflection arising from artistic practice. Through Nietzsche, concepts that communicate with artistic practice are compared, since creation is an intensification and engenders perspectives. It is also a practice deeply embedded in the body, in the non-verbal, in the physiology of art, and in the coexistence of the sensorial and thought. It is asked whether the physiology of art could be thought of as a thinking-feeling with no primacy of thought over the sensorial. Art, as a manifestation of the will to power, participates in a comprehension of the world. In this article, art is taken as a privileged means of communication, implicating the corporeal, the sensorial and the conceptual, and of connection between humans. Dream and drunkenness are problematized as intensifications and expressions of life's comprehension. Art is therefore perceived as suggestion and invention, where artistic intoxication breaks limits in the experience of life, and the artist, dominated by creative forces, claims, orders, obeys, and proclaims love for life. The intention is also to consider how one can start from pain to create, and how one can generate new and endless artistic forms through nightmares, daydreams, impulses, intoxication, enhancement, and intensification in a plurality of subjects and matters. Artistic creation is understood as something that is intensified corporeally, expanded, continuously generated, and acting on bodies; it is inextinguishable, a constant movement intertwining the Apollonian and Dionysian instincts of destruction and creation of new forms. The concept of love also appears, associated with conquering: in a process of intensification and drunkenness, it impels the artist to generate and to transform matter. Just like a love relationship, love in Nietzsche requires time, patience, effort, courage, conquest, seduction, obedience, and command, potentiating the amplification of knowledge of the other and of the world. Interlacing Nietzsche's philosophy not with Modern Art but with Contemporary Art, it is argued that intoxication, the will to power (strongly connected with the creative will), and love still have a place in artistic production as creative agents.
Keywords: artistic creation, body, intensification, psychophysiology, will to power
Procedia PDF Downloads 119
651 Shaping Work Engagement through Intra-Organizational Coopetition: Case Study of the University of Zielona Gora in Poland
Authors: Marta Moczulska
Abstract:
One of the most important aspects of human management in an organization is work engagement. Despite the different perspectives on engagement, it is expressed in the activity of the individual involved in the performance of tasks and in the functioning of the organization, and it is considered not only in the behavioural but also in the cognitive and emotional dimensions. Previous studies have concerned the sources, predictors and determinants of engagement, including organizational ones. Attention has been paid to the importance of needs (including belonging, success, development, and a sense of meaningful work), values (such as trust, honesty, respect, and justice), and interpersonal relationships, especially with the supervisor. Taking these into account, together with theories of human action, behaviour in organizations, and interactions, it was recognized that engagement can be shaped through cooperation and competition. It was assumed that to shape work engagement, it is necessary to cooperate and compete simultaneously in order to reduce the weaknesses of each of these activities and strengthen their strengths. The combination of cooperation and competition is defined as 'coopetition'. However, research conducted in this field is primarily concerned with relations between companies. Intra-organizational coopetition is mainly considered in terms of competing organizational branches or units (cross-functional coopetition), while less attention is paid to competing groups or individuals. The ambiguity of the concepts of cooperation and rivalry is also worth noting: taking into account the terms used and their meanings, different levels of cooperation and forms of competition can be distinguished, and thus several types of intra-organizational coopetition can be identified. The article aims to define the potential for shaping work engagement through intra-organizational coopetition. The aim of the research was to learn how different levels of cooperation under conditions of competition influence engagement. It is assumed that rivalry (positive competition) between teams (the highest level of cooperation) is a type of coopetition that contributes to work engagement. Qualitative research will be carried out among students of the University of Zielona Gora carrying out various types of projects. The first research group will consist of students working in groups on one project for three months; the second will consist of students working in groups on several projects over the same period (three months). Work engagement will be measured using the UWES questionnaire, and levels of cooperation will be determined using the author's own research tool. Because the research is ongoing, the results will be presented in the final paper.
Keywords: competition, cooperation, intra-organizational coopetition, work engagement
Procedia PDF Downloads 145
650 The Representation of the Medieval Idea of Ugliness in Messiaen's Saint François d’Assise
Authors: Nana Katsia
Abstract:
This paper explores the ways in which both medieval and medievalist conceptions of ugliness might be linked to the physical and spiritual transformation of the protagonists, and how this is realised through specific musical rhythms, such as the dochmiac rhythm, in the opera. As Eco and Henderson note, only one kind of ugliness could be represented in conformity with nature in the Middle Ages without destroying all aesthetic pleasure and, in turn, artistic beauty: namely, a form of ugliness which arouses disgust. Moreover, Eco observes that the enemies of Christ who condemn, martyr, and crucify him are represented as wicked inside; in turn, the representation of inner wickedness and hostility toward God brings with it outward ugliness, coarseness, barbarity, and rage, ultimately resulting in the deformation of the figure. In all these regards, the non-beautiful is represented here as a necessary phase, which is not the case with classical (ancient Greek) concepts of beauty. As we can see, the understanding of disfigurement and ugliness in the Middle Ages was both varied and complex. The disfigurement caused by leprosy (and other skin and bodily conditions) was interpreted, in a somewhat contradictory manner, as both a curse and a gift from God; some saints' lives even have the saint appealing to be inflicted with the disease as part of their mission toward true humility. We shall explore how this 'different concept' of ugliness (non-classical beauty) might be represented in Messiaen's opera. According to Messiaen, the Leper and Saint François are the principal characters of the third scene, as both of them will be transformed and a double miracle will take place in the process. Messiaen mirrors the idea of the true humility of the Saint's life and positions Le Baiser au Lépreux as the culmination of the first act. The Leper's character represents his physical and spiritual disfigurement, which is healed after the miracle. The scene can thus be viewed as an encounter between beauty and ugliness, and much of it is spent in a study of ugliness. The dochmiac rhythm is one of the most important compositional elements in the opera and plays a crucial role in creating the dramatic musical narrative and structure of the composition. As such, we shall explore how Messiaen represents the medieval idea of ugliness in the opera through particular musical elements linked to the main protagonists' spiritual or physical ugliness, why Messiaen makes reference to the dochmiac rhythm, and how these elements create the musical and dramatic context in the opera for the medieval aesthetic category of ugliness.
Keywords: ugliness in music, medieval time, saint françois d’assise, messiaen
Procedia PDF Downloads 146
649 Sustainable Organization for Sustainable Strategy: An Empirical Evidence
Authors: Lucia Varra, Marzia Timolo
Abstract:
The interest of scholars in corporate sustainability has strengthened in recent years, in parallel with the growing need to undertake paths of cultural and organizational change as a way to achieve greater competitiveness and stakeholder satisfaction. Studies on business sustainability, while integrating the three dimensions of sustainability that have long existed in economic approaches (the economic, environmental and social dimensions), have not given rise to an organic construct that brings together the aspects of strategic management with corporate social responsibility, and even less with organizational issues. Some important questions therefore remain open: which organizational structure and which operational mechanisms are coherent with, or conducive to, a sustainability strategy? Existing studies appear fragmented, although some aspects are of shared importance: knowledge management, human resource management, leadership, innovation, etc. The construction of a model of the sustainable organization that supports the sustainability strategy can no longer be postponed, nor can its connection with the main practices for measuring corporate social responsibility performance. The paper aims to identify the organizational characteristics of a sustainable corporation. To this end, from a theoretical point of view, the work examines the main existing contributions in the literature and, from a practical point of view, it presents a business case of a service organization that has pursued a sustainability strategy for years. The paper is divided into two parts. The first part reviews the main articles on strategic management and the main organizational issues raised by the literature, such as knowledge management, leadership, and innovation; it then proposes a modelling of the main variables examined by scholars and an integration of these with the international standards for measuring CSR. In the second part, using the case study methodology, the hypotheses and the structure of the proposed model, which aims to integrate strategic issues with organizational aspects and the measurement of sustainability performance, are applied to an Italian company in which organizational and human resource management interventions are in place to align strategic decisions with the structure and operating mechanisms of the organization. The case presented supports the hypotheses of the model.
Keywords: CSR, strategic management, sustainable leadership, sustainable human resource management, sustainable organization
Procedia PDF Downloads 102
648 Investigating the Thermal Comfort Properties of Mohair Fabrics
Authors: Adine Gericke, Jiri Militky, Mohanapriya Venkataraman
Abstract:
Mohair, obtained from the Angora goat, is a luxury fibre recognised as one of the best quality natural fibres. Expanding the use of mohair into technical and functional textile products requires a better understanding of how the use of mohair in fabrics affects their thermo-physiological comfort-related properties. Despite its popularity, very little information is available on the quantification of the thermal and moisture management properties of mohair fabrics. This study investigated the effect of fibre composition and fabric structural parameters on conductive and convective heat transfer in order to obtain more information on the thermal comfort properties of mohair fabrics. Dry heat transfer through textiles may involve conduction through the fibrous phase, radiation through fabric interstices, and convection of air within the structure. Factors that play a major role in heat transfer by conduction are fabric areal density (g/m2) and derived quantities such as cover factor and porosity. Convective heat transfer through fabrics occurs in environmental conditions where there is wind flow or where the wearer is moving (e.g. running or walking). The thermal comfort properties of mohair fibres were objectively evaluated, first in comparison with other textile fibres and second in a variety of fabric structures. Two sample sets were developed for this purpose, with fibre content, yarn structure and fabric design as the main variables. SEM and microscopic images were obtained to examine the physical structures of the fibres and fabrics closely. Thermal comfort properties such as thermal resistance and thermal conductivity, as well as fabric thickness, were measured on the well-known Alambeta test instrument, and clothing insulation (clo) was calculated from these values. The thermal properties of the fabrics under heat convection were evaluated using a laboratory model device developed at the Technical University of Liberec (referred to as the TP2 instrument). The effects of the different variables on the fabric thermal comfort properties were analysed statistically using TIBCO Statistica software. The results showed that fabric structural properties, specifically sample thickness, played a significant role in determining the thermal comfort properties of the fabrics tested. Regarding thermal resistance related to conductive heat flow, the effect of fibre type was not always statistically significant, probably as a result of the amount of air trapped within the fabric structure: the very low thermal conductivity of air, compared with that of the fibres, had a significant influence on the total conductivity and thermal resistance of the samples, which was confirmed by the high correlation of these factors with sample thickness. Regarding convective heat flow, the most important factor influencing the ability of the fabric to allow dry heat to move through the structure was again fabric thickness. It would nevertheless be wrong to disregard the effect of fibre composition on the thermal resistance of textile fabrics entirely. In this study, the samples containing mohair or mohair/wool were consistently thicker than the others even though weaving parameters were kept constant. This can be ascribed to the physical properties of the mohair fibres, which make them exceptionally good at trapping air among fibres (in a yarn) as well as among yarns (inside a fabric structure). The thicker structures trap more air and so provide higher thermal insulation, but they also prevent the free flow of air that allows thermal convection.
Keywords: mohair fabrics, convective heat transfer, thermal comfort properties, thermal resistance
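A small sketch of the thermal-resistance and clothing-insulation (clo) calculation mentioned above, assuming the usual relations R = thickness / conductivity and 1 clo = 0.155 m²·K/W; the fabric thickness and conductivity values below are hypothetical, not the measured mohair data.

```python
# Sketch of the thermal-resistance and clo calculation described above.
# The fabric thickness and conductivity are hypothetical placeholders,
# not the measured mohair values; 1 clo = 0.155 m^2*K/W is the standard conversion.

def thermal_resistance(thickness_m: float, conductivity_w_mk: float) -> float:
    """R = h / lambda, in m^2*K/W (as reported by instruments such as the Alambeta)."""
    return thickness_m / conductivity_w_mk

def clo(resistance_m2kw: float) -> float:
    """Clothing insulation expressed in clo units."""
    return resistance_m2kw / 0.155

thickness = 0.0012     # 1.2 mm fabric (hypothetical)
conductivity = 0.045   # W/(m*K), assumed order of magnitude for an air-rich fabric

r = thermal_resistance(thickness, conductivity)
print(f"R = {r*1000:.1f} x 10^-3 m^2*K/W, insulation = {clo(r):.3f} clo")
# A thicker structure (more trapped air) raises R and clo, consistent with the
# finding that sample thickness dominates the conductive comfort properties.
```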
Procedia PDF Downloads 141
647 Frustration Measure for Dipolar Spin Ice and Spin Glass
Authors: Konstantin Nefedev, Petr Andriushchenko
Abstract:
Frustrated magnets are usually understood as materials in which the interactions between localised magnetic moments, or spins, compete with one another and cannot all be satisfied simultaneously. The best-known and simplest example of a frustrated system is the antiferromagnetic Ising model on a triangle. Physically, the existence of frustration means that one cannot make all three pairs of spins antiparallel in the basic triangular unit. In the physics of interacting particle systems, vector models are used, which are constructed on the basis of a pair-interaction law. The pair-interaction energy between one-component vectors can take one of two values of opposite sign (excluding the case of zero). Mathematically, the existence of frustration in a system means that it is impossible for all pair-interaction energies in the Hamiltonian to be negative, even in the ground state (the lowest-energy state). In fact, frustration is the excitation that remains in the system when thermodynamics no longer operates, i.e. at absolute zero temperature. The origin of frustration is the presence of at least one 'unsatisfied' pair of interacting spins (magnetic moments). The minimal relative number of these excitations (the relative number of frustrations in the ground state) can be used as a frustration parameter. If the energy of the ground state is Egs, and the sum of all pair-interaction energies taken with a positive sign is Emax, then the proposed frustration parameter pf takes values in the interval [0,1] and is defined as pf = (Egs + Emax)/(2Emax). For the antiferromagnetic Ising model on a triangle, pf = 1/3. We calculated the frustration parameters in the thermodynamic limit for different 2D periodic structures of Ising dipoles placed on the edges of the lattice and interacting through the long-range dipolar interaction. For the honeycomb lattice pf = 0.3415, for the triangular lattice pf = 0.2468, and for the kagome lattice pf = 0.1644. All dependences of the frustration parameter on 1/N follow a linear law. The proposed frustration parameter allows the thermodynamics of all magnetic systems to be considered from a unified point of view, and different lattice systems of interacting particles to be compared within the framework of vector models. This parameter can serve as a fundamental characteristic of frustrated systems: it does not depend on temperature or on the thermodynamic state in which the system is found, such as spin ice, spin glass, spin liquid or even spin snow, and it gives the minimal relative number of excitations that can exist in the system at T = 0.
Keywords: frustrations, parameter of order, statistical physics, magnetism
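As a check of the definition pf = (Egs + Emax)/(2Emax), the sketch below enumerates all spin configurations of the antiferromagnetic Ising triangle and recovers pf = 1/3, the value quoted in the abstract. It only illustrates the definition; the reported honeycomb, triangular and kagome values come from long-range dipolar calculations that are not reproduced here.

```python
# Brute-force check of the frustration parameter p_f = (E_gs + E_max) / (2 * E_max)
# for the antiferromagnetic Ising model on a triangle (nearest-neighbour J > 0).
# This illustrates the definition only; the paper's lattice values come from
# long-range dipolar interactions, not this toy model.
from itertools import product

J = 1.0
pairs = [(0, 1), (1, 2), (0, 2)]          # the three bonds of the triangle

def energy(spins):
    # Antiferromagnetic pair energy +J*s_i*s_j (positive when spins are parallel)
    return sum(J * spins[i] * spins[j] for i, j in pairs)

energies = [energy(s) for s in product((-1, 1), repeat=3)]
E_gs = min(energies)                      # ground-state energy: -J (one bond frustrated)
E_max = sum(abs(J) for _ in pairs)        # all pair energies taken with positive sign: 3J
p_f = (E_gs + E_max) / (2 * E_max)
print(p_f)                                # 0.333... = 1/3, as quoted in the abstract
```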
Procedia PDF Downloads 169
646 Reasons for Lack of an Ideal Disinfectant after Dental Treatments
Authors: Ilma Robo, Saimir Heta, Rialda Xhizdari, Kers Kapaj
Abstract:
Background: An ideal disinfectant for surfaces, instruments, air and skin does not exist, either in dentistry or in the other fields of medicine. The reason is simple: all the characteristics of an ideal disinfectant cannot be combined in one product, because emphasising one characteristic conflicts with another. A disinfectant must be stable and should not be affected by changes in the environmental conditions in which it is stored, meaning that it should not be affected by an increase in temperature or in the humidity of the environment. Both of these elements conflict with other requirements of an ideal disinfectant, as they disturb the solubility ratio between the base substance of the disinfectant and the diluent. Material and methods: The study aims to extract the constant of each disinfectant/antiseptic used in dental disinfection protocols, together with the side effects on the skin or mucosal surface where it is applied as an antiseptic. Finally, conclusions are drawn about the best possible combination of disinfectants after a dental procedure, based on data from the basic literature used in the pharmacology module of dental training, compared against data published in the literature. Results: The sensitivity of disinfectants to changes in the atmospheric conditions of the storage environment is a known fact; care regarding this element is always accompanied by advice on the application of the specific disinfectant in order to obtain the desired clinical result. Based on the data collected and presented, the constants of the disinfectants by class are: alcohols 70-120, glycols 0.2, aldehydes 30-200, phenols 15-60, acids 100, povidone-iodine halogens 5-75, hypochlorous acid halogens 150, sodium hypochlorite halogens 30-35, oxidants 18-60, metals 0.2-10. The halogens deserve particular attention, since specific results were obtained for the individual representatives of this class, and it is these representatives that find clinical application in dentistry. Conclusions: The search for the 'ideal', even when its defining criteria are established, and not only for disinfectants but for any medication or pharmaceutical product, is an ongoing search without any definitive result. Within the wealth of data in the published literature, where something fixed and calculable exists, such as the specific constant for disinfectants, the search for the ideal becomes more concrete. During disinfection protocols, different disinfectants are applied because the fields of action differ, including water, air, aspiration devices and instruments, with disinfectants used in full accordance with the manufacturers' indications.
Keywords: disinfectant, constant, ideal, side effects
Procedia PDF Downloads 69
645 GC-MS-Based Untargeted Metabolomics to Study the Metabolism of Pectobacterium Strains
Authors: Magdalena Smoktunowicz, Renata Wawrzyniak, Malgorzata Waleron, Krzysztof Waleron
Abstract:
Pectobacterium spp. were previously classified in the genus Erwinia, founded in 1917 to unite all the Gram-negative, fermentative, non-sporulating, peritrichously flagellated plant pathogenic bacteria known at that time. Following the work of Waldee (1945), and in the Approved Lists of Bacterial Names and the bacteriology manuals of 1980, they were described under either the genus Erwinia or the genus Pectobacterium. The genus Pectobacterium was formally described in 1998 on the basis of 265 Pectobacterium strains. Currently, there are 21 species of Pectobacterium, including, since 2003, Pectobacterium betavasculorum, which causes soft rot of sugar beet tubers. From the biochemical experiments carried out on this species, it is known that these bacteria are Gram-negative, catalase-positive, oxidase-negative and facultatively anaerobic, that they utilise gelatin, and that they cause symptoms of soft rot on potato and sugar beet tubers. The very fact of growth on sugar beet may indicate a metabolism characteristic of this species alone. Metabolomics, broadly defined as the biology of metabolic systems, allows comprehensive measurements of metabolites; in combination with genomics, it provides complementary tools for the identification of metabolites and their reactions, and thus for the reconstruction of metabolic networks. The aim of this study was to apply GC-MS-based untargeted metabolomics to study the metabolism of P. betavasculorum under different growth conditions. The metabolomic profiles of the biomass and of the culture media were determined. For sample preparation, the following protocol was used: 900 µl of a methanol:chloroform:water mixture (10:3:1, v:v) was added to 900 µl of biomass from the bottom of the tube, and likewise to 900 µl of nutrient medium separated from the bacterial biomass. After centrifugation (13,000 x g, 15 min, 4 °C), 300 µL of the obtained supernatants were concentrated under rotary vacuum and evaporated to dryness. Afterwards, a two-step derivatization procedure was performed before the GC-MS analyses. The obtained results were subjected to statistical analysis using both uni- and multivariate tests and were evaluated against the KEGG database in order to assess which metabolic pathways are activated, and which genes are responsible for them, during the metabolism of the substrates present in the growth environment. The observed metabolic changes, combined with biochemical and physiological tests, may enable pathway discovery, regulatory inference, and an understanding of the homeostatic abilities of P. betavasculorum.
Keywords: GC-MS chromatography, metabolomics, metabolism, pectobacterium strains, pectobacterium betavasculorum
Procedia PDF Downloads 78
644 Utilising Indigenous Knowledge to Design Dykes in Malawi
Authors: Martin Kleynhans, Margot Soler, Gavin Quibell
Abstract:
Malawi is one of the world’s poorest nations and, consequently, the design of flood risk management infrastructure comes with a different set of challenges. Good quality hydromet data are lacking, both in spatial coverage and in quality, and the design of flood risk management infrastructure is further complicated by the fact that maintenance is almost completely non-existent, so solutions have to be simple to be effective. Solutions should not require any further resources to remain functional after completion, they should be resilient, and they also have to be cost effective. The Lower Shire Valley of Malawi suffers from frequent flood events. Various flood risk management interventions have been designed across the valley during the course of the Shire River Basin Management Project – Phase I, and because of the data-poor environment, indigenous knowledge was relied upon to a great extent for hydrological and hydraulic model calibration and verification. However, indigenous knowledge comes with the caveat that it is ‘fuzzy’ and can be manipulated for political reasons. The experience in the Lower Shire Valley suggests that indigenous knowledge is unlikely to invent a problem where none exists, but that flood depths and extents may be exaggerated to secure prioritisation of the intervention. Indigenous knowledge relies on the memory of a community and cannot foresee events that exceed past experience, events that unfold differently from those that have occurred in the past, or situations where flood management interventions change the flow regime. This complicates the communication of planned interventions to local inhabitants. Indigenous knowledge is, for the most part, intuitive, but flooding can sometimes be counterintuitive, and the rural poor may have a lower trust of technology. Because of the near complete lack of maintenance, infrastructure has to be designed with no moving parts and no requirement for energy inputs, which precludes pumps, valves, flap gates and sophisticated warning systems. The dykes designed during this project therefore included ‘flood warning spillways’ that double as pedestrian and animal crossing points and warn residents of dangerously high water levels behind the dykes before levels that could cause a dyke failure are reached. Locally available materials and erosion protection using vegetation were used wherever possible to keep costs down.
Keywords: design of dykes in low-income countries, flood warning spillways, indigenous knowledge, Malawi
Procedia PDF Downloads 279
643 An Evaluation of the Relationship between the Anthropometric Measurements and Blood Lipid Profiles in Adolescents
Authors: Nalan Hakime Nogay
Abstract:
Childhood obesity is a significant health issue that is on the rise all over the world, and in recent years the relationship between childhood obesity and cardiovascular disease risk has been highlighted. The purpose of this study is to evaluate the relationship between selected anthropometric indicators and blood lipid levels in adolescents. The study was conducted on a total of 252 adolescents, 200 girls and 52 boys, aged 12 to 18 years. Blood was drawn from each participant in the morning, after a 10-hour overnight fast, to analyse total cholesterol, HDL, LDL and triglyceride levels. Body weight, height, waist circumference, subscapular skinfold thickness and triceps skinfold thickness were measured, and the waist/height ratio, BMI and body fat ratio were calculated. The blood lipid levels of the participants were categorised as acceptable, borderline or high in accordance with the 2011 Expert Panel Integrated Guidelines. The body fat ratios, total blood cholesterol and HDL levels of the girls were significantly higher than those of the boys, whereas their waist circumference values were lower. The triglyceride levels and the total cholesterol/HDL, LDL/HDL and triglyceride/HDL ratios of the group with BMI at or above the 95th percentile (the obese group) were significantly higher than those of the overweight and normal-weight groups defined by their BMI values, while the HDL level of the obese group was significantly lower. No significant relationship could be established, however, between total blood cholesterol and LDL levels and the anthropometric measurements. The BMI, waist circumference, waist/height ratio, body fat ratio and triglyceride level of the group with a high triglyceride level (≥ 130 mg/dl) were significantly higher than those of the borderline (90-129 mg/dl) and normal (< 90 mg/dl) groups. The BMI, waist circumference and waist/height ratio of the group with a low HDL level (< 40 mg/dl) were significantly higher than those of the normal (> 45 mg/dl) and borderline (40-45 mg/dl) groups. All the anthropometric measurements of the group with a high triglyceride/HDL ratio (≥ 3) were significantly higher than those of the group with a lower ratio (< 3). A high BMI, waist/height ratio and waist circumference are therefore associated with low HDL and with high blood triglycerides and a high triglyceride/HDL ratio, while a high body fat ratio is associated with low HDL and a high triglyceride/HDL ratio. Tackling childhood and adolescent obesity is important for preventing cardiovascular diseases.
Keywords: adolescent, body fat, body mass index, lipid profile
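The anthropometric indices and lipid cut-offs used in the study can be written as a small helper, assuming the thresholds given in the abstract (triglycerides ≥ 130 mg/dl high, 90-129 borderline; HDL < 40 mg/dl low, 40-45 borderline; triglyceride/HDL ≥ 3 elevated); the example participant is hypothetical.

```python
# Sketch of the anthropometric indices and the lipid cut-offs used in the study
# (TG >= 130 high, 90-129 borderline; HDL < 40 low, 40-45 borderline; TG/HDL >= 3 elevated).
# The example subject below is hypothetical.

def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

def waist_to_height(waist_cm: float, height_cm: float) -> float:
    return waist_cm / height_cm

def classify_lipids(tg_mg_dl: float, hdl_mg_dl: float) -> dict:
    tg_cat = "high" if tg_mg_dl >= 130 else "borderline" if tg_mg_dl >= 90 else "acceptable"
    hdl_cat = "low" if hdl_mg_dl < 40 else "borderline" if hdl_mg_dl <= 45 else "acceptable"
    return {"TG": tg_cat, "HDL": hdl_cat, "TG/HDL elevated": tg_mg_dl / hdl_mg_dl >= 3}

# Hypothetical 14-year-old participant
print(round(bmi(62.0, 1.58), 1))               # 24.8
print(round(waist_to_height(78.0, 158.0), 2))  # 0.49
print(classify_lipids(tg_mg_dl=140.0, hdl_mg_dl=42.0))
```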
Procedia PDF Downloads 263
642 Diplomacy in Times of Disaster: Management through Reputational Capital
Authors: Liza Ireni-Saban
Abstract:
The magnitude 6.6 earthquake that struck Bam, Iran, in 2003 made it impossible for the Iranian government to handle disaster relief efforts domestically. In this extreme event, the Iranian government reached out to the international community, and this created a momentum that had to be carried forward by trust-building efforts on all sides, often termed 'disaster diplomacy'. The circumstances were all the more critical given the increasing political and economic isolation of Iran within the international community. The potential of disasters to open transformative political space has been recognised by dominant international political actors. Although the Bam 2003 post-disaster relief efforts did not catalyse diplomatic activities on any side, it is suggested that a few international aid agencies successfully used disaster recovery to enhance their popular legitimacy and reputation within the international community. In terms of disaster diplomacy, an actor's reputational capital may affect its ability to build coalitions and alliances to achieve international political ends, and to negotiate and build understanding and trust with foreign publics. This study suggests that the post-disaster setting may benefit from the ecology of games framework to evaluate the role of bridging actors and mediators in facilitating collaborative governance networks. Recent developments in network theory and analysis provide measures of structural embeddedness with which to explore how reputational capital can be built through the brokerage roles of actors engaged in a disaster management network. This paper therefore maps the relations among the actors that participated in the post-disaster relief efforts following the 2003 Bam earthquake (Iran), in order to assess under which conditions actors may be strategically positioned to serve as mediating organizations in future disaster events experienced by isolated nations or nations in conflict. The results indicate the strategic use of reputational capital by the Iranian Ministry of Foreign Affairs as the key broker in building a successful coordination system for reducing disaster vulnerabilities. International aid agencies rarely played brokerage roles in coordinating peripheral actors. U.S. foreign assistance (USAID), despite its coordination capacities, was prevented from serving brokerage roles in the system.
Keywords: coordination, disaster diplomacy, international aid organizations, Iran
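The brokerage analysis described above is often operationalised as betweenness centrality on the coordination network; the sketch below uses networkx for that purpose, with hypothetical actor names and ties rather than the study's actual Bam 2003 network data.

```python
# Illustrative brokerage analysis of a post-disaster coordination network using
# betweenness centrality (a common proxy for brokerage roles). Actor names and
# ties are hypothetical placeholders, not the study's actual Bam 2003 data.
import networkx as nx

G = nx.Graph()
edges = [
    ("Iranian MFA", "IFRC"), ("Iranian MFA", "Local NGOs"),
    ("Iranian MFA", "UN OCHA"), ("IFRC", "Red Crescent"),
    ("UN OCHA", "USAID"), ("Local NGOs", "Municipalities"),
]
G.add_edges_from(edges)

brokerage = nx.betweenness_centrality(G, normalized=True)
for actor, score in sorted(brokerage.items(), key=lambda kv: -kv[1]):
    print(f"{actor:15s} {score:.2f}")
# A high score flags an actor that lies on many shortest paths between otherwise
# unconnected participants, i.e. a candidate broker/mediator in the network.
```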
Procedia PDF Downloads 154
641 Investigation of Deep Eutectic Solvents for Microwave Assisted Extraction and Headspace Gas Chromatographic Determination of Hexanal in Fat-Rich Food
Authors: Birute Bugelyte, Ingrida Jurkute, Vida Vickackaite
Abstract:
The most complicated step in the determination of volatile compounds in complex matrices is the separation of the analytes from the matrix. Traditional analyte separation methods (liquid extraction, Soxhlet extraction) require a lot of time and labour; moreover, there is a risk of losing the volatile analytes. In recent years, headspace gas chromatography has been used to determine volatile compounds. To date, traditional extraction solvents have been used in headspace gas chromatography. As a rule, such solvents are rather volatile, so a large amount of solvent vapour enters the headspace together with the analyte. Because of this, the sensitivity of the analyte determination is reduced, and a large solvent peak in the chromatogram can overlap with the peaks of the analytes. Sensitivity is also limited by the fact that the sample cannot be heated above the boiling point of the solvent. In 2018, it was suggested that traditional headspace gas chromatographic solvents be replaced with non-volatile, eco-friendly, biodegradable, inexpensive and easy-to-prepare deep eutectic solvents (DESs). Generally, deep eutectic solvents have low vapour pressure, a relatively wide liquid range, and a much lower melting point than any of their individual components. These features make DESs very attractive as matrix media for application in headspace gas chromatography. In addition, DESs are polar compounds, so they can be used for microwave assisted extraction. The aim of this work was to investigate the possibility of applying deep eutectic solvents to the microwave assisted extraction and headspace gas chromatographic determination of hexanal in fat-rich food. Hexanal is considered one of the most suitable indicators of the degree of lipid oxidation, as it is the main secondary oxidation product of linoleic acid, one of the principal fatty acids of many edible oils. Eight hydrophilic and hydrophobic deep eutectic solvents were synthesised, and the influence of temperature and microwaves on their headspace gas chromatographic behaviour was investigated. Using the most suitable DES, the microwave assisted extraction conditions and the headspace gas chromatographic conditions were optimised for the determination of hexanal in potato chips. Under the optimised conditions, the quality parameters of the developed technique were determined. The suggested technique was applied to the determination of hexanal in potato chips and other fat-rich foods.
Keywords: deep eutectic solvents, headspace gas chromatography, hexanal, microwave assisted extraction
Procedia PDF Downloads 195
640 The Creation of Calcium Phosphate Coating on Nitinol Substrate
Authors: Kirill M. Dubovikov, Ekaterina S. Marchenko, Gulsharat A. Baigonakova
Abstract:
NiTi alloys are widely used as implants in medicine due to their unique properties, such as superelasticity, the shape memory effect and biocompatibility. Despite these properties, one of the major problems is the release of nickel after prolonged use in the human body under dynamic stress. This occurs due to the oxidation and cracking of NiTi implants, which provokes nickel segregation from the matrix to the surface and its release into living tissues. Nickel is a toxic element and can cause cancer, allergies, etc. One of the most popular ways to solve this problem is to create a corrosion-resistant coating on NiTi. There are many coatings of this type, but not all of them have good biocompatibility, which is very important for medical implants. Coatings based on calcium phosphate phases have excellent biocompatibility because Ca and P are the main constituents of the mineral part of human bone. This suggests that a Ca-P coating on NiTi can enhance osteogenesis and accelerate the healing process. The aim of this study is therefore to investigate the structure of a Ca-P coating on a NiTi substrate. Plasma-assisted radio frequency (RF) sputtering was used to obtain the films. This method was chosen because it allows the crystallinity and morphology of the Ca-P coating to be controlled through the sputtering parameters, and it allowed three different Ca-P-coated NiTi samples to be obtained. XRD, AFM, SEM and EDS were used to study the composition, structure and morphology of the coating phase. Scratch tests were carried out to evaluate the adhesion of the coating to the substrate, and wettability tests were used to investigate the hydrophilicity of the different coatings and to suggest which of them had the best biocompatibility. XRD showed that the coatings of all samples were hydroxyapatite, while the matrix consisted of TiNi intermetallic compounds such as B2, Ti2Ni and Ni3Ti. SEM showed that only the sample sputtered for three hours had a dense, defect-free coating. The wettability tests showed that the sample with the densest coating had the lowest contact angle of 40.2° and the highest surface free energy of 57.17 mJ/m2, which is mostly dispersive. The scratch test, carried out to investigate the adhesion of the coating to the surface, showed that all coatings were removed by a cohesive mechanism; however, at a load of 30 N the indenter reached the substrate in two of the three samples, the exception being the sample with the densest coating. It was concluded that the most promising sputtering mode was the third one, consisting of three hours of deposition, which produced a defect-free Ca-P coating with good wettability and adhesion.
Keywords: biocompatibility, calcium phosphate coating, NiTi alloy, radio frequency sputtering
Procedia PDF Downloads 72
639 Identifying and Understand Pragmatic Failures in Portuguese Foreign Language by Chinese Learners in Macau
Authors: Carla Lopes
Abstract:
It is clear nowadays that the proper performance of different speech acts is one of the most difficult obstacles that a foreign language learner has to overcome in order to be considered communicatively competent. This communication presents the results of an investigation into the pragmatic performance of Portuguese language students at the University of Macau. The research discussed herein is based on a survey consisting of fourteen speaking situations to which the participants must respond in writing, covering different types of speech acts: apology, response to a compliment, refusal, complaint, disagreement, and the understanding of the illocutionary force of indirect speech acts. The responses were classified on a five-level Likert scale (quantified from 1 to 5) according to their suitability for the particular situation. In general terms, about 45% of the respondents' answers were pragmatically competent, 10% were acceptable, and 45% showed weaknesses at the socio-pragmatic competence level. Given that linguistic deviations were not taken into account, we can conclude that the faults are of cultural origin. It is natural that, where orthogonal cultures such as Chinese and Portuguese meet, failures of this type occur and are barely resolved within the four years of the undergraduate program. The target population, native speakers of Cantonese or Mandarin, make their first contact with the English language before joining the Bachelor of Portuguese Language. An analysis of the socio-pragmatic failures in the respondents' answers suggests that many of them are due to a lack of cultural knowledge. The students try to compensate for this either by drawing on their native culture or by resorting to a Western culture that they consider close to the Portuguese one, namely the English or US culture, which they have studied previously and which is widely present in the media and on the internet. This phenomenon, known as 'pragmatic transfer', can result in linguistic behaviour that may be considered inauthentic or pragmatically awkward. The resulting speech act is grammatically correct but not pragmatically feasible, since it is not suited to the culture of the target language, either because it does not exist there or because the conditions of its use are in fact different. The analysis of the responses also supports the conclusion that these students deviate considerably from the expected, stereotyped behaviour of Chinese students. We can speculate as to whether this linguistic behaviour is a consequence of Macau's globalisation, which culturally shapes the students, makes them more open, and distinguishes them from typical Chinese students.
Keywords: Portuguese foreign language, pragmatic failures, pragmatic transfer, pragmatic competence
Procedia PDF Downloads 210638 Enhancing the Aussie Optimism Positive Thinking Skills Program: Short-term Effects on Anxiety and Depression in Youth aged 9-11 Years Old
Authors: Rosanna M. Rooney, Sharinaz Hassan, Maryanne McDevitt, Jacob D. Peckover, Robert T. Kane
Abstract:
Anxiety and depression are the most common mental health problems experienced by Australian children and adolescents. Research into youth mental health points to the importance of considering emotional competence, parental influence on the child’s emotional development, and the fact that cognitions are still developing in childhood when designing and implementing positive psychology interventions. Additionally, research into such interventions has suggested that the inclusion of a coaching component aimed at supporting those implementing the intervention enhances the effects of the intervention itself. In light of these findings, and given the burden of anxiety and depression in the longer term, it is necessary to enhance the Aussie Optimism Positive Thinking Skills program and evaluate its efficacy in terms of children’s mental health outcomes. It was expected that the enhancement of the emotional and cognitive aspects of the Aussie Optimism Positive Thinking Skills program, the addition of coaching, and the inclusion of a parent manual would lead to significant prevention effects on internalizing problems at post-test and at 6 and 18 months after the completion of the intervention. 502 students (9-11 years old) were randomly assigned to the intervention group (n = 347) or the control group (n = 155). At each time point (baseline, post-test, 6-month follow-up, and 18-month follow-up), students completed a battery of self-report measures. The ten intervention sessions making up the enhanced Aussie Optimism Positive Thinking Skills program were run weekly. At post-test and at the 6-month follow-up, the intervention group reported significantly lower depression than the control group, with no group differences at the 18-month follow-up. The intervention group reported significantly lower anxiety than the control group only at the 6-month follow-up, with no group differences at post-test or at the 18-month follow-up. Results suggest that the enhanced Aussie Optimism Positive Thinking Skills program can reduce depressive and anxious symptoms in the short term and highlight the importance of universally implemented positive psychology interventions.Keywords: positive psychology, emotional competence, internalizing symptoms, universal implementation
Procedia PDF Downloads 68637 A Study on the Chemical Composition of Kolkheti's Sphagnum Peat Peloids to Evaluate the Perspective of Use in Medical Practice
Authors: Al. Tsertsvadze, L. Ebralidze, I. Matchutadze, D. Berashvili, A. Bakuridze
Abstract:
Peatlands are landscape elements formed over very long periods by physical, chemical, biological, and geological processes. In the temperate zone of the Caucasus, the Kolkheti lowlands are distinguished by a diversity of relict plants, a high degree of endemism, and orographic, climatic, landscape, and other characteristics associated with high levels of biodiversity. The unique properties of the Kolkheti region lead to the formation of special, so-called endemic peat peloids. The composition and properties of peloids strongly depend on the peat-forming plants. Peat is considered a unique raw material complex that can be used in different fields of industry: agriculture, metallurgy, energy, biotechnology, the chemical industry, and health care. Peats form in permanently waterlogged areas, where the remains of higher plants decay under anaerobic conditions with the participation of microorganisms, and the peat mass absorbs soil and groundwater. Peloids are predominantly rich in humic substances, which are characterized by high biological activity: humic acids stimulate enzymatic activity and regenerative processes and have anti-inflammatory activity. The objects of the research were Kolkheti peat peloids (Ispani, Anaklia, Churia, Chirukhi, Peranga) at different formation phases. Given the specific physical and chemical properties of these objects, the aim of the research was to develop analytical methods for studying their chemical composition. The research was carried out using modern instrumental methods of analysis: ultraviolet-visible and infrared spectroscopy, scanning electron microscopy, fluorescence spectrometry, gas chromatography-mass spectrometry (GC-MS/MS), and gas chromatography, together with standard laboratory equipment (centrifuge, drying oven, Ultra-Turrax homogenizer, pH meter). Based on the research, the ratio between organic and inorganic substances, the spectrum of micro- and macroelements, and the mineral content were determined. The content of organic nitrogen was determined using the Kjeldahl method. The total amino acid composition was studied by a spectrophotometric method using standard solutions of glutamic and aspartic acids. Fatty acids were determined by gas chromatography (GC); the obtained results indicate that the method is valid for identifying fatty acids in the research objects. The organic substances in the research objects were characterized by GC-MS. Using these modern instrumental methods of analysis, the chemical composition of the research objects was established: each research object is rich in a broad spectrum of organic (fatty acids, amino acids, carbocyclic and heterocyclic compounds, organic acids and their esters, steroids) and inorganic (micro- and macroelements, minerals) substances. The modified methods used in this research may be utilized for the evaluation of cosmetological, balneological, and pharmaceutical products prepared on the basis of Kolkheti's Sphagnum peat peloids.Keywords: modern analytical methods, natural resources, peat, chemistry
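The abstract reports organic nitrogen determined by the Kjeldahl method. As a small illustration of the underlying arithmetic only (not the authors' own calculation, and with hypothetical titration values), nitrogen content is usually computed from the volume of standardized acid consumed in titrating the distilled ammonia:

```python
def kjeldahl_nitrogen_percent(v_sample_ml, v_blank_ml, acid_normality, sample_mass_mg):
    """Percent nitrogen from a Kjeldahl titration.

    (V_sample - V_blank) * N gives milliequivalents of NH3-N captured;
    multiplying by 14.007 mg/meq gives mg of nitrogen in the digested sample.
    """
    mg_nitrogen = (v_sample_ml - v_blank_ml) * acid_normality * 14.007
    return 100.0 * mg_nitrogen / sample_mass_mg

# Hypothetical example: 0.1 N HCl, 500 mg peat sample, 8.2 mL titrant vs. 0.3 mL blank
print(f"{kjeldahl_nitrogen_percent(8.2, 0.3, 0.1, 500):.2f} % N")
```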
Procedia PDF Downloads 127636 Emoji, the Language of the Future: An Analysis of the Usage and Understanding of Emoji across User-Groups
Authors: Sakshi Bhalla
Abstract:
On the one hand, given their seemingly simplistic, near-universal usage and understanding, emoji are dismissed as a potential step back in the evolution of communication. On the other, their effectiveness, pervasiveness, and adaptability across and within contexts are undeniable. In this study, the responses of 40 people (categorized by age) were recorded using a uniform two-part questionnaire in which they were required to a) identify the meaning of 15 emoji placed in isolation, and b) interpret the meaning of the same 15 emoji placed in a context-defining posting on Twitter. Their responses were studied in terms of deviation both from the meanings they had identified in isolation and from the originally intended meaning ascribed to each emoji. An analysis of these results showed that each of the five age categories uses, understands, and perceives emoji differently, which could be attributed to the degree of exposure they have undergone. For example, the youngest category (aged < 20) was the least accurate at correctly identifying emoji in isolation (~55%). Further, their proclivity to change their response with respect to the context was also the lowest (~31%). However, an analysis of their individual responses showed that these first-borns of social media seem to have reached a point where emoji no longer evoke their most literal meanings for them; the meaning and implication of these emoji have evolved to imply their context-derived meanings, even when placed in isolation. These trends carry forward meaningfully for the other four groups as well. In the case of the oldest category (aged > 35), however, the trends indicated inaccuracy and, therefore, a higher proclivity to change their responses. When studied as a continuum, the responses indicate that, slowly and steadily, emoji are evolving from pictograms to ideograms. That is to suggest that they do not just indicate a one-to-one relation between a singular form and a singular meaning; in fact, they communicate increasingly complicated ideas. This is much like the evolution of ancient hieroglyphics on papyrus reed or cuneiform on Sumerian clay tablets, which evolved from simple pictograms to progressively more complex ideograms. This evolution within communication is parallel to and contingent on the simultaneous evolution of the means of communication, and what is astounding is the capacity of humans to leverage different platforms to facilitate such changes. Twitterese, as it is now called, is one of the instances where language is adapting to the demands of the digital world. That it does not have a spoken component or an ostensible grammar, and lacks standardization of use and meaning, as some might suggest, may seem like impediments to qualifying it as the 'language' of the digital world. However, that kind of declaration remains a function of time, and time alone.Keywords: communication, emoji, language, Twitter
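The reported figures (~55% accuracy in isolation and ~31% of responses changed in context for the youngest group) are simple proportions over the 15 emoji per respondent. A minimal sketch of how such per-group statistics could be computed is shown below; the response records are hypothetical and purely illustrative.

```python
from collections import defaultdict

# Each record: (age_group, emoji_id, meaning_in_isolation, meaning_in_context, intended_meaning)
# Hypothetical sample records for illustration only.
responses = [
    ("<20", 1, "joy", "joy", "joy"),
    ("<20", 2, "prayer", "thanks", "high five"),
    (">35", 1, "joy", "laughing", "joy"),
    (">35", 2, "high five", "high five", "high five"),
]

stats = defaultdict(lambda: {"n": 0, "correct_isolated": 0, "changed_in_context": 0})
for group, _, isolated, in_context, intended in responses:
    s = stats[group]
    s["n"] += 1
    s["correct_isolated"] += isolated == intended
    s["changed_in_context"] += in_context != isolated

for group, s in stats.items():
    print(f"{group}: accuracy in isolation {100 * s['correct_isolated'] / s['n']:.0f} %, "
          f"changed in context {100 * s['changed_in_context'] / s['n']:.0f} %")
```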
Procedia PDF Downloads 95635 Entropy in a Field of Emergence in an Aspect of Linguo-Culture
Authors: Nurvadi Albekov
Abstract:
The communicative situation is a basis that designates potential models of 'constructed forms', a motivated basis of a text, for a text can be regarded as a product of the communicative situation. It is within the field of emergence that the models of a text which can potentially be prognosticated in a certain communicative situation are designated. Every text can be regarded as a conceptual system structured on the basis of a certain communicative situation. However, in the process of 'structuring' a certain model of a 'conceptual system', the consciousness of a recipient can act only within the borders of the field of emergence, for going beyond these borders indicates misunderstanding of the communicative situation. On the basis of the communicative situation, we can witness the increment of meaning, where the synergizing of the informative model of communication, formed by means of the invariant units of a language system, results from the verbalization of the communicative situation. The potential of the models of a text prognosticated within the field of emergence also depends on the communicative situation. The concept of the 'field of emergence' is interpreted as a unit of the language system with a poly-directed universal structure, implying the presence of a core, a center, and a periphery, and including different levels of means of a functioning language system, both in terms of linguistic resources and in terms of extralinguistic factors, the interaction of which results in the increment of a text. The 'field of emergence' is considered the most promising concept in the analysis of texts: oral, written, printed, and electronic. As a unit of the language system, the field of emergence has several properties that predict its use in the study of a text at different levels. This work attempts an analysis of entropy in a text in the aspect of the linguo-cultural code, prognosticated within the model of the field of emergence. The article describes the problem of entropy in the field of emergence caused by the influence of extralinguistic factors. The increase in entropy is caused not only by the intrusion of foreign language resources, but also by the influence of the alien culture as a whole and by the appearance, in the field of emergence, of symbols not typical of the given culture. The borrowing of alien linguo-cultural symbols into the linguo-culture of the author increases entropy when constructing a text, both at the level of meaning and at the level of structure; it amounts to an artificial formatting of lexical units that violates the stylistic unity of a phrase. It is noted that one of the important characteristics that lowers entropy in the field of emergence is the typological similarity of the lexical and semantic resources of different linguo-cultures with respect to extralinguistic factors.Keywords: communicative situation, field of emergence, lingua-culture, entropy
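The article treats entropy in a qualitative, linguo-cultural sense rather than as a strictly quantitative measure, so the following is only a loose information-theoretic analogue offered under that assumption: Shannon entropy computed over a token distribution rises as borrowed or culture-alien items are mixed into a text, which parallels the intuition that alien symbols increase entropy in the field of emergence. The example sentences and borrowed items are hypothetical.

```python
import math
from collections import Counter

def shannon_entropy(tokens):
    """Shannon entropy (bits per token) of the empirical token distribution."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

native = "the field of emergence designates potential models of the text".split()
mixed = native + ["zeitgeist", "weltanschauung"]   # borrowed, culture-alien items

print(f"native text: {shannon_entropy(native):.3f} bits/token")
print(f"mixed text:  {shannon_entropy(mixed):.3f} bits/token")
```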
Procedia PDF Downloads 362634 An Alternative Credit Scoring System in China’s Consumer Lendingmarket: A System Based on Digital Footprint Data
Authors: Minjuan Sun
Abstract:
Ever since the late 1990s, China has experienced explosive growth in consumer lending, especially in short-term consumer loans, among which the growth rate of non-bank lending has surpassed bank lending due to developments in financial technology. On the other hand, China does not have a universal credit scoring and registration system that can guide lenders during the processes of credit evaluation and risk control; for example, an individual’s bank credit records are not available for online lenders to see, and vice versa. Given this context, the purpose of this paper is three-fold. First, we explore whether and how alternative digital footprint data can be utilized to assess a borrower’s creditworthiness. Then, we perform a comparative analysis of machine learning methods for the canonical problem of credit default prediction. Finally, we analyze, from an institutional point of view, the necessity of establishing a viable and nationally universal credit registration and scoring system utilizing online digital footprints, so that more people in China can have better access to the consumer loan market. Two different types of digital footprint data are matched with a bank’s loan default records. Each separately captures distinct dimensions of a person’s characteristics, such as shopping patterns and certain aspects of personality or inferred demographics revealed by social media features like profile image and nickname. We find that both datasets can generate acceptable to excellent prediction results, and that the different types of data tend to complement each other to achieve better performance. Typically, the traditional types of data that banks normally use, such as income, occupation, and credit history, update over longer cycles and hence cannot reflect more immediate changes, such as a change in financial status caused by a business crisis; digital footprints, by contrast, can update daily, weekly, or monthly, and are thus capable of providing a more comprehensive profile of the borrower’s credit capabilities and risks. From this empirical and quantitative examination, we believe digital footprints can become an alternative information source for creditworthiness assessment because of their near-universal data coverage and because they can by and large resolve the "thin-file" issue, given that digital footprints come in much larger volume and at higher frequency.Keywords: credit score, digital footprint, Fintech, machine learning
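As a sketch of the canonical default-prediction setup the paper compares machine-learning methods on, the following assumes a flat table in which digital-footprint features have already been joined with a binary default label; the feature names and the synthetic data are hypothetical, not the paper's dataset. It trains a gradient-boosting model and reports the AUC, the usual comparison metric.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical digital-footprint feature table joined with bank default records.
rng = np.random.default_rng(0)
n = 5_000
df = pd.DataFrame({
    "night_orders_share":  rng.uniform(0, 1, n),       # share of purchases placed at night
    "avg_basket_value":    rng.gamma(2.0, 150.0, n),   # average shopping-basket value
    "has_profile_image":   rng.integers(0, 2, n),      # social-media profile completeness
    "nickname_has_digits": rng.integers(0, 2, n),
})
# Synthetic default labels generated from an assumed relationship, for illustration only.
logit = 1.5 * df["night_orders_share"] - 0.6 * df["has_profile_image"] - 0.002 * df["avg_basket_value"]
df["default"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="default"), df["default"], test_size=0.25, random_state=0)

model = HistGradientBoostingClassifier(max_iter=200, random_state=0)
model.fit(X_train, y_train)
print(f"test AUC: {roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]):.3f}")
```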
Procedia PDF Downloads 160633 Detailed Analysis of Multi-Mode Optical Fiber Infrastructures for Data Centers
Authors: Matej Komanec, Jan Bohata, Stanislav Zvanovec, Tomas Nemecek, Jan Broucek, Josef Beran
Abstract:
With the exponential growth of social networks, video streaming, and increasing demands on data rates, the number of newly built data centers rises proportionately. The data centers, however, have to adjust to the rapidly increasing amount of data that has to be processed. For this purpose, multi-mode (MM) fiber based infrastructures are often employed. This stems from the fact that connections in data centers are typically realized over short distances, where the application of MM fibers and components considerably reduces costs. On the other hand, the usage of MM components brings specific requirements for installation and service conditions. Moreover, it has to be taken into account that MM fiber components have higher production tolerances for parameters such as core and cladding diameters, eccentricity, etc. Due to the high reliability demands on data center components, the determination of a properly excited optical field inside the MM fiber core is among the key factors when designing such an MM optical system architecture. An appropriately excited mode field of the MM fiber provides an optimal power budget in connections, decreases insertion loss (IL), and achieves the effective modal bandwidth (EMB). The main parameter in this case is the encircled flux (EF), which should be properly defined for the various optical sources and the consequently different mode-field distributions. In this paper, we present a detailed investigation and measurements of the mode-field distribution for short MM links intended in particular for data centers, with an emphasis on reliability and safety. These measurements are essential for large MM network design. Various scenarios, containing different fibers and connectors, were tested in terms of IL and mode-field distribution to reveal potential challenges. Furthermore, we focused on particular defects and errors that can realistically occur, such as eccentricity, connector shifting, or dust; these were simulated and measured, and their influence on EF statistics and on the functionality of the data center infrastructure was evaluated. The experimental tests were performed at the two wavelengths commonly used in MM networks, 850 nm and 1310 nm, to verify the EF statistics. Finally, we provide recommendations for data center systems and networks using OM3 and OM4 MM fiber connections.Keywords: optical fiber, multi-mode, data centers, encircled flux
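Encircled flux is the fraction of the total near-field power contained within a given radius of the fiber core center. The sketch below computes EF from a radially symmetric intensity profile; the Gaussian-like profile and the template radii quoted in the comment are illustrative assumptions (normative values should be taken from IEC 61280-4-1), not the authors' measurement data.

```python
import numpy as np

def encircled_flux(r, intensity, radius):
    """EF(radius) = integral_0^radius I(r) r dr / integral over the full profile,
    for a radially symmetric near-field intensity profile."""
    weighted = intensity * r
    total = np.trapz(weighted, r)
    mask = r <= radius
    return np.trapz(weighted[mask], r[mask]) / total

# Illustrative near-field profile of a 50 um core MM fiber (Gaussian-like, radius in um).
r = np.linspace(0.0, 25.0, 501)
intensity = np.exp(-(r / 14.0) ** 2)

for radius in (4.5, 19.0):   # radii commonly used in the 850 nm EF template
    print(f"EF({radius:4.1f} um) = {encircled_flux(r, intensity, radius):.3f}")
```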
Procedia PDF Downloads 375632 Assessment of the Impact of Atmospheric Air, Drinking Water and Socio-Economic Indicators on the Primary Incidence of Children in Altai Krai
Authors: A. P. Pashkov
Abstract:
The number of environmental factors that adversely affect children's health is growing every year, and their combination differs from territory to territory. The contribution of socio-economic factors to the health status of the younger generation is also increasing. It is the child’s body that is most sensitive to changes in environmental conditions, responding to them with a deterioration in health. Over the past years, scientists have established links between environmental factors and the incidence of disease in children. Currently, there is a tendency to study the regional characteristics of the interaction between a combination of environmental factors and the child's body. The aim of the work was to identify trends in the primary non-infectious morbidity of children in the Altai Territory, a unique region that combines territories with different levels of environmental quality indicators, as well as to assess the effect of atmospheric air, drinking water, and socio-economic indicators on the incidence of disease in children in the region. An unfavorable tendency was revealed in the region for the incidence of such nosological groups as neoplasms, including malignant ones; diseases of the endocrine system, including obesity and thyroid disease; diseases of the circulatory system; digestive diseases; diseases of the genitourinary system; congenital anomalies; and respiratory diseases. Mapping revealed a pattern in the geographical distribution of some groups of diseases, as well as significant correlations between them. Some nosologies are related to the integrated assessment of socio-economic indicators: diseases of the circulatory system and respiratory diseases (direct correlation), and diseases of the endocrine system, eating disorders, and metabolic disorders (inverse correlation). The analysis of associations between the incidence of disease in children and the average annual concentrations of substances polluting the air and drinking water showed significant correlations in areas with a critical or strained degree of environmental quality. This confirms that the population living in contaminated areas is subject to the negative influence of environmental factors, which immediately affects the health status of children. The results obtained indicate the need for a detailed, region-specific assessment of the influence of environmental factors on the incidence of disease in children, the formation of a database, and the development of automated programs that can predict the incidence in each specific territory. This would increase the effectiveness, including the economic effectiveness, of preventive measures.Keywords: incidence of children, regional features, socio-economic factors, environmental factors
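A minimal sketch, with hypothetical district-level data, of the kind of association analysis described above: a rank correlation (Spearman is assumed here, since the abstract does not name the coefficient used) between an annual mean pollutant concentration and the primary incidence rate of a disease group across territories.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical district-level data: annual mean pollutant concentration (mg/m^3)
# and primary incidence of a nosological group (cases per 1,000 children).
pollutant_conc = np.array([0.8, 1.2, 1.9, 2.4, 3.1, 3.7, 4.5, 5.2])
incidence_rate = np.array([14.0, 15.5, 17.2, 16.8, 19.4, 21.0, 20.6, 23.9])

rho, p_value = spearmanr(pollutant_conc, incidence_rate)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```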
Procedia PDF Downloads 115631 Flood Mapping Using Height above the Nearest Drainage Model: A Case Study in Fredericton, NB, Canada
Authors: Morteza Esfandiari, Shabnam Jabari, Heather MacGrath, David Coleman
Abstract:
Flooding is a severe issue in many places around the world, including the city of Fredericton, New Brunswick, Canada. The downtown area of Fredericton lies close to the Saint John River, which is susceptible to flooding around May every year. Recently, the frequency of flooding appears to have increased, especially given that the downtown area and the surrounding urban/agricultural lands were flooded in two consecutive years, 2018 and 2019. In order to have an explicit picture of the flood extent and the damage to affected areas, it is necessary to use either flood inundation modelling or satellite data. Due to the contingent availability and weather dependency of optical satellites, and the limited existing data and high cost of hydrodynamic models, it is not always feasible to rely on these data sources to generate quality flood maps during or after a catastrophe. Height Above the Nearest Drainage (HAND), a state-of-the-art topo-hydrological index, normalizes the height of a basin based on the relative elevation along the stream network and specifies the gravitational, or relative, drainage potential of an area. HAND is the relative height difference between each cell of a Digital Terrain Model (DTM) and the stream network. The stream layer is produced through a multi-step, time-consuming process which, depending on the topographic complexity of the region, does not always result in an optimal representation of the river centerline. HAND has been used in numerous case studies with quite acceptable, and sometimes unexpected, results because of natural and human-made features on the surface of the earth; some of these features may disturb the generated model, and consequently the model may not simulate the flow accurately. We propose to include a previously existing stream layer generated by the province of New Brunswick and to benefit from culvert maps to improve the water flow simulation and, accordingly, the accuracy of the HAND model. By considering these parameters in our processing, we were able to increase the accuracy of the model from nearly 74% to almost 92%. The improved model can be used to generate highly accurate flood maps, which are necessary for future urban planning and flood damage estimation, without any need for satellite imagery or hydrodynamic computations.Keywords: HAND, DTM, rapid floodplain, simplified conceptual models
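A minimal sketch of the HAND idea, assuming a DTM array and a rasterized stream mask are already available. For simplicity it assigns each cell the elevation of its Euclidean-nearest stream cell, rather than tracing D8 flow paths to the nearest drainage cell as a full HAND implementation (and the workflow described above) would; the thresholding into a flood map for a given stage is shown at the end. The tiny DTM is illustrative only.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def hand_simplified(dtm, stream_mask):
    """Height Above the Nearest Drainage, simplified:
    HAND(cell) = elevation(cell) - elevation(nearest stream cell).
    Uses the Euclidean-nearest stream cell instead of the D8 flow-path drainage cell."""
    # distance_transform_edt returns, for every cell, the index of the nearest zero cell,
    # so the stream mask is inverted (stream cells become zeros).
    _, indices = distance_transform_edt(~stream_mask, return_indices=True)
    rows, cols = indices
    return dtm - dtm[rows, cols]

# Tiny illustrative DTM (m) with a stream running down the middle column.
dtm = np.array([[12.0, 10.0,  8.0, 10.0, 13.0],
                [11.0,  9.0,  7.5,  9.5, 12.0],
                [10.5,  8.5,  7.0,  9.0, 11.5]])
stream = np.zeros_like(dtm, dtype=bool)
stream[:, 2] = True

hand = hand_simplified(dtm, stream)
flood_stage = 2.0                       # water level above the drainage, in metres
flood_map = hand <= flood_stage         # cells predicted to be inundated
print(np.round(hand, 1))
print(flood_map.astype(int))
```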
Procedia PDF Downloads 151630 Sustainable Production of Algae through Nutrient Recovery in the Biofuel Conversion Process
Authors: Bagnoud-Velásquez Mariluz, Damergi Eya, Grandjean Dominique, Frédéric Vogel, Ludwig Christian
Abstract:
The sustainability of algae-to-biofuel processes is seriously affected by the energy-intensive production of fertilizers. Large amounts of nitrogen and phosphorus are required for large-scale production, resulting in many cases in a negative impact on limited mineral resources. In order to realize the algal bioenergy opportunity, it appears crucial to promote processes that apply nutrient recovery and/or make use of renewable sources, including waste. Hydrothermal (HT) conversion is a promising and suitable technology for generating biofuels from microalgae. Besides the fact that water is used as a "green" reactant and solvent and that no biomass drying is required, the technology offers great potential for nutrient recycling. This study evaluated the possibility of treating the aqueous HT effluent through the growth of microalgae while producing renewable algal biomass. As already demonstrated in previous works by the authors, the HT aqueous product, besides containing N, P, and other important nutrients, presents a small fraction of rarely studied organic compounds. Therefore, the heteroaromatic compounds extracted from the HT effluent were the target of the present research; they were profiled using GC-MS and LC-MS-MS. The results indicate the presence of cyclic amides, piperazinediones, amines, and their derivatives. The most prominent nitrogenous organic compounds (NOCs) in the extracts, namely 2-pyrrolidinone and β-phenylethylamine (β-PEA), were carefully examined for their effect on microalgae. These two substances were prepared at three different concentrations (10, 50, and 150 ppm). The toxicity bioassay used three different microalgae strains: Phaeodactylum tricornutum, Chlorella sorokiniana, and Scenedesmus vacuolatus. The confirmed IC50 was in all cases ca. 75 ppm. Experimental conditions were then set up for the growth of microalgae in the aqueous phase by adjusting the nitrogen concentration (the key nutrient for algae) to match that of a known commercial medium. The concentrations of specific NOCs were thereby lowered to 8.5 mg/L 2-pyrrolidinone, 1 mg/L δ-valerolactam, and 0.5 mg/L β-PEA. Growth on the diluted HT solution remained constant, with no evidence of inhibition. An additional ongoing test is addressing the possibility of applying an integrated water cleanup step making use of the existing hydrothermal catalytic facility.Keywords: hydrothermal process, microalgae, nitrogenous organic compounds, nutrient recovery, renewable biomass
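A minimal sketch of how an IC50 of roughly 75 ppm can be estimated from growth-inhibition measurements at the three tested concentrations (10, 50, and 150 ppm). The two-parameter log-logistic (Hill-type) model and the relative response values below are illustrative assumptions, not the authors' bioassay data.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill_inhibition(conc, ic50, hill):
    """Fraction of control growth remaining at a given concentration."""
    return 1.0 / (1.0 + (conc / ic50) ** hill)

# Illustrative growth responses relative to an untreated control (1.0 = no inhibition).
conc_ppm = np.array([10.0, 50.0, 150.0])
response = np.array([0.92, 0.62, 0.21])

(ic50, hill), _ = curve_fit(hill_inhibition, conc_ppm, response, p0=[75.0, 1.0])
print(f"estimated IC50 = {ic50:.1f} ppm (Hill slope = {hill:.2f})")
```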
Procedia PDF Downloads 410629 Using Hemicellulosic Liquor from Sugarcane Bagasse to Produce Second Generation Lactic Acid
Authors: Regiane A. Oliveira, Carlos E. Vaz Rossell, Rubens Maciel Filho
Abstract:
Lactic acid, besides being a valuable chemical, may be considered a platform for other chemicals. In fact, the feasibility of hemicellulosic sugars as a feedstock for lactic acid production may remove some of the barriers to second-generation bioproducts, especially bearing in mind the 5-carbon sugars obtained from the pre-treatment of sugarcane bagasse. With this in mind, the purpose of this study was to use the hemicellulosic liquor from sugarcane bagasse as a substrate to produce lactic acid by fermentation. To release the sugars from the hemicellulose, a pre-treatment with dilute sulfuric acid was carried out in order to obtain a xylose-rich liquor with a low concentration of fermentation inhibitors (≈67% xylose, ≈21% glucose, ≈10% cellobiose and arabinose, and around 1% of inhibiting compounds such as furfural, hydroxymethylfurfural, and acetic acid). The hemicellulosic sugars, supplemented with 20 g/L of yeast extract, were used in a fermentation process with Lactobacillus plantarum to produce lactic acid. The fermentation pH was controlled by automatic injection of Ca(OH)2 to keep it at 6.00. The lactic acid concentration remained stable from the time the glucose was depleted (48 hours of fermentation), with no further production. While lactic acid is produced, xylose and glucose are consumed concomitantly. The fermentation yield was 0.933 g lactic acid/g sugars. In addition, no by-products were detected, which suggests that the microorganism performs a homolactic fermentation, producing its energy via the pentose-phosphate pathway. Through facultative heterofermentative metabolism, bacteria such as L. plantarum consume pentoses, but the energy efficiency for the cell is lower than during hexose consumption. This implies both slower cell growth and a reduction in lactic acid productivity compared with the use of hexoses. Also, L. plantarum was shown to be capable of producing lactic acid from hemicellulosic hydrolysate without detoxification, which is very attractive in terms of robustness for an industrial process. Xylose from the non-detoxified bagasse hydrolysate is consumed, although the hydrolysate inhibitors (especially aromatic inhibitors) affect the productivity and yield of lactic acid. The use of these sugars without the need for detoxification of the C5 liquor from hydrolyzed sugarcane bagasse is a crucial factor for the economic viability of second-generation processes. Taking this information into account, the production of second-generation lactic acid from hemicellulosic sugars appears to be a good alternative for the complete utilization of the sugarcane plant, directing molasses and cellulosic carbohydrates to the production of 2G-ethanol and hemicellulosic carbohydrates to the production of 2G-lactic acid.Keywords: fermentation, lactic acid, hemicellulosic sugars, sugarcane
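The reported yield (0.933 g lactic acid per g of sugars) is the ratio of lactic acid formed to total sugars consumed over the run. A minimal sketch of that calculation follows; the broth concentrations are hypothetical values chosen only to reproduce a similar figure, not the study's measurements.

```python
def product_yield(product_final, product_initial, substrate_initial, substrate_final):
    """Y_P/S = lactic acid formed (g/L) / sugars consumed (g/L)."""
    return (product_final - product_initial) / (substrate_initial - substrate_final)

# Hypothetical broth concentrations (g/L): total sugars = xylose + glucose + others.
sugars_initial, sugars_residual = 60.0, 15.0
lactic_initial, lactic_final = 0.0, 42.0

y_ps = product_yield(lactic_final, lactic_initial, sugars_initial, sugars_residual)
print(f"Y_P/S = {y_ps:.3f} g lactic acid / g sugars consumed")
```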
Procedia PDF Downloads 373