Search results for: equivalent transformation algorithms
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4457

497 The Relations between Language Diversity and Similarity and Adults' Collaborative Creative Problem Solving

Authors: Z. M. T. Lim, W. Q. Yow

Abstract:

Diversity in individual problem-solving approaches, culture and nationality has been shown to have positive effects on collaborative creative processes in organizational and scholastic settings. For example, diverse graduate and organizational teams consisting of members with both structured and unstructured problem-solving styles were found to have more creative ideas on a collaborative idea generation task than teams composed solely of members with either structured or unstructured problem-solving styles. However, being different may not always provide benefits to the collaborative creative process. In particular, speaking different languages may hinder mutual engagement through impaired communication and thus collaboration. Instead, sharing similar languages may have facilitative effects on mutual engagement in collaborative tasks. However, no studies have explored the relations between language diversity and adults’ collaborative creative problem solving. Sixty-four Singaporean English-speaking bilingual undergraduates were paired up into similar or dissimilar language pairs based on the second language they spoke (e.g., for similar language pairs, both participants spoke English-Mandarin; for dissimilar language pairs, one participant spoke English-Mandarin and the other spoke English-Korean). Each participant completed the Ravens Progressive Matrices Task individually. Next, they worked in pairs to complete a collaborative divergent thinking task where they used mind-mapping techniques to brainstorm ideas on a given problem together (e.g., how to keep insects out of the house). Lastly, the pairs worked on a collaborative insight problem-solving task (Triangle of Coins puzzle) where they needed to flip a triangle of ten coins around by moving only three coins. Pairs who had prior knowledge of the Triangle of Coins puzzle were asked to complete an equivalent Matchstick task instead, where they needed to make seven squares by moving only two matchsticks based on a given array of matchsticks. Results showed that, after controlling for intelligence, similar language pairs completed the collaborative insight problem-solving task faster than dissimilar language pairs. Intelligence also moderated these relations. Among adults of lower intelligence, similar language pairs solved the insight problem-solving task faster than dissimilar language pairs. These differences in speed were not found in adults with higher intelligence. No differences were found in the number of ideas generated in the collaborative divergent thinking task between similar language and dissimilar language pairs. Overall, sharing similar languages seems to enrich collaborative creative processes. These effects were especially pertinent to pairs with lower intelligence. This provides guidelines for the formation of groups based on shared languages in collaborative creative processes. However, the positive effects of shared languages appear to be limited to the insight problem-solving task and not the divergent thinking task. This could be due to the facilitative effects of other factors of diversity as found in previous literature. Background diversity, for example, may have a larger facilitative effect on the divergent thinking task than on the insight problem-solving task due to the varied experiences individuals bring to the task.
In conclusion, this study contributes to the understanding of the effects of language diversity in collaborative creative processes and challenges the assumption that diversity has generally positive effects on these processes.

Keywords: bilingualism, diversity, creativity, collaboration

Procedia PDF Downloads 312
496 Kriging-Based Global Optimization Method for Bluff Body Drag Reduction

Authors: Bingxi Huang, Yiqing Li, Marek Morzynski, Bernd R. Noack

Abstract:

We propose a Kriging-based global optimization method for active flow control with multiple actuation parameters. This method is designed to converge quickly and to avoid getting trapped in local minima. We follow the model-free explorative gradient method (EGM) in alternating between explorative and exploitive steps. This facilitates a convergence similar to a gradient-based method and the parallel exploration of potentially better minima. In contrast to EGM, both kinds of steps are performed with a Kriging surrogate model built from the available data. The explorative step maximizes the expected improvement, i.e., favors regions of large uncertainty. The exploitive step identifies the best location of the cost function from the Kriging surrogate model for a subsequent weight-biased linear-gradient descent search. To verify the effectiveness and robustness of the improved Kriging-based optimization method, we have examined several comparative test problems of varying dimensions with limited evaluation budgets. The results show that the proposed algorithm significantly outperforms model-free optimization algorithms such as the genetic algorithm and the differential evolution algorithm, with quicker convergence for a given budget. We have also performed direct numerical simulations of the fluidic pinball (N. Deng et al. 2020 J. Fluid Mech.), three circular cylinders in an equilateral-triangular arrangement immersed in an incoming flow at Re=100. The optimal cylinder rotations lead to 44.0% net drag power saving with 85.8% drag reduction and 41.8% actuation power. The optimal actuation for this configuration achieves a boat-tailing mechanism by employing Coanda forcing and stabilizes the wake by delaying separation and minimizing the wake region.
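
A minimal sketch of the explore/exploit surrogate loop described above, assuming scikit-learn's GaussianProcessRegressor as the Kriging model; the toy cost function, candidate sampling, and alternation schedule are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a Kriging-based explore/exploit optimization loop (illustrative only).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def cost(x):  # placeholder objective standing in for, e.g., net drag power
    return np.sum((x - 0.3) ** 2) + 0.1 * np.sin(10 * x).sum()

def expected_improvement(mu, sigma, best):
    z = (best - mu) / np.maximum(sigma, 1e-12)
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

dim, budget = 3, 40
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(5, dim))          # initial design
y = np.array([cost(x) for x in X])

for it in range(budget):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    cand = rng.uniform(-1, 1, size=(2000, dim))  # candidate pool
    mu, sigma = gp.predict(cand, return_std=True)
    if it % 2 == 0:   # explorative step: maximize expected improvement (large uncertainty)
        x_new = cand[np.argmax(expected_improvement(mu, sigma, y.min()))]
    else:             # exploitive step: take the surrogate's best predicted location
        x_new = cand[np.argmin(mu)]
    X = np.vstack([X, x_new])
    y = np.append(y, cost(x_new))

print("best cost:", y.min(), "at", X[np.argmin(y)])
```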

Keywords: direct numerical simulations, flow control, kriging, stochastic optimization, wake stabilization

Procedia PDF Downloads 102
495 Characterization of a Lipolytic Enzyme of Pseudomonas nitroreducens Isolated from Mealworm's Gut

Authors: Jung-En Kuan, Whei-Fen Wu

Abstract:

In this study, a symbiotic bacterium from the yellow mealworm's (Tenebrio molitor) mid-gut was isolated based on its ability to grow on minimal-tributyrin medium. After PCR amplification of its 16S rDNA, the resulting nucleotide sequences were analyzed by constructing phylogenetic trees. Accordingly, it was designated as Pseudomonas nitroreducens D-01. Next, by searching for lipolytic enzymes in its protein database, one of the potential lipolytic α/β hydrolases was identified, again using PCR amplification and nucleotide sequencing. To express this lipolytic gene from a plasmid, target-gene primers carrying C-terminal His-tag sequences were designed. The recombinant lipolytic hydrolase D gene with His-tag nucleotides was successfully cloned into the vector pET21a, placing the lipolytic D gene under the control of the T7 promoter. After transformation of the resulting plasmids into Escherichia coli BL21 (DE3), IPTG was used to induce expression of the recombinant protein. The protein products were then purified on a metal-ion affinity column, and the purified proteins were found capable of forming a clear zone on a tributyrin agar plate. Enzyme activity was determined by degradation of p-nitrophenyl esters, with the yellow end-product, p-nitrophenol, measured at OD405 nm. Specifically, this lipolytic enzyme efficiently targets p-nitrophenyl butyrate. It shows maximum activity at 40°C and pH 8 in potassium phosphate buffer. In thermal stability assays, the activity of this enzyme drops dramatically when the temperature rises above 50°C. In metal ion assays, MgCl₂ and NH₄Cl enhance the enzyme activity, while MnSO₄, NiSO₄, CaCl₂, ZnSO₄, CoCl₂, CuSO₄, FeSO₄, and FeCl₃ reduce it; NaCl has no effect on the enzyme activity. Most organic solvents, such as hexane, methanol, ethanol, acetone, isopropanol, chloroform, and ethyl acetate, decrease the activity of this enzyme. However, its activity increases in the presence of DMSO. All surfactants tested (Triton X-100, Tween 80, Tween 20, and Brij35) decrease its lipolytic activity. Using the Lineweaver-Burk double-reciprocal method, the kinetic parameters were determined as Km = 0.488 mM, Vmax = 0.0644 mM/min, and kcat = 3.01x10³ s⁻¹, giving a catalytic efficiency kcat/Km of 6.17x10³ mM⁻¹/s⁻¹. Finally, based on the phylogenetic analyses, this lipolytic protein is classified as a type IV lipase by its homologous conserved region in this lipase family.
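
A small illustration of the Lineweaver-Burk (double-reciprocal) fit used for the kinetic parameters above; the substrate concentrations, rates, and enzyme concentration in this sketch are invented placeholders, not the measured data.

```python
# Illustrative Lineweaver-Burk fit: 1/v = (Km/Vmax)*(1/S) + 1/Vmax
import numpy as np

S = np.array([0.1, 0.2, 0.5, 1.0, 2.0])            # substrate concentration (mM), hypothetical
v = np.array([0.011, 0.019, 0.033, 0.042, 0.052])  # initial rates (mM/min), hypothetical

slope, intercept = np.polyfit(1 / S, 1 / v, 1)      # linear fit of the double-reciprocal plot
Vmax = 1 / intercept
Km = slope * Vmax
enzyme_conc = 2.14e-5                                # enzyme concentration (mM), hypothetical
kcat = Vmax / enzyme_conc                            # per minute; divide by 60 for s^-1
print(f"Km={Km:.3f} mM, Vmax={Vmax:.4f} mM/min, kcat/Km={kcat/Km:.3g} mM^-1 min^-1")
```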

Keywords: enzyme, esterase, lipolytic hydrolase, type IV

Procedia PDF Downloads 130
494 Landslide Susceptibility Mapping Using Soft Computing in Amhara Saint

Authors: Semachew M. Kassa, Africa M Geremew, Tezera F. Azmatch, Nandyala Darga Kumar

Abstract:

Because landslides can seriously harm both the environment and society, landslide susceptibility maps are commonly developed from past landslide failure points using methods such as the frequency ratio (FR) and the analytical hierarchy process (AHP). However, it is still difficult to select the most efficient method and correctly identify the main driving factors for particular regions. In this study, we used fourteen landslide conditioning factors (LCFs) and five soft computing algorithms, including Random Forest (RF), Support Vector Machine (SVM), Logistic Regression (LR), Artificial Neural Network (ANN), and Naïve Bayes (NB), to predict the landslide susceptibility at a 12.5 m spatial scale. The performance of the RF (F1-score: 0.88, AUC: 0.94), ANN (F1-score: 0.85, AUC: 0.92), and SVM (F1-score: 0.82, AUC: 0.86) methods was significantly better than that of the LR (F1-score: 0.75, AUC: 0.76) and NB (F1-score: 0.73, AUC: 0.75) methods, according to the classification results based on inventory landslide points. The findings also showed that around 35% of the study region consisted of places with high and very high landslide risk (susceptibility greater than 0.5). The very high-risk locations were primarily found in the western and southeastern regions, and all five models showed good agreement and similar geographic distribution patterns in landslide susceptibility. The areas with the highest landslide risk include the western and northern parts of Amhara Saint Town and the St. Gebreal Church villages, with mean susceptibility values greater than 0.5. Rainfall, distance to road, and slope were typically among the leading factors for most villages, although the primary contributing factors to landslide vulnerability varied slightly across the five models. Decision-makers and policy planners can use the information from this study to make informed decisions and establish policies. It also suggests that different places should adopt different safeguards to reduce or prevent serious damage from landslide events.
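
A schematic of how the five classifiers could be trained and scored with F1 and AUC on an inventory table, assuming scikit-learn; the file name and column names are placeholders rather than the study's actual dataset.

```python
# Sketch: compare RF, SVM, LR, ANN, and NB on a landslide inventory table.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import f1_score, roc_auc_score

df = pd.read_csv("landslide_inventory.csv")        # 14 conditioning factors + 0/1 label (hypothetical file)
X, y = df.drop(columns="landslide"), df["landslide"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=1)

models = {
    "RF": RandomForestClassifier(n_estimators=500, random_state=1),
    "SVM": make_pipeline(StandardScaler(), SVC(probability=True)),
    "LR": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "ANN": make_pipeline(StandardScaler(), MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000)),
    "NB": GaussianNB(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    prob = model.predict_proba(X_te)[:, 1]          # susceptibility score in [0, 1]
    print(name, "F1:", round(f1_score(y_te, prob > 0.5), 2),
          "AUC:", round(roc_auc_score(y_te, prob), 2))
```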

Keywords: artificial neural network, logistic regression, landslide susceptibility, naïve Bayes, random forest, support vector machine

Procedia PDF Downloads 73
493 Segmenting 3D Optical Coherence Tomography Images Using a Kalman Filter

Authors: Deniz Guven, Wil Ward, Jinming Duan, Li Bai

Abstract:

Over the past two decades or so, Optical Coherence Tomography (OCT) has been used to diagnose retinal and optic nerve diseases. The retinal nerve fibre layer, for example, is a powerful diagnostic marker for detecting and staging glaucoma. With the advances in optical imaging hardware, the adoption of OCT is now commonplace in clinics. More and more OCT images are being generated, and for these OCT images to have clinical applicability, accurate automated OCT image segmentation software is needed. OCT image segmentation is still an active research area, as OCT images are inherently noisy owing to multiplicative speckle noise. Simple edge detection algorithms are unsuitable for detecting retinal layer boundaries in OCT images. Intensity fluctuations, motion artefacts, and the presence of blood vessels further degrade OCT image quality. In this paper, we introduce a new method for segmenting three-dimensional (3D) OCT images. This involves the use of a Kalman filter, which is commonly used in computer vision for object tracking. The Kalman filter is applied to the 3D OCT image volume to track the retinal layer boundaries through the slices within the volume and thus segment the 3D image. Specifically, after some pre-processing of the OCT images, points on the retinal layer boundaries in the first image are identified, and curve fitting is applied to them such that the layer boundaries can be represented by the coefficients of the curve equations. These coefficients then form the state space for the Kalman filter. The filter then produces an optimal estimate of the current state of the system by updating its previous state using the available measurements in the form of a feedback control loop. The results show that the algorithm can be used to segment the retinal layers in OCT images. One limitation of the current algorithm is that the curve representation of the retinal layer boundary does not work well when the layer boundary splits into two, e.g., at the optic nerve. This may be resolved by using a different approach to representing the boundaries, such as B-splines or level sets. The use of a Kalman filter shows promise for developing accurate and effective 3D OCT segmentation methods.
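
A simplified sketch of the tracking idea described above: each slice's boundary is reduced to polynomial coefficients, which become the Kalman filter state and are updated slice by slice through the volume. The constant-coefficient motion model and noise levels are assumptions made for illustration.

```python
# Sketch of Kalman tracking of retinal-boundary curve coefficients across B-scan slices.
import numpy as np

def fit_boundary(xs, ys, order=3):
    """Represent one slice's detected boundary points as polynomial coefficients."""
    return np.polyfit(xs, ys, order)

def kalman_track(coeff_measurements, q=1e-4, r=1e-2):
    n = coeff_measurements.shape[1]
    x = coeff_measurements[0].copy()       # state = curve coefficients of the boundary
    P = np.eye(n)                          # state covariance
    Q, R, H, F = q * np.eye(n), r * np.eye(n), np.eye(n), np.eye(n)
    tracked = [x.copy()]
    for z in coeff_measurements[1:]:
        x, P = F @ x, F @ P @ F.T + Q                    # predict: coefficients drift slowly
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
        x = x + K @ (z - H @ x)                          # update with this slice's measurement
        P = (np.eye(n) - K @ H) @ P
        tracked.append(x.copy())
    return np.array(tracked)

# toy volume: 50 slices with noisy measurements of slowly varying coefficients
rng = np.random.default_rng(0)
true = np.array([1e-4, -0.02, 0.5, 120.0])
meas = true + rng.normal(0, [1e-5, 1e-3, 0.05, 2.0], size=(50, 4))
smoothed = kalman_track(meas)
print(smoothed[-1])
```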

Keywords: optical coherence tomography, image segmentation, Kalman filter, object tracking

Procedia PDF Downloads 479
492 (Mis) Communication across the Borders: Politics, Media, and Public Opinion in Turkey

Authors: Banu Baybars Hawks

Abstract:

To date, academic attention in the social sciences remains inadequate with regard to research and analysis of public opinion in Turkey. Most of the existing research has assessed public opinion during political election periods. Therefore, it is of great interest to find out what the public thinks about current issues in Turkey, and how to interpret the results so as to reveal whether they have any bearing on the social, political, and cultural structure of the country. Accordingly, the current study seeks to fill the gap in the English-language social sciences literature regarding Turkey’s social and political stance, which may be perceived very differently by other nations. Without timely feedback from public surveys, various programs for improving different services and institutions functioning in the country might not achieve their expected goals, nor can decisions about which programs to implement be made rationally. Additionally, the information gathered may yield important insights not only into the public’s opinion regarding the current agenda in Turkey, but also into the correlates shaping public policies. Agenda-setting studies, including agenda-building, agenda melding, reversed agenda-setting and information diffusion studies, will be used to explain the roles of factors and actors in the formation of public opinion in Turkey. Knowing the importance of the public agenda in the agenda-setting and agenda-building process, this paper aims to reveal the social and political tendencies of the Turkish public. For that purpose, a survey will be carried out in December of 2014 to determine the social and political trends in Turkey for that same year. The study, which utilizes a questionnaire administered in one-on-one interviews, will include 1,000 subjects aged 18 years and older from 26 cities representing the general population. A stratified random sampling frame will be used. The topics covered by the survey include: the most important current problem in Turkey; the economy; terror; approaches to the Kurdish issue; evaluations of the government and opposition parties; evaluations of institutional efficiency; foreign policy; the judicial system/constitution; democracy and the media; and social relations/life in Turkey. Since the beginning of the 21st century, Turkey has been undergoing a rapid transformation. The reflections of these changes can be seen in all areas, from economics to politics. It is my hope that the findings of this study may shed light on important aspects of institutions, the variables setting the agenda, and the formation process of public opinion in Turkey.

Keywords: public opinion, media, agenda setting, information diffusion, government, freedom, Turkey

Procedia PDF Downloads 462
491 AI/ML Atmospheric Parameters Retrieval Using the “Atmospheric Retrievals conditional Generative Adversarial Network (ARcGAN)”

Authors: Thomas Monahan, Nicolas Gorius, Thanh Nguyen

Abstract:

Exoplanet atmospheric parameter retrieval is a complex, computationally intensive, inverse modeling problem in which an exoplanet’s atmospheric composition is extracted from an observed spectrum. Traditional Bayesian sampling methods require extensive time and computation, involving algorithms that compare large numbers of known atmospheric models to the input spectral data. Runtimes are directly proportional to the number of parameters under consideration. These increased power and runtime requirements are difficult to accommodate in space missions, where model size, speed, and power consumption are of particular importance. The use of traditional Bayesian sampling methods therefore compromises model complexity or sampling accuracy. The Atmospheric Retrievals conditional Generative Adversarial Network (ARcGAN) is a deep convolutional generative adversarial network that improves on the previous model’s speed and accuracy. We demonstrate the efficacy of artificial intelligence to quickly and reliably predict atmospheric parameters and present it as a viable alternative to slow and computationally heavy Bayesian methods. In addition to its broad applicability across instruments and planetary types, ARcGAN has been designed to function on low-power application-specific integrated circuits. The application of edge computing to atmospheric retrievals allows for real-time or near-real-time quantification of atmospheric constituents at the instrument level. Additionally, edge computing provides both high-performance and power-efficient computing for AI applications, both of which are critical for space missions. With the edge computing chip implementation, ARcGAN serves as a strong basis for the development of a similar machine-learning algorithm to reduce the downlinked data volume from the Compact Ultraviolet to Visible Imaging Spectrometer (CUVIS) onboard the DAVINCI mission to Venus.
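
A compact conditional-GAN sketch in PyTorch, mapping a spectrum plus noise to atmospheric parameters; it uses fully connected layers as a stand-in for ARcGAN's convolutional architecture, and all dimensions and hyperparameters are illustrative assumptions rather than the authors' design.

```python
# Minimal conditional-GAN sketch: generator proposes parameters conditioned on a spectrum,
# discriminator judges (spectrum, parameters) pairs. Illustrative only.
import torch
import torch.nn as nn

SPEC_DIM, PARAM_DIM, NOISE_DIM = 128, 5, 16   # assumed sizes

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(SPEC_DIM + NOISE_DIM, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, PARAM_DIM))
    def forward(self, spec, z):
        return self.net(torch.cat([spec, z], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(SPEC_DIM + PARAM_DIM, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1))
    def forward(self, spec, params):
        return self.net(torch.cat([spec, params], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(spec, true_params):
    z = torch.randn(spec.size(0), NOISE_DIM)
    fake = G(spec, z)
    # discriminator: real (spectrum, parameters) pairs vs generated pairs
    d_loss = bce(D(spec, true_params), torch.ones(spec.size(0), 1)) + \
             bce(D(spec, fake.detach()), torch.zeros(spec.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # generator: try to fool the discriminator
    g_loss = bce(D(spec, fake), torch.ones(spec.size(0), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```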

Keywords: deep learning, generative adversarial network, edge computing, atmospheric parameters retrieval

Procedia PDF Downloads 164
490 Use of End-Of-Life Footwear Polymer EVA (Ethylene Vinyl Acetate) and PU (Polyurethane) for Bitumen Modification

Authors: Lucas Nascimento, Ana Rita, Margarida Soares, André Ribeiro, Zlatina Genisheva, Hugo Silva, Joana Carvalho

Abstract:

The footwear industry is an essential fashion industry, focusing on producing various types of footwear, such as shoes, boots, sandals, sneakers, and slippers. Global footwear consumption has doubled every 20 years since the 1950s. It is estimated that in 1950, each person consumed one new pair of shoes yearly; by 2005, over 20 billion pairs of shoes were consumed. To meet global footwear demand, production reached $24.2 billion, equivalent to about $74 per person in the United States. This means three new pairs of shoes per person worldwide. The issue of footwear waste is related to the fact that shoe production can generate a large amount of waste, much of which is difficult to recycle or reuse. This waste includes scraps of leather, fabric, rubber, plastics, toxic chemicals, and other materials. The search for alternative solutions for waste treatment and valorization is increasingly relevant in the current context, mainly when focused on utilizing waste as a source of substitute materials. From the perspective of the new circular economy paradigm, this approach is of utmost importance, as it aims to preserve natural resources and minimize the environmental impact associated with sending waste to landfills. In this sense, the incorporation of waste into industrial sectors that allow for the recovery of large volumes, such as road construction, becomes an urgent and necessary solution from an environmental standpoint. This study explores the use of plastic waste from the footwear industry as a substitute for virgin polymers in bitumen modification, a solution that offers a more sustainable future. Replacing conventional polymers with plastic waste in asphalt composition reduces the amount of waste sent to landfills and offers an opportunity to extend the lifespan of road infrastructures. By incorporating waste into construction materials, it is possible to reduce the consumption of natural resources and the emission of pollutants, promoting a more circular and efficient economy. In the initial phase of this study, waste materials from end-of-life footwear were selected, and the plastic waste with the highest potential for application was separated. Based on a literature review, EVA (ethylene vinyl acetate) and PU (polyurethane) were identified as the polymers suitable for modifying 50/70 classification bitumen. Each polymer was analysed at concentrations of 3% and 5%. The production process involved fragmenting the polymer to a size of 4 millimetres; after the materials were heated to 180 ºC and mixed for 10 minutes at low speed, the blend was mixed for a further 30 minutes in a high-speed mixer. The tests included penetration, softening point, viscosity, and rheological assessments. The mixtures with EVA demonstrated better results than those with PU, as EVA showed greater resistance to temperature, a better viscosity curve, and greater elastic recovery in the rheological tests.

Keywords: footwear waste, hot asphalt pavement, modified bitumen, polymers

Procedia PDF Downloads 3
489 Gene Expression and Staining Agents: Exploring the Factors That Influence the Electrophoretic Properties of Fluorescent Proteins

Authors: Elif Tugce Aksun Tumerkan, Chris Lowe, Hannah Krupa

Abstract:

Fluorescent proteins are self-sufficient in forming a chromophore that absorbs at visible wavelengths from a three-amino-acid sequence within their own polypeptide structure. The chromophore is a molecule that absorbs a photon of light and exhibits an energy transition equal to the energy of the absorbed photon. Fluorescent proteins (FPs) consist of a chain of 238 amino acid residues composed of 11 beta strands shaped into a cylinder surrounding an alpha-helix structure. With a better understanding of the chromophore system and the advances in protein engineering in recent years, the properties of FPs offer the potential for new applications. They have been used as sensors and probes in molecular biology and cell-based research, giving a chance to observe the localization, structural variation, and movement of FP-tagged cells. For clarifying the functional uses of fluorescent proteins, their electrophoretic properties are among the most important parameters. Sodium dodecyl sulphate polyacrylamide gel electrophoresis (SDS-PAGE) is commonly used to determine electrophoretic properties. While many techniques are used in protein-based research, SDS-PAGE analysis can only provide a molecular-level assessment of the proteolytic fragments. Before SDS-PAGE analysis, fluorescent proteins need to be successfully purified. Because direct purification of the target FPs from the animal is difficult, gene expression is commonly used, which must be done by transformation with a plasmid. Furthermore, the properties of the gel used in electrophoresis and of the staining agents play a key role. In this review, the different factors that affect the electrophoretic properties of fluorescent proteins are explored. Fluorescent protein separation and purification are essential steps before electrophoresis and should be done very carefully. For protein purification, the gene expression process and the subsequent steps have a significant function. For successful gene expression, the properties of the bacterial strain selected for expression and of the plasmid used are essential. Each bacterium has its own characteristics to which gene expression is very sensitive, and the procedure used is another important factor for fluorescent protein expression. Other important factors are the gel formula and the staining agents used. The gel formula affects the mobility of specific proteins, and staining with the correct agents is a key step for visualizing the electrophoretic protein bands. The visibility of proteins can change depending on the staining reagents. Overall, this review emphasizes that gene expression and purification have a stronger effect than the electrophoresis protocol and staining agents.

Keywords: cell biology, gene expression, staining agents, SDS-page

Procedia PDF Downloads 189
488 Mondoc: Informal Lightweight Ontology for Faceted Semantic Classification of Hypernymy

Authors: M. Regina Carreira-Lopez

Abstract:

Lightweight ontologies seek to make concrete the relationships between a parent node and a secondary node, also called a "child node". This logical relation (L) can be formally defined as a triple ontological relation (LO) ⟨LN, LE, LC⟩, where LN represents a finite set of nodes (N); LE is a set of entities (E), each of which represents a relationship between nodes so as to form a rooted tree ⟨LN, LE⟩; and LC is a finite set of concepts (C), encoded in a formal language (FL). Mondoc enables more refined searches on semantic and classified facets for retrieving specialized knowledge about Atlantic migrations, from the Declaration of Independence of the United States of America (1776) to the end of the Spanish Civil War (1939). The model aims to increase documentary relevance by applying an inverse frequency of co-occurrent hypernymy phenomena to a concrete dataset of textual corpora, using the RMySQL package. Mondoc profiles archival utilities implemented in SQL code and allows data export to XML schemas, achieving semantic and faceted analysis of discourse by analyzing keywords in context (KWIC). The methodology applies random and unrestricted sampling techniques with RMySQL to verify the resonance phenomena of inverse documentary relevance between the number of co-occurrences of the same term (t) in more than two documents of a set of texts (D). Secondly, the research also evidences that co-associations between (t) and their corresponding synonyms and antonyms (synsets) are likewise inverse. The results from grouping facets or polysemic words with synsets in more than two textual corpora within their syntagmatic context (nouns, verbs, adjectives, etc.) show how to proceed with semantic indexing of hypernymy phenomena for subject-heading lists and authority lists for documentary and archival purposes. Mondoc contributes to the development of web directories and seems to achieve a proper and more selective search of e-documents (classification ontology). It can also foster the production of online catalogues for semantic authorities, or concepts, through XML schemas, because its applications could be used to implement data models after a prior adaptation of the base ontology to structured meta-languages, such as OWL or RDF (descriptive ontology). Mondoc serves the classification of concepts and applies a faceted semantic indexing approach. It enables information retrieval, as well as quantitative and qualitative data interpretation. The model reproduces a tuple ⟨LN, LE, LT, LCF L, BKF⟩ where LN is a set of entities that connect with other nodes to form a rooted tree ⟨LN, LE⟩; LT specifies a set of terms, and LCF acts as a finite set of concepts, encoded in a formal language, L. Mondoc only partially resolves problems of linguistic ambiguity (in the case of synonymy and antonymy); neither the pragmatic dimension of natural language nor the cognitive perspective is addressed. To achieve this goal, forthcoming programming developments should target meta-languages oriented to structured documents in XML.
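
A toy encoding of the ⟨LN, LE, LC⟩ rooted-tree triple described above, with a hypernym-to-hyponym lookup; the node names and concept labels are invented for illustration and are not taken from the Mondoc dataset.

```python
# Toy lightweight-ontology triple <LN, LE, LC> as a rooted tree (illustrative names only).
LN = {"migration", "emigration", "exile", "transatlantic_crossing"}   # nodes
LE = {("migration", "emigration"),                                    # parent -> child edges
      ("migration", "exile"),
      ("emigration", "transatlantic_crossing")}
LC = {"migration": "movement of people between territories",          # concept labels
      "emigration": "leaving one's country to settle elsewhere"}

def hyponyms(node, edges=LE):
    """All descendants of a node, i.e., the terms a hypernym subsumes."""
    children = {c for p, c in edges if p == node}
    return children | {d for c in children for d in hyponyms(c, edges)}

print(hyponyms("migration"))
```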

Keywords: hypernymy, information retrieval, lightweight ontology, resonance

Procedia PDF Downloads 122
487 Japan’s New Security Outlook: Implications for the US-Japan Alliance

Authors: Agustin Maciel-Padilla

Abstract:

This paper explores the most significant change to Japan’s security strategy since the end of World War II, in particular Prime Minister Fumio Kishida’s government publication, in late 2022, of three policy documents (the National Security Strategy [NSS], the National Defense Strategy and the Defense Buildup Program) that basically propose to expand the country’s military capabilities and to increase military spending over a 5-year period. These policies represent a remarkable transformation of the defense-oriented policy Japan has followed since 1946. These proposals have been under analysis and debate since they were announced, as they also reflect Japan’s historic ambition to strengthen its deterrence capabilities in the context of a more complex regional security environment. Even though this new defense posture has attracted significant international attention, it is far from a done deal, since there is still a long way to go to implement this vision owing to a wide variety of political and economic issues. Japan is currently experiencing the most dangerous security environment since the end of World War II, and this situation has led Japan to intensify its dialogue with the United States to reflect a re-evaluation of deterrence in the face of a rapidly worsening security environment, a changing balance of power in East Asia, and the arrival of a new era of “great power competition”. Japan’s new documents, for instance, identify China and North Korea as posing, respectively, a strategic challenge and an imminent threat. Japan has also noted that Russia’s invasion of Ukraine has contributed to eroding the foundation of the international order. It is considered that Russia’s aggression was possible because Ukraine’s defense capability was not enough for effective deterrence. Moreover, Japan’s call for “counterstrike capabilities” results from a recognition that China and North Korea’s ballistic and cruise missiles could overwhelm Japan’s air and missile defense systems, and therefore there is an urgent need to strengthen deterrence and resilience. In this context, this paper will focus on the impact of these changes on the US-Japan alliance. Adapting this alliance to Tokyo’s new ambitions and capabilities could be critical in terms of updating their traditional protection/access-to-bases arrangement, interoperability and joint command and control issues, as well as the security–economy nexus. While China is Japan’s largest trading partner and trade between the two has been growing, the US-Japan economic relationship has developed more slowly, even though US-Japan security cooperation has strengthened significantly in recent years.

Keywords: us-japan alliance, japan security, great power competition, interoperability

Procedia PDF Downloads 60
486 The Phenomenology in the Music of Debussy through Inspiration of Western and Oriental Culture

Authors: Yu-Shun Elisa Pong

Abstract:

Music aesthetics related to phenomenology is rarely discussed and still in the ascendant, even though multi-dimensional discourses of philosophy emerged as an important trend in the 20th century. In the present study, a basic theory of phenomenology from Edmund Husserl (1859-1938) is presented and discussed, followed by an introduction to the concepts of intentionality, eidetic reduction, horizon, world, and inter-subjectivity. Further, the phenomenology of music and of art in general is brought to attention through Roman Ingarden’s The Work of Music and the Problems of its Identity (1933) and Mikel Dufrenne’s The Phenomenology of Aesthetic Experience (1953). Finally, Debussy’s music is analyzed and discussed from the perspective of phenomenology. Phenomenology is not so much a methodology or an analytic as a common belief: to describe in as much detail as possible the different human experiences relative to the intended object. Such an idea has been practiced in various guises for centuries, but only in the early 20th century was phenomenology refined through the works of Husserl, Heidegger, Sartre, Merleau-Ponty and others. Debussy was born in an age when Western society began to accept a multi-cultural baptism. With his unusual sensitivity to oriental culture, Debussy presented considerable inspiration, absorption, and echo of it in his musical works. In fact, his relationship with nature is far from merely echoing the idea of the ancient Chinese literati and nature. Although he is not the first composer to associate music with humanity and nature, the unique quality and impact of his works make him a significant figure in music aesthetics. Debussy’s music sought to develop a quality analogous to nature and, more importantly, to achieve the realm of pure art on the basis of vivid life experience and artistic transformation. The idea that life experience comes before the artwork, whether clear or vague, simple or complex, and was later presented abstractly in his late works, is still an interesting subject worth further discussion. Debussy’s music has now existed for close to a century or more and has received as much attention from musicologists as other important works in the history of Western music. Among the pluralistic discussions of Debussy’s art and ideas, phenomenological aesthetics has opened new ideas and viewpoints from which to revisit his great works and has even given some previous arguments legitimacy. Overall, this article provides new insight into Debussy’s music through phenomenological exploration, and it is believed that phenomenology will be an important pathway in research on music aesthetics.

Keywords: Debussy's music, music aesthetics, oriental culture, phenomenology

Procedia PDF Downloads 268
485 Effects of Evening vs. Morning Training on Motor Skill Consolidation in Morning-Oriented Elderly

Authors: Maria Korman, Carmit Gal, Ella Gabitov, Avi Karni

Abstract:

The main question addressed in this study was whether the time of day at which training is afforded is a significant factor for motor skill ('how-to', procedural knowledge) acquisition and consolidation into long-term memory in the healthy elderly population. Twenty-nine older adults (60-75 years) practiced an explicitly instructed 5-element key-press sequence by repeatedly generating the sequence ‘as fast and accurately as possible’. The contribution of three parameters to acquisition, 24h post-training consolidation, and 1-week retention gains in motor sequence speed was assessed: (a) time of training (morning vs. evening group), (b) sleep quality (actigraphy), and (c) chronotype. All study participants were moderate morning types, according to the Morningness-Eveningness Questionnaire score. All participants had sleep patterns typical of their age, with an average sleep efficiency of ~82% and approximately 6 hours of sleep. Speed of motor sequence performance in both groups improved to a similar extent during the training session. Nevertheless, the evening group expressed small but significant overnight consolidation-phase gains, while the morning group showed only maintenance of the performance level attained at the end of training. By the 1-week retention test, both groups showed similar performance levels with no significant gains or losses with respect to the 24h test. Changes in the tapping patterns at 24h and 1 week post-training were assessed based on normalized Pearson correlation coefficients using Fisher’s z-transformation, in reference to the tapping pattern attained at the end of training. Significant differences between the groups were found: the evening group showed larger changes in tapping patterns across the consolidation and retention windows. Our results show that morning-oriented older adults effectively acquired, consolidated, and maintained a new sequence of finger movements following both morning and evening practice sessions. However, time of training affected the time-course of skill evolution in terms of performance speed, as well as the re-organization of tapping patterns during the consolidation period. These results are in line with the notion that motor training preceding a sleep interval may be beneficial for long-term memory in the elderly. Evening training should be considered an appropriate time window for motor skill learning in older adults, even in individuals with a morning chronotype.
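
A hedged sketch of the correlation analysis mentioned above: a retest tapping pattern is compared to the end-of-training pattern via a Pearson correlation and Fisher's z-transformation. The interval values below are placeholders, not the study's data.

```python
# Compare tapping patterns to the end-of-training reference using Pearson r and Fisher's z.
import numpy as np
from scipy.stats import pearsonr

end_of_training = np.array([0.42, 0.18, 0.21, 0.35])   # normalized inter-press intervals (hypothetical)
retest_24h      = np.array([0.40, 0.20, 0.22, 0.33])
retest_1week    = np.array([0.36, 0.25, 0.19, 0.38])

for label, pattern in [("24h", retest_24h), ("1wk", retest_1week)]:
    r, _ = pearsonr(end_of_training, pattern)
    z = np.arctanh(r)                     # Fisher's z-transformation of the correlation
    print(f"{label}: r={r:.2f}, z={z:.2f}")
```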

Keywords: time-of-day, elderly, motor learning, memory consolidation, chronotype

Procedia PDF Downloads 132
484 Index t-SNE: Tracking Dynamics of High-Dimensional Datasets with Coherent Embeddings

Authors: Gaelle Candel, David Naccache

Abstract:

t-SNE is an embedding method that the data science community has widely used. It serves two main tasks: displaying results by coloring items according to the item class or feature value, and, for forensics, giving a first overview of the dataset distribution. Two interesting characteristics of t-SNE are the structure-preservation property and the answer to the crowding problem, where all neighbors in high-dimensional space cannot be represented correctly in low-dimensional space. t-SNE preserves the local neighborhood, and similar items are nicely spaced by adjusting to the local density. These two characteristics produce a meaningful representation, where the cluster area is proportional to its size in number, and relationships between clusters are materialized by closeness on the embedding. This algorithm is non-parametric. The transformation from a high- to a low-dimensional space is described but not learned. Two initializations of the algorithm would lead to two different embeddings. In a forensic approach, analysts would like to compare two or more datasets using their embeddings. A naive approach would be to embed all datasets together. However, this process is costly as the complexity of t-SNE is quadratic and would be infeasible for too many datasets. Another approach would be to learn a parametric model over an embedding built with a subset of data. While this approach is highly scalable, points could be mapped at the exact same position, making them indistinguishable. This type of model would be unable to adapt to new outliers or to concept drift. This paper presents a methodology to reuse an embedding to create a new one, where cluster positions are preserved. The optimization process minimizes two costs, one relative to the embedding shape and the second relative to the support embedding match. The embedding-with-support process can be repeated more than once, using the newly obtained embedding. The successive embeddings can be used to study the impact of one variable on the dataset distribution or to monitor changes over time. This method has the same complexity as t-SNE per embedding, and memory requirements are only doubled. For a dataset of n elements sorted and split into k subsets, the total embedding complexity would be reduced from O(n²) to O(n²/k), and the memory requirement from n² to 2(n/k)², which enables computation on recent laptops. The method showed promising results on a real-world dataset, allowing observation of the birth, evolution, and death of clusters. The proposed approach facilitates identifying significant trends and changes, which empowers the monitoring of high-dimensional datasets’ dynamics.
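
An approximate sketch of reusing a support embedding so that cluster positions stay coherent across batches, here by seeding scikit-learn's TSNE with an init array derived from the previous embedding; this mimics the idea but is not the authors' exact two-cost optimization.

```python
# Embed a new batch so it stays coherent with a previously computed "support" embedding.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.neighbors import NearestNeighbors

def embed_with_support(X_new, X_support, Y_support, perplexity=30):
    # seed each new point at the embedded position of its nearest support point
    nn = NearestNeighbors(n_neighbors=1).fit(X_support)
    _, idx = nn.kneighbors(X_new)
    init = Y_support[idx[:, 0]] + np.random.normal(0, 1e-4, (len(X_new), 2))
    return TSNE(n_components=2, perplexity=perplexity, init=init).fit_transform(X_new)

rng = np.random.default_rng(0)
X0 = rng.normal(size=(500, 20))                   # first batch
Y0 = TSNE(n_components=2, init="pca").fit_transform(X0)
X1 = rng.normal(size=(500, 20)) + 0.1             # later batch (slight drift)
Y1 = embed_with_support(X1, X0, Y0)               # clusters land near their old positions
```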

Keywords: concept drift, data visualization, dimension reduction, embedding, monitoring, reusability, t-SNE, unsupervised learning

Procedia PDF Downloads 140
483 Legal Issues of Collecting and Processing Big Health Data in the Light of European Regulation 679/2016

Authors: Ioannis Iglezakis, Theodoros D. Trokanas, Panagiota Kiortsi

Abstract:

This paper aims to explore major legal issues arising from the collection and processing of Health Big Data in the light of the new European secondary legislation for the protection of personal data of natural persons, placing emphasis on the General Data Protection Regulation 679/2016. Whether Big Health Data can be characterised as ‘personal data’ or not is really the crux of the matter. The legal ambiguity is compounded by the fact that, even though the processing of Big Health Data is premised on the de-identification of the data subject, the possibility of a combination of Big Health Data with other data circulating freely on the web or from other data files cannot be excluded. Another key point is that the application of some provisions of the GDPR to Big Health Data may both absolve the data controller of his legal obligations and deprive the data subject of his rights (e.g., the right to be informed), ultimately undermining the fundamental right to the protection of personal data of natural persons. Moreover, data subjects’ rights (e.g., the right not to be subject to a decision based solely on automated processing) are heavily impacted by the use of AI, algorithms, and technologies that reclaim health data for further use, resulting in sometimes ambiguous results that have a substantial impact on individuals. On the other hand, as the COVID-19 pandemic has revealed, Big Data analytics can offer crucial sources of information. In this respect, this paper identifies and systematises the legal provisions concerned, offering interpretative solutions that tackle dangers concerning data subjects’ rights while embracing the opportunities that Big Health Data has to offer. In addition, particular attention is attached to the scope of ‘consent’ as a legal basis in the collection and processing of Big Health Data, as the application of data analytics to Big Health Data signals the construction of new data and subject profiles. Finally, the paper addresses the knotty problem of role assignment (i.e., distinguishing between controller and processor/joint controllers and joint processors) in an era of extensive Big Health Data sharing. The findings are the fruit of a current research project conducted by a three-member research team at the Faculty of Law of the Aristotle University of Thessaloniki and funded by the Greek Ministry of Education and Religious Affairs.

Keywords: big health data, data subject rights, GDPR, pandemic

Procedia PDF Downloads 124
482 Career Guidance System Using Machine Learning

Authors: Mane Darbinyan, Lusine Hayrapetyan, Elen Matevosyan

Abstract:

Artificial Intelligence in Education (AIED) has been created to help students get ready for the workforce, and over the past 25 years, it has grown significantly, offering a variety of technologies to support academic, institutional, and administrative services. However, this is still challenging, especially considering the labor market's rapid change. While choosing a career, people face various obstacles because they do not take into consideration their own preferences, which might lead to many other problems like shifting jobs, work stress, occupational infirmity, reduced productivity, and manual error. Besides preferences, people should properly evaluate their technical and non-technical skills, as well as their personalities. Professional counseling has become a difficult undertaking for counselors due to the wide range of career choices brought on by changing technological trends. It is necessary to close this gap by utilizing technology that makes sophisticated predictions about a person's career goals based on their personality. Hence, there is a need to create an automated model that would help in decision-making based on user inputs. Improving career guidance can be achieved by embedding machine learning into the career consulting ecosystem. There are various systems of career guidance that work based on the same logic, such as the classification of applicants, matching applications with appropriate departments or jobs, making predictions, and providing suitable recommendations. Methodologies like KNN, neural networks, K-means clustering, D-Tree, and many other advanced algorithms are applied to the collected data to compute predictions that help identify the right careers. Besides helping users with their career choice, these systems provide numerous opportunities which are very useful while making this hard decision. They help the candidate to recognize where he/she specifically lacks sufficient skills so that the candidate can improve those skills. They are also capable of offering an e-learning platform, taking into account the user's lack of knowledge. Furthermore, users can be provided with details on a particular job, such as the abilities required to excel in that industry.
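
An illustrative KNN-based recommender of the kind described above, assuming scikit-learn; the CSV file and feature columns are hypothetical placeholders, not the system's actual data.

```python
# Sketch of a KNN career recommender trained on skill and personality scores.
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier

df = pd.read_csv("career_profiles.csv")         # skill scores, personality traits, chosen career (hypothetical)
X, y = df.drop(columns="career"), df["career"]

model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=7))
model.fit(X, y)

new_user = X.iloc[[0]]                           # a new applicant's questionnaire scores
print("suggested career:", model.predict(new_user)[0])
print("class probabilities:", model.predict_proba(new_user).round(2))
```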

Keywords: career guidance system, machine learning, career prediction, predictive decision, data mining, technical and non-technical skills

Procedia PDF Downloads 77
480 A Comparative Study on South-East Asian Leading Container Ports: Jawaharlal Nehru Port Trust, Chennai, Singapore, Dubai, and Colombo Ports

Authors: Jonardan Koner, Avinash Purandare

Abstract:

In today’s globalized world, international business is a key area for a country's growth. Ports, road networks, and rail networks are among the strategic infrastructure areas that support a country’s international business. India’s international business is booming in both exports and imports. Ports play a central part in the growth of international trade, and ensuring competitive ports is of critical importance. India has a long coastline, which is a big asset for the country, as it has given the opportunity for the development of a large number of major and minor ports that will contribute to maritime trade’s development. The national economic development of India requires a well-functioning seaport system. To assess the comparative strength of Indian ports relative to similar South-East Asian ports, the study considers the objectives of (i) identifying the key parameters of an international mega container port, (ii) comparing the five selected container ports (JNPT, Chennai, Singapore, Dubai, and Colombo) according to the users of the ports, and (iii) measuring and comparing the growth of the selected five container ports’ throughput over time. The study is based on both primary and secondary databases. A linear time-trend analysis is done to show the trend in the quantum of exports, imports, and total goods/services handled by individual ports over the years. A comparative trend analysis is done for the five selected ports of cargo traffic handled in terms of tonnage (weight) and number of containers (TEUs). A comparative trend analysis is also done between containerized and non-containerized cargo traffic in the five selected ports. The primary data analysis comprises comparative analysis of factor ratings through bar diagrams, statistical inference of factor ratings for the selected five ports, consolidated comparative line and bar charts of factor ratings for the selected five ports, and the distribution of ratings (in frequency terms). A linear regression model is used to forecast the container capacities required for JNPT Port and Chennai Port by the year 2030. Multiple regression analysis is carried out to measure the impact of the selected 34 explanatory variables on the ‘Overall Performance of the Port’ for each of the selected five ports. The research outcome is of high significance to the stakeholders of Indian container-handling ports. The Indian container ports of JNPT and Chennai are benchmarked against international ports such as Singapore, Dubai, and Colombo, which are the competing ports in the neighbouring region. The study has analysed the feedback ratings for the selected 35 factors regarding physical infrastructure and services rendered to the port users. This feedback would provide valuable data for carrying out improvements in the facilities provided to the port users. These improvements would help the ports’ users to carry out their work in a more efficient manner.
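
A small sketch of the linear time-trend and 2030 capacity forecast described above; the throughput series is an illustrative placeholder, not the study's data.

```python
# Fit a linear time trend to annual throughput and extrapolate to 2030.
import numpy as np

years = np.arange(2010, 2021)
teu = np.array([4.27, 4.32, 4.26, 4.16, 4.45, 4.49, 4.50, 4.83, 5.05, 5.10, 4.47])  # million TEU, hypothetical

slope, intercept = np.polyfit(years, teu, 1)          # linear time trend
forecast_2030 = slope * 2030 + intercept
print(f"trend: {slope:.3f} million TEU/year; projected 2030 throughput: {forecast_2030:.2f} million TEU")
```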

Keywords: throughput, twenty-foot equivalent units (TEUs), cargo traffic, shipping lines, freight forwarders

Procedia PDF Downloads 130
479 An Advanced Automated Brain Tumor Diagnostics Approach

Authors: Berkan Ural, Arif Eser, Sinan Apaydin

Abstract:

Medical image processing has generally become a challenging task nowadays. Indeed, processing of brain MRI images is one of the difficult parts of this area. This study proposes a well-defined hybrid approach consisting of tumor detection, extraction, and analysis steps. The approach is mainly based on a computer-aided diagnostics system for identifying and detecting tumor formation in any region of the brain, and this system is used for early prediction of brain tumors using advanced image processing and probabilistic neural network methods, respectively. For this approach, advanced noise removal functions and image processing methods such as automatic segmentation and morphological operations are used to detect the brain tumor boundaries and to obtain the important feature parameters of the tumor region. All stages of the approach are implemented in MATLAB software. First, the tumor is detected and the tumor area is contoured with a specific colored circle by the computer-aided diagnostics program. Then, the tumor is segmented and some morphological operations are applied to increase the visibility of the tumor area. Moreover, while this process continues, the tumor area and important shape-based features are also calculated. Finally, using the probabilistic neural network method and some advanced classification steps, the tumor area and the type of the tumor are obtained. The future aim of this study is to detect the severity of lesions across classes of brain tumor, which will be achieved through advanced multi-classification and neural network stages, and to create a user-friendly environment using a GUI in MATLAB. In the experimental part of the study, 100 images are used to train the diagnostics system, and 100 out-of-sample images are used to test and check the overall results. The preliminary results demonstrate high classification accuracy for the neural network structure. Finally, the results also motivate us to extend this framework to detect and localize tumors in other organs.
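
A simplified stand-in for the detection/segmentation stage: the study used MATLAB, so this is a Python/scikit-image sketch with an assumed grayscale MRI slice and a largest-blob heuristic, not the authors' pipeline.

```python
# Sketch: noise removal, automatic segmentation, morphology, and shape features on one slice.
from skimage import io, filters, morphology, measure

slice_img = io.imread("mri_slice.png", as_gray=True)        # hypothetical input image

denoised = filters.gaussian(slice_img, sigma=1)              # noise removal
mask = denoised > filters.threshold_otsu(denoised)           # automatic segmentation
mask = morphology.remove_small_objects(mask, min_size=200)
mask = morphology.binary_closing(mask, morphology.disk(3))   # smooth the candidate boundary

labels = measure.label(mask)
regions = sorted(measure.regionprops(labels), key=lambda r: r.area, reverse=True)
tumor = regions[0]                                            # assume the largest blob is the tumor
print("area:", tumor.area,
      "eccentricity:", round(tumor.eccentricity, 2),
      "solidity:", round(tumor.solidity, 2))                  # shape-based features for a classifier
```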

Keywords: image processing algorithms, magnetic resonance imaging, neural network, pattern recognition

Procedia PDF Downloads 413
478 High-Fidelity Materials Screening with a Multi-Fidelity Graph Neural Network and Semi-Supervised Learning

Authors: Akeel A. Shah, Tong Zhang

Abstract:

Computational approaches to learning the properties of materials are commonplace, motivated by the need to screen or design materials for a given application, e.g., semiconductors and energy storage. Experimental approaches can be both time consuming and costly. Unfortunately, computational approaches such as ab-initio electronic structure calculations and classical or ab-initio molecular dynamics can themselves be too slow for the rapid evaluation of materials, often involving thousands to hundreds of thousands of candidates. Machine learning assisted approaches have been developed to overcome the time limitations of purely physics-based approaches. These approaches, on the other hand, require large volumes of data for training (hundreds of thousands of examples on many standard data sets such as QM7b). This means that they are limited by how quickly such a large data set of physics-based simulations can be established. At high fidelity, such as configuration interaction, composite methods such as G4, and coupled cluster theory, gathering such a large data set can become infeasible, which can compromise the accuracy of the predictions - many applications require high accuracy, for example band structures and energy levels in semiconductor materials and the energetics of charge transfer in energy storage materials. In order to circumvent this problem, multi-fidelity approaches can be adopted, for example the Δ-ML method, which learns a high-fidelity output from a low-fidelity result such as Hartree-Fock or density functional theory (DFT). The general strategy is to learn a map between the low- and high-fidelity outputs, so that the high-fidelity output is obtained as a simple sum of the physics-based low-fidelity result and the learned correction. Although this requires a low-fidelity calculation, it typically requires far fewer high-fidelity results to learn the correction map; furthermore, the low-fidelity result, such as Hartree-Fock or semi-empirical ZINDO, is typically quick to obtain. For high-fidelity outputs the result can be a speed-up of an order of magnitude or more. In this work, a new multi-fidelity approach is developed, based on a graph convolutional network (GCN) combined with semi-supervised learning. The GCN allows the material or molecule to be represented as a graph, which is known to improve accuracy, as in SchNet and MEGNet, for example. The graph incorporates information regarding the numbers, types, and properties of atoms; the types of bonds; and the bond angles. The key to accuracy in multi-fidelity methods, however, is the incorporation of the low-fidelity output to learn the high-fidelity equivalent, in this case by learning their difference. Semi-supervised learning is employed to allow for different numbers of low- and high-fidelity training points, by using an additional GCN-based low-fidelity map to predict high-fidelity outputs. It is shown on 4 different data sets that a significant (at least one order of magnitude) increase in accuracy is obtained, using one to two orders of magnitude fewer low- and high-fidelity training points. One of the data sets is developed in this work, pertaining to 1000 simulations of quinone molecules (up to 24 atoms) at 5 different levels of fidelity, furnishing the energy, dipole moment and HOMO/LUMO.
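
A minimal Δ-ML-style sketch: a correction from low to high fidelity is learned on a small paired subset and added to the cheap result for the full set. The descriptors, energies, and kernel-ridge regressor are placeholders; the paper's actual model is a graph convolutional network with semi-supervised learning.

```python
# Sketch of the Delta-ML idea: high fidelity ~= low fidelity + learned correction.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 30))                 # molecular descriptors (stand-in for graphs)
E_low = X @ rng.normal(size=30)                 # cheap (e.g., DFT-like) energies, available for all
# only 150 expensive (e.g., coupled-cluster-like) labels are available
E_high_subset = E_low[:150] + 0.3 * np.sin(X[:150, 0]) + 0.05 * rng.normal(size=150)

delta_model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=0.05)
delta_model.fit(X[:150], E_high_subset - E_low[:150])      # learn only the correction

E_high_pred = E_low + delta_model.predict(X)                # high-fidelity estimate for the full set
print(E_high_pred[:5])
```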

Keywords: materials screening, computational materials, machine learning, multi-fidelity, graph convolutional network, semi-supervised learning

Procedia PDF Downloads 33
477 Developing a GIS-Based Tool for the Management of Fats, Oils, and Grease (FOG): A Case Study of Thames Water Wastewater Catchment

Authors: Thomas D. Collin, Rachel Cunningham, Bruce Jefferson, Raffaella Villa

Abstract:

Fats, oils and grease (FOG) are by-products of food preparation and cooking processes. FOG enters wastewater systems through a variety of sources such as households, food service establishments, and industrial food facilities. Over time, if no source control is in place, FOG builds up on pipe walls, leading to blockages and potentially to sewer overflows, which are a major risk to the environment and human health. UK water utilities spend millions of pounds annually trying to control FOG. Although UK legislation specifies that discharge of such material is against the law, it is often complicated for water companies to identify and prosecute offenders, which leads to uncertainty about the approach to take in terms of FOG management. Research is needed to realise the full potential of current management practices. The aim of this research was to undertake a comprehensive study to document the extent of FOG problems in sewer lines and reinforce existing knowledge. Data were collected to develop a model estimating the quantities of FOG available for recovery within Thames Water wastewater catchments. Geographical Information System (GIS) software was used to integrate the data with their geographical component. FOG was responsible for at least one-third of sewer blockages in the Thames Water wastewater area. A waste-based approach was developed through an extensive review to estimate the potential for FOG collection and recovery. Three main sources were identified: residential, commercial and industrial. Commercial properties were identified as one of the major FOG producers. The total potential FOG generated was estimated for the 354 wastewater catchments. Additionally, raw and settled sewage were sampled and analysed for FOG (as hexane extractable material) monthly at 20 sewage treatment works (STW) for three years. A good correlation was found between the sampled FOG and the population equivalent (PE). On average, a difference of 43.03% was found between the estimated FOG (waste-based approach) and the sampled FOG (raw sewage sampling). It was suggested that the approach undertaken could overestimate the FOG available, that the sampling could only capture a fraction of the FOG arriving at STW, and/or that the difference could account for FOG accumulating in sewer lines. Furthermore, it was estimated that on average FOG could contribute up to 12.99% of the primary sludge removed. The model was further used to investigate the relationship between estimated FOG and the number of blockages: the higher the FOG potential, the higher the number of FOG-related blockages. The GIS-based tool was used to identify critical areas, i.e. those with high FOG potential and a high number of FOG blockages. As reported in the literature, FOG was one of the main causes of sewer blockages. By identifying critical areas, the model further explored the potential for source control in terms of 'sewer relief' and waste recovery. Hence, it helped target where the benefits from implementation of management strategies could be highest. However, FOG is still likely to persist throughout the networks, and further research is needed to assess downstream impacts (i.e. at STW).
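A minimal sketch of the kind of waste-based per-catchment estimate and comparison described above is given below. All column names, per-source FOG yields, and catchment figures are illustrative assumptions, not values from the Thames Water study.

```python
# Hedged sketch: waste-based FOG estimate vs. sampled FOG per catchment,
# with a simple flag for 'critical' catchments.  All numbers are placeholders.
import pandas as pd

catchments = pd.DataFrame({
    "catchment": ["A", "B", "C"],
    "households": [12000, 45000, 8000],
    "food_outlets": [40, 210, 15],
    "sampled_fog_t_per_yr": [95.0, 410.0, 60.0],   # from raw sewage sampling
    "fog_blockages": [120, 530, 45],
})

# Illustrative per-source FOG yields (tonnes/year per unit).
FOG_PER_HOUSEHOLD = 0.005
FOG_PER_OUTLET = 1.2

catchments["estimated_fog_t_per_yr"] = (
    catchments["households"] * FOG_PER_HOUSEHOLD
    + catchments["food_outlets"] * FOG_PER_OUTLET
)

# Percentage difference between the waste-based estimate and sampled FOG.
catchments["diff_pct"] = 100 * (
    (catchments["estimated_fog_t_per_yr"] - catchments["sampled_fog_t_per_yr"])
    / catchments["sampled_fog_t_per_yr"]
)

# Critical catchments: high FOG potential and a high number of FOG blockages.
catchments["critical"] = (
    (catchments["estimated_fog_t_per_yr"] > catchments["estimated_fog_t_per_yr"].median())
    & (catchments["fog_blockages"] > catchments["fog_blockages"].median())
)

print(catchments[["catchment", "estimated_fog_t_per_yr", "diff_pct", "critical"]])
```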

Keywords: fat, FOG, GIS, grease, oil, sewer blockages, sewer networks

Procedia PDF Downloads 206
476 Computational Linguistic Implications of Gender Bias: Machines Reflect Misogyny in Society

Authors: Irene Yi

Abstract:

Machine learning, natural language processing, and neural network models of language are becoming more and more prevalent in the fields of technology and linguistics today. Training data for machines are, at best, large corpora of human literature and, at worst, a reflection of the ugliness in society. Computational linguistics is a growing field dealing with such issues of data collection for technological development. Machines have been trained on millions of human books, only to find that over the course of human history, derogatory and sexist adjectives are used significantly more frequently when describing females in history and literature than when describing males. This is extremely problematic, both as training data and as the outcome of natural language processing. As machines start to handle more responsibilities, it is crucial to ensure that they do not carry forward historical sexist and misogynistic notions. This paper gathers data and algorithms from neural network models of language that deal with syntax, semantics, sociolinguistics, and text classification. Computational analysis of such linguistic data is used to find patterns of misogyny. The results are significant in showing the existing intentional and unintentional misogynistic notions used to train machines, as well as in developing better technologies that take into account the semantics and syntax of text in order to be more mindful and reflect gender equality. Further, this paper deals with the idea of non-binary gender pronouns and how machines can process these pronouns correctly, given their semantic and syntactic context. This paper also delves into the implications of gendered grammar and its effect, cross-linguistically, on natural language processing. Languages such as French or Spanish not only have rigid gendered grammar rules but also historically patriarchal societies. The progression of society comes hand in hand not only with its language but also with how machines process those natural languages. These ideas are all vital to the development of natural language models in technology, and they must be taken into account immediately.
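One concrete form such a corpus analysis could take is counting which adjectives syntactically modify gendered nouns. The sketch below is a hedged illustration, not the paper's pipeline: the word lists, corpus snippet, and choice of the spaCy model are assumptions.

```python
# Hedged sketch: count adjectives that syntactically modify gendered nouns.
from collections import Counter
import spacy  # assumes the model is installed: python -m spacy download en_core_web_sm

nlp = spacy.load("en_core_web_sm")

FEMALE_NOUNS = {"woman", "women", "girl", "girls", "she", "her", "wife", "mother"}
MALE_NOUNS = {"man", "men", "boy", "boys", "he", "him", "husband", "father"}

corpus = (
    "The hysterical woman argued with the rational man. "
    "The brilliant man praised the pretty girl."
)

female_adjs, male_adjs = Counter(), Counter()
for token in nlp(corpus):
    # The 'amod' dependency links an adjective to the noun it modifies.
    if token.dep_ == "amod":
        head = token.head.text.lower()
        if head in FEMALE_NOUNS:
            female_adjs[token.text.lower()] += 1
        elif head in MALE_NOUNS:
            male_adjs[token.text.lower()] += 1

print("Adjectives modifying female nouns:", female_adjs)
print("Adjectives modifying male nouns:", male_adjs)
```

Over a large corpus, comparing these two counters (normalised by noun frequency) is one simple way to surface the asymmetries the abstract describes.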

Keywords: computational analysis, gendered grammar, misogynistic language, neural networks

Procedia PDF Downloads 116
475 Improving Grid Interconnection Capabilities through Implementation of Power Electronics

Authors: Ashhar Ahmed Shaikh, Ayush Tandon

Abstract:

The rapid depletion of fossil fuels creates a crucial need for alternative energy sources to meet vital demand. It is essential to develop alternative energy sources to cover the continuously increasing demand for energy while minimizing negative environmental impacts. Solar energy is one of the most reliable of these sources: it is freely available in nature, completely eco-friendly, and considered among the most promising power-generating options due to its easy availability and other advantages for local power generation. This paper reviews the implementation of power electronic devices through the Solar Energy Grid Integration System (SEGIS) to increase efficiency. It also concentrates on the future grid infrastructure and various other applications in order to make the grid smart. The development and implementation of power electronic devices such as PV inverters and power controllers play an important role in power supply in the modern energy economy. SEGIS opens pathways to promising solutions for new electronic and electrical components, such as advanced innovative inverter/controller topologies and their functions, economical energy management systems, innovative energy storage systems equipped with advanced control algorithms, advanced maximum power point tracking (MPPT) suited to all PV technologies, and the associated protocols and communications. In addition to advanced grid interconnection capabilities and features, the new hardware design results in smaller size, less maintenance, and higher reliability. SEGIS systems will allow the 'advanced integrated system' and 'smart grid' evolutionary processes to run more effectively. Over the last few years, there have been major developments in the field of power electronics, leading to more efficient systems and a reduction of the cost per kilowatt. Inverters have reached efficiencies in excess of 98%, and commercial solar modules have reached almost 21% efficiency.
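The abstract mentions advanced MPPT among the SEGIS functions. For orientation, the sketch below shows the classic perturb-and-observe (P&O) MPPT loop, one of the simplest schemes, on a toy PV current model; the I-V curve parameters and step size are arbitrary assumptions, not values from the SEGIS hardware.

```python
# Minimal perturb-and-observe (P&O) MPPT sketch (illustrative assumptions only).
import math

def pv_current(voltage, i_sc=8.0, v_oc=40.0):
    """Very rough single-exponential PV I-V curve (toy model)."""
    if voltage >= v_oc:
        return 0.0
    return i_sc * (1.0 - math.exp((voltage - v_oc) / 3.0))

def perturb_and_observe(v_start=20.0, step=0.5, iterations=200):
    """Perturb the operating voltage; keep going if power rises, reverse if it falls."""
    v = v_start
    p_prev = v * pv_current(v)
    direction = 1.0
    for _ in range(iterations):
        v += direction * step
        p = v * pv_current(v)
        if p < p_prev:          # power dropped -> reverse the perturbation
            direction = -direction
        p_prev = p
    return v, p_prev

v_mpp, p_mpp = perturb_and_observe()
print(f"Approximate maximum power point: {v_mpp:.1f} V, {p_mpp:.1f} W")
```

Real SEGIS-class controllers refine this idea with adaptive step sizes and faster tracking under partial shading, but the feedback loop above is the common starting point.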

Keywords: solar energy grid integration systems, smart grid, advanced integrated system, power electronics

Procedia PDF Downloads 180
474 The Key Role of a Bystander in Improving the Effectiveness of Cardiopulmonary Resuscitation Performed in Extra-Urban Areas

Authors: Leszek Szpakowski, Daniel Celiński, Sławomir Pilip, Grzegorz Michalak

Abstract:

The aim of the study was to analyse the usefulness of the 'E-rescuer' pilot project, planned to be implemented in a chosen area of Eastern Poland, in cases of suspected sudden cardiac arrest in extra-urban areas. The crucial assumption of the pilot project is an application that simultaneously dispatches both Medical Emergency Teams and the E-rescuer to the scene of the incident. The E-rescuer is defined as a trained person able to provide effective basic life support and to use an automated external defibrillator. After logging in on a smartphone, the E-rescuer reports readiness online to provide cardiopulmonary resuscitation at a given location. Because the E-rescuer's location is accurately defined, the arrival time can be precisely estimated, and substantive support can be provided through displayed algorithms as well. Medical records from the years 2015-2016 were analysed; cardiopulmonary resuscitation was considered effective when an early return of circulation was achieved and the patient was taken to hospital. In this period, there were 2,291 cases of sudden cardiac arrest. Cardiopulmonary resuscitation was undertaken in 621 patients in total, including 205 people in the urban area and 416 in extra-urban areas. The effectiveness of cardiopulmonary resuscitation in extra-urban areas was much lower (33.8%) than in the urban area (50.7%). The average ambulance arrival time was correspondingly longer in extra-urban areas: 12.3 minutes, compared with 3.3 minutes in the urban area. There was no significant difference in the average age of the studied patients (62.5 versus 64.8 years). However, the average ambulance arrival time was 7.6 minutes for effective resuscitations and 10.5 minutes for ineffective ones. Hence, ambulance arrival time is a crucial factor influencing the effectiveness of cardiopulmonary resuscitation, especially in extra-urban areas where it is much longer than in the urban area. Trained E-rescuers nearby who initiate basic life support before ambulance arrival can therefore effectively support the Emergency Medical Services system in Poland.
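The dispatch idea described above amounts to selecting the nearest logged-in E-rescuer and estimating an arrival time from the known locations. The sketch below is a hedged illustration of that step only; the coordinates, travel speed, and rescuer records are assumptions, not details of the actual pilot application.

```python
# Hedged sketch: choose the nearest available E-rescuer and estimate arrival time.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def dispatch_nearest(incident, rescuers, speed_kmh=30.0):
    """Pick the closest logged-in rescuer and estimate arrival time in minutes."""
    available = [r for r in rescuers if r["ready"]]
    if not available:
        return None
    best = min(available,
               key=lambda r: haversine_km(incident[0], incident[1], r["lat"], r["lon"]))
    dist = haversine_km(incident[0], incident[1], best["lat"], best["lon"])
    return best["name"], round(dist, 2), round(60.0 * dist / speed_kmh, 1)

rescuers = [
    {"name": "rescuer_1", "lat": 52.10, "lon": 22.30, "ready": True},
    {"name": "rescuer_2", "lat": 52.16, "lon": 22.27, "ready": False},
    {"name": "rescuer_3", "lat": 52.12, "lon": 22.25, "ready": True},
]
print(dispatch_nearest((52.11, 22.29), rescuers))
```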

Keywords: basic life support, bystander, effectiveness, resuscitation

Procedia PDF Downloads 201
473 The Connection Between the Semiotic Theatrical System and the Aesthetic Perception

Authors: Păcurar Diana Istina

Abstract:

The indissoluble link between aesthetics and semiotics, and the harmonization and semiotic understanding of the interactions between the viewer and the object being looked at, are the basis of the practical demonstration of the importance of aesthetic perception within the theater performance. The design of a theater performance includes several structures, some considered from the beginning as art forms (i.e., the text), others represented by simple, common objects (e.g., scenographic elements), which, if combined, can trigger a certain aesthetic perception. The team involved in the performance delivers to the audience a series of auditory and visual signs with which they interact. It is necessary to explain some notions about the physiological support for the transformation of different types of stimuli at the level of the cerebral hemispheres. The cortex, considered the superior integration center for extrinsic and intrinsic stimuli, permanently processes the information received, but even if that information is delivered at a constant rate, the generated response is individualized and conditioned by a number of factors. Each changing situation represents a new opportunity for the viewer to cope with, developing feelings of different intensities that influence the generation of meanings and, therefore, the management of interactions. In this sense, aesthetic perception depends on the detection of the “correctness” of signs, the forms of which are associated with an aesthetic property. Correctness and aesthetic properties can have positive or negative values. Evaluating the emotions that generate judgment and, implicitly, aesthetic perception, whether we refer to visual or auditory emotions, involves the integration of three areas of interest: valence, arousal, and context control. In this context, superior human cognitive processes (memory, interpretation, learning, attribution of meanings, etc.) help trigger the mechanism of anticipation and, no less important, the identification of error. This ability to locate a short circuit produced in a series of successive events is fundamental in the process of forming an aesthetic perception. Our main purpose in this research is to investigate the possible conditions under which aesthetic perception and its minimum content are generated by all these structures and, in particular, by interactions with forms that are not commonly considered aesthetic forms. In order to demonstrate the quantitative and qualitative importance of the categories of signs used to construct a code for reading a certain message, and also to emphasize the importance of the order in which these indices are used, we have structured a mathematical analysis whose core is the analysis of the percentage of signs used in a theater performance.

Keywords: semiology, aesthetics, theatre semiotics, theatre performance, structure, aesthetic perception

Procedia PDF Downloads 87
472 Barriers for Appropriate Palliative Symptom Management: A Qualitative Research in Kazakhstan, a Medium-Income Transitional-Economy Country

Authors: Ibragim Issabekov, Byron Crape, Lyazzat Toleubekova

Abstract:

Background: Palliative care substantially improves the quality of life of terminally ill patients. Symptom control is one of the keystones of the management of patients in palliative care settings, lowering distress as well as improving the quality of life of patients with end-stage diseases. The most common symptoms causing significant distress for patients are pain, nausea and vomiting, increased respiratory secretions, and mental health issues such as depression. The aims are: (1) to identify best practices in symptom management for palliative patients in accordance with internationally approved guidelines, compare these with actual practices in Kazakhstan, and evaluate the criteria for assessing symptoms in terminally ill patients; (2) to review the availability and utilization of pharmaceutical agents for pain control and for the management of excessive respiratory secretions, nausea and vomiting, and delirium; and (3) to develop recommendations for a systematic approach to end-of-life symptom management in Kazakhstan. Methods: Qualitative research methods together with a systematic literature review were employed to provide a rigorous research process for evaluating current approaches to symptom management of palliative patients in Kazakhstan. The qualitative methods include in-depth semi-structured interviews with the healthcare professionals involved in palliative care provision. Results: Obstacles were found in the appropriate provision of palliative care. Inadequate education and training to manage severe symptoms, poorly defined laws and regulations for palliative care provision, and a lack of algorithms and guidelines for care were major barriers to the effective provision of palliative care. Conclusion: Assessment of palliative care in this medium-income transitional-economy country is one of the first steps in the integration of palliative care into the existing health system. Achieving this requires identifying obstacles and resolving these issues.

Keywords: end-of-life care, middle income country, palliative care, symptom control

Procedia PDF Downloads 199
471 Alternative Approach to the Machine Vision System Operating for Solving Industrial Control Issue

Authors: M. S. Nikitenko, S. A. Kizilov, D. Y. Khudonogov

Abstract:

The paper considers an approach to a machine vision operating system combined with a grid of light markers. This approach is used to solve several scientific and technical problems, such as measuring the throughput of an apron feeder delivering coal from a lining return port to a conveyor in the technology of mining thick (high) coal seams with release to a conveyor, and prototyping an obstacle detection system for an autonomous vehicle. Primary verification of a method for calculating bulk material volume using three-dimensional modelling, and validation in laboratory conditions with calculation of relative errors, were carried out. A method of calculating apron feeder capability based on a machine vision system is proposed, together with a simplified technology for three-dimensional modelling of the examined measuring area. The proposed method allows the volume of rock mass moved by an apron feeder to be measured using machine vision. This approach solves the problem of controlling the volume of coal produced by a feeder when working off high coal with longwall complexes releasing to a conveyor, with an accuracy suitable for practical application. The developed mathematical apparatus for measuring feeder productivity in kg/s uses only basic mathematical operations such as addition, subtraction, multiplication, and division. This simplifies software development and expands the variety of microcontrollers and microcomputers suitable for performing the task of calculating feeder capability. A feature of the obstacle detection task is that obstacles distort the laser grid, which simplifies their detection. The paper presents algorithms for video camera image processing and for autonomous vehicle model control based on the obstacle detection machine vision system. A sample fragment of obstacle detection at the moment of laser grid distortion is demonstrated.
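Since the abstract stresses that feeder productivity in kg/s is computed with only basic arithmetic, a minimal sketch of that final step is given below. The per-frame volumes, frame interval, and bulk density of coal are illustrative assumptions; the volume reconstruction from the light-marker grid itself is not shown.

```python
# Hedged sketch: feeder productivity (kg/s) from per-frame volume estimates,
# using only addition, multiplication, and division.  Numbers are placeholders.

BULK_DENSITY_KG_M3 = 900.0      # assumed loose coal bulk density
FRAME_INTERVAL_S = 0.5          # assumed time between processed frames

# Hypothetical per-frame volumes (m^3) reconstructed from the light-marker grid.
frame_volumes_m3 = [0.012, 0.015, 0.011, 0.014, 0.013]

def feeder_productivity_kg_s(volumes_m3, density_kg_m3, interval_s):
    """Average mass flow over the measured frames."""
    total_mass_kg = sum(v * density_kg_m3 for v in volumes_m3)
    total_time_s = len(volumes_m3) * interval_s
    return total_mass_kg / total_time_s

rate = feeder_productivity_kg_s(frame_volumes_m3, BULK_DENSITY_KG_M3, FRAME_INTERVAL_S)
print(f"Feeder productivity: {rate:.1f} kg/s")
```

Keeping the arithmetic this simple is what lets the method run on modest microcontrollers, as the abstract notes.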

Keywords: machine vision, machine vision operating system, light markers, measuring capability, obstacle detection system, autonomous transport

Procedia PDF Downloads 107
470 A Design Research Methodology for Light and Stretchable Electrical Thermal Warm-Up Sportswear to Enhance the Performance of Athletes against Harsh Environment

Authors: Chenxiao Yang, Li Li

Abstract:

In this decade, the sportswear market has expanded rapidly, with numerous sports brands competing fiercely to hold their market share and to lead in setting trends for professional competitive sports. Thus, advanced sports equipment is being intensively explored to improve athletes' performance in fierce competitions. Although there is plenty of protective equipment on the market, such as cuffs and running leggings, there is still a gap in sportswear for the pre-race warm-up period, especially for competitions hosted in cold environments. There are often time gaps between warm-up and race due to event logistics or unexpected weather, so athletes may be exposed to chilly conditions for unpredictably long periods. As a consequence, the effects of warm-up will be negated, and competition performance will be degraded. However, the current market offers no effective equipment to protect athletes from this harsh environment, and the few existing products are so bulky or heavy that they restrict movement. Ideal thermal-protective sportswear should be light, flexible, comfortable, and aesthetic at the same time. Therefore, this design research adopted a textile circular-knitting methodology to integrate soft silver-coated conductive yarns (SCCYs), elastic nylon yarn, and polyester yarn into the proposed electrical thermal sportswear with the strengths described above. Meanwhile, the relationship between heating performance, stretch load, and energy consumption was investigated. Further, a simulation model was established to ensure sufficient warmth and flexibility at lower energy cost and to determine optimized production parameters. The proposed circular-knitting technology and simulation model can be applied directly to guide prototype development to cater to different target consumers' needs and to ensure prototypes' safety, while saving high R&D investment and development time. Finally, two prototypes, a knee guard and an elbow guard, were developed to facilitate the transfer of the research technology into industrial application and to hint at the future blueprint.
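The relationship between heating performance, stretch load, and energy consumption mentioned above can be illustrated with a very simple Joule-heating estimate in which the knit's resistance is assumed to rise linearly with stretch. This is a hedged sketch, not the paper's simulation model; the base resistance, stretch coefficient, and supply voltage are all assumptions.

```python
# Hedged sketch: Joule heating of a stretchable conductive knit, assuming a
# linear resistance-stretch relationship.  Coefficients are placeholders.

def knit_resistance_ohm(stretch_pct, r0=10.0, k=0.05):
    """Assumed linear rise of resistance with stretch (% elongation)."""
    return r0 * (1.0 + k * stretch_pct)

def heating_power_w(voltage_v, stretch_pct):
    """Joule heating power P = V^2 / R at a given supply voltage and stretch."""
    return voltage_v ** 2 / knit_resistance_ohm(stretch_pct)

for stretch in (0, 10, 20, 30):
    print(f"stretch {stretch:>2}%: {heating_power_w(5.0, stretch):.2f} W")
```

Even this toy model makes the design trade-off visible: as the garment stretches, resistance rises and heating power at a fixed voltage falls, so the controller or supply voltage must compensate.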

Keywords: cold environment, silver-coated conductive yarn, electrical thermal textile, stretchable

Procedia PDF Downloads 267
469 Inner and Outer School Contextual Factors Associated with Poor Performance of Grade 12 Students: A Case Study of an Underperforming High School in Mpumalanga, South Africa

Authors: Victoria L. Nkosi, Parvaneh Farhangpour

Abstract:

Often a Grade 12 certificate is perceived as a passport to tertiary education and the minimum requirement to enter the world of work. In spite of its importance, many students do not reach this milestone in South Africa. It is important to find out why so many students still fail despite the transformation of the education system in the post-apartheid era. Given the complexity of education and its context, this study adopted a case study design to examine one historically underperforming high school in Bushbuckridge, Mpumalanga Province, South Africa, in 2013. The aim was to gain an understanding of the inner and outer school contextual factors associated with the high failure rate among Grade 12 students. Government documents and reports were consulted to identify factors in the district and the village surrounding the school, and a student survey was conducted to identify school, home, and student factors. A randomly sampled half of the Grade 12 student population (53 students) participated in the survey, and the quantitative data were analyzed using descriptive statistical methods. The findings showed that a host of factors is at play. The school is located in a village within a municipality which has been among the three poorest municipalities in South Africa and has the lowest Grade 12 pass rate in Mpumalanga Province. Moreover, over half of the students' families are headed by single parents, 43% are unemployed, and the majority have a low level of education. In addition, most families (83%) do not have basic study materials such as a dictionary, books, tables, and chairs. A significant number of students (70%) are over-aged (over 19 years old), and close to half of them (49%) are grade repeaters. The school itself lacks essential resources, namely computers, science laboratories, a library, and enough furniture and textbooks. Moreover, teaching and learning are negatively affected by the teachers' occasional absenteeism, inadequate lesson preparation, and poor communication skills. Overall, the continuously low performance of students in this school mirrors the vicious circle of multiple negative conditions present within and outside of the school. The complexity of factors associated with the underperformance of Grade 12 students in this school calls for a multi-dimensional intervention from government and stakeholders. One important intervention should be the placement of over-aged students and grade repeaters in suitable educational institutions for the benefit of other students.

Keywords: inner context, outer context, over-aged students, vicious cycle

Procedia PDF Downloads 197
468 Use of Sewage Sludge Ash as Partial Cement Replacement in the Production of Mortars

Authors: Domagoj Nakic, Drazen Vouk, Nina Stirmer, Mario Siljeg, Ana Baricevic

Abstract:

Wastewater treatment processes generate significant quantities of sewage sludge that need to be adequately treated and disposed of. In many EU countries, the problem of adequate disposal of sewage sludge has not been solved, nor is it governed by uniform rules, instructions, or guidelines. Disposal of sewage sludge is important not only in terms of satisfying regulations but also for choosing the optimal wastewater and sludge treatment technology. Among the solutions that seem reasonable, recycling of sewage sludge and its by-products is the top recommendation. Within the framework of sustainable development, recycling of sludge almost completely closes the cycle of wastewater treatment, in which only negligible amounts of waste requiring landfilling are generated. In many EU countries, significant amounts of sewage sludge are incinerated, resulting in a new by-product in the form of ash. Sewage sludge ash has three to five times less volume than stabilized and dewatered sludge, but it also requires further management. The combustion process also destroys hazardous organic components in the sludge and minimizes unpleasant odors. The basic objective of the presented research is to explore the possibilities of recycling sewage sludge ash as a supplementary cementitious material. The main oxides present in sewage sludge ash (SiO2, Al2O3, and CaO) are similar to those in cement, so the ash can be considered a latent hydraulic and pozzolanic material. Physical and chemical characteristics of ashes generated from sludge collected at different wastewater treatment plants and incinerated under laboratory conditions at different temperatures are investigated, since this is a prerequisite for subsequent recycling and eventual use in other industries. Research was carried out by replacing up to 20% of cement by mass in cement mortar mixes with the different obtained ashes and examining the characteristics of the mixes in the fresh and hardened states. The mixtures with the highest ash content (20%) showed an average drop in workability of about 15%, which is attributed to the increased water demand when ash is used. Although some mixes containing added ash showed compressive and flexural strengths equivalent to those of the reference mixes, a slight decrease in strength was generally observed. However, it is important to point out that the compressive strengths always remained above 85% of the reference mix, while flexural strengths remained above 75%. The ecological impact of innovative construction products containing sewage sludge ash was determined by analyzing the leaching concentrations of heavy metals. The results demonstrate that sewage sludge ash can satisfy the technical and environmental criteria for use in cementitious materials, which represents a new recycling application for an increasingly important waste material that is normally landfilled. Particular emphasis is placed on linking the composition of the generated ashes, depending on their origin and the applied treatment processes (stage of wastewater treatment, sludge treatment technology, incineration temperature), with the characteristics of the final products. Acknowledgement: This work has been fully supported by the Croatian Science Foundation under the project '7927 - Reuse of sewage sludge in concrete industry – from infrastructure to innovative construction products'.
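For orientation, the sketch below shows the simple arithmetic of a partial cement replacement by mass and a check of the strength ratios reported above (compressive strength at least 85%, flexural strength at least 75% of the reference mix). The binder mass and strength values are placeholders, not data from the study.

```python
# Hedged sketch: binder split for a given ash replacement level and a check
# against the strength ratios reported in the abstract.  Numbers are placeholders.

def binder_split(total_binder_kg, ash_fraction):
    """Split the binder mass into cement and sewage sludge ash."""
    ash_kg = total_binder_kg * ash_fraction
    return total_binder_kg - ash_kg, ash_kg

def meets_criteria(compressive_mpa, flexural_mpa, ref_compressive_mpa, ref_flexural_mpa):
    """Check a mix against the observed >=85% compressive / >=75% flexural ratios."""
    return (compressive_mpa >= 0.85 * ref_compressive_mpa
            and flexural_mpa >= 0.75 * ref_flexural_mpa)

cement_kg, ash_kg = binder_split(total_binder_kg=450.0, ash_fraction=0.20)
print(f"cement: {cement_kg:.0f} kg, ash: {ash_kg:.0f} kg of binder (assumed 450 kg total)")

# Placeholder strengths for a 20% replacement mix versus a reference mix.
print("meets criteria:", meets_criteria(44.0, 6.2, ref_compressive_mpa=50.0, ref_flexural_mpa=7.5))
```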

Keywords: cement mortar, recycling, sewage sludge ash, sludge disposal

Procedia PDF Downloads 243