Search results for: m-convex function
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4942

592 Subjectivity in Miracle Aesthetic Clinic Ambient Media Advertisement

Authors: Wegig Muwonugroho

Abstract:

Subjectivity in advertisement is a 'power' possessed by advertisements to construct trends, concepts, truth, and ideology through the subconscious mind. Advertisements, in performing their function as message conveyors, use visual representation to suggest to people what is ideal. Ambient media is an advertising medium that makes the best use of the environment in which the advertisement is placed. Miracle Aesthetic Clinic (Miracle) popularizes the visual representation of its ambient media advertisement by omitting the face-image of the two female mannequins that serve as its ambient media models. Usually, the face of a model in an advertisement is an image commodity with selling value; however, the faces of the ambient media models in the Miracle advertisement campaign are pressed against the table and wall. This concealment of the face creates not only a paradox of subjectivity but also a plurality of meaning. This research applies the critical discourse analysis method to analyze subjectivity and obtain insight into the ambient media's meaning. First, in the stage of textual analysis, the attributes attached to the female mannequins imply that the models are denoted as representations of modern women, identical with the identities of their social milieus. The communication signs constructed are those of women who lose their subjectivity and 'feel embarrassed' to show their faces to the public because of pimples on their faces. Second, the stage of analysis of discourse practice points out that ambient media as a communication medium has been comprehensively responded to by the targeted audiences. Ambient media plays the role of an actor because of its eye-catching setting, taking up space in areas where the public wander around. Indeed, when the public realize that the ambient media models are motionless, unlike humans, a stronger relation appears, marked by several responses from the targeted audiences. Third, in the stage of analysis of social practice, soap operas and celebrity gossip shows on television become a dominant discourse influencing the advertisement's meaning. The subjectivity of the Miracle advertisement corners women through the absence of women's participation in public space, the representation of women in isolation, and the portrayal of women as anxious about their social rank when their faces suffer from pimples. The ambient media campaign of Miracle is quite successful in constructing a new trend discourse of facial beauty that is not limited to benchmarks of common beauty virtues; the idea of beauty can also be presented by a 'when a woman doesn't look good' visualization.

Keywords: ambient media, advertisement, subjectivity, power

Procedia PDF Downloads 321
591 Soil Properties of Alfisols in the Nicoya Peninsula, Guanacaste, Costa Rica

Authors: Elena Listo, Miguel Marchamalo

Abstract:

This research studies the soil properties in the watershed of the Jabillo River in the Guanacaste province, Costa Rica. The soils are classified as Alfisols (T. Haplustalfs); in the flatter parts under grazing they are classified as Fluventic Haplustalfs or, as a consequence of poor drainage, as F. Epiaqualfs. The objective of this project is to define the status of the soil, to use remote sensing as a tool for analyzing the evolution of land use, and to determine the water balance of the watershed in order to improve the efficiency of the water collecting systems. Soil samples from trial pits were analyzed for secondary forest, degraded pasture, mature teak plantation, and teak regrowth -Tectona grandis L. F.-, a species that has developed favorably in the area. Furthermore, to complete the study, infiltration measurements were taken with an artificial rainfall simulator, together with soil compaction studies using a penetrometer, at points strategically selected from the different land uses. Regarding remote sensing, nearly 40 data samples were collected per plot of land; the source of radiation was sunlight reflected from the surface and underside of leaves, bare soil, streams, roads and logs, and soil samples. Infiltration reached high levels; the highest values came from the secondary forest and the mature plantation, due to a high proportion of organic matter, relatively low bulk density, and high hydraulic conductivity. Teak regrowth had a low rate of infiltration because the soil compaction studies showed partial compaction down to 50 cm. The secondary forest presented a compaction layer from 15 cm to 30 cm deep, and the degraded pasture, as a result of grazing, in the first 15 cm. In this area, the Alfisols have a high content of iron oxides, which causes higher reflectivity close to the infrared region of the electromagnetic spectrum (around 700 nm) as a result of the clay texture. This is especially the case in the teak plantation, where reflectivity reaches values of 90% due to the higher clay content relative to the other plots. In conclusion, the protective function of secondary forests is reaffirmed with regard to erosion and their high rate of infiltration. In humid climates and permeable soils, the decrease in runoff is smaller, while percolation increases. The remote sensing results indicate that, being clay soils, they retain moisture better, which means low reflectivity despite their fine texture.

Keywords: alfisols, Costa Rica, infiltration, remote sensing

Procedia PDF Downloads 694
590 Private Coded Computation of Matrix Multiplication

Authors: Malihe Aliasgari, Yousef Nejatbakhsh

Abstract:

The era of Big Data and the immensity of real-life datasets compel computation tasks to be performed in a distributed fashion, where the data are dispersed among many servers that operate in parallel. However, massive parallelization leads to computational bottlenecks due to faulty servers and stragglers. Stragglers are a few slow or delay-prone processors that can bottleneck the entire computation, because one has to wait for all the parallel nodes to finish. The problem of straggling processors has been well studied in the context of distributed computing. Recently, it has been pointed out that, for the important case of linear functions, it is possible to improve over repetition strategies in terms of the tradeoff between performance and latency by carrying out linear precoding of the data prior to processing. The key idea is that, by employing suitable linear codes operating over fractions of the original data, a function may be completed as soon as a sufficient number of processors, depending on the minimum distance of the code, have completed their operations. Matrix-matrix multiplication over practically large datasets faces computational and memory-related difficulties, which is why such operations are carried out on distributed computing platforms. In this work, we study the problem of distributed matrix-matrix multiplication W = XY under storage constraints, i.e., when each server is allowed to store a fixed fraction of each of the matrices X and Y; this operation is a fundamental building block of many science and engineering fields such as machine learning, image and signal processing, wireless communication, and optimization. Both non-secure and secure matrix multiplication are studied. We consider the setup in which the identity of the matrix of interest should be kept private from the workers, and we obtain the recovery threshold of the colluding model, that is, the number of workers that need to complete their task before the master server can recover the product W. We then address secure and private distributed matrix multiplication W = XY in which the matrix X is confidential, while the matrix Y is selected in a private manner from a library of public matrices. We present the best currently known trade-off between communication load and recovery threshold. In other words, we design an achievable PSGPD scheme for any arbitrary privacy level by concatenating a robust PIR scheme for arbitrary colluding workers and private databases with the proposed SGPD code, which provides a smaller computational complexity at the workers.
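As a companion to the abstract above, the sketch below illustrates the general idea of straggler-tolerant coded matrix multiplication with a basic polynomial code; it is not the authors' PSGPD construction, and the block counts, evaluation points, and the single simulated straggler are illustrative assumptions. Each worker multiplies encoded sub-blocks of X and Y, and any m·n finished workers suffice for the master to interpolate W.

```python
import numpy as np

def polynomial_code_matmul(X, Y, m=2, n=2, stragglers=1):
    """Illustrative polynomial-coded W = X @ Y with straggler tolerance.

    X is split into m row blocks, Y into n column blocks; worker k multiplies
    the encoded blocks evaluated at point alpha_k.  Any m*n finished workers
    suffice to interpolate all block products X_i @ Y_j.
    """
    a, b = X.shape
    _, c = Y.shape
    X_blocks = np.split(X, m, axis=0)          # each (a/m, b)
    Y_blocks = np.split(Y, n, axis=1)          # each (b, c/n)

    num_workers = m * n + stragglers           # redundancy against stragglers
    alphas = np.arange(1, num_workers + 1, dtype=float)

    # Encoding: Xtilde(x) = sum_i X_i x^i,  Ytilde(x) = sum_j Y_j x^(j*m)
    def encode(blocks, exps, x):
        return sum(B * (x ** e) for B, e in zip(blocks, exps))

    worker_results = {}
    for k, x in enumerate(alphas):
        Xt = encode(X_blocks, range(m), x)
        Yt = encode(Y_blocks, [j * m for j in range(n)], x)
        worker_results[k] = Xt @ Yt            # degree m*n-1 matrix polynomial in x

    # Pretend the last `stragglers` workers never return.
    finished = sorted(worker_results)[: m * n]

    # Decode: entry-wise polynomial interpolation from m*n evaluations.
    V = np.vander(alphas[finished], N=m * n, increasing=True)
    evals = np.stack([worker_results[k] for k in finished])      # (m*n, a/m, c/n)
    coeffs = np.linalg.solve(V, evals.reshape(m * n, -1))
    coeffs = coeffs.reshape(m * n, a // m, c // n)

    # The coefficient of x^(i + j*m) is exactly the block product X_i @ Y_j.
    rows = [np.hstack([coeffs[i + j * m] for j in range(n)]) for i in range(m)]
    return np.vstack(rows)

if __name__ == "__main__":
    X = np.random.randn(4, 6)
    Y = np.random.randn(6, 4)
    W = polynomial_code_matmul(X, Y)
    print(np.allclose(W, X @ Y))               # True up to numerical error
```

Here the recovery threshold is m·n = 4 out of 5 workers, mirroring the abstract's notion that the product can be recovered as soon as enough workers finish.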

Keywords: coded distributed computation, private information retrieval, secret sharing, stragglers

Procedia PDF Downloads 121
589 Governance Models of Higher Education Institutions

Authors: Zoran Barac, Maja Martinovic

Abstract:

Higher Education Institutions (HEIs) are a special kind of organization, with a unique purpose and combination of actors. From the societal point of view, they are central institutions in society, involved in the activities of education, research, and innovation. At the same time, their societal function gives rise to complex relationships between the involved actors, ranging from students, faculty and administration, the business community and corporate partners, and government agencies, to the general public. HEIs are also particularly interesting objects of governance research because of their unique public purpose and combination of stakeholders. Furthermore, they are a special type of institution from an organizational viewpoint: HEIs are often described as “loosely coupled systems” or “organized anarchies”, which implies the challenging nature of their governance models. Governance models of HEIs describe the roles, constellations, and modes of interaction of the involved actors in the process of strategic direction and holistic control of the institution, taking each particular context into account. Many governance models of HEIs are primarily based on the balance of power among the involved actors. Besides the actors' power and influence, leadership style and environmental contingency can also shape the governance model of an HEI. Analyzed through the frameworks of institutional and contingency theories, HEI governance models emerge as outcomes of institutional and contingency adaptation. HEIs tend to fit the institutional context comprised of formal and informal institutional rules. By fitting the institutional context, HEIs converge towards one another in terms of their structures, policies, and practices. On the other hand, the contingency framework implies that no single governance model is suitable for all situations. Consequently, the contingency approach begins with identifying the contingency variables that might impact a particular governance model. To be effective, the governance model should fit these contingency variables. While the institutional context creates converging forces on HEI governance actors and approaches, the contingency variables cause divergence in actors' behavior and governance models. Finally, an HEI governance model is a balanced adaptation of the HEI to the institutional context and the contingency variables. It also encompasses the roles, constellations, and modes of interaction of the involved actors as influenced by institutional and contingency pressures. The actors' adaptation to the institutional context brings the benefits of legitimacy and resources, whereas their adaptation to the contingency variables brings high performance and effectiveness. The HEI governance models outlined and analyzed in this paper are the collegial, bureaucratic, entrepreneurial, network, professional, political, anarchical, cybernetic, trustee, stakeholder, and amalgam models.

Keywords: governance, governance models, higher education institutions, institutional context, situational context

Procedia PDF Downloads 335
588 Comparative Study of Static and Dynamic Representations of the Family Structure and Its Clinical Utility

Authors: Marietta Kékes Szabó

Abstract:

The patterns of personality (mal)function and the individual's psychosocial environment collectively influence health status and may lie behind psychosomatic disorders. Although patients with their varied symptoms usually do not have any organic problem, the experienced complaints, the fear of serious illness, and the lack of social support often lead to increased anxiety and further enigmatic symptoms. The role of the family system and its atmosphere seems to be very important in this process. Several studies have explored the characteristics of dysfunctional family organization: an inflexible family structure, hidden conflicts that family members do not talk about during their daily interactions, undefined role boundaries, neglect or overprotection of the children by the parents, and coalitions between generations. However, the questionnaires used to measure the properties of the family system can explore it only as a unit and cannot capture the dyadic interactions, while representing the family structure with a figure-placing test offers a new perspective for better understanding the organization of the (sub)system(s). Furthermore, its dynamic form opens new perspectives for exploring the family members' joint representations, which gives us the opportunity to learn more about the flexibility of cohesion and hierarchy in the given family system. In this way, the communication among family members can also be examined. The aim of my study was to gather detailed information about the organization of psychosomatic families. In our research, we used Gehring's Family System Test (FAST) in both its static and dynamic forms to mobilize the family members' mental representations of their family and to obtain data on their individual representations as well as on their cooperation. Four families took part in the study, each including a young adult; two families with healthy participants and two families with asthmatic patient(s) were involved. The family members' behavior observed during the dynamic situation was video-recorded for further data analysis with the Noldus Observer XT 8.0 software. In accordance with previous studies, our results show that the family structure of families with at least one psychosomatic patient is more rigid than that of the control group, and the individual (typical, ideal, and conflict) dynamic representations mainly reflected the most dominant family member's concept. The behavior analysis also confirmed the heightened role of the dominant person(s) in family life, influencing family decisions, the place of the other family members, and the atmosphere of the interactions, all of which could be grasped well by the applied methods. However, further research is needed to learn more about this phenomenon, which may open the door to new therapeutic approaches.

Keywords: psychosomatic families, family structure, family system test (FAST), static and dynamic representations, behavior analysis

Procedia PDF Downloads 390
587 Date Palm Fruits from Oman Attenuate Cognitive and Behavioral Defects and Reduce Inflammation in a Transgenic Mouse Model of Alzheimer's Disease

Authors: M. M. Essa, S. Subash, M. Akbar, S. Al-Adawi, A. Al-Asmi, G. J. Guillemein

Abstract:

Transgenic (Tg) mice that carry an amyloid precursor protein (APP) gene mutation develop extracellular amyloid beta (Aβ) deposition in the brain and severe memory and behavioral deficits with age. These mice serve as an important animal model for testing the efficacy of novel drug candidates for the treatment and management of symptoms of Alzheimer's disease (AD). Several reports have suggested that oxidative stress is the underlying cause of Aβ neurotoxicity in AD. Date palm fruits contain very high levels of antioxidants and several medicinal properties that may be useful for improving the quality of life of AD patients. In this study, we investigated the effect of dietary supplementation with Omani date palm fruits on memory, anxiety and learning skills, along with inflammation, in an AD mouse model carrying the double Swedish APP mutation (APPsw/Tg2576). The experimental groups of APP-transgenic mice were fed custom-mix diets (pellets) containing 2% or 4% date palm fruits from the age of 4 months. We assessed spatial memory and learning ability, psychomotor coordination, and anxiety-related behavior in Tg and wild-type mice at the ages of 4-5 months and 18-19 months using the Morris water maze test, rotarod test, elevated plus maze test, and open field test. Inflammatory parameters were also analyzed. APPsw/Tg2576 mice fed a standard chow diet without dates showed significant memory deficits, increased anxiety-related behavior, and severe impairment in spatial learning ability, position discrimination learning ability and motor coordination, along with increased inflammation, compared to wild-type mice on the same diet at the age of 18-19 months. In contrast, APPsw/Tg2576 mice fed a diet containing 2% or 4% dates showed significant improvements in memory, learning, locomotor function, and anxiety, with reduced inflammatory markers, compared to APPsw/Tg2576 mice fed the standard chow diet. Our results suggest that dietary supplementation with dates may slow the progression of cognitive and behavioral impairments in AD. The exact mechanism is still unclear, and further extensive research is needed.

Keywords: Alzheimer's disease, date palm fruits, Oman, cognitive decline, memory loss, anxiety, inflammation

Procedia PDF Downloads 422
586 Systematic Study of Structure Property Relationship in Highly Crosslinked Elastomers

Authors: Natarajan Ramasamy, Gurulingamurthy Haralur, Ramesh Nivarthu, Nikhil Kumar Singha

Abstract:

Elastomers are polymeric materials with varied backbone architectures, ranging from linear to dendrimeric structures, and a wide variety of monomeric repeat units. These elastomers are strongly viscous and only weakly elastic when they are not cross-linked; when cross-linked, depending on the extent of crosslinking, their properties can range from highly flexible to highly stiff. Lightly cross-linked systems are well studied and reported. Understanding the nature of highly cross-linked rubbers in terms of chemical structure and architecture is critical for a variety of applications. One of the critical parameters is cross-link density. In the current work, we have studied the highly cross-linked state of linear, lightly branched and star-shaped branched elastomers and determined the cross-link density using different models. The change in hardness, the shift in Tg, the change in modulus, and the swelling behavior were measured experimentally as a function of the extent of curing, and these properties were analyzed using various models to determine the cross-link density. We used hardness measurements to examine cure time, and the relationship between hardness and the extent of curing was determined. It is well known that micromechanical transitions such as Tg and storage modulus are related to the extent of crosslinking. The Tg of the elastomer in different crosslinked states was determined by DMA, and the crosslink density was estimated from the plateau modulus using Nielsen's model. For lightly crosslinked systems, the crosslink density is usually estimated from the equilibrium swelling ratio in a solvent using the Flory-Rehner model. For highly crosslinked systems, however, the Flory-Rehner model is not valid because of the short chain lengths, so models based on the assumption of the polymer as a non-Gaussian chain are used to estimate the crosslink density: 1) the Helmis-Heinrich-Straube (HHS) model, 2) the model of Gloria M. Gusler and Yoram Cohen, and 3) the model of Barbara D. Barr-Howell and Nikolaos A. Peppas. In this work, correction factors to the existing models were determined, and based on them the structure-property relationship of highly crosslinked elastomers was studied.
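For readers unfamiliar with the swelling-based estimate mentioned above, the following minimal sketch applies the Flory-Rehner relation to compute crosslink density from the equilibrium polymer volume fraction; the solvent molar volume, interaction parameter, and volume fraction are illustrative assumptions rather than values from the study, and the non-Gaussian corrections for highly crosslinked networks discussed in the abstract are not included.

```python
import math

def flory_rehner_crosslink_density(v2, chi, V1):
    """Effective crosslink density (mol of network chains per cm^3 of polymer)
    from the Flory-Rehner equilibrium-swelling relation.

    v2  : polymer volume fraction in the swollen gel (dimensionless)
    chi : polymer-solvent interaction parameter (dimensionless)
    V1  : molar volume of the swelling solvent (cm^3/mol)
    """
    numerator = -(math.log(1.0 - v2) + v2 + chi * v2 ** 2)
    denominator = V1 * (v2 ** (1.0 / 3.0) - v2 / 2.0)
    return numerator / denominator

# Illustrative values only: toluene as solvent (V1 ~ 106 cm^3/mol), chi ~ 0.39,
# and a polymer volume fraction computed from the equilibrium swelling ratio.
nu_e = flory_rehner_crosslink_density(v2=0.25, chi=0.39, V1=106.3)
print(f"crosslink density ~ {nu_e:.2e} mol/cm^3")   # ~2.5e-04 for these inputs
```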

Keywords: dynamic mechanical analysis, glass transition temperature, parts per hundred grams of rubber, crosslink density, number of networks per unit volume of elastomer

Procedia PDF Downloads 165
585 Virtual Reality in COVID-19 Stroke Rehabilitation: Preliminary Outcomes

Authors: Kasra Afsahi, Maryam Soheilifar, S. Hossein Hosseini

Abstract:

Background: There is growing evidence that cerebrovascular accident (CVA) can be a consequence of COVID-19 infection. Understanding novel treatment approaches is important in optimizing patient outcomes. Case: This case explores the use of virtual reality (VR) in the treatment of a 23-year-old COVID-positive female presenting with left hemiparesis in August 2020. Imaging showed an ischemic stroke of the right globus pallidus, thalamus, and internal capsule. Conventional rehabilitation was started two weeks later, with VR included. The game-based VR technology used was developed for stroke patients and is based on upper extremity exercises and functions. Physical examination showed left hemiparesis with muscle strength 3/5 in the upper extremity and 4/5 in the lower extremity. The range of motion of the shoulder was 90-100 degrees. The speech exam showed a mild decrease in fluency, and mild dynamic asymmetry of the lower lip was seen. Babinski was positive on the left. Gait speed was decreased (75 steps per minute). Intervention: Our game-based VR system was developed from upper extremity physiotherapy exercises for post-stroke patients to increase active, voluntary movement of the upper extremity joints and improve function. The conventional program was initiated with active exercises, shoulder sanding for joint range of motion, shoulder walking, the shoulder wheel, and combined movements of the shoulder, elbow, and wrist joints, with alternating flexion-extension and pronation-supination movements, as well as pegboard and Purdue Pegboard exercises. Fine-motor training included smart gloves, biofeedback, the finger ladder, and writing. The difficulty of the game increased at each stage of practice as the patient's performance progressed. Outcome: After six weeks of treatment, gait and speech were normal, and upper extremity strength had improved to near-normal status. No adverse effects were noted. Conclusion: This case suggests that VR is a useful tool in the treatment of a patient with COVID-19-related CVA. The safety of newly developed instruments for such cases provides new approaches to improving therapeutic outcomes and prognosis, as well as increasing the satisfaction rate among patients.

Keywords: covid-19, stroke, virtual reality, rehabilitation

Procedia PDF Downloads 140
584 The Determination of Pb and Zn Phytoremediation Potential and Effect of Interaction between Cadmium and Zinc on Metabolism of Buckwheat (Fagopyrum Esculentum)

Authors: Nurdan Olguncelik Kaplan, Aysen Akay

Abstract:

Nowadays, soil pollution has become a global problem. Pollutants added to the soil from external sources destroy and change its structure, the problems become more complex, and their correction therefore becomes harder and more costly. Cadmium is highly mobile in the soil-plant system, so it can enter the human and animal food chain very easily, which can be very dangerous. The cadmium absorbed and stored by plants causes many metabolic changes, affecting protein synthesis, nitrogen and carbohydrate metabolism, enzyme (nitrate reductase) activation, and photosynthesis and chlorophyll synthesis. Cadmium has no known biological function in plants and is not an essential element; plants generally take up cadmium in small amounts, and this element competes with zinc. Cadmium also causes root damage. Buckwheat (Fagopyrum esculentum) is an important nutraceutical because of its high content of flavonoids, minerals and vitamins, and its nutritionally balanced amino-acid composition. Buckwheat has relatively high biomass productivity, is adapted to many areas of the world, and can flourish in infertile fields; therefore, buckwheat plants are widely used for phytoremediation. The aim of this study was to evaluate the phytoremediation capacity of the high-yielding plant buckwheat (Fagopyrum esculentum) in soils contaminated with Cd and Zn. The soils were treated with different doses of Cd (0, 12.5, 25, 50 and 100 mg Cd kg−1 soil in the form of 3CdSO4.8H2O) and Zn (0, 10 and 30 mg Zn kg−1 soil in the form of ZnSO4.7H2O) and incubated for about 60 days. Buckwheat seeds were then sown and grown for three months under greenhouse conditions. The test plants were irrigated with pure water after planting. The buckwheat seeds (Gunes and Aktas genotypes) were obtained from the Bahri Dagdas International Agricultural Research Institute. After harvest, the Cd and Zn concentrations of the plant biomass and grain, the yield, and the translocation factors (TFs) for Cd and Zn were determined. Cadmium accumulation in biomass and grain increased significantly in a dose-dependent manner. Long-term field trials are required to further investigate the potential of buckwheat to reclaim the soil, but this could be undertaken in conjunction with actual remediation schemes. However, the differences in element accumulation among the genotypes were affected more by the properties of the genotypes than by the soil properties; the Gunes genotype accumulated more lead than the Aktas genotype.
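The translocation factor mentioned above is usually computed as a simple concentration ratio; the sketch below assumes the common shoot-to-root definition together with a soil-normalised bioconcentration factor, and the concentrations used are made up for illustration only, not values measured in the study.

```python
def translocation_factor(c_shoot_mg_kg, c_root_mg_kg):
    """Translocation factor (TF): shoot-to-root metal concentration ratio.
    TF > 1 suggests efficient root-to-shoot transport of the metal."""
    return c_shoot_mg_kg / c_root_mg_kg

def bioconcentration_factor(c_plant_mg_kg, c_soil_mg_kg):
    """Bioconcentration factor (BCF): plant-to-soil metal concentration ratio."""
    return c_plant_mg_kg / c_soil_mg_kg

# Illustrative (not measured) concentrations for a 25 mg Cd/kg soil treatment.
print(translocation_factor(c_shoot_mg_kg=3.1, c_root_mg_kg=7.8))    # ~0.40
print(bioconcentration_factor(c_plant_mg_kg=3.1, c_soil_mg_kg=25))  # ~0.12
```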

Keywords: buckwheat, cadmium, phytoremediation, zinc

Procedia PDF Downloads 416
583 Exploration of Building Information Modelling Software to Develop Modular Coordination Design Tool for Architects

Authors: Muhammad Khairi bin Sulaiman

Abstract:

The utilization of Building Information Modelling (BIM) in the construction industry has given designers in the Architecture, Engineering and Construction (AEC) industry the opportunity to move from conventional manual drafting to a way of working that creates alternative designs quickly and produces more accurate, reliable and consistent outputs. By using BIM software, designers can create digital content that manipulates data through BIM's parametric model. With BIM software, more design alternatives can be created quickly, and design problems can be explored further to produce a better design faster than with conventional design methods. Generally, however, BIM is used as a documentation mechanism, and its capabilities as a design tool have not been fully explored and utilised. In this context, Modular Coordination (MC) design is encouraged as a sustainable design practice, since MC design reduces material wastage through standard dimensioning, pre-fabrication, and repetitive, modular construction and components. However, MC design involves a complex set of rules and dimensions, so a tool is needed to make the process easier. Since the parameters in BIM can easily be manipulated to follow MC rules and dimensioning, the integration of BIM software with MC design is proposed for architects during the design stage. With such a tool, the acceptance and effective application of MC design should improve. Consequently, this study analyses and explores the function and customization of BIM objects and the capability of BIM software to expedite the application of MC design during the design stage for architects. With this application, architects will be able to create building models and locate objects within reference modular grids that adhere to MC rules and dimensions. The parametric modeling capabilities of BIM will also act as a visual tool that further automates the three-dimensional space-planning modeling process. (Method) The study first analyses and explores the parametric modeling capabilities of rule-based BIM objects and then customizes a reference grid within the rules and dimensioning of MC. This approach enhances the architect's overall design process and enables architects to automate complex modeling that was nearly impossible before. A prototype using a residential quarter will be modeled, and a set of reference grids guided by specific MC rules and dimensions will be used to develop a variety of space-planning configurations. With this tool, the design process is expedited and the use of MC design in the construction industry is encouraged.
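To make the modular coordination rules concrete, the sketch below shows the kind of dimension rounding and reference-grid generation such a BIM tool could perform; the 100 mm basic module (1M), the 3M multimodule, and the helper functions are assumptions for illustration and do not represent any particular BIM software API.

```python
BASIC_MODULE_MM = 100          # 1M, the basic module used in modular coordination
HORIZONTAL_MULTIMODULE = 300   # 3M, a common horizontal planning multimodule

def snap_to_module(length_mm, module=BASIC_MODULE_MM):
    """Round a design dimension to the nearest whole multiple of the module."""
    return module * round(length_mm / module)

def modular_grid(x_extent_mm, y_extent_mm, module=HORIZONTAL_MULTIMODULE):
    """Generate reference-grid line positions for placing BIM objects."""
    xs = list(range(0, x_extent_mm + 1, module))
    ys = list(range(0, y_extent_mm + 1, module))
    return xs, ys

# Example: a 3,460 mm room dimension is coordinated to 3,500 mm (35M),
# and grid lines are produced for a 6.0 m x 3.6 m residential bay.
print(snap_to_module(3460))                 # 3500
xs, ys = modular_grid(6000, 3600)
print(xs)                                   # [0, 300, 600, ..., 6000]
```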

Keywords: building information modeling, modular coordination, space planning, customization, BIM application, MC space planning

Procedia PDF Downloads 83
582 Tumor Size and Lymph Node Metastasis Detection in Colon Cancer Patients Using MR Images

Authors: Mohammadreza Hedyehzadeh, Mahdi Yousefi

Abstract:

Colon cancer is one of the most common cancers, and its prevalence is predicted to increase due to poor eating habits. Nowadays, because of people's busy lives, the consumption of fast food is increasing; therefore, the diagnosis and treatment of this disease are of particular importance. To determine the best treatment approach for each specific colon cancer patient, the oncologist must know the stage of the tumor. The most common method of determining the tumor stage is the TNM staging system, in which M indicates the presence of metastasis, N indicates the extent of spread to the lymph nodes, and T indicates the size of the tumor. Clearly, in order to determine all three of these parameters, an imaging method must be used, and the gold-standard imaging protocols for this purpose are CT and PET/CT. In CT imaging, due to the use of X-rays, the cancer risk and the absorbed dose of the patient are high, while access to PET/CT is limited by its high cost. Therefore, in this study, we aimed to estimate the tumor size and the extent of its spread to the lymph nodes using MR images. More than 1300 MR images were collected from the TCIA portal, and in the first step (pre-processing), histogram equalization to improve image quality and resizing to a uniform image size were performed. Two expert radiologists, who have worked on colon cancer cases for more than 21 years, segmented the images and extracted the tumor regions. The next step is feature extraction from the segmented images and classification of the data into three classes: T0N0, T3N1 and T3N2. In this article, the VGG-16 convolutional neural network has been used to perform both of the above-mentioned tasks, i.e., feature extraction and classification. This network has 13 convolutional layers for feature extraction and three fully connected layers with a softmax activation function for classification. In order to validate the proposed method, 10-fold cross-validation was used: the data were randomly divided into three parts, training (70% of the data), validation (10% of the data), and the rest for testing. This was repeated 10 times; each time the accuracy, sensitivity and specificity of the model were calculated, and the average over the ten repetitions is reported as the result. The accuracy, specificity and sensitivity of the proposed method on the testing dataset were 89.09%, 95.8% and 96.4%, respectively. Compared to previous studies, the use of a safe imaging technique (MRI) and the absence of predefined hand-crafted imaging features for determining the stage of colon cancer patients are among the advantages of this study.
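A minimal sketch of the described network is given below, assuming a Keras implementation in which the 13 convolutional layers of VGG-16 act as a frozen feature extractor followed by three fully connected layers and a softmax over the three classes; the input size, dense-layer widths, weight initialization, and optimizer settings are assumptions rather than the authors' exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_vgg16_stager(input_shape=(224, 224, 3), num_classes=3):
    """VGG-16 convolutional base for feature extraction followed by three
    fully connected layers with a softmax output over T0N0 / T3N1 / T3N2."""
    # Grayscale MR slices would be resized and replicated to three channels
    # to match this input shape (an assumption, not stated in the abstract).
    base = tf.keras.applications.VGG16(include_top=False,
                                       weights="imagenet",
                                       input_shape=input_shape)
    base.trainable = False          # use the 13 conv layers as a fixed feature extractor

    model = models.Sequential([
        base,
        layers.Flatten(),
        layers.Dense(4096, activation="relu"),
        layers.Dense(4096, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_vgg16_stager()
model.summary()
```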

Keywords: colon cancer, VGG-16, magnetic resonance imaging, tumor size, lymph node metastasis

Procedia PDF Downloads 56
581 An Investigation into the Use of an Atomistic, Hermeneutic, Holistic Approach in Education Relating to the Architectural Design Process

Authors: N. Pritchard

Abstract:

Within architectural education, students arrive fore-armed with their life experience, knowledge gained from subject-based learning, their brains and, more specifically, their imaginations. The learning-by-doing that they embark on in studio-based/project-based learning calls for supervision that allows the student to proactively undertake research and experimentation with possible design solutions. The degree to which this supervision includes direction is subject to debate and differing opinion. It can be argued that if the student is to learn by doing, then design decision-making within the design process needs to be instigated and owned by the student so that they can personally reflect on and evaluate those decisions. Within this premise lies the problem that the student's endeavours can become unstructured and unfocused as they work their way into a new and complex activity. A resultant weakness can be that the design activity becomes compartmentalised rather than holistic or comprehensive, and the student's reflections are consequently impoverished in terms of providing a positive, informative feedback loop. The construct proffered in this paper is that a supportive 'armature' or 'Heuristic-Framework' can be developed that facilitates a holistic approach and reflective learning. The normal explorations of architectural design comprise analysing the site and context, reviewing building precedents, and assimilating the briefing information. However, the student can still be compromised by 'not knowing what they need to know'. The long-serving triad 'Firmness, Commodity and Delight' provides a broad-brush framework of considerations to explore and integrate into good design. If this were further atomised into a subdivision formed from the disparate aspects of architectural design that need to be considered within the design process, the student could sift through the facts more methodically and reflectively, considering their interrelationships, conflicts and alliances. The words FACTS and SIEVE hold the acronym of the aspects that form the Heuristic-Framework: Function, Aesthetics, Context, Tectonics, Spatial, Servicing, Infrastructure, Environmental, Value and Ecological issues. The Heuristic could be used as a Hermeneutic Model, with each aspect of design being focused on and considered in abstraction and then considered in relation to the other aspects and to the design proposal as a whole. Importantly, the heuristic could also be used as a method for gathering information and enhancing the design brief. The more poetic, mysterious, intuitive, unconscious processes should still be able to occur for the student. The Heuristic-Framework should not be seen as comprehensive, prescriptive, formulaic or inhibiting to the wide exploration of possibilities and solutions within the architectural design process.

Keywords: atomistic, hermeneutic, holistic approach, architectural design studio education

Procedia PDF Downloads 257
580 Optimizing Production Yield Through Process Parameter Tuning Using Deep Learning Models: A Case Study in Precision Manufacturing

Authors: Tolulope Aremu

Abstract:

This paper explores the use of deep learning to optimize production yield by tuning a few key process parameters in a manufacturing environment. The study explicitly addresses how to maximize production yield and minimize operational costs by utilizing advanced neural network models, specifically Long Short-Term Memory (LSTM) and Convolutional Neural Networks (CNN), implemented using the Python-based frameworks TensorFlow and Keras. The research targets precision molding processes in which the temperature ranges between 150°C and 220°C, the pressure between 5 and 15 bar, and the material flow rate between 10 and 50 kg/h; these are critical parameters that have a great effect on yield. A dataset of 1 million production cycles over five continuous years was considered, with detailed logs of the exact parameter settings and the yield output. The LSTM model captures time-dependent trends in the production data, while the CNN analyzes the spatial correlations between parameters. The models were designed in a supervised learning manner, with an MSE loss function optimized through the Adam optimizer. After a total of 100 training epochs, the models achieved 95% accuracy in recommending optimal parameter configurations. Results indicated a 12% increase in production yield compared with traditional RSM and DOE methods. In addition, the error margin was reduced by 8%, yielding consistently high product quality from the deep learning models. The annual monetary value was around $2.5 million in costs saved on material waste, energy consumption, and equipment wear as a result of implementing the optimized process parameters. The system was deployed in an industrial production environment with a hybrid cloud setup: Microsoft Azure for data storage, with model training and deployment performed on Google Cloud AI. Real-time process monitoring and automatic parameter tuning rely on this cloud infrastructure. In summary, deep learning models, especially those employing LSTM and CNN architectures, optimize production yield by fine-tuning process parameters. Future research will consider reinforcement learning with a view to further enhancing system autonomy and scalability across various manufacturing sectors.
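As an illustration of the modelling setup described above, the following is a minimal Keras sketch of an LSTM regressor that maps short sequences of the three quoted process parameters to a predicted yield; the window length, layer sizes, and the synthetic training data are assumptions, not the study's actual model or dataset.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW = 20          # recent production cycles per training sample (assumption)
N_PARAMS = 3         # temperature [C], pressure [bar], material flow rate [kg/h]

def build_yield_lstm():
    """LSTM regressor: sequences of process-parameter settings -> predicted yield."""
    model = models.Sequential([
        layers.Input(shape=(WINDOW, N_PARAMS)),
        layers.LSTM(64),
        layers.Dense(32, activation="relu"),
        layers.Dense(1),                     # predicted yield (e.g. fraction of good parts)
    ])
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model

# Synthetic stand-in data drawn from the parameter ranges quoted in the abstract.
rng = np.random.default_rng(0)
temps = rng.uniform(150, 220, size=(1000, WINDOW, 1))
press = rng.uniform(5, 15, size=(1000, WINDOW, 1))
flow = rng.uniform(10, 50, size=(1000, WINDOW, 1))
X = np.concatenate([temps, press, flow], axis=-1)
y = rng.uniform(0.7, 1.0, size=(1000, 1))    # placeholder yield fractions

model = build_yield_lstm()
model.fit(X, y, epochs=2, batch_size=32, validation_split=0.1, verbose=0)
```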

Keywords: production yield optimization, deep learning, tuning of process parameters, LSTM, CNN, precision manufacturing, TensorFlow, Keras, cloud infrastructure, cost saving

Procedia PDF Downloads 25
579 Numerical Reproduction of Hemodynamic Change Induced by Acupuncture to ST-36

Authors: Takuya Suzuki, Atsushi Shirai, Takashi Seki

Abstract:

Acupuncture therapy is one of the treatments in traditional Chinese medicine. Recently, some reports have shown the effectiveness of acupuncture; however, its full acceptance has been hindered by a lack of understanding of the mechanism of the therapy. Acupuncture applied to Zusanli (ST-36) enhances blood flow volume in the superior mesenteric artery (SMA), suggesting that the peripheral vascular resistance regulating SMA blood flow is dominated by the parasympathetic system, with inhibition of the sympathetic system. In this study, a lumped-parameter approximation model of blood flow in the systemic arteries was developed. The model is extremely simple, consisting of the aorta, the carotid arteries, the arteries of the four limbs and the SMA, together with their peripheral vascular resistances. Each artery is simplified to a tapered tube, and the resistances are modelled as linear resistances. We numerically investigated the contribution of the peripheral vascular resistance of the SMA to the systemic blood distribution using this model. At the upstream end of the model, which corresponds to the left ventricle, two types of boundary condition were applied: mean left ventricular pressure, which correlates with blood pressure (BP), and mean cardiac output, which corresponds to the cardiac index (CI). We attempted to reproduce the experimentally obtained hemodynamic change, expressed as the ratios of the aforementioned hemodynamic parameters to their initial values before acupuncture, by regulating the peripheral vascular resistances and the upstream boundary condition. First, only the peripheral vascular resistance of the SMA was changed, to show the contribution of this resistance to the change in SMA blood flow volume and, ideally, to reproduce the experimentally obtained change. It was found, however, that this was not sufficient to reproduce the experimental result. We therefore also changed the resistances of the other arteries, all by the same amount, together with the value given at the upstream boundary. Consequently, we successfully reproduced the hemodynamic change and found that regulating the upstream boundary condition towards the value experimentally obtained after the stimulation is necessary for the reproduction, even though statistically significant changes in BP and CI were not observed in the experiment. It is generally known that sympathetic and parasympathetic tone take part in regulating the entire systemic circulation, including cardiac function. The present result indicates that stimulation of ST-36 could induce vasodilation of the peripheral circulation of the SMA and vasoconstriction of that of the other arteries. In addition, it implies that the experimentally obtained small changes in BP and CI induced by the acupuncture may be involved in the therapeutic response.
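The lumped-parameter idea can be illustrated with a steady-state toy version of the model: parallel linear resistances fed by a common upstream pressure, so that lowering the SMA resistance while slightly raising the others redistributes flow in the way the abstract describes. All resistance and pressure values below are illustrative assumptions, not the study's parameters.

```python
def steady_flows(p_lv_mmHg, resistances_mmHg_min_per_L):
    """Steady-state flow through parallel peripheral beds fed by one pressure.

    Each bed obeys the linear relation Q = P / R (Ohm's-law analogue),
    so total cardiac output is the sum of the branch flows.
    """
    flows = {name: p_lv_mmHg / r for name, r in resistances_mmHg_min_per_L.items()}
    flows["total"] = sum(flows.values())
    return flows

# Illustrative resistances (mmHg.min/L) before acupuncture stimulation ...
before = {"SMA": 120.0, "carotid": 60.0, "limbs": 40.0}
# ... and after: the SMA bed dilates (resistance falls), the others constrict slightly.
after = {"SMA": 90.0, "carotid": 66.0, "limbs": 44.0}

q_before = steady_flows(90.0, before)
q_after = steady_flows(90.0, after)
print(f"SMA flow ratio after/before: {q_after['SMA'] / q_before['SMA']:.2f}")   # ~1.33
print(f"total flow ratio (CI proxy): {q_after['total'] / q_before['total']:.2f}")  # ~0.98
```

With these illustrative numbers, SMA flow rises by about a third while total output barely changes, which is qualitatively the pattern the abstract reports.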

Keywords: acupuncture, hemodynamics, lumped-parameter approximation, modeling, systemic vascular resistance

Procedia PDF Downloads 223
578 Computational Characterization of Electronic Charge Transfer in Interfacial Phospholipid-Water Layers

Authors: Samira Baghbanbari, A. B. P. Lever, Payam S. Shabestari, Donald Weaver

Abstract:

Existing signal transmission models, although undoubtedly useful, have proven insufficient to explain the full complexity of information transfer within the central nervous system. The development of transformative models will necessitate a more comprehensive understanding of neuronal lipid membrane electrophysiology. Pursuant to this goal, the role of highly organized interfacial phospholipid-water layers emerges as a promising case study. A series of phospholipids in neural-glial gap junction interfaces, as well as cholesterol molecules, have been computationally modelled using high-performance density functional theory (DFT) calculations. Subsequent 'charge decomposition analysis' calculations have revealed a net transfer of charge from phospholipid orbitals through the organized interfacial water layer before ultimately finding its way to cholesterol acceptor molecules. The specific pathway of charge transfer from phospholipid via water layers towards cholesterol has been mapped in detail. Cholesterol is an essential membrane component that is overrepresented in neuronal membranes as compared to other mammalian cells; given this relative abundance, its apparent role as an electronic acceptor may prove to be a relevant factor in further signal transmission studies of the central nervous system. The timescales over which this electronic charge transfer occurs have also been evaluated by utilizing a system design that systematically increases the number of water molecules separating lipids and cholesterol. Memory loss through hydrogen-bonded networks in water can occur at femtosecond timescales, whereas existing action potential-based models are limited to micro- or nanosecond scales. As such, the development of future models that attempt to explain faster-timescale signal transmission in the central nervous system may benefit from our work, which provides additional information regarding fast-timescale energy transfer mechanisms occurring through interfacial water. The study uses a dataset that includes six distinct phospholipids and a collection of cholesterol molecules. Ten optimized geometric characteristics (features) were employed to conduct binary classification through an artificial neural network (ANN), differentiating cholesterol from the various phospholipids. This stems from our understanding that all lipids within the first group function as electronic charge donors, while cholesterol serves as an electronic charge acceptor.
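A minimal sketch of the classification step is given below, assuming a small feed-forward network over the ten geometric descriptors; the feature values and the separating rule are synthetic placeholders standing in for the descriptors extracted from the DFT-optimized geometries, and the network architecture is an assumption, not the authors' exact ANN.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Placeholder data: 10 geometric descriptors per optimized structure,
# label 1 = cholesterol (charge acceptor), 0 = phospholipid (charge donor).
X = rng.normal(size=(200, 10))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)   # synthetic separable rule

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)
scaler = StandardScaler().fit(X_train)

clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
clf.fit(scaler.transform(X_train), y_train)
print("test accuracy:", clf.score(scaler.transform(X_test), y_test))
```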

Keywords: charge transfer, signal transmission, phospholipids, water layers, ANN

Procedia PDF Downloads 71
577 Historical Development of Negative Emotive Intensifiers in Hungarian

Authors: Martina Katalin Szabó, Bernadett Lipóczi, Csenge Guba, István Uveges

Abstract:

In this study, an exhaustive analysis was carried out of the historical development of negative emotive intensifiers in the Hungarian language via NLP methods. Intensifiers are linguistic elements that modify or reinforce a variable character of the lexical unit they apply to; intensifiers therefore appear with other lexical items, such as adverbs, adjectives, verbs and, infrequently, nouns. Due to the complexity of this phenomenon (a set of sociolinguistic, semantic, and historical aspects), many lexical items can operate as intensifiers, and intensifiers are admittedly one of the most rapidly changing groups of elements in the language. From a linguistic point of view, a particularly interesting special group is the so-called negative emotive intensifiers, which, on their own, without context, have semantic content that can be associated with negative emotion, but which in particular cases may function as intensifiers (e.g. borzasztóan jó 'awfully good', which means 'excellent'). Despite their special semantic features, negative emotive intensifiers are scarcely examined in the literature on the basis of large historical corpora via NLP methods. In order to become better acquainted with trends over time concerning these intensifiers, we exhaustively analysed a specific historical corpus, namely the Magyar Történeti Szövegtár (Hungarian Historical Corpus). This corpus (containing 3 million text words) is a collection of texts of various genres and styles produced between 1772 and 2010. Since the corpus consists of raw texts and does not contain any additional information about the language features of the data (such as stemming or morphological analysis), a large amount of manual work was required to process the data. Thus, based on a lexicon of negative emotive intensifiers compiled in a previous phase of the research, every occurrence of each intensifier was queried, and the results were stored in a separate data frame. Then, basic linguistic processing (POS-tagging, lemmatization, etc.) was carried out automatically with the 'magyarlanc' NLP toolkit. Finally, the frequency and collocation features of all the negative emotive words were analyzed automatically in the corpus. The outcomes of the research reveal in detail how these words have proceeded through grammaticalization over time, i.e., how they change from lexical elements into grammatical ones and slowly go through a delexicalization process (their negative content diminishes over time). Moreover, it was also pointed out which negative emotive intensifiers are at the same stage of this process in the same time period. A closer look at the different domains of the analysed corpus also made it clear that, during this process, the importance of the pragmatic role increases: the newer use expresses the speaker's subjective, evaluative opinion at a certain level.
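The frequency-and-collocation step described above can be sketched as follows on already-lemmatized text; the toy corpus, the 50-year binning, and the two-item intensifier list are placeholders, and the real pipeline would consume magyarlanc output rather than hand-written strings.

```python
from collections import Counter, defaultdict

# Placeholder lemmatized corpus: (year, list of lemmas) pairs.
corpus = [
    (1821, "a tegnapi előadás borzasztóan hosszú volt".split()),
    (1905, "a film rettenetesen jó és borzasztóan vicces volt".split()),
    (1998, "ez a torta borzasztóan finom".split()),
]
intensifiers = {"borzasztóan", "rettenetesen"}   # 'awfully', 'terribly'

freq_by_period = defaultdict(Counter)
collocates = defaultdict(Counter)

for year, lemmas in corpus:
    period = (year // 50) * 50                   # 50-year bins
    for i, lemma in enumerate(lemmas):
        if lemma in intensifiers:
            freq_by_period[period][lemma] += 1
            if i + 1 < len(lemmas):              # word modified by the intensifier
                collocates[lemma][lemmas[i + 1]] += 1

print(dict(freq_by_period))
print({k: v.most_common(2) for k, v in collocates.items()})
```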

Keywords: historical corpus analysis, historical linguistics, negative emotive intensifiers, semantic changes over time

Procedia PDF Downloads 232
576 Method for Controlling the Groundwater Polluted by the Surface Waters through Injection Wells

Authors: Victorita Radulescu

Abstract:

Introduction: The optimum exploitation of agricultural land in the presence of an aquifer polluted by surface sources requires close monitoring of the groundwater level, both in periods of intense irrigation and in the absence of irrigation, in times of drought. Currently in Romania, in the south of the country (the Baragan area), many agricultural lands face the risk of groundwater pollution in the absence of systematic irrigation, correlated with climate change. Basic Methods: The non-steady flow of groundwater in an aquifer can be described by Boussinesq's partial differential equation. The finite element method was used, applied to the porous medium, for the water mass balance equation. Through a proper structure of the initial and boundary conditions, the flow in drainage or injection systems of wells may be modeled according to the period of irrigation or prolonged drought. The boundary conditions consist of the groundwater levels required at the margins of the analyzed area, in conformity with the real behaviour of the polluting emissaries, following the method of the double steps. Major Findings/Results: The drainage condition is equivalent to operating regimes with negative (extraction) flow rates on the two or three rows of wells, so as to ensure the pollutant transport, modeled with variable flow in groups of two adjacent nodes. In order to keep the water table in accordance with the real constraints, it is necessary, for example, to restrict its top level below an imposed value required at each node. The objective function consists of the sum of the absolute values of the differences between infiltration flow rates, increased by a large penalty factor where positive pollutant values occur. Under these conditions, a balanced structure of the pollutant concentration is maintained in the groundwater. The spatial coordinates and the drainage flows through the wells represent the parameters modified during the optimization process. Conclusions: The presented calculation scheme was applied to an area with a cross-section of 50 km between two emissaries with different altitudes and different pollution levels. The input data were correlated with in-situ measurements, such as the level of the bedrock, the grain size of the field, the slope, etc. This method of calculation can also be extended to determine the variation of the groundwater in the aquifer following flood wave propagation in the emissaries.
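A highly simplified numerical sketch of the governing equation is given below: a one-dimensional, explicitly time-stepped, linearized Boussinesq solver with fixed-head margins (the emissaries) and a row of drainage wells represented as a sink term. The domain width matches the 50 km cross-section mentioned above, but the hydraulic parameters, well fluxes, and the finite-difference (rather than finite-element) discretization are illustrative assumptions, not the study's scheme.

```python
import numpy as np

def boussinesq_1d(h0, K, S, dx, dt, steps, well_flux):
    """Explicit finite-difference solution of the linearized 1-D Boussinesq
    equation  S*dh/dt = T*d2h/dx2 + q,  with fixed-head margins.

    h0        : initial water-table elevation per node (m)
    K         : hydraulic conductivity (m/day)
    S         : specific yield / storage coefficient (-)
    well_flux : recharge (+) or drainage (-) per node (m/day), models the wells
    """
    h = h0.copy()
    T = K * h0.mean()                          # linearized transmissivity (m^2/day)
    alpha = T * dt / (S * dx ** 2)
    assert alpha <= 0.5, "explicit scheme unstable; reduce dt"
    for _ in range(steps):
        lap = h[:-2] - 2.0 * h[1:-1] + h[2:]
        h[1:-1] += alpha * lap + dt * well_flux[1:-1] / S
        # boundary nodes keep the emissary (river) stage: fixed-head condition
    return h

# 101 nodes over a 50 km section, so dx = 500 m; illustrative parameters only.
h0 = np.linspace(12.0, 8.0, 101)               # water table between two emissaries (m)
q = np.zeros_like(h0)
q[40:43] = -0.001                              # a row of drainage wells (m/day)

h_end = boussinesq_1d(h0, K=15.0, S=0.15, dx=500.0, dt=0.5, steps=365, well_flux=q)
print(f"max drawdown near the wells: {np.max(h0 - h_end):.2f} m")
```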

Keywords: environmental protection, infiltrations, numerical modeling, pollutant transport through soils

Procedia PDF Downloads 154
575 Bed Evolution under One-Episode Flushing in a Trunk Sewer in Paris, France

Authors: Gashin Shahsavari, Gilles Arnaud-Fassetta, Alberto Campisano, Roberto Bertilotti, Fabien Riou

Abstract:

Sewer deposits have been identified as a major cause of dysfunction in combined sewer systems, inducing various negative consequences such as poor hydraulic conveyance, environmental damage, and risks to workers' health. In order to overcome the problems caused by sedimentation, flushing has been considered the most practical and cost-effective way to minimize sediment impacts and prevent such challenges. Flushing, by prompting turbulent wave effects, can modify the bed form depending on the hydraulic properties and geometrical characteristics of the conduit. So far, the dynamics of the bed load during high-flow events in combined sewer systems, a complex environment, are not well understood, mostly due to the lack of measuring devices capable of working correctly in the 'hostile' conditions of a combined sewer system. In this regard, a one-episode flush released from an opening gate valve with a weir function was carried out in a trunk sewer in Paris to investigate its cleansing efficiency on the sediments (thickness: 0-30 cm). During more than 1 h of flushing, a maximum flow rate of 4.1 m3/s and a maximum water level of 2.1 m were recorded 5 m downstream of the gate. This paper aims to evaluate the efficiency of this type of gate over about 1.1 km (from 50 m upstream to 1050 m downstream of the gate) by (i) determining the bed grain-size distribution and the evolution of the sediments along the sewer channel, as well as their organic matter content, and (ii) identifying the sections that exhibit the greatest changes in texture after the flush. For the first objective, two series of samples were taken along the sewer and analyzed in the laboratory, one before the flush and one after, at the same points along the channel; a non-intrusive sampling instrument was used to extract the sediments finer than fine gravel. The comparison between the sediment texture after the flush operation and the initial state revealed the zones most modified by the flush effect, with respect to the sewer invert slope and the hydraulic parameters, in the zone up to 400 m from the gate. At this distance, despite the increase in the range of grain sizes, D50 (median grain size) varied between 0.6 mm and 1.1 mm before the flush and between 0.8 mm and 10 mm after it. Overall, with respect to the sewer channel invert slope, the results indicate that grains smaller than sand (< 2 mm) were largely transported downstream along roughly the first 400 m from the gate: on average 69% of the bed material before the flush against 38% after it, with a wider dispersion of grain-size distributions. Furthermore, a strong effect of the channel bed irregularities on the evolution of the bed material was observed after the flush.
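For reference, the reported D50 values come from the cumulative grain-size distribution of each sample; the sketch below interpolates D50 on a logarithmic size scale from an illustrative pre-flush distribution (the sieve sizes and percentages are made up, chosen only to be of the same order as the pre-flush figures quoted above).

```python
import numpy as np

def d50(sieve_sizes_mm, percent_finer):
    """Median grain size D50 by log-linear interpolation of the
    cumulative grain-size distribution (percent finer by mass)."""
    sizes = np.asarray(sieve_sizes_mm, dtype=float)
    finer = np.asarray(percent_finer, dtype=float)
    order = np.argsort(finer)                      # interpolation needs ascending x
    return 10 ** np.interp(50.0, finer[order], np.log10(sizes)[order])

# Illustrative pre-flush sample: 69% of the mass finer than 2 mm (sand and finer).
sizes = [0.063, 0.25, 0.5, 2.0, 6.3, 20.0]
finer = [8, 28, 42, 69, 88, 100]
print(f"D50 ~ {d50(sizes, finer):.2f} mm")         # ~0.75 mm for these inputs
```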

Keywords: bed-load evolution, combined sewer systems, flushing efficiency, sediments transport

Procedia PDF Downloads 402
574 Analyzing Political Cartoons in Arabic-Language Media after Trump's Jerusalem Move: A Multimodal Discourse Perspective

Authors: Inas Hussein

Abstract:

Communication in the modern world is increasingly multimodal due to globalization and the digital space we live in, which have remarkably affected how people communicate. Accordingly, Multimodal Discourse Analysis (MDA) is an emerging paradigm in discourse studies, with the underlying assumption that other semiotic resources, such as images, colours, scientific symbolism, gestures, actions, music and sound, combine with language in order to communicate meaning. One effective multimodal medium that combines verbal and non-verbal elements to create meaning is the political cartoon. Furthermore, since political and social issues are mirrored in political cartoons, these are regarded as potential objects of discourse analysis, since they not only reflect the thoughts of the public but also have the power to influence them. The aim of this paper is to analyze selected cartoons on the recognition of Jerusalem as Israel's capital by the American President, Donald Trump, adopting a multimodal approach. More specifically, the present research examines how the various semiotic tools and resources utilized by the cartoonists function in projecting the intended meaning. Ten political cartoons, drawn from a surge of editorial cartoons highlighted by the Anti-Defamation League (ADL), an international Jewish non-governmental organization based in the United States, as publications in different Arabic-language newspapers in Egypt, Saudi Arabia, the UAE, Oman, Iran and the UK, were purposively selected for semiotic analysis. These editorial cartoons, all published during 6-18 December 2017, invariably suggest one theme: Jewish and Israeli domination of the United States. The data were analyzed using the framework of Visual Social Semiotics. In accordance with this methodological framework, the selected visual compositions were analyzed in terms of three aspects of meaning: representational, interactive and compositional. In analyzing the selected cartoons, an interpretative approach is adopted; this approach prioritizes depth over breadth and enables insightful analyses of the chosen cartoons. The findings of the study reveal that semiotic resources are key elements of political cartoons due to the inherent political communication they convey. It is shown that adequate interpretation of the three aspects of meaning is a prerequisite for understanding the intended meaning of political cartoons. It is recommended that further research be conducted to provide more insightful analyses of political cartoons from a multimodal perspective.

Keywords: Multimodal Discourse Analysis (MDA), multimodal text, political cartoons, visual modality

Procedia PDF Downloads 239
573 The Effect of Physical Guidance on Learning a Tracking Task in Children with Cerebral Palsy

Authors: Elham Azimzadeh, Hamidollah Hassanlouei, Hadi Nobari, Georgian Badicu, Jorge Pérez-Gómez, Luca Paolo Ardigò

Abstract:

Children with cerebral palsy (CP) have weak physical abilities, and their limitations may affect the performance of everyday motor activities. One of the most important and common debilitating factors in CP is the malfunction of the upper extremities in performing motor skills, and there is strong evidence that task-specific training may improve general upper limb function in this population. Augmented feedback enhances the acquisition and learning of a motor task; however, practice conditions may alter task difficulty, e.g., a reduced frequency of physical guidance (PG) could be more challenging for this population when learning a motor task. The purpose of this study was therefore to investigate the effect of PG on learning a tracking task in children with CP. Twenty-five independently ambulant children with spastic hemiplegic CP aged 7-15 years were randomly assigned to five groups. After the pre-test, the experimental groups participated in an intervention of eight sessions with 12 trials per session. The 0% PG group received no PG; the 25% PG group received PG on three trials; the 50% PG group on six trials; the 75% PG group on nine trials; and the 100% PG group on all 12 trials. PG consisted of the experimenter placing a hand around the child's hand, guiding them to stay on track and complete the task. Learning was inferred from acquisition and delayed retention tests. These tests involved two blocks of 12 trials of the tracking task performed by all participants without any PG. Participants were asked to make the movement as accurately as possible (i.e., with fewer errors), and the total number of touches (errors) over the 24 trials was taken as the test score. The results showed that a higher frequency of PG led to more accurate performance during the practice phase. However, the group that received 75% PG performed significantly better than the other groups in the retention phase. It is concluded that the frequency of PG plays a critical role in learning a tracking task in children with CP, and that this population likely benefits from an optimal level of PG that provides an appropriate amount of information, consistent with the challenge point framework (CPF), which states that too much or too little information will retard the learning of a motor skill. Therefore, an optimum level of PG may help these children to identify appropriate motor-skill patterns using the extrinsic information they receive through PG and to improve learning by activating intrinsic feedback mechanisms.

Keywords: cerebral palsy, challenge point framework, motor learning, physical guidance, tracking task

Procedia PDF Downloads 67
572 Botulinum Toxin A in the Treatment of Late Facial Nerve Palsy Complications

Authors: Akulov M. A., Orlova O. R., Zaharov V. O., Tomskij A. A.

Abstract:

Introduction: One of the common postoperative complications of surgery for posterior cranial fossa (PCF) and cerebellopontine angle tumors is facial nerve palsy, which leads to multiple, treatment-resistant impairments of mimic muscle structure and function. Within 4-6 months of facial nerve palsy, patients receiving insufficient therapeutic intervention develop a postparalytic syndrome, which includes symptoms such as mimic muscle insufficiency, mimic muscle contractures, synkinesis and spontaneous muscular twitching. A novel method of treatment is the use of a local neuromuscular blocking agent, botulinum toxin A (BTA). Experience with BTA treatment supports the assumption that it can be used successfully in late facial nerve palsy complications to significantly increase patients' quality of life. Study aim: To evaluate the efficacy of botulinum toxin A (BTA) (Xeomin) treatment in patients with late facial nerve palsy complications. Patients and Methods: 31 patients aged 27-59 years were evaluated 6 months after the development of facial nerve palsy. All patients received conventional treatment, including massage, movement therapy, etc. Facial nerve palsy developed after acoustic nerve tumor resection in 23 (74.2%) patients and after petroclival meningioma resection in 8 (25.8%) patients. The first group comprised 17 (54.8%) patients receiving BT therapy; the second group comprised 14 (45.2%) patients continuing conventional treatment. BT injections were performed at synkinesis or contracture points, 1-2 U on the injured side and 2-4 U on the healthy side (for symmetry). Facial nerve function was evaluated at 2 and 4 months of therapy according to the House-Brackmann scale. Pain syndrome alleviation was assessed on a visual analogue scale (VAS). Results: At baseline, all patients in both groups demonstrated a postparalytic syndrome. We observed a significant improvement in patients receiving BTA after only one month of treatment. The mean VAS score at baseline was 80.4±18.7 and 77.9±18.2 in the first and second group, respectively. In the first group, after one month of treatment, we observed a significant decrease in pain: the mean VAS score was 44.7±10.2 (p<0.01), whereas in the second group the VAS score remained as high as 61.8±9.4 points (p>0.05). By the third month of treatment, pain intensity continued to decrease in both groups, but the first group demonstrated significantly better results: the mean score was 8.2±3.1 and 31.8±4.6 in the first and second group, respectively (p<0.01). The total House-Brackmann score at baseline was 3.67±0.16 in the first group and 3.74±0.19 in the second group. Treatment resulted in a significant improvement of symptoms in the first group, with no improvement in the second group. After 4 months of treatment, the House-Brackmann score in the first group was 3.1-fold lower than in the second group (p<0.05). Conclusion: Botulinum toxin injections decrease postparalytic syndrome symptoms in patients with facial nerve palsy.

Keywords: botulinum toxin, facial nerve palsy, postparalytic syndrome, synkinesis

Procedia PDF Downloads 295
571 Spatial Variability of Renieramycin-M Production in the Philippine Blue Sponge, Xestospongia Sp.

Authors: Geminne Manzano, Porfirio Aliño, Clairecynth Yu, Lilibeth Salvador-Reyes, Viviene Santiago

Abstract:

Many marine benthic organisms produce secondary metabolites that serve ecological roles in response to different biological and environmental factors. The secondary metabolites found in organisms such as algae, sponges, tunicates and worms exhibit variation at different scales. Understanding this chemical variation can be essential in deriving the evolutionary and ecological functions of the secondary metabolites that may explain their patterns. Ecological surveys were performed at two collection sites representing two Philippine marine biogeographic regions - Oriental Mindoro on the West Philippine Sea (WPS) and Zamboanga del Sur on the Celebes Sea (CS) - where a total of 39 Xestospongia sp. sponges were collected using SCUBA. The sponge samples were transported to the laboratory for taxonomic identification and chemical analysis. Biological and environmental factors were investigated to determine their relation to the abundance and distribution patterns of the sponges and to the spatial variability of their secondary metabolite production. Extracts were subjected to thin-layer chromatography (TLC) and anti-proliferative assays to confirm the presence of Renieramycin-M and to test its cytotoxicity. The blue sponges were found to be more abundant in the WPS than in the CS. Both the benthic and fish communities at the Oriental Mindoro (WPS) and Zamboanga del Sur (CS) sites are characterized by high species diversity and abundance and a very high biomass category. Environmental factors such as depth and monsoonal exposure were also compared, showing that wave exposure and depth are associated with the abundance and distribution of the sponges. TLC profiles of sponge extracts from the WPS and the CS differed in the presence of Renieramycin-M, and differences in other functional groups were also observed between the two sites. In terms of bioactivity, the sponge extracts from the two regions also exhibited different responses, which further depended on the cell lines tested. Exploring the influence of ecological parameters on chemical variation can provide deeper chemical ecological insights and reveal potential applications at different scales. The results of this study provide further impetus for pursuing studies into the patterns and processes of the chemical diversity of the Philippine blue sponge, Xestospongia sp., and its chemical ecological significance in the coral triangle.

Keywords: chemical ecology, porifera, renieramycin-m, spatial variability, Xestospongia sp.

Procedia PDF Downloads 210
570 Phytochemical Composition and Biological Activities of the Vegetal Extracts of Six Aromatic and Medicinal Plants of Algerian Flora and Their Uses in Food and Pharmaceutical Industries

Authors: Ziani Borhane Eddine Cherif, Hazzi Mohamed, Mouhouche Fazia

Abstract:

Vegetal extracts of aromatic and medicinal plants are attracting increasing interest as potential sources of natural bioactive molecules. Many of their properties are conferred by the chemical functions of their major constituents (phenols, alcohols, aldehydes, ketones). This potential led us to focus on three main biological activities - the antioxidant, antibiotic and insecticidal activities - of six Algerian aromatic plants, with the aim of identifying, by chromatographic analysis (GC and GC/MS), the phytochemical compounds implicated in these effects. Oxygenated monoterpenes represented the most prominent group of constituents in the majority of the plants, with α-terpineol (28.3%), carvacrol (47.3%), pulegone (39.5%), chrysanthenone (27.4%), thymol (23.9%), γ-terpinene (23.9%) and 2-undecanone (94%) as the main components. The antioxidant activity of the essential oils and non-volatile extracts was evaluated in vitro using four tests: inhibition of the free radical 2,2-diphenyl-1-picrylhydrazyl (DPPH), the 2,2'-azino-bis(3-ethylbenzthiazoline-6-sulphonic acid) (ABTS•+) radical-scavenging assay, the thiobarbituric acid reactive substances (TBARS) assay, and reducing power. The IC50 values of these natural compounds revealed potent activity (254.64-462.76 mg.l-1), close to that of BHT, BHA, tocopherol and ascorbic acid (126.4-369.1 mg.l-1), though far from that of Trolox (IC50 = 2.82 mg.l-1). Furthermore, three ethanol extracts were found to be remarkably effective towards DPPH and ABTS inhibition compared to the chemical antioxidants BHA and BHT (IC50 = 9.8±0.1 and 28±0.7 mg.l-1, respectively), and they also exhibited high activity in the reducing power test. The study of insecticidal activity, assessed by contact, inhalation and effects on the fecundity and fertility of Callosobruchus maculatus and Tribolium confusum, showed strong biocidal potential, reaching 95-100% mortality after only 24 hours. The antibiotic activity of the essential oils was evaluated by a qualitative study (aromatogram) and a quantitative one (MIC, MBC and MLC) on four bacteria (Gram-positive and Gram-negative) and one strain of pathogenic yeast; the results of these tests showed stronger action than that induced by the reference antibiotics (gentamicin, ceftazidime and nystatin), with inhibition diameters and MIC values for the tested microorganisms in the ranges of 23-58 mm and 0.015-0.25% (v/v), respectively.

Keywords: aromatic plants, essential oils, no-volatils extracts, bioactive molecules, antioxidant activity, insecticidal activity, antibiotic activity

Procedia PDF Downloads 219
569 Dynamic Wetting and Solidification

Authors: Yulii D. Shikhmurzaev

Abstract:

The modelling of non-isothermal free-surface flows coupled with solidification has become a topic of intensive research with the advent of additive manufacturing, where complex 3-dimensional structures are produced by the successive deposition and solidification of microscopic droplets of different materials. The issue is that both the spreading of liquids over solids and the propagation of the solidification front into the fluid and along the solid substrate pose fundamental difficulties for mathematical modelling. The first of these processes, known as 'dynamic wetting', leads to the well-known 'moving contact-line problem' where, as shown recently both experimentally and theoretically, the contact angle formed by the free surface with the solid substrate is not a function of the contact-line speed but rather a functional of the flow field. The modelling of the propagating solidification front requires a generalization of the classical Stefan problem that can describe the onset of the process and the non-equilibrium regime of solidification. Furthermore, given that dynamic wetting and solidification occur concurrently and interactively, they should be described within the same conceptual framework. The present work addresses this formidable problem and presents a mathematical model capable of describing the key element of additive manufacturing in a self-consistent and singularity-free way. The model is illustrated by simple examples highlighting its main features. The main idea of the work is that both dynamic wetting and solidification, as well as some other fluid flows, are particular cases in a general class of flows where interfaces form and/or disappear. This conceptual framework allows one to derive a mathematical model from first principles using the methods of irreversible thermodynamics. Crucially, the interfaces are not considered as zero-mass entities introduced via a Gibbsian 'dividing surface' but as 2-dimensional surface phases produced by the continuum limit in which the thickness of what is physically an interfacial layer vanishes, with their properties characterized by 'surface' parameters (surface tension, surface density, etc.). This approach allows for mass exchange between the surface and bulk phases, which is the essence of interface formation. As shown numerically, the onset of solidification is preceded by a pure interface-formation stage, whilst the Stefan regime is the final stage, in which the temperature at the solidification front asymptotically approaches the solidification temperature. The developed model can also be applied to flows with substrate melting as well as to complex flows where both types of phase transition take place.
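For orientation, the classical Stefan problem that this model generalizes couples an energy balance at the solidification front Γ with the equilibrium condition that the front sits at the melting temperature (the notation below is introduced here for illustration and is not taken from the abstract):

\[
\rho L\, v_n \;=\; k_s \left.\frac{\partial T_s}{\partial n}\right|_{\Gamma} - k_l \left.\frac{\partial T_l}{\partial n}\right|_{\Gamma},
\qquad T_s = T_l = T_m \ \text{on } \Gamma,
\]

where ρ is the density, L the latent heat of fusion, v_n the normal velocity of the front, k_s and k_l the thermal conductivities of the solid and liquid phases, and T_m the equilibrium melting temperature (signs depend on the orientation of the normal n). It is precisely the last condition, T = T_m at the front, that fails at the onset of solidification and in the non-equilibrium regime addressed by the generalized model.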

Keywords: dynamic wetting, interface formation, phase transition, solidification

Procedia PDF Downloads 64
568 Spectrogram Pre-Processing to Improve Isotopic Identification to Discriminate Gamma and Neutron Sources

Authors: Mustafa Alhamdi

Abstract:

An industrial application for classifying gamma-ray and neutron events is investigated in this study using deep machine learning. Identification using convolutional and recursive neural networks has shown significant improvements in prediction accuracy in a variety of applications. The ability to identify isotope type and activity from spectral information depends on feature extraction methods followed by classification. The features extracted from the spectrum profiles aim to capture patterns and relationships that represent the actual spectrum energy in a low-dimensional space; increasing the separation between classes in feature space improves the achievable classification accuracy. Feature extraction by neural networks is nonlinear and involves a variety of transformations and mathematical optimizations, whereas principal component analysis relies on linear transformations to extract features and thereby improve classification accuracy. In this paper, the isotope spectrum information was preprocessed by extracting its frequency components over time and using them as the training dataset. The Fourier transform used to extract the frequency components was optimized with a suitable windowing function. Training and validation samples of different isotope profiles interacting with a CdTe crystal were simulated using Geant4, and the readout electronic noise was simulated by tuning the mean and variance of a normal distribution. Ensemble learning, combining the votes of many models, further improved the classification accuracy of the neural networks. Discriminating gamma and neutron events in a single prediction approach has thus shown high accuracy with deep learning. The findings show that classification accuracy can be improved by applying the spectrogram preprocessing stage to the gamma and neutron spectra of different isotopes. Tuning the deep learning models by hyperparameter optimization enhanced the separation in the latent space and made it possible to extend the number of detected isotopes in the training database, while ensemble learning contributed significantly to improving the final prediction.
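As a rough illustration of the time-frequency preprocessing described above (a sketch only, not the authors' Geant4/CdTe pipeline; the sampling rate, pulse shape, noise level and segment length are assumptions made for demonstration), a windowed Fourier transform of a detector pulse can be computed as follows:

```python
# Sketch: spectrogram features from a synthetic detector pulse.
# All numbers here are illustrative assumptions, not values from the study.
import numpy as np
from scipy.signal import spectrogram

fs = 1_000_000                                   # assumed sampling rate (1 MHz)
t = np.arange(0, 0.01, 1 / fs)

# Synthetic "pulse": damped oscillation plus Gaussian readout noise
pulse = np.exp(-2000 * t) * np.sin(2 * np.pi * 50_000 * t)
noise = np.random.normal(loc=0.0, scale=0.05, size=t.size)
signal = pulse + noise

# Short-time Fourier transform with a Hann window to limit spectral leakage
freqs, seg_times, Sxx = spectrogram(signal, fs=fs, window='hann',
                                    nperseg=256, noverlap=128)

# Flatten the log-spectrogram into a feature vector for a downstream classifier
features = np.log1p(Sxx).ravel()
print(features.shape)
```

Stacking such feature vectors for many simulated pulses would form the kind of training dataset on which the convolutional and recursive networks mentioned in the abstract are trained.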

Keywords: machine learning, nuclear physics, Monte Carlo simulation, noise estimation, feature extraction, classification

Procedia PDF Downloads 150
567 A Seven Year Single-Centre Study of Dental Implant Survival in Head and Neck Oncology Patients

Authors: Sidra Suleman, Maliha Suleman, Stephen Brindley

Abstract:

Oral rehabilitation of head and neck cancer patients plays a crucial role in their quality of life after treatment. Placement of dental implants or implant-retained prostheses can help restore oral function and aesthetics, which are often compromised following surgery. Conventional prosthodontic techniques can be insufficient for rehabilitating such patients because of their altered anatomy and reduced oral competence; hence, there is a strong clinical need for dental implants, and with the increasing incidence of head and neck cancer the demand for such treatment is rising. Aim: The aim of the study was to determine the survival rate of dental implants placed in head and neck cancer patients at the Restorative and Maxillofacial Department, Royal Stoke University Hospital (RSUH), United Kingdom. Methodology: All patients who received dental implants between January 1, 2013 and December 31, 2020 were identified. Patients were excluded based on three criteria: 1) non-head and neck cancer patients, 2) no outpatient follow-up after implant placement, and 3) provision of non-dental implants. Scanned paper notes and electronic records were extracted and analyzed. Implant survival was defined as fixtures that had remained in situ and had not required removal. Sample: Overall, 61 individuals were recruited from the 143 patients identified. The mean age was 64.9 years, with a range of 35-89 years. The sample included 37 (60.7%) males and 24 (39.3%) females. In total, 211 implants were placed, of which 40 (19.0%) were in the maxilla, 152 (72.0%) in the mandible and 19 (9.0%) in autogenous bone graft sites. Histologically, 57 (93.4%) patients had squamous cell carcinoma, with 43 (70.5%) patients having either stage IVA or IVB disease. As part of treatment, 42 (68.9%) patients received radiotherapy, which was post-operative in 29 (69.0%) cases, whereas 21 (34.4%) patients underwent chemotherapy, 13 (61.9%) of them post-operatively. The median follow-up period was 21.9 months, with a range of 0.9-91.4 months. During the study, 23 (37.7%) patients died, and their data were censored beyond the date of death. Results: In total, four patients who had received radiotherapy had one implant failure each: two mandibular implants failed secondary to osteoradionecrosis, and two maxillary implants failed to osseointegrate. The overall implant survival rates were 99.1% at three years and 98.1% at both 5 and 7 years. Conclusions: Although these data show that implant failure rates are low, they highlight the difficulty in predicting which patients will be affected. Future studies involving larger cohorts are warranted to further analyze the factors affecting outcomes.
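The survival rates quoted above depend on handling censored follow-up (for example, patients who died with their implants still in situ). A minimal Kaplan-Meier sketch of that kind of calculation is shown below; the durations and event indicators are invented for illustration and are not the study's data:

```python
# Sketch: Kaplan-Meier (product-limit) survival estimate with right-censoring.
# The durations (months) and event flags below are illustrative only.
import numpy as np

durations = np.array([12.0, 21.9, 35.0, 40.0, 55.0, 60.0, 84.0, 91.4])
events    = np.array([0,    1,    0,    1,    0,    0,    0,    0])  # 1 = implant failure

order = np.argsort(durations)
durations, events = durations[order], events[order]

survival, curve, n_at_risk = 1.0, [], len(durations)
for t, e in zip(durations, events):
    if e == 1:
        survival *= (n_at_risk - 1) / n_at_risk   # product-limit step at a failure
        curve.append((t, survival))
    n_at_risk -= 1                                # subject leaves the risk set

def survival_at(month):
    s = 1.0
    for t, p in curve:
        if t <= month:
            s = p
    return s

for m in (36, 60, 84):                            # 3, 5 and 7 years
    print(f'{m} months: {survival_at(m):.3f}')
```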

Keywords: oncology, dental implants, survival, restorative

Procedia PDF Downloads 231
566 Occurrence of Half-Metallicity by Sb-Substitution in Non-Magnetic Fe₂TiSn

Authors: S. Chaudhuri, P. A. Bhobe

Abstract:

Fe₂TiSn is a non-magnetic full Heusler alloy with a small gap (~0.07 eV) at the Fermi level. The electronic structure is highly symmetric in both spin bands, and a small degree of hole or electron substitution can push the system towards spin polarization. A stable 100% spin polarization, or half-metallicity, is highly desirable in the field of spintronics, making Fe₂TiSn an attractive material. However, this composition suffers from an inherent anti-site disorder between the Fe and Ti sites. This paper reports on the method adopted to control the anti-site disorder and on the realization of a half-metallic ground state in Fe₂TiSn, achieved by chemical substitution. Here, Sb was substituted at the Sn site to obtain Fe₂TiSn₁₋ₓSbₓ compositions with x = 0, 0.1, 0.25, 0.5 and 0.6. All prepared compositions with x ≤ 0.6 exhibit long-range L2₁ ordering and a decrease in Fe-Ti anti-site disorder. The transport and magnetic properties of the Fe₂TiSn₁₋ₓSbₓ compositions were investigated as a function of temperature in the range 5-400 K; electrical resistivity, magnetization, and Hall voltage measurements were carried out. All experimental results indicate the presence of a half-metallic ground state in the x ≥ 0.25 compositions. However, the value of the saturation magnetization is small, indicating the presence of compensated magnetic moments; the observed moments are in close agreement with the Slater-Pauling rule for half-metallic systems. Magnetic interactions in Fe₂TiSn₁₋ₓSbₓ are understood from a local crystal-structure perspective using extended X-ray absorption fine structure (EXAFS) spectroscopy. The changes in bond distances extracted from the EXAFS analysis can be correlated with the hybridization between constituent atoms and hence with the RKKY-type magnetic interactions that govern the magnetic ground state of these alloys. To complement the experimental findings, first-principles electronic structure calculations were also undertaken. The spin-polarized DOS is consistent with the experimental results for Fe₂TiSn₁₋ₓSbₓ: substitution of Sb (an electron-excess element) at the Sn site shifts the majority spin band to the lower-energy side of the Fermi level, making the system 100% spin polarized and inducing long-range magnetic order in otherwise non-magnetic Fe₂TiSn. The present study concludes that a stable half-metallic system can be realized in Fe₂TiSn with ≥ 50% Sb substitution at the Sn site.
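To put the Slater-Pauling rule invoked above in concrete terms (a generic valence-electron-count illustration, not the moments reported in the study), the total spin moment of a half-metallic full Heusler alloy follows

\[
M_t = (Z_t - 24)\,\mu_B \quad \text{per formula unit},
\]

where Z_t is the total number of valence electrons per formula unit. For Fe₂TiSn, Z_t = 2(8) + 4 + 4 = 24, giving M_t = 0 (non-magnetic); replacing half of the Sn (4 valence electrons) by Sb (5 valence electrons) gives Z_t = 24.5 and an expected moment of only 0.5 μB per formula unit, consistent with the small saturation magnetization noted in the abstract.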

Keywords: antisite disorder, EXAFS, Full Heusler alloy, half metallic ferrimagnetism, RKKY interactions

Procedia PDF Downloads 136
565 Ultra-deformable Drug-free Sequessome™ Vesicles (TDT 064) for the Treatment of Joint Pain Following Exercise: A Case Report and Clinical Data

Authors: Joe Collins, Matthias Rother

Abstract:

Background: Oral non-steroidal anti-inflammatory drugs (NSAIDs) are widely used for the relief of joint pain during and post-exercise. However, oral NSAIDs increase the risk of systemic side effects, even in healthy individuals, and retard recovery from muscle soreness. TDT 064 (Flexiseq®), a topical formulation containing ultra-deformable drug-free Sequessome™ vesicles, has demonstrated equivalent efficacy to oral celecoxib in reducing osteoarthritis-associated joint pain and stiffness. TDT 064 does not cause NSAID-related adverse effects. We describe clinical study data and a case report on the effectiveness of TDT 064 in reducing joint pain after exercise. Methods: Participants with a pain score ≥3 (10-point scale) 12–16 hours post-exercise were randomized to receive TDT 064 plus oral placebo, TDT 064 plus oral ketoprofen, or ketoprofen in ultra-deformable phospholipid vesicles plus oral placebo. Results: In the 168 study participants, pain scores were significantly higher with oral ketoprofen plus TDT 064 than with TDT 064 plus placebo in the 7 days post-exercise (P = 0.0240), and recovery from muscle soreness was significantly longer (P = 0.0262). There was a low incidence of adverse events. These data are supported by clinical experience. A 24-year-old male professional rugby player suffered a traumatic Lisfranc fracture in March 2014 and underwent operative reconstruction. He had no relevant medical history and was not receiving concomitant medications, though he had undergone anterior cruciate ligament reconstruction in 2008. The patient reported restricted training due to pain (score 7/10), stiffness (score 9/10) and poor function, as well as pain when changing direction and running on consecutive days. In July 2014 he started using TDT 064 twice daily at the recommended dose. In November 2014 he noted reduced pain on running (score 2-3/10), decreased morning stiffness (score 4/10) and improved joint mobility, and he was able to return to competitive rugby without restrictions. No side effects of TDT 064 were reported. Conclusions: TDT 064 shows efficacy against exercise- and injury-induced joint pain, as well as that associated with osteoarthritis. It does not retard muscle soreness recovery after exercise compared with an oral NSAID, making it an alternative approach for the treatment of joint pain during and post-exercise.

Keywords: exercise, joint pain, TDT 064, phospholipid vesicles

Procedia PDF Downloads 479
564 The Facilitatory Effect of Phonological Priming on Visual Word Recognition in Arabic as a Function of Lexicality and Overlap Positions

Authors: Ali Al Moussaoui

Abstract:

An experiment was designed to assess the performance of 24 Lebanese adults (mean age 29:5 years) in a lexical decision making (LDM) task, in order to find out how the facilitatory effect of phonological priming (PP) on the speed of visual word recognition in Arabic varies with lexicality (wordhood) and phonological overlap position (POP). The experiment falls in line with previous research on phonological priming in the light of the cohort theory and in relation to visual word recognition; it also builds on research on the Arabic language confirming the importance of the consonantal root as a distinct morphological unit. Based on previous research, it is hypothesized that (1) PP has a facilitating effect in LDM with words but not with nonwords, and (2) final phonological overlap between the prime and the target is more facilitatory than initial overlap. An LDM task was programmed in the PsychoPy application. Participants had to decide whether a target (e.g., bayn 'between') preceded by a prime (e.g., bayt 'house') is a word or not. There were 4 conditions: no PP (NP), nonwords priming nonwords (NN), nonwords priming words (NW), and words priming words (WW). The conditions were simultaneously controlled for word length, wordhood, and POP. The interstimulus interval was 700 ms. Within the PP conditions, POP was controlled such that there were 3 overlap positions between the primes and the targets: initial (e.g., asad 'lion' and asaf 'sorrow'), final (e.g., kattab 'cause to write' 2sg-mas and rattab 'organize' 2sg-mas), or two-segmented (e.g., namle 'ant' and naħle 'bee'). There were 96 trials, 24 in each condition, using a within-subject design. The results show that, concerning (1), the highest average reaction time (RT) is that in NN, followed first by NW and finally by WW; there is statistical significance only between the pairs NN-NW and NN-WW. Regarding (2), the shortest RT is that in the two-segmented overlap condition, followed by the final POP and then the initial POP. The difference between the two-segmented and the initial overlap is significant, while the other pairwise comparisons are not. Based on these results, PP emerges as a facilitatory phenomenon that is highly sensitive to lexicality and POP. While PP can have a facilitating effect under lexicality, it shows no facilitation in its absence, which intersects with several previous findings. Participants were found to be more sensitive to final phonological overlap than to initial overlap, which also coincides with a body of earlier literature. The results contradict the cohort theory's stress on the onset overlap position and instead give more weight to final overlap, and even heavier weight to two-segmented overlap. In conclusion, this study confirms the facilitating effect of PP with words but not when the stimuli (at least the primes, and at most both the primes and the targets) are nonwords. It also shows that two-segmented priming is the most influential in LDM in Arabic.
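A minimal sketch of one prime-target trial of the kind described (not the authors' actual script; the prime duration, window settings and response keys are assumptions, while the 700 ms interstimulus interval and the example prime-target pair come from the abstract):

```python
# Sketch: a single lexical decision trial with a phonological prime in PsychoPy.
from psychopy import visual, core, event

win = visual.Window(size=(800, 600), color='white', units='pix')
clock = core.Clock()

prime_text, target_text = 'bayt', 'bayn'        # example pair from the abstract

prime = visual.TextStim(win, text=prime_text, color='black', height=40)
target = visual.TextStim(win, text=target_text, color='black', height=40)

# Present the prime (duration assumed), then a blank 700 ms interstimulus interval
prime.draw(); win.flip(); core.wait(0.5)
win.flip(); core.wait(0.7)

# Present the target and record the lexical decision ('w' = word, 'n' = nonword)
target.draw(); win.flip(); clock.reset()
key, rt = event.waitKeys(keyList=['w', 'n'], timeStamped=clock)[0]
print(f'response={key}, RT={rt * 1000:.0f} ms')

win.close(); core.quit()
```

In the full experiment, such a trial would be repeated for the 96 prime-target pairs across the NP, NN, NW and WW conditions, with reaction times averaged per condition.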

Keywords: lexicality, phonological overlap positions, phonological priming, visual word recognition

Procedia PDF Downloads 183
563 Biofilm Text Classifiers Developed Using Natural Language Processing and Unsupervised Learning Approach

Authors: Kanika Gupta, Ashok Kumar

Abstract:

Biofilms are dense, highly hydrated cell clusters that are irreversibly attached to a substratum, to an interface or to each other, and are embedded in a self-produced gelatinous matrix composed of extracellular polymeric substances. Research in the biofilm field has become very significant, as biofilms show high mechanical resilience and resistance to antibiotic treatment and constitute a significant problem both in healthcare and in other industries related to microorganisms. The massive amount of information, both stated and hidden, in the biofilm literature is growing exponentially; it is therefore not feasible for researchers and practitioners to extract and relate information from different written resources without automated support. The current work thus proposes and discusses the use of text mining techniques for the extraction of information from a biofilm literature corpus containing 34,306 documents. It is very difficult and expensive to obtain annotated material for biomedical literature, as the literature is unstructured, i.e. free text. We therefore adopted an unsupervised approach, in which no annotated training data are necessary, and used it to develop a system that classifies the text according to growth and development, drug effects, radiation effects, classification, and physiology of biofilms. A two-step structure was used: the first step extracts keywords from the biofilm literature using a metathesaurus and standard natural language processing tools such as RapidMiner v5.3, and the second step discovers relations between the genes extracted from the whole set of biofilm literature using pubmed.mineR v1.0.11. The unsupervised approach - the machine learning task of inferring a function to describe hidden structure from unlabeled data - was applied to the extracted datasets to develop classifiers, implemented with WinPython 64-bit v3.5.4.0Qt5 and RStudio v0.99.467, which automatically classify the text into the categories above. The developed classifiers were tested on a large dataset of biofilm literature, which showed that the proposed unsupervised approach is promising and well suited to semi-automatic labeling of the extracted relations. All information was stored in a relational database hosted locally on the server. The generated biofilm vocabulary and gene relations will be significant for researchers dealing with biofilm research, making their searches easy and efficient, as the keywords and genes can be directly mapped to the documents used for database development.
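As an illustration of the unsupervised classification idea (a Python/scikit-learn sketch, not the RapidMiner / pubmed.mineR / R pipeline the abstract describes; the sample texts and the choice of three clusters are assumptions for demonstration), documents can be grouped without any labels by clustering TF-IDF keyword vectors:

```python
# Sketch: unsupervised grouping of biofilm abstracts via TF-IDF + k-means.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Tiny made-up corpus standing in for the 34,306-document collection
documents = [
    "Biofilm growth and development on implant surfaces over time",
    "Stages of biofilm development and matrix formation during growth",
    "Antibiotic drug effects on Pseudomonas aeruginosa biofilm viability",
    "Drug penetration and antibiotic resistance within mature biofilms",
    "Radiation effects on biofilm matrix integrity and cell survival",
    "UV radiation damage responses in bacterial biofilms",
]

# Step 1: turn free text into TF-IDF keyword vectors (no annotation needed)
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(documents)

# Step 2: cluster the documents into topical groups
kmeans = KMeans(n_clusters=3, random_state=0, n_init=10)
labels = kmeans.fit_predict(X)

# Inspect the top keywords characterising each cluster
terms = vectorizer.get_feature_names_out()
for i, centre in enumerate(kmeans.cluster_centers_):
    top = centre.argsort()[-4:][::-1]
    print(f"cluster {i}: {[terms[j] for j in top]}")
print("document labels:", labels.tolist())
```

The resulting clusters can then be inspected and given human-readable labels (growth and development, drug effects, radiation effects, and so on), which corresponds to the semi-automatic labeling step the abstract refers to.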

Keywords: biofilms literature, classifiers development, text mining, unsupervised learning approach, unstructured data, relational database

Procedia PDF Downloads 169