Search results for: festival function
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5021

581 Computational Characterization of Electronic Charge Transfer in Interfacial Phospholipid-Water Layers

Authors: Samira Baghbanbari, A. B. P. Lever, Payam S. Shabestari, Donald Weaver

Abstract:

Existing signal transmission models, although undoubtedly useful, have proven insufficient to explain the full complexity of information transfer within the central nervous system. The development of transformative models will necessitate a more comprehensive understanding of neuronal lipid membrane electrophysiology. Pursuant to this goal, the role of highly organized interfacial phospholipid-water layers emerges as a promising case study. A series of phospholipids in neural-glial gap junction interfaces as well as cholesterol molecules have been computationally modelled using high-performance density functional theory (DFT) calculations. Subsequent 'charge decomposition analysis' calculations have revealed a net transfer of charge from phospholipid orbitals through the organized interfacial water layer before ultimately finding its way to cholesterol acceptor molecules. The specific pathway of charge transfer from phospholipid via water layers towards cholesterol has been mapped in detail. Cholesterol is an essential membrane component that is overrepresented in neuronal membranes as compared to other mammalian cells; given this relative abundance, its apparent role as an electronic acceptor may prove to be a relevant factor in further signal transmission studies of the central nervous system. The timescales over which this electronic charge transfer occurs have also been evaluated by utilizing a system design that systematically increases the number of water molecules separating lipids and cholesterol. Memory loss through hydrogen-bonded networks in water can occur at femtosecond timescales, whereas existing action potential-based models are limited to micro or nanosecond scales. As such, the development of future models that attempt to explain faster timescale signal transmission in the central nervous system may benefit from our work, which provides additional information regarding fast timescale energy transfer mechanisms occurring through interfacial water. 
The study's dataset comprises six distinct phospholipids and a collection of cholesterol molecules. Ten optimized geometric characteristics (features) were employed to conduct binary classification through an artificial neural network (ANN), differentiating cholesterol from the various phospholipids. This stems from our understanding that all lipids within the first group function as electronic charge donors, while cholesterol serves as an electronic charge acceptor.
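The binary classification step described above can be sketched as follows: a minimal single-hidden-layer ANN in NumPy. The feature data here are hypothetical stand-ins for the ten geometric features; the paper's actual features, architecture, and training setup are not specified.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 10 geometric features per molecule.
# Class 0 = phospholipid (charge donor), class 1 = cholesterol (acceptor).
n, d = 200, 10
X = rng.normal(size=(n, d))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)  # separable toy labels

# One hidden layer of 8 tanh units, trained by full-batch gradient descent
# on the binary cross-entropy loss.
h = 8
W1 = rng.normal(scale=0.5, size=(d, h)); b1 = np.zeros(h)
W2 = rng.normal(scale=0.5, size=(h, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(500):
    A1 = np.tanh(X @ W1 + b1)           # hidden activations
    p = sigmoid(A1 @ W2 + b2).ravel()   # P(class = cholesterol)
    g = ((p - y) / n)[:, None]          # dLoss/dlogit for cross-entropy
    gh = (g @ W2.T) * (1 - A1 ** 2)     # backprop through tanh layer
    W2 -= lr * (A1.T @ g); b2 -= lr * g.sum(0)
    W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(0)

p = sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2).ravel()
acc = ((p > 0.5).astype(float) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

On these separable toy labels the network fits the data almost perfectly; the point is only the shape of the pipeline (features in, donor/acceptor label out), not the numbers.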

Keywords: charge transfer, signal transmission, phospholipids, water layers, ANN

Procedia PDF Downloads 75
580 Historical Development of Negative Emotive Intensifiers in Hungarian

Authors: Martina Katalin Szabó, Bernadett Lipóczi, Csenge Guba, István Uveges

Abstract:

In this study, an exhaustive analysis was carried out of the historical development of negative emotive intensifiers in the Hungarian language via NLP methods. Intensifiers are linguistic elements which modify or reinforce a variable character in the lexical unit they apply to. Intensifiers therefore appear with other lexical items, such as adverbs, adjectives, verbs and, infrequently, nouns. Due to the complexity of this phenomenon (a set of sociolinguistic, semantic, and historical aspects), there are many lexical items which can operate as intensifiers. The group of intensifiers is admittedly one of the most rapidly changing elements in the language. From a linguistic point of view, a special group of intensifiers is particularly interesting: the so-called negative emotive intensifiers, which, on their own, without context, have semantic content that can be associated with negative emotion but in particular cases may function as intensifiers (e.g. borzasztóan jó 'awfully good', which means 'excellent'). Despite their special semantic features, negative emotive intensifiers are scarcely examined in the literature on the basis of large historical corpora via NLP methods. In order to become better acquainted with trends over time concerning these intensifiers, we exhaustively analysed a specific historical corpus, namely the Magyar Történeti Szövegtár (Hungarian Historical Corpus). This corpus (containing 3 million text words) is a collection of texts of various genres and styles produced between 1772 and 2010. Since the corpus consists of raw texts and does not contain any additional information about the language features of the data (such as stemming or morphological analysis), a large amount of manual work was required to process the data. Thus, based on a lexicon of negative emotive intensifiers compiled in a previous phase of the research, every occurrence of each intensifier was queried, and the results were stored in a separate data frame.
Then, basic linguistic processing (POS-tagging, lemmatization, etc.) was carried out automatically with the 'magyarlanc' NLP toolkit. Finally, the frequency and collocation features of all the negative emotive words were automatically analyzed in the corpus. Outcomes of the research revealed in detail how these words have proceeded through grammaticalization over time, i.e., they change from lexical elements to grammatical ones and slowly go through a delexicalization process (their negative content diminishes over time). What is more, it was also pointed out which negative emotive intensifiers are at the same stage of this process in the same time period. Taking a closer look at the different domains of the analysed corpus, it also became clear that during this process the importance of the pragmatic role increases: the newer use expresses the speaker's subjective, evaluative opinion at a certain level.
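The frequency-and-collocation step can be illustrated with a toy sketch. The miniature corpus, period width, and single-item lexicon below are invented for illustration; the real pipeline queries the full Magyar Történeti Szövegtár after lemmatization with 'magyarlanc'.

```python
from collections import Counter

# Hypothetical miniature corpus: (year, lemmatized tokens) pairs standing in
# for the Magyar Történeti Szövegtár.
corpus = [
    (1795, ["ez", "borzasztó", "vihar"]),
    (1860, ["borzasztóan", "rossz", "idő"]),
    (1925, ["borzasztóan", "jó", "film"]),
    (2005, ["borzasztóan", "jó", "és", "olcsó"]),
]

intensifiers = {"borzasztóan"}  # from the lexicon compiled earlier

# Frequency per 50-year period and right-hand collocates of each intensifier.
freq_by_period = Counter()
collocates = Counter()
for year, tokens in corpus:
    period = (year // 50) * 50
    for i, tok in enumerate(tokens):
        if tok in intensifiers:
            freq_by_period[period] += 1
            if i + 1 < len(tokens):
                collocates[tokens[i + 1]] += 1

print(dict(freq_by_period))          # {1850: 1, 1900: 1, 2000: 1}
print(collocates.most_common(1))     # [('jó', 2)]
```

A shift of the dominant collocate from negative ("rossz" 'bad') to positive ("jó" 'good') adjectives across periods is exactly the delexicalization signal the study tracks.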

Keywords: historical corpus analysis, historical linguistics, negative emotive intensifiers, semantic changes over time

Procedia PDF Downloads 233
579 Method for Controlling the Groundwater Polluted by the Surface Waters through Injection Wells

Authors: Victorita Radulescu

Abstract:

Introduction: The optimum exploitation of agricultural land in the presence of an aquifer polluted by surface sources requires close monitoring of the groundwater level, both in periods of intense irrigation and in the absence of irrigation, in times of drought. Currently in Romania, in the south of the country (the Baragan area), many agricultural lands face the risk of groundwater pollution in the absence of systematic irrigation, correlated with climate change. Basic Methods: The non-steady flow of groundwater in an aquifer can be described by Boussinesq's partial differential equation. The finite element method was used, applied to the porous medium, for the water mass balance equation. Through a proper structure of the initial and boundary conditions, the flow in drainage or injection systems of wells can be modeled, according to the period of irrigation or prolonged drought. The boundary conditions consist of the groundwater levels required at the margins of the analyzed area, in conformity with the real behaviour of the pollutant emissaries, following the method of double steps. Major Findings/Results: The drainage condition is equivalent to operating regimes on the two or three rows of wells, with negative (extraction) flow rates so as to ensure pollutant transport, modeled with variable flow in groups of two adjacent nodes. In order to obtain a water table level in accordance with the real constraints, it is necessary, for example, to restrict its top level below an imposed value, required at each node. The objective function consists of a sum of the absolute values of the differences of the infiltration flow rates, increased by a large penalty factor where positive values of pollutant occur. Under these conditions, a balanced structure of the pollutant concentration is maintained in the groundwater. The parameters modified during the optimization process are the spatial coordinates and the drainage flows through the wells.
Conclusions: The presented calculation scheme was applied to an area with a cross-section of 50 km between two emissaries with different altitudes and different pollution levels. The input data were correlated with in-situ measurements, such as the level of the bedrock, the grain size of the soil, the slope, etc. This method of calculation can also be extended to determine the variation of groundwater in the aquifer following flood wave propagation in the emissaries.
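For reference, the Boussinesq equation invoked above has, in its common one-dimensional form for an unconfined aquifer (standard symbols, not taken from the paper):

```latex
S \frac{\partial h}{\partial t}
  = \frac{\partial}{\partial x}\!\left( K\, h \,\frac{\partial h}{\partial x} \right) + W(x,t)
```

where h is the water-table head, K the hydraulic conductivity, S the specific yield, and W(x,t) a source/sink term representing recharge or the drainage/injection wells; discretizing this nonlinear diffusion equation in space is what the finite element step above performs.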

Keywords: environmental protection, infiltrations, numerical modeling, pollutant transport through soils

Procedia PDF Downloads 156
578 Bed Evolution under One-Episode Flushing in a Trunk Sewer in Paris, France

Authors: Gashin Shahsavari, Gilles Arnaud-Fassetta, Alberto Campisano, Roberto Bertilotti, Fabien Riou

Abstract:

Sewer deposits have been identified as a major cause of dysfunction in combined sewer systems, inducing negative consequences such as poor hydraulic conveyance, environmental damage, and risks to workers' health. To overcome the problems of sedimentation, flushing is considered the most practical and cost-effective way to minimize the impact of sediments and prevent such problems. Flushing, by generating turbulent wave effects, can modify the bed form depending on the hydraulic properties and geometrical characteristics of the conduit. So far, the dynamics of the bed load during high-flow events in combined sewer systems, as a complex environment, are not well understood, mostly due to a lack of measuring devices capable of working correctly in the 'hostile' environment of combined sewers. In this regard, a one-episode flush, issued from an opening gate valve with a weir function, was carried out in a trunk sewer in Paris to assess its cleansing efficiency on the sediments (thickness: 0-30 cm). During more than 1 h of flushing, a maximum flow rate of 4.1 m3/s and a maximum water level of 2.1 m were recorded 5 m downstream of the gate. This paper aims to evaluate the efficiency of this type of gate over about 1.1 km (from -50 m to +1050 m relative to the gate) by (i) determining the bed grain-size distribution and the evolution of sediments along the sewer channel, as well as their organic matter content, and (ii) identifying sections that exhibit the greatest changes in texture after the flush. For the first objective, two series of samples were taken along the sewer length and analyzed in the laboratory, one before flushing and the second after, at the same points along the sewer channel. A non-intrusive sampling instrument was used to extract the sediments finer than fine gravel.
The comparison between the sediment texture after the flush operation and the initial state revealed the zones most modified by the flush effect, with respect to the sewer invert slope and hydraulic parameters, in the zone up to 400 m from the gate. At this distance, despite the widening of the grain-size ranges, D50 (median grain size) varied between 0.6 and 1.1 mm before flushing, compared to 0.8 and 10 mm after. Overall, regarding the sewer channel invert slope, the results indicate that grains finer than sand (< 2 mm) were mostly transported downstream along about 400 m from the gate: on average 69% before against 38% after the flush, with more dispersion of the grain-size distributions. Furthermore, a strong effect of the channel bed irregularities on the evolution of the bed material was observed after the flush.
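The D50 values quoted above are read off the cumulative grain-size curve of each sample. A small sketch of that computation, using hypothetical sieve data rather than the paper's measurements:

```python
import numpy as np

# Hypothetical sieve analysis: grain sizes (mm) and cumulative % passing.
sizes = np.array([0.063, 0.125, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0])
passing = np.array([4.0, 10.0, 22.0, 41.0, 58.0, 69.0, 85.0, 100.0])

# D50 = grain size at 50 % cumulative passing, interpolated on log(size),
# the usual convention for grain-size distributions.
d50 = 10 ** np.interp(50.0, passing, np.log10(sizes))
print(f"D50 = {d50:.2f} mm")  # ≈ 0.72 mm for this toy curve
```

Comparing D50 (and higher percentiles such as D90) sample by sample before and after the flush is how the textural change along the 1.1 km reach is quantified.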

Keywords: bed-load evolution, combined sewer systems, flushing efficiency, sediments transport

Procedia PDF Downloads 404
577 Analyzing Political Cartoons in Arabic-Language Media after Trump's Jerusalem Move: A Multimodal Discourse Perspective

Authors: Inas Hussein

Abstract:

Communication in the modern world is increasingly multimodal due to globalization and the digital space we live in, which have remarkably affected how people communicate. Accordingly, Multimodal Discourse Analysis (MDA) is an emerging paradigm in discourse studies, with the underlying assumption that other semiotic resources, such as images, colours, scientific symbolism, gestures, actions, music and sound, combine with language in order to communicate meaning. One effective multimodal medium that combines verbal and non-verbal elements to create meaning is the political cartoon. Furthermore, since political and social issues are mirrored in political cartoons, these are regarded as potential objects of discourse analysis, as they not only reflect the thoughts of the public but also have the power to influence them. The aim of this paper is to analyze selected cartoons on the recognition of Jerusalem as Israel's capital by the American President, Donald Trump, adopting a multimodal approach. More specifically, the present research examines how the various semiotic tools and resources utilized by the cartoonists function in projecting the intended meaning. Ten political cartoons, among a surge of editorial cartoons highlighted by the Anti-Defamation League (ADL), an international Jewish non-governmental organization based in the United States, as published in Arabic-language newspapers in Egypt, Saudi Arabia, the UAE, Oman, Iran, and the UK, were purposively selected for semiotic analysis. These editorial cartoons, all published during 6th-18th December 2017, invariably suggest one theme: Jewish and Israeli domination of the United States. The data were analyzed using the framework of Visual Social Semiotics. In accordance with this methodological framework, the selected visual compositions were analyzed in terms of three aspects of meaning: representational, interactive and compositional.
In analyzing the selected cartoons, an interpretative approach was adopted. This approach prioritizes depth over breadth and enables insightful analyses of the chosen cartoons. The findings of the study reveal that semiotic resources are key elements of political cartoons due to the inherent political communication they convey. Adequate interpretation of the three aspects of meaning proves to be a prerequisite for understanding the intended meaning of political cartoons. It is recommended that further research be conducted to provide more insightful analyses of political cartoons from a multimodal perspective.

Keywords: Multimodal Discourse Analysis (MDA), multimodal text, political cartoons, visual modality

Procedia PDF Downloads 241
576 The Effect of Physical Guidance on Learning a Tracking Task in Children with Cerebral Palsy

Authors: Elham Azimzadeh, Hamidollah Hassanlouei, Hadi Nobari, Georgian Badicu, Jorge Pérez-Gómez, Luca Paolo Ardigò

Abstract:

Children with cerebral palsy (CP) have weak physical abilities, and their limitations may affect the performance of everyday motor activities. One of the most important and common debilitating factors in CP is the malfunction of the upper extremities in performing motor skills, and there is strong evidence that task-specific training may improve general upper limb function in this population. Augmented feedback, moreover, enhances the acquisition and learning of a motor task, and practice conditions can alter task difficulty; e.g., a reduced frequency of physical guidance (PG) could be more challenging for this population when learning a motor task. The purpose of this study was therefore to investigate the effect of PG on learning a tracking task in children with CP. Twenty-five independently ambulant children with spastic hemiplegic CP aged 7-15 years were randomly assigned to five groups. After the pre-test, the experimental groups participated in an intervention of eight sessions of 12 trials each. The 0% PG group received no PG; the 25% PG group received PG for three trials; the 50% PG group for six trials; the 75% PG group for nine trials; and the 100% PG group for all 12 trials. PG consisted of placing the experimenter's hand around the child's hand, guiding them to stay on track and complete the task. Learning was inferred from acquisition and delayed retention tests. The tests involved two blocks of 12 trials of the tracking task performed by all participants without any PG. They were asked to make the movement as accurate as possible (i.e., fewer errors), and the number of total touches (errors) over the 24 trials was taken as the test score. The results showed that a higher frequency of PG led to more accurate performance during the practice phase. However, the group that received 75% PG performed significantly better than the other groups in the retention phase.
It is concluded that the frequency of PG plays a critical role in learning a tracking task in children with CP, and that this population likely benefits from an optimal level of PG providing the appropriate amount of information, confirming the challenge point framework (CPF), which states that too much or too little information will retard the learning of a motor skill. Therefore, an optimum level of PG may help these children to identify appropriate motor skill patterns using the extrinsic information they receive through PG, and improve learning by activating intrinsic feedback mechanisms.

Keywords: cerebral palsy, challenge point framework, motor learning, physical guidance, tracking task

Procedia PDF Downloads 72
575 Botulinum Toxin A in the Treatment of Late Facial Nerve Palsy Complications

Authors: Akulov M. A., Orlova O. R., Zaharov V. O., Tomskij A. A.

Abstract:

Introduction: One of the common postoperative complications of the treatment of posterior cranial fossa (PCF) and cerebello-pontine angle tumors is facial nerve palsy, which leads to multiple, treatment-resistant impairments of mimic muscle structure and function. Within 4-6 months of the development of facial nerve palsy, with insufficient therapeutic intervention, patients develop a postparalytic syndrome, which includes such symptoms as mimic muscle insufficiency, mimic muscle contractures, synkinesis, and spontaneous muscular twitching. A novel method of treatment is the use of a local neuromuscular blocking agent, botulinum toxin A (BTA). Experience with BTA treatment supports the assumption that it can be successfully used in late facial nerve palsy complications to significantly increase patients' quality of life. Study aim: To evaluate the efficacy of botulinum toxin A (BTA, Xeomin) treatment in patients with late facial nerve palsy complications. Patients and Methods: 31 patients aged 27-59 years were evaluated 6 months after the development of facial nerve palsy. All patients received conventional treatment, including massage, movement therapy, etc. Facial nerve palsy developed after acoustic nerve tumor resection in 23 (74.2%) patients and after petroclival meningioma resection in 8 (25.8%) patients. The first group included 17 (54.8%) patients receiving BT therapy; the second group, 14 (45.2%) patients continuing conventional treatment. BT injections were performed at synkinesis or contracture points, 1-2 U on the injured side and 2-4 U on the healthy side (for symmetry). Facial nerve function was evaluated at 2 and 4 months of therapy according to the House-Brackmann scale. Pain syndrome alleviation was assessed on a visual analogue scale (VAS). Results: At baseline, all patients in both groups demonstrated a postparalytic syndrome. We observed a significant improvement in patients receiving BTA after only one month of treatment.
The mean VAS score at baseline was 80.4±18.7 and 77.9±18.2 in the first and second groups, respectively. In the first group, after one month of treatment, we observed a significant decrease in pain: the mean VAS score was 44.7±10.2 (p<0.01), whereas in the second group the VAS score remained as high as 61.8±9.4 points (p>0.05). By the 3rd month of treatment, pain intensity continued to decrease in both groups, but the first group demonstrated significantly better results: the mean score was 8.2±3.1 and 31.8±4.6 in the first and second groups, respectively (p<0.01). The total House-Brackmann score at baseline was 3.67±0.16 in the first group and 3.74±0.19 in the second group. Treatment resulted in a significant symptom improvement in the first group, with no improvement in the second group. After 4 months of treatment, the House-Brackmann score in the first group was 3.1-fold lower than in the second group (p<0.05). Conclusion: Botulinum toxin injections decrease postparalytic syndrome symptoms in patients with facial nerve palsy.

Keywords: botulinum toxin, facial nerve palsy, postparalytic syndrome, synkinesis

Procedia PDF Downloads 298
574 Spatial Variability of Renieramycin-M Production in the Philippine Blue Sponge, Xestospongia Sp.

Authors: Geminne Manzano, Porfirio Aliño, Clairecynth Yu, Lilibeth Salvador-Reyes, Viviene Santiago

Abstract:

Many marine benthic organisms produce secondary metabolites that serve ecological roles in response to different biological and environmental factors. The secondary metabolites found in organisms such as algae, sponges, tunicates, and worms exhibit variation at different scales. Understanding this chemical variation can be essential to deriving the evolutionary and ecological functions of the secondary metabolites that may explain their patterns. Ecological surveys were performed at two collection sites representing two Philippine marine biogeographic regions: Oriental Mindoro on the West Philippine Sea (WPS) and Zamboanga del Sur on the Celebes Sea (CS), where a total of 39 Xestospongia sp. sponges were collected using SCUBA. The sponge samples were transported to the laboratory for taxonomic identification and chemical analysis. Biological and environmental factors were investigated to determine their relation to the abundance and distribution patterns of the sponges and the spatial variability of their secondary metabolite production. Extracts were subjected to thin-layer chromatography (TLC) and anti-proliferative assays to confirm the presence of Renieramycin-M and to test its cytotoxicity. The blue sponges were found to be more abundant in the WPS than in the CS. Both the benthic and fish communities at the Oriental Mindoro (WPS) and Zamboanga del Sur (CS) sites are characterized by high species diversity and abundance and a very high biomass category. Environmental factors such as depth and monsoonal exposure were also compared, showing that wave exposure and depth are associated with the abundance and distribution of the sponges. TLC profiles of the sponge extracts from the WPS and the CS showed differences in Renieramycin-M presence, and differences in other functional groups were also observed between the two sites.
In terms of bioactivity, different responses were also exhibited by the sponge extracts from the two regions, depending on the cell lines tested. Exploring the influence of ecological parameters on chemical variation can provide deeper chemical ecological insights and point to potential applications at different scales. The results of this study provide further impetus for pursuing studies into the patterns and processes of the chemical diversity of the Philippine blue sponge, Xestospongia sp., and its chemical ecological significance in the coral triangle.

Keywords: chemical ecology, porifera, renieramycin-m, spatial variability, Xestospongia sp.

Procedia PDF Downloads 213
573 Phytochemical Composition and Biological Activities of the Vegetal Extracts of Six Aromatic and Medicinal Plants of Algerian Flora and Their Uses in Food and Pharmaceutical Industries

Authors: Ziani Borhane Eddine Cherif, Hazzi Mohamed, Mouhouche Fazia

Abstract:

Vegetal extracts of aromatic and medicinal plants are attracting much interest as potential sources of natural bioactive molecules. Many of their features are conferred by the chemical functions of their major constituents (phenol, alcohol, aldehyde, ketone). This biopotential led us to focus on three main biological activities, the antioxidant, antibiotic, and insecticidal activities of six Algerian aromatic plants, with the aim of identifying, by chromatographic analysis (GC and GC/MS), the phytochemical compounds implicated in these effects. Oxygenated monoterpenes represented the most prominent group of constituents in the majority of the plants, with α-Terpineol (28.3%), Carvacrol (47.3%), Pulegone (39.5%), Chrysanthenone (27.4%), Thymol (23.9%), γ-Terpinene (23.9%) and 2-Undecanone (94%) as the main components. The antioxidant activity of the essential oils and non-volatile extracts was evaluated in vitro using four tests: inhibition of the free radical 2,2-diphenyl-1-picrylhydrazyl (DPPH), the 2,2-azino-bis(3-ethylbenzthiazoline-6-sulphonic acid) radical-scavenging assay (ABTS•+), the thiobarbituric acid reactive substances (TBARS) assay, and the reducing power. The measured IC50 values of these natural compounds revealed potent activity (between 254.64 and 462.76 mg.l-1), almost similar to those of BHT, BHA, tocopherol, and ascorbic acid (126.4-369.1 mg.l-1), though far from that of Trolox (IC50 = 2.82 mg.l-1). Furthermore, three ethanol extracts were found to be remarkably effective against DPPH and ABTS radicals compared to the chemical antioxidants BHA and BHT (IC50 = 9.8±0.1 and 28±0.7 mg.l-1, respectively); in the reducing power test, they also exhibited high activity. The study of the insecticidal activity, by contact, inhalation, and effects on the fecundity and fertility of Callosobruchus maculatus and Tribolium confusum, showed a strong biocidal potential, reaching 95-100% mortality within only 24 hours.
The antibiotic activity of the essential oils was evaluated by a qualitative study (aromatogram) and a quantitative one (MIC, MBC, and MLC) on four bacteria (Gram-positive and Gram-negative) and one strain of pathogenic yeast. The results of these tests showed stronger action than that induced by the reference antibiotics (Gentamycin, Nystatin, and Ceftazidime): the inhibition diameters and MIC values for the tested microorganisms were in the ranges of 23-58 mm and 0.015-0.25% (v/v), respectively.
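The IC50 values reported above are read off dose-response curves. A minimal sketch of that calculation, using invented assay readings rather than the study's data:

```python
import numpy as np

# Hypothetical DPPH assay readings: extract concentration (mg/L) versus
# % radical inhibition, mimicking the dose-response data behind an IC50.
conc = np.array([50.0, 100.0, 200.0, 400.0, 800.0])
inhibition = np.array([12.0, 24.0, 41.0, 63.0, 85.0])

# IC50 = concentration giving 50 % inhibition, interpolated on log(conc),
# the standard convention for dose-response curves.
ic50 = 10 ** np.interp(50.0, inhibition, np.log10(conc))
print(f"IC50 = {ic50:.0f} mg/L")
```

For this toy curve the interpolated IC50 lands between the 200 and 400 mg/L points, i.e. inside the 254.64-462.76 mg.l-1 range the abstract reports for the extracts.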

Keywords: aromatic plants, essential oils, non-volatile extracts, bioactive molecules, antioxidant activity, insecticidal activity, antibiotic activity

Procedia PDF Downloads 221
572 Dynamic Wetting and Solidification

Authors: Yulii D. Shikhmurzaev

Abstract:

The modelling of non-isothermal free-surface flows coupled with the solidification process has become a topic of intensive research with the advent of additive manufacturing, where complex 3-dimensional structures are produced by successive deposition and solidification of microscopic droplets of different materials. The issue is that both the spreading of liquids over solids and the propagation of the solidification front into the fluid and along the solid substrate pose fundamental difficulties for mathematical modelling. The first of these processes, known as 'dynamic wetting', leads to the well-known 'moving contact-line problem' where, as shown recently both experimentally and theoretically, the contact angle formed by the free surface with the solid substrate is not a function of the contact-line speed but rather a functional of the flow field. The modelling of the propagating solidification front requires a generalization of the classical Stefan problem capable of describing the onset of the process and the non-equilibrium regime of solidification. Furthermore, given that dynamic wetting and solidification occur concurrently and interactively, they should be described within the same conceptual framework. The present work addresses this formidable problem and presents a mathematical model capable of describing the key element of additive manufacturing in a self-consistent and singularity-free way. The model is illustrated with simple examples highlighting its main features. The main idea of the work is that both dynamic wetting and solidification, as well as some other fluid flows, are particular cases of a general class of flows in which interfaces form and/or disappear. This conceptual framework allows one to derive a mathematical model from first principles using the methods of irreversible thermodynamics.
Crucially, the interfaces are not treated as zero-mass entities introduced via a Gibbsian 'dividing surface' but as 2-dimensional surface phases produced in the continuum limit in which the thickness of what is physically an interfacial layer vanishes, with its properties characterized by 'surface' parameters (surface tension, surface density, etc.). This approach allows for mass exchange between the surface and bulk phases, which is the essence of interface formation. As shown numerically, the onset of solidification is preceded by a pure interface-formation stage, whilst the Stefan regime is the final stage, in which the temperature at the solidification front asymptotically approaches the solidification temperature. The developed model can also be applied to flows with substrate melting as well as to complex flows where both types of phase transition take place.
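For context, the classical Stefan problem that the model generalizes closes the bulk heat equations with an energy balance and an equilibrium condition at the moving front (standard notation, not the paper's):

```latex
\rho L\, v_n \;=\; \bigl( k_s \nabla T_s - k_l \nabla T_l \bigr)\cdot\mathbf{n},
\qquad T_s = T_l = T_m \ \text{at the front}
```

where \rho is the density, L the latent heat, v_n the normal speed of the front, k_s and k_l the thermal conductivities of the solid and liquid, and T_m the equilibrium melting temperature. The constraint T = T_m at the front is precisely what a non-equilibrium generalization of the kind described above must relax.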

Keywords: dynamic wetting, interface formation, phase transition, solidification

Procedia PDF Downloads 66
571 Spectrogram Pre-Processing to Improve Isotopic Identification to Discriminate Gamma and Neutrons Sources

Authors: Mustafa Alhamdi

Abstract:

The industrial application of classifying gamma-ray and neutron events is investigated in this study using deep machine learning. Identification using convolutional and recurrent neural networks has shown significant improvements in prediction accuracy in a variety of applications. The ability to identify isotope type and activity from spectral information depends on the feature extraction method, followed by classification. The features extracted from the spectrum profiles aim to find patterns and relationships that represent the actual spectrum energy in a low-dimensional space. Increasing the level of separation between classes in feature space improves the prospects for higher classification accuracy. Feature extraction by a neural network is nonlinear and involves a variety of transformations and mathematical optimization, whereas principal component analysis depends on linear transformations to extract features and subsequently improve classification accuracy. In this paper, the isotope spectrum information has been preprocessed by finding its frequency components over time and using them as the training dataset. The Fourier transform used to extract the frequency components has been optimized with a suitable windowing function. Training and validation samples of different isotope profiles interacting with a CdTe crystal have been simulated using Geant4. The readout electronic noise has been simulated by optimizing the mean and variance of a normal distribution. Ensemble learning, combining the votes of many models, further improved the classification accuracy of the neural networks. Discriminating gamma and neutron events in a single prediction approach has shown high accuracy with deep learning. The findings show that classification accuracy can be improved by applying the spectrogram preprocessing stage to the gamma and neutron spectra of different isotopes.
Tuning the deep learning models by hyperparameter optimization enhanced the separation in the latent space and made it possible to extend the number of detected isotopes in the training database. Ensemble learning contributed significantly to improving the final prediction.
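The spectrogram preprocessing and soft-voting steps can be sketched roughly as follows. The pulse shape, window sizes, and per-model probabilities are illustrative placeholders, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical detector pulse: a decaying oscillation plus readout noise,
# standing in for a simulated CdTe signal trace.
fs = 1000.0
t = np.arange(1024) / fs
pulse = np.exp(-4 * t) * np.sin(2 * np.pi * 60 * t) + 0.05 * rng.normal(size=t.size)

# Short-time Fourier transform with a Hann window: the windowed spectrogram
# used as the training representation in place of the raw trace.
win, hop = 128, 64
window = np.hanning(win)
frames = [pulse[i:i + win] * window for i in range(0, pulse.size - win + 1, hop)]
spectrogram = np.abs(np.fft.rfft(frames, axis=1))  # shape: (n_frames, freq_bins)
print(spectrogram.shape)

# Ensemble by soft voting: average the predicted class probabilities of
# several (here trivial, stand-in) models and take the argmax.
probs = np.array([[0.6, 0.4],   # per-model [P(gamma), P(neutron)]
                  [0.7, 0.3],
                  [0.4, 0.6]])
vote = probs.mean(axis=0).argmax()
print("gamma" if vote == 0 else "neutron")
```

The windowing function (Hann here) is what the abstract refers to as the optimization of the Fourier transform step: it controls spectral leakage between frequency bins.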

Keywords: machine learning, nuclear physics, Monte Carlo simulation, noise estimation, feature extraction, classification

Procedia PDF Downloads 151
570 A Seven Year Single-Centre Study of Dental Implant Survival in Head and Neck Oncology Patients

Authors: Sidra Suleman, Maliha Suleman, Stephen Brindley

Abstract:

Oral rehabilitation of head and neck cancer patients plays a crucial role in their quality of life post-treatment. Placement of dental implants or implant-retained prostheses can help restore oral function and aesthetics, which are often compromised following surgery. Conventional prosthodontic techniques can be insufficient for rehabilitating such patients due to their altered anatomy and reduced oral competence; hence, there is a strong clinical need for dental implants. With an increasing incidence of head and neck cancer, the demand for such treatment is rising. Aim: The aim of the study was to determine the survival rate of dental implants in head and neck cancer patients placed at the Restorative and Maxillofacial Department, Royal Stoke University Hospital (RSUH), United Kingdom. Methodology: All patients who received dental implants between January 1, 2013 and December 31, 2020 were identified. Patients were excluded based on three criteria: 1) non-head and neck cancer patients, 2) no outpatient follow-up after implant placement, and 3) provision of non-dental implants. Scanned paper notes and electronic records were extracted and analyzed. Implant survival was defined as fixtures that remained in situ and had not required removal. Sample: Overall, 61 individuals were recruited from the 143 patients identified. The mean age was 64.9 years, with a range of 35–89 years. The sample included 37 (60.7%) males and 24 (39.3%) females. In total, 211 implants were placed, of which 40 (19.0%) were in the maxilla, 152 (72.0%) in the mandible, and 19 (9.0%) in autogenous bone graft sites. Histologically, 57 (93.4%) patients had squamous cell carcinoma, with 43 (70.5%) patients having either stage IVA or IVB disease. As part of treatment, 42 (68.9%) patients received radiotherapy, which was delivered post-operatively in 29 (69.0%) cases, while 21 (34.4%) patients underwent chemotherapy, 13 (61.9%) of whom received it post-operatively.
The median follow-up period was 21.9 months, with a range of 0.9–91.4 months. During the study, 23 (37.7%) patients died, and their data were censored beyond the date of death. Results: In total, four patients who had received radiotherapy had one implant failure each. Two mandibular implants failed secondary to osteoradionecrosis, and two maxillary implants did not survive as a result of failure to osseointegrate. The overall implant survival rates were 99.1% at three years and 98.1% at both 5 and 7 years. Conclusions: Although these data show that implant failure rates are low, they highlight the difficulty of predicting which patients will be affected. Future studies involving larger cohorts are warranted to further analyze the factors affecting outcomes.
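Survival rates of the kind reported above, with censoring at death or end of follow-up, are conventionally estimated with the Kaplan–Meier product-limit method. The sketch below is a minimal, illustrative implementation; the input data in the usage note are invented, not the study's.

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate.

    times  : follow-up time for each implant (e.g. months)
    events : 1 if the implant failed at that time, 0 if censored
             (patient death or end of follow-up).
    Returns (time, survival) pairs at each failure time. Ties are
    handled with the usual convention of failures before censorings.
    """
    at_risk = len(times)
    surv, curve = 1.0, []
    for t, e in sorted(zip(times, events), key=lambda p: (p[0], -p[1])):
        if e:                  # failure: step the curve down
            surv *= (at_risk - 1) / at_risk
            curve.append((t, surv))
        at_risk -= 1           # failures and censorings both leave the risk set
    return curve
```

For example, `kaplan_meier([5, 10, 10, 20], [1, 0, 1, 0])` steps the curve down at months 5 and 10 while the two censored implants only shrink the risk set.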

Keywords: oncology, dental implants, survival, restorative

Procedia PDF Downloads 234
569 Occurrence of Half-Metallicity by Sb-Substitution in Non-Magnetic Fe₂TiSn

Authors: S. Chaudhuri, P. A. Bhobe

Abstract:

Fe₂TiSn is a non-magnetic full Heusler alloy with a small gap (~ 0.07 eV) at the Fermi level. The electronic structure is highly symmetric in both spin bands, and a small amount of hole or electron doping can push the system towards spin polarization. A stable 100% spin polarization, or half-metallicity, is very desirable in the field of spintronics, making Fe₂TiSn a highly attractive material. However, this composition suffers from an inherent anti-site disorder between the Fe and Ti sites. This paper reports on the method adopted to control the anti-site disorder and the realization of the half-metallic ground state in Fe₂TiSn, achieved by chemical substitution. Here, Sb was substituted at the Sn site to obtain Fe₂TiSn₁₋ₓSbₓ compositions with x = 0, 0.1, 0.25, 0.5 and 0.6. All prepared compositions with x ≤ 0.6 exhibit long-range L2₁ ordering and a decrease in Fe–Ti anti-site disorder. The transport and magnetic properties of the Fe₂TiSn₁₋ₓSbₓ compositions were investigated as a function of temperature in the range 5 K to 400 K. Electrical resistivity, magnetization, and Hall voltage measurements were carried out. All the experimental results indicate the presence of the half-metallic ground state in the x ≥ 0.25 compositions. However, the value of the saturation magnetization is small, indicating the presence of compensated magnetic moments. The observed magnetic moment values are in close agreement with the Slater–Pauling rule for half-metallic systems. Magnetic interactions in Fe₂TiSn₁₋ₓSbₓ are understood from the local crystal structural perspective using extended X-ray absorption fine structure (EXAFS) spectroscopy. The changes in bond distances extracted from the EXAFS analysis can be correlated with the hybridization between constituent atoms and hence with the RKKY-type magnetic interactions that govern the magnetic ground state of these alloys. To complement the experimental findings, first-principles electronic structure calculations were also undertaken.
The spin-polarized DOS complies with the experimental results for Fe₂TiSn₁₋ₓSbₓ. Substitution of Sb (an electron-excess element) at the Sn site shifts the majority spin band to the lower-energy side of the Fermi level, making the system 100% spin polarized and inducing long-range magnetic order in otherwise non-magnetic Fe₂TiSn. The present study concludes that a stable half-metallic system can be realized in Fe₂TiSn with ≥ 50% Sb substitution at the Sn site.
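The Slater–Pauling comparison mentioned above can be made concrete: for full Heusler alloys the rule gives a total moment M_t = Z_t − 24 (in μB per formula unit), where Z_t is the valence electron count. With the usual valence counts, Fe₂TiSn has exactly 24 valence electrons (hence M = 0, consistent with its non-magnetic ground state), so substituting Sb (5 valence electrons) for Sn (4) predicts a moment of x μB for Fe₂TiSn₁₋ₓSbₓ. A minimal sketch, assuming these standard valence counts:

```python
# Standard valence electron counts (s + d for transition metals, s + p
# for main-group elements); an assumption of this sketch, not data
# from the study itself.
VALENCE = {"Fe": 8, "Ti": 4, "Sn": 4, "Sb": 5}

def slater_pauling_moment(x):
    """Predicted total moment (mu_B per formula unit) of Fe2TiSn(1-x)Sb(x).

    For full Heusler alloys the Slater-Pauling rule gives M_t = Z_t - 24,
    with Z_t the total valence electron count per formula unit.
    """
    z_total = (2 * VALENCE["Fe"] + VALENCE["Ti"]
               + (1 - x) * VALENCE["Sn"] + x * VALENCE["Sb"])
    return z_total - 24

# x = 0 gives the non-magnetic parent; each Sb adds one electron,
# so the predicted moment grows linearly as x mu_B.
```

The small observed saturation moments reported above are consistent with this linear electron-count picture once compensated (ferrimagnetic) moments are taken into account.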

Keywords: antisite disorder, EXAFS, Full Heusler alloy, half metallic ferrimagnetism, RKKY interactions

Procedia PDF Downloads 139
568 Ultra-deformable Drug-free Sequessome™ Vesicles (TDT 064) for the Treatment of Joint Pain Following Exercise: A Case Report and Clinical Data

Authors: Joe Collins, Matthias Rother

Abstract:

Background: Oral non-steroidal anti-inflammatory drugs (NSAIDs) are widely used for the relief of joint pain during and post-exercise. However, oral NSAIDs increase the risk of systemic side effects, even in healthy individuals, and retard recovery from muscle soreness. TDT 064 (Flexiseq®), a topical formulation containing ultra-deformable drug-free Sequessome™ vesicles, has demonstrated equivalent efficacy to oral celecoxib in reducing osteoarthritis-associated joint pain and stiffness. TDT 064 does not cause NSAID-related adverse effects. We describe clinical study data and a case report on the effectiveness of TDT 064 in reducing joint pain after exercise. Methods: Participants with a pain score ≥3 (10-point scale) 12–16 hours post-exercise were randomized to receive TDT 064 plus oral placebo, TDT 064 plus oral ketoprofen, or ketoprofen in ultra-deformable phospholipid vesicles plus oral placebo. Results: In the 168 study participants, pain scores were significantly higher with oral ketoprofen plus TDT 064 than with TDT 064 plus placebo in the 7 days post-exercise (P = 0.0240), and recovery from muscle soreness was significantly longer (P = 0.0262). There was a low incidence of adverse events. These data are supported by clinical experience. A 24-year-old male professional rugby player suffered a traumatic Lisfranc fracture in March 2014 and underwent operative reconstruction. He had no relevant medical history and was not receiving concomitant medications. He had undergone anterior cruciate ligament reconstruction in 2008. The patient reported restricted training due to pain (score 7/10), stiffness (score 9/10) and poor function, as well as pain when changing direction and running on consecutive days. In July 2014, he started using TDT 064 twice daily at the recommended dose.
In November 2014 he noted reduced pain on running (score 2-3/10), decreased morning stiffness (score 4/10) and improved joint mobility and was able to return to competitive rugby without restrictions. No side effects of TDT 064 were reported. Conclusions: TDT 064 shows efficacy against exercise- and injury-induced joint pain, as well as that associated with osteoarthritis. It does not retard muscle soreness recovery after exercise compared with an oral NSAID, making it an alternative approach for the treatment of joint pain during and post-exercise.

Keywords: exercise, joint pain, TDT 064, phospholipid vesicles

Procedia PDF Downloads 480
567 The Facilitatory Effect of Phonological Priming on Visual Word Recognition in Arabic as a Function of Lexicality and Overlap Positions

Authors: Ali Al Moussaoui

Abstract:

An experiment was designed to assess the performance of 24 Lebanese adults (mean age 29:5 years) in a lexical decision-making (LDM) task, in order to find out how the facilitatory effect of phonological priming (PP) on the speed of visual word recognition in Arabic varies with lexicality (wordhood) and phonological overlap position (POP). The experiment falls in line with previous research on phonological priming in light of the cohort theory and in relation to visual word recognition. It also departs from research on the Arabic language, in which the importance of the consonantal root as a distinct morphological unit is confirmed. Based on previous research, it is hypothesized that (1) PP has a facilitating effect in LDM with words but not with nonwords, and (2) final phonological overlap between the prime and the target is more facilitatory than initial overlap. An LDM task was programmed in the PsychoPy application. Participants had to decide whether a target (e.g., bayn ‘between’) preceded by a prime (e.g., bayt ‘house’) was a word or not. There were four conditions: no PP (NP), nonwords priming nonwords (NN), nonwords priming words (NW), and words priming words (WW). The conditions were simultaneously controlled for word length, wordhood, and POP. The interstimulus interval was 700 ms. Within the PP conditions, POP was controlled with three overlap positions between the primes and the targets: initial (e.g., asad ‘lion’ and asaf ‘sorrow’), final (e.g., kattab ‘cause to write’ 2sg-mas and rattab ‘organize’ 2sg-mas), or two-segmented (e.g., namle ‘ant’ and naħle ‘bee’). There were 96 trials, 24 per condition, in a within-subject design. The results show that, concerning (1), the highest average reaction time (RT) is that in NN, followed by NW and finally WW; the differences are statistically significant only for the pairs NN-NW and NN-WW.
Regarding (2), the shortest RT is that in the two-segmented overlap condition, followed by the final POP and then the initial POP. The difference between the two-segmented and the initial overlap is significant, while the other pairwise comparisons are not. Based on these results, PP emerges as a facilitatory phenomenon that is highly sensitive to lexicality and POP. While PP can have a facilitating effect under lexicality, it shows no facilitation in its absence, which is consistent with several previous findings. Participants were found to be more sensitive to final phonological overlap than to initial overlap, which also coincides with a body of earlier literature. The results contradict the cohort theory’s stress on the onset overlap position and instead give more weight to final overlap, and even heavier weight to the two-segmented one. In conclusion, this study confirms the facilitating effect of PP with words, but not when the stimuli (at least the primes, and at most both the primes and the targets) are nonwords. It also shows that two-segmented priming is the most influential in LDM in Arabic.
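A minimal sketch of the per-condition reaction-time aggregation underlying the comparisons above. The condition labels follow the abstract, but the trial tuples and the correct-trials-only convention are illustrative assumptions, not the study's analysis pipeline.

```python
from collections import defaultdict
from statistics import mean

def mean_rt_by_condition(trials):
    """Average reaction time per priming condition.

    trials: iterable of (condition, rt_ms, correct) tuples. Only correct
    lexical decisions enter the average, a common convention in RT analyses.
    """
    rts = defaultdict(list)
    for condition, rt, correct in trials:
        if correct:
            rts[condition].append(rt)
    return {cond: mean(vals) for cond, vals in rts.items()}
```

Feeding it a participant's trial log grouped by NP/NN/NW/WW yields the condition means that the significance tests then compare.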

Keywords: lexicality, phonological overlap positions, phonological priming, visual word recognition

Procedia PDF Downloads 186
566 Biofilm Text Classifiers Developed Using Natural Language Processing and Unsupervised Learning Approach

Authors: Kanika Gupta, Ashok Kumar

Abstract:

Biofilms are dense, highly hydrated cell clusters that are irreversibly attached to a substratum, to an interface, or to each other, and are embedded in a self-produced gelatinous matrix composed of extracellular polymeric substances. Research in the biofilm field has become very significant, as biofilms show high mechanical resilience and resistance to antibiotic treatment and constitute a significant problem both in healthcare and in other industries related to microorganisms. The massive amount of information, both stated and hidden, in the biofilm literature is growing exponentially; it is therefore not possible for researchers and practitioners to manually extract and relate information from the different written resources. The current work thus proposes and discusses the use of text mining techniques for the extraction of information from a biofilm literature corpus containing 34,306 documents. It is very difficult and expensive to obtain annotated material for biomedical literature, as the literature is unstructured, i.e., free text. We therefore adopted an unsupervised approach, in which no annotated training material is necessary, and used it to develop a system that classifies the text on the basis of growth and development, drug effects, radiation effects, classification, and physiology of biofilms. A two-step structure was used: the first step extracts keywords from the biofilm literature using a metathesaurus and standard natural language processing tools such as RapidMiner v5.3, and the second step discovers relations between the genes extracted from the whole set of biofilm literature using pubmed.mineR v1.0.11. Unsupervised learning, the machine learning task of inferring a function to describe hidden structure from unlabeled data, was applied to the extracted datasets to develop classifiers, using the WinPython 64-bit v3.5.4.0Qt5 and RStudio v0.99.467 packages, which automatically classify the text into the mentioned sets.
The developed classifiers were tested on a large dataset of biofilm literature, which showed that the proposed unsupervised approach is promising and well suited for semi-automatic labeling of the extracted relations. All of the information was stored in a relational database hosted locally on the server. The generated biofilm vocabulary and gene relations will be valuable for researchers dealing with biofilm research, making their searches easy and efficient, as the keywords and genes can be mapped directly to the documents used for database development.
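As a loose illustration of the keyword-based classification step described above, the sketch below assigns a text to whichever category lexicon it overlaps most. The category names follow the abstract, but the lexicons are invented stand-ins for the metathesaurus-derived keywords the study actually used.

```python
# Illustrative category lexicons; the study derived its keywords from a
# metathesaurus, so these term sets are assumptions for demonstration only.
CATEGORY_TERMS = {
    "growth and development": {"growth", "development", "formation", "adhesion"},
    "drug effects": {"antibiotic", "drug", "resistance", "treatment"},
    "radiation effects": {"radiation", "uv", "irradiation"},
    "physiology": {"metabolism", "physiology", "matrix", "polymeric"},
}

def classify_abstract(text):
    """Assign the category whose lexicon overlaps the tokenized text most."""
    tokens = set(text.lower().split())
    scores = {cat: len(tokens & terms) for cat, terms in CATEGORY_TERMS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unclassified"
```

With richer lexicons and a relation-extraction pass over the matched documents, this keyword step scales to a corpus-sized pipeline of the kind the abstract describes.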

Keywords: biofilms literature, classifiers development, text mining, unsupervised learning approach, unstructured data, relational database

Procedia PDF Downloads 172
565 Effect of Low Calorie Sweeteners on Chemical, Sensory Evaluation and Antidiabetic of Pumpkin Jam Fortified with Soybean

Authors: Amnah M. A. Alsuhaibani, Amal N. Al-Kuraieef

Abstract:

Introduction: In recent decades, the production of low-calorie jams, comprising low-calorie fruits and low-calorie sweeteners, has been needed for diabetics. Objective: The research aimed to prepare low-calorie formulated pumpkin jams (fructose, stevia, and aspartame) incorporating soybean and to evaluate the jams through chemical analysis and sensory evaluation after storage for six months; moreover, the possible effect of consumption of the low-calorie jams on diabetic rats was investigated. Methods: Five formulas of pumpkin jam with different sweeteners (sucrose, fructose, stevia, and aspartame) and soybean were prepared and stored at 10 °C for six months, compared to ordinary pumpkin jam. The chemical composition and sensory attributes of the formulated jams were evaluated at zero time, 3 months, and 6 months of storage. The three most acceptable pumpkin jams were taken forward for a biological study on diabetic rats. Rats were divided into a negative control group (group 1) and four streptozotocin-induced diabetic groups: a positive diabetic control (group 2) and rats fed a standard diet with 10% sucrose soybean jam, fructose soybean jam, or stevia soybean jam (groups 3, 4, and 5, respectively). Results: The content of protein, fat, ash, and fiber increased, while carbohydrate decreased, in the low-calorie formulated pumpkin jams compared to ordinary jam. The aspartame soybean pumpkin jam had the lowest scores for all sensory attributes, followed by the stevia soybean pumpkin jam. Using the non-nutritive sweeteners (stevia and aspartame) with soybean in jam processing lowered the sensory attribute scores after storage for 3 and 6 months. The highest scores were recorded for the sucrose and fructose soybean jams, followed by the stevia soybean jam, while the aspartame soybean jam recorded a significantly lower score.
The biological evaluation showed a significant improvement in the body weight and feed efficiency ratio (FER) of rats after six weeks of consumption of the standard diet with jams (groups 3, 4, and 5) compared to group 1. Rats consuming the 10% low-calorie jams with the nutritive sweetener (fructose) or the non-nutritive sweetener (stevia) and soybean (groups 4 and 5) showed a significant decrease in glucose level, liver enzyme activity, and liver cholesterol and total lipids, in addition to a significant increase in insulin and glycogen, compared to the levels of group 2. Conclusion: Low-calorie pumpkin jams can be prepared with low-calorie sweeteners and soybean and stored for 3 months at 10 °C without changes in sensory attributes. Consumption of stevia pumpkin jam fortified with soybean had positive health effects on streptozotocin-induced diabetes in rats.

Keywords: pumpkin jam, HFCS, aspartame, stevia, storage

Procedia PDF Downloads 184
564 Recognizing Juxtaposition Patterns of the Dwelling Units in Housing Cluster: The Case Study of Aghayan Complex: An Example of Rural Residential Development in Qajar Era in Iran

Authors: Outokesh Fatemeh, Jourabchi Keivan, Talebi Maryam, Nikbakht Fatemeh

Abstract:

Mayamei is a small town in Iran, located between the cities of Shahrud and Sabzevar on the Silk Road, with a history of approximately 1000 years. An alley entitled ‘Aghayan’ in this town comprises the residential buildings of a famous family, along with an associated bathhouse, mosque, telegraph center, and cistern. This architectural complex belonged to Sadat Mousavi, one of Mayamei's major grandees and religious households, and since its construction the alley has been inherited from generation to generation within the family. The purpose of this study, conducted on the Aghayan alley and its associated complex, was to elucidate the Iranian vernacular domestic architecture of the Qajar era in small towns and villages. We searched for large, medium, and small architectural patterns in the complex and tried to trace their evolution from the past to the present. The other objective of this project was to find a correlation between changes in the lifestyle of the alley’s inhabitants and the form of the buildings' architecture. Our investigation methods included a literature review, especially of historical travelogues; site visits; mapping; interviews with the elderly members of the Mousavi family (the owners); and examination of the available documents, especially the 150-year-old, four-meter scroll-type testament. For the analysis of these data, an effort was made to discover (1) the patterns of placement of the different buildings with respect to one another, (2) the relation between the function of the buildings and their relative location in the complex, as considered in the original design, and (3) possible changes in the functions of the buildings over time. In this investigation, special attention was paid to the chronological changes in the residents' lifestyles.
In addition, we tried to take all of the residents' activities into account, including daily life activities, religious ceremonies, etc. By combining these methods, we were able to obtain a picture of the buildings in their original (construction) state, along with knowledge of the temporal evolution of the architecture. An interesting finding is that the Aghayan complex appears to be a large structure of horizontally arranged apartments placed next to each other. The houses made in this way are connected to their adjacent neighbors both through the bifacial rooms and across the roofs.

Keywords: Iran, Qajar period, vernacular domestic architecture, life style, residential complex

Procedia PDF Downloads 164
563 Ensuring Sustainable Urban Mobility in Indian Cities: Need for Creating People Friendly Roadside Public Spaces

Authors: Pushplata Garg

Abstract:

Mobility is an integral part of urban living, and the sustainability of urban mobility is essential not only for the efficient functioning of cities but also for addressing global warming and climate change. However, very little is understood about the obstacles and hurdles, and the likely challenges, to the success of plans for sustainable urban mobility in Indian cities from the public's perspective. Whereas some of the problems and issues are common to all cities, others vary considerably with the financial status, function, and size of cities and with the culture of a place. The problems and issues similar in all cities relate to the availability, efficiency, and safety of public transport, last-mile connectivity, universal accessibility, and the essential planning and design requirements of pedestrians and cyclists. However, certain aspects, such as the type of public transportation, the priority given to cycling and walking, and the type of roadside activities, are influenced by the size of the town, the average educational and income levels of the public, the financial status of the local authorities, and the culture of the place. The extent of public awareness, civic sense, maintenance of public spaces, and law enforcement varies significantly from large metropolitan cities to small and medium towns in countries like India. Besides, design requirements for shading, the location of public open spaces and sitting areas, street furniture, and landscaping also vary depending on the climate of the place. Last-mile connectivity plays a major role in the success and effectiveness of a city's public transport system. In addition to the provision of pedestrian footpaths connecting important destinations, with sitting spaces and the necessary amenities and facilities along them, pedestrian movement to public transit stations is encouraged by the presence of quality roadside public spaces. It is not only the visual attractiveness of the streetscape, landscape, or public open spaces along pedestrian movement channels, but also the activities along them, that make a street vibrant and attractive.
These, along with adequate spaces to rest and relax, encourage people to walk, as is observed in cities with successful public transportation systems. The paper discusses the problems and issues of pedestrians regarding last-mile connectivity in the context of Delhi, Chandigarh, Gurgaon, and Roorkee, four Indian cities representing varying urban contexts, that is, metropolitan, large, and small cities.

Keywords: pedestrianisation, roadside public spaces, last mile connectivity, sustainable urban mobility

Procedia PDF Downloads 253
562 Countering the Bullwhip Effect by Absorbing It Downstream in the Supply Chain

Authors: Geng Cui, Naoto Imura, Katsuhiro Nishinari, Takahiro Ezaki

Abstract:

The bullwhip effect, which refers to the amplification of demand variance as one moves up the supply chain, has been observed in various industries and extensively studied through analytic approaches. Existing methods to mitigate the bullwhip effect, such as decentralized demand information, vendor-managed inventory, and the Collaborative Planning, Forecasting, and Replenishment System, rely on the willingness and ability of supply chain participants to share their information. However, in practice, information sharing is often difficult to realize due to privacy concerns. The purpose of this study is to explore new ways to mitigate the bullwhip effect without the need for information sharing. This paper proposes a 'bullwhip absorption strategy' (BAS) to alleviate the bullwhip effect by absorbing it downstream in the supply chain. To achieve this, a two-stage supply chain system was employed, consisting of a single retailer and a single manufacturer. In each time period, the retailer receives an order generated according to an autoregressive process. Upon receiving the order, the retailer depletes the ordered amount, forecasts future demand based on past records, and places an order with the manufacturer using the order-up-to replenishment policy. The manufacturer follows a similar process. In essence, the mechanism of the model is similar to that of the beer game. The BAS is implemented at the retailer's level to counteract the bullwhip effect. This strategy requires the retailer to reduce the uncertainty in its orders, thereby absorbing the bullwhip effect downstream in the supply chain. The advantage of the BAS is that upstream participants can benefit from a reduced bullwhip effect. Although the retailer may incur additional costs, if the gain in the upstream segment can compensate for the retailer's loss, the entire supply chain will be better off. 
Two indicators, order variance and inventory variance, were used to quantify the bullwhip effect in relation to the strength of absorption. It was found that implementing the BAS at the retailer's level results in a reduction in both the retailer's and the manufacturer's order variances. However, when examining the impact on inventory variances, a trade-off relationship was observed. The manufacturer's inventory variance monotonically decreases with an increase in absorption strength, while the retailer's inventory variance does not always decrease as the absorption strength grows. This is especially true when the autoregression coefficient has a high value, causing the retailer's inventory variance to become a monotonically increasing function of the absorption strength. Finally, numerical simulations were conducted for verification, and the results were consistent with our theoretical analysis.
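The variance effect described above can be illustrated with a toy model: AR(1) demand arrives at the retailer, and the retailer's orders are exponentially smoothed before being passed upstream, a simple stand-in for the paper's absorption strategy (the actual study uses an order-up-to policy; the parameters here are arbitrary assumptions).

```python
import numpy as np

def simulate_orders(rho=0.7, alpha=0.3, n=20000, seed=0):
    """AR(1) demand at the retailer; orders are smoothed before being
    passed to the manufacturer.

    alpha in (0, 1]: smaller alpha = stronger absorption (alpha = 1
    recovers pass-through ordering). This illustrates only the variance
    effect, not the paper's full order-up-to model.
    """
    rng = np.random.default_rng(seed)
    demand = np.empty(n)
    demand[0] = 0.0
    for t in range(1, n):
        demand[t] = rho * demand[t - 1] + rng.normal()
    orders = np.empty(n)
    orders[0] = demand[0]
    for t in range(1, n):
        # Exponential smoothing absorbs demand variability at the retailer.
        orders[t] = alpha * demand[t] + (1 - alpha) * orders[t - 1]
    return demand.var(), orders.var()

d_var, o_var = simulate_orders()
# The absorbed order stream varies less than raw demand, shrinking the
# signal that would otherwise be amplified upstream.
```

The trade-off the abstract reports shows up in such models as well: damping the order stream stabilizes the upstream party while shifting inventory fluctuation onto the absorbing (downstream) party.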

Keywords: bullwhip effect, supply chain management, inventory management, demand forecasting, order-up-to policy

Procedia PDF Downloads 76
561 Determination of Activation Energy for Thermal Decomposition of Selected Soft Tissues Components

Authors: M. Ekiert, T. Uhl, A. Mlyniec

Abstract:

Tendons are biological soft tissue structures composed of collagen, proteoglycans, glycoproteins, water, and the cells of the extracellular matrix (ECM). Tendons, whose primary function is to transfer the force generated by the muscles to the bones to produce joint movement, are exposed to many micro- and macro-scale damages. In fact, tendon and ligament traumas are among the most numerous injuries of the human musculoskeletal system, causing recurring disorders, chronic pain, or even an inability to move for many people, particularly athletes and the physically active. The number of tendon reconstruction and transplantation procedures is increasing every year; studies on soft tissue storage conditions (which influence, e.g., tissue aging) therefore seem to be an extremely important issue. In this study, an atomic-scale investigation of the kinetics of decomposition of two selected tendon components is presented: collagen type I (which forms 60-85% of a tendon's dry mass) and the protein elastin (which, combined with the ECM, creates the elastic fibers of connective tissues). Molecular models of collagen and elastin were developed based on the crystal structure of the triple-helical collagen-like 1QSU peptide and the P15502 human elastin protein, respectively. Each model employed four linear collagen/elastin strands per unit cell, distributed in a 2x2 matrix arrangement and placed in a simulation box filled with water molecules. The decomposition phenomenon was simulated with the molecular dynamics (MD) method using the ReaxFF force field and periodic boundary conditions. A set of NVT-MD runs was performed over a 1000 K temperature range in order to obtain the temperature-dependent rates of production of the decomposition by-products. From the calculated reaction rates, the activation energies and pre-exponential factors required to formulate the Arrhenius equations describing the decomposition kinetics of the tested soft tissue components were determined.
Moreover, by adjusting the model developed for collagen, the system's scalability and the correct implementation of the periodic boundary conditions were evaluated. The obtained results provide a deeper insight into the decomposition of the selected tendon components. The developed methodology may also be easily transferred to other connective tissue elements and might therefore be used for further studies on soft tissue aging.
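The Arrhenius step described above, recovering the activation energy Ea and pre-exponential factor A from temperature-dependent rates, amounts to a linear fit of ln k against 1/T. A minimal sketch with synthetic data (the rate values and parameters are invented for the check, not the study's results):

```python
import numpy as np

R = 8.314  # gas constant, J/(mol*K)

def fit_arrhenius(temperatures, rates):
    """Fit ln k = ln A - Ea/(R*T) by linear regression on 1/T.

    Returns (Ea in J/mol, A in the units of `rates`).
    """
    inv_t = 1.0 / np.asarray(temperatures, dtype=float)
    slope, intercept = np.polyfit(inv_t, np.log(rates), 1)
    return -slope * R, np.exp(intercept)

# Synthetic check: rates generated with Ea = 120 kJ/mol, A = 1e13 (arbitrary)
T = np.array([900.0, 1000.0, 1100.0, 1200.0])
k = 1e13 * np.exp(-120e3 / (R * T))
ea, a = fit_arrhenius(T, k)
```

In the MD workflow, the `rates` would be the temperature-dependent by-product production rates extracted from the NVT runs.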

Keywords: decomposition, molecular dynamics, soft tissue, tendons

Procedia PDF Downloads 210
560 Passing-On Cultural Heritage Knowledge: Entrepreneurial Approaches for a Higher Educational Sustainability

Authors: Ioana Simina Frincu

Abstract:

As institutional initiatives often fail to provide good practices in heritage management, or to adapt to the changing environment in which they function and to the audiences they address, private actions represent viable strategies for sustainable knowledge acquisition. Information dissemination to future generations is one of the key aspects of preserving cultural heritage and is feasible even in the absence of original artifacts. Combined with the (re)discovery of the natural landscape, open-air exploratory approaches (archeoparks), as opposed to an enclosed, monodisciplinary, rigid framework (traditional museums), are more likely to 'speak the language' of a larger number of people belonging to a variety of categories, ages, and professions. Interactive sites are efficient ways of stimulating heritage awareness and increasing the number of visitors to the non-interactive, static cultural institutions that own original pieces of history, deliver specialized information, and make continuous efforts to preserve historical evidence (relics, manuscripts, etc.). It is high time entrepreneurs took over the role of promoting cultural heritage, be it under a more commercial yet more attractive form (business). Inclusive, participatory activities conceived by experts from different fields (history, anthropology, tourism, sociology, business management, integrative sustainability, etc.) have a better chance of ensuring long-term cultural benefits for both adults and children, especially when and where the educational discourse fails. These unique self-experience leisure activities, which offer everyone the opportunity to recreate history themselves and to relive the ancestors' ways of living, surviving, and exploring, should be regarded not as pseudo-scientific approaches but as important pre-steps to museum experiences.
To support this theory, focus is laid on two different examples, one dynamic and outdoors (the Boario Terme Archeopark in Italy) and one experimental, held indoors (the reconstruction of the Neolithic sanctuary of Parta, Romania, as part of a transdisciplinary academic course), and on their impact on young generations. The conclusion of this study is that the increasingly low engagement of youth (students) in discovering and understanding history, archaeology, and heritage can be revived by entrepreneurial projects.

Keywords: archeopark, educational tourism, open air museum, Parta sanctuary, prehistory

Procedia PDF Downloads 140
559 Effect of Wheat Germ Agglutinin- and Lactoferrin-Grafted Catanionic Solid Lipid Nanoparticles on Targeting Delivery of Etoposide to Glioblastoma Multiforme

Authors: Yung-Chih Kuo, I-Hsin Wang

Abstract:

Catanionic solid lipid nanoparticles (CASLNs) with surface wheat germ agglutinin (WGA) and lactoferrin (Lf) were formulated for entrapping and releasing etoposide (ETP), crossing the blood–brain barrier (BBB), and inhibiting the growth of glioblastoma multiforme (GBM). Microemulsified ETP-CASLNs were modified with WGA and Lf for permeating a cultured monolayer of human brain-microvascular endothelial cells (HBMECs) regulated by human astrocytes and for treating malignant U87MG cells. Experimental evidence revealed that an increase in the concentration of catanionic surfactant from 5 μM to 7.5 μM reduced the particle size, while a further increase from 7.5 μM to 12.5 μM increased it, yielding a minimal diameter of WGA-Lf-ETP-CASLNs at 7.5 μM of catanionic surfactant. An increase in the weight percentage of BW from 25% to 75% enlarged the WGA-Lf-ETP-CASLNs. In addition, an increase in the concentration of catanionic surfactant from 5 to 15 μM increased the absolute value of the zeta potential of WGA-Lf-ETP-CASLNs; intriguingly, the increment of the charge as a function of the concentration of catanionic surfactant was approximately linear. WGA-Lf-ETP-CASLNs revealed an integral structure with a smooth particle contour, displayed a lighter exterior layer of catanionic surfactant, WGA, and Lf, and showed a rigid interior region of solid lipids. A variation in the concentration of catanionic surfactant between 5 μM and 15 μM yielded a maximal encapsulation efficiency of ETP at 7.5 μM of catanionic surfactant. An increase in the concentration of Lf/WGA decreased the grafting efficiency of Lf/WGA, and an increase in the weight percentage of ETP decreased its encapsulation efficiency. Moreover, the release rate of ETP from WGA-Lf-ETP-CASLNs decreased with increasing concentration of catanionic surfactant, and WGA-Lf-ETP-CASLNs at 12.5 μM of catanionic surfactant exhibited sustained release.
The order of viability of HBMECs was ETP-CASLNs ≅ Lf-ETP-CASLNs ≅ WGA-Lf-ETP-CASLNs > ETP. The variation in the transendothelial electrical resistance (TEER) and the permeability of propidium iodide (PI) was negligible when the concentration of Lf increased. Furthermore, an increase in the concentration of WGA from 0.2 to 0.6 mg/mL insignificantly altered the TEER and the permeability of PI. When the concentration of Lf increased from 2.5 to 7.5 μg/mL and the concentration of WGA increased from 2.5 to 5 μg/mL, the enhancement in the permeability of ETP was minor. However, 10 μg/mL of Lf promoted the permeability of ETP using Lf-ETP-CASLNs, and 5 and 10 μg/mL of WGA considerably improved the permeability of ETP using WGA-Lf-ETP-CASLNs. The order of efficacy in inhibiting U87MG cells was WGA-Lf-ETP-CASLNs > Lf-ETP-CASLNs > ETP-CASLNs > ETP. As a result, WGA-Lf-ETP-CASLNs reduced the TEER, enhanced the permeability of PI, induced only minor cytotoxicity to HBMECs, increased the permeability of ETP across the BBB, and improved the antiproliferative efficacy against U87MG cells. The grafting of WGA and Lf is crucial to controlling the medicinal properties of ETP-CASLNs, and WGA-Lf-ETP-CASLNs can be promising colloidal carriers in GBM management.
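The approximately linear zeta-potential trend noted in the abstract can be checked with an ordinary least-squares fit. The sketch below uses made-up placeholder values, not the measured data from the study; only the 5–15 μM concentration range is taken from the text.

```python
# Illustrative least-squares check of a roughly linear zeta-potential trend.
# The zeta values below are assumed placeholders, not measurements.
import numpy as np

conc = np.array([5.0, 7.5, 10.0, 12.5, 15.0])         # uM catanionic surfactant
zeta = np.array([-18.0, -22.0, -26.5, -30.0, -34.5])  # mV (hypothetical)

slope, intercept = np.polyfit(conc, zeta, 1)          # degree-1 polynomial fit
predicted = slope * conc + intercept
# Coefficient of determination: how well a straight line explains the data
r2 = 1 - np.sum((zeta - predicted) ** 2) / np.sum((zeta - zeta.mean()) ** 2)
```

An r² close to 1 would support the "approximately linear" description; a clearly lower value would suggest curvature in the charge–concentration relationship.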

Keywords: catanionic solid lipid nanoparticle, etoposide, glioblastoma multiforme, lactoferrin, wheat germ agglutinin

Procedia PDF Downloads 237
558 Suitable Operating Conditions of Hot Water Generators Combined with Central Air Package Units: A Case Study of Tipco Building Group

Authors: Chalermporn Jindapeng

Abstract:

The main objective of this study of the suitable operating conditions of hot water generators combined with central air package units, a case study of the Tipco Building Group, was to analyze the suitable operating conditions and the energy-related costs of each operating condition of hot water generators combined with central air package units of the water-cooled type. Thermal energy from the high-pressure, high-temperature vapor refrigerant was exchanged, via plate heat exchangers, with the water of a swimming pool requiring suitable temperature control for its users, before the refrigerant entered the condenser, whose function is to change the refrigerant from high-pressure, high-temperature vapor to high-pressure, high-temperature liquid. Used in place of heat pumps, this arrangement can reduce the electrical energy consumed to produce hot water and reduce the electrical energy cost of the air package units, while also increasing their efficiency. The suitable operating conditions were analyzed through a study of the elements involved, with actual measurements taken from the system installed at the Tipco Building Group, where hot water generators were combined with water-cooled air package units with a cooling capacity of 75 tonnes. Plate heat exchangers with a heat exchange area of 1.5 m² were used to transfer thermal energy from the refrigerant to the water, raising the temperature of a swimming pool with a capacity of 240 m³. From the experimental results, with the pool temperature measured continuously every 15 minutes, the swimming pool water temperature increased by 0.78 °C, 0.75 °C, 0.74 °C, and 0.71 °C.
These increases corresponded to hot water flow rates through the heat exchangers of 14, 16, 18, and 20 litres per minute, respectively, with the swimming pool water temperature otherwise constant. As the hot water flow rate increased, the hot water temperature decreased, and the coefficient of performance of the air package units increased from 5.9 to 6.3, 6.7, 6.9, and 7.6 at flow rates of 14, 16, 18, and 20 litres per minute, respectively. As for the cooling systems, there were no changes and they functioned normally, since they were able to continuously transfer the incoming heat to the swimming pool water; this kept the pressure in the cooling system constant and allowed its cooling functions to work normally.
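The heat recovered through a plate heat exchanger of this kind follows from the hot-water flow rate and the temperature change across the exchanger, Q = ṁ·c·ΔT. The sketch below is a minimal illustration of that relation; the 10 °C temperature difference is an assumed example value, not a figure reported in the study.

```python
# Hypothetical sketch: heat transfer rate recovered from the refrigerant
# loop by the plate heat exchanger, from hot-water flow rate and
# temperature rise. Example values are illustrative, not measured data.

CP_WATER = 4.186  # kJ/(kg*K), specific heat of water

def recovered_heat_kw(flow_l_per_min, delta_t_c):
    """Q = m_dot * cp * dT, returned in kW."""
    m_dot = flow_l_per_min / 60.0  # L/min -> kg/s (1 L of water ~ 1 kg)
    return m_dot * CP_WATER * delta_t_c

# Example: 20 L/min of hot water with an assumed 10 C rise across the exchanger
q = recovered_heat_kw(20, 10)  # ~13.95 kW
```

At the flow rates tested in the study (14–20 L/min), this relation explains why a higher flow rate yields a lower hot-water outlet temperature for the same recovered heat.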

Keywords: central air package units, heat exchange, hot water generators, swimming pool

Procedia PDF Downloads 258
557 Enhancement of Fracture Toughness for Low-Temperature Applications in Mild Steel Weldments

Authors: Manjinder Singh, Jasvinder Singh

Abstract:

The failures of the Titanic and the Liberty ships, the Sydney bridge accident, and practical experience generated interest in developing weldments that retain high toughness under sub-zero temperature conditions. The purpose is to protect the joint from undergoing the ductile-to-brittle transition (DBT) when the ambient temperature reaches sub-zero levels. Metallurgical improvements such as lowering the carbon content or adding deoxidizing elements like Mn and Si have been effective in preventing fracture (cracking) of weldments at low temperature. In the present research, an attempt has been made to investigate the reason behind the ductile-to-brittle transition of mild steel weldments subjected to sub-zero temperatures and a method for its mitigation. Nickel was added to the weldments using manual metal arc welding (MMAW) to prevent the DBT, although Charpy impact values still decreased progressively as the temperature was lowered. The variation in toughness with respect to the nickel content added to the weld pool was analyzed quantitatively to evaluate the rise in toughness with increasing nickel amount. The impact performance of the welded specimens was evaluated by Charpy V-notch impact tests at various temperatures (20 °C, 0 °C, -20 °C, -40 °C, -60 °C). A notch was made in the weldments, as notch-sensitive failure is particularly likely to occur at zones of high stress concentration caused by a notch. The effect of nickel on the weldments at various temperatures was then studied by mechanical and metallurgical tests. It was noted that a large gain in impact toughness could be achieved by adding nickel. The highest yield strength (462 J) in combination with good impact toughness (over 220 J at -60 °C) was achieved with an alloying content of 16 wt.% nickel. Based on the metallurgical behavior, it was concluded that the weld metals solidify as austenite with increasing nickel. The microstructure was characterized using optical microscopy and high-resolution scanning electron microscopy (SEM).
At inter-dendritic regions, mainly martensite was found. In the dendrite core regions of the low-carbon weld metals, a mixture of upper bainite, lower bainite, and a novel constituent, coalesced bainite, formed. Coalesced bainite is characterized by large bainitic ferrite grains with cementite precipitates and is believed to form when the bainite and martensite start temperatures are close to each other. The mechanical properties could be rationalized in terms of the microstructural constituents as a function of nickel content.
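Charpy energy versus temperature data of the kind collected here is commonly described by a hyperbolic-tangent transition curve, E(T) = A + B·tanh((T − T₀)/C), with lower and upper shelves and a transition midpoint T₀. The sketch below illustrates that standard model with assumed parameter values; it is not a fit to the authors' measurements.

```python
# Illustrative tanh model of the ductile-to-brittle transition (DBT):
#   E(T) = A + B * tanh((T - T0) / C)
# All parameter values below are assumptions for demonstration only.
import math

def charpy_energy(t_c, lower=20.0, upper=220.0, t0=-40.0, width=25.0):
    """Impact energy (J) vs temperature (C); lower/upper shelf energies,
    transition midpoint t0, and transition width are assumed values."""
    a = (upper + lower) / 2.0  # mean of the two shelves
    b = (upper - lower) / 2.0  # half the shelf-to-shelf span
    return a + b * math.tanh((t_c - t0) / width)

# At the transition midpoint the energy equals the mean of the shelves
mid = charpy_energy(-40.0)  # -> 120.0 J
```

Fitting such a curve to Charpy data at several temperatures gives a single transition-temperature parameter, which is a convenient way to quantify how nickel additions shift the DBT downward.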

Keywords: MMAW, toughness, DBT, notch, SEM, coalesced bainite

Procedia PDF Downloads 526
556 Consumer Reactions to Hospitality Social Robots Across Cultures

Authors: Lisa C. Wan

Abstract:

To address customers’ safety concerns, more and more hospitality companies are using service robots to provide contactless services. For many companies, the switch from human employees to service robots to lower the contagion risk during and after the pandemic may be permanent. The market size for hospitality service robots is estimated to reach US$3,083 million by 2030, registering a CAGR of 25.5% from 2021 to 2030. While service robots may effectively reduce interpersonal contact and health risk, they also eliminate the social interactions desired by customers. A recent survey revealed that more than 60% of Americans felt lonely during the pandemic. People who are traveling can also feel isolated when they are at a hotel far away from home. It is therefore important for hospitality companies to understand whether and how social robots can remedy deprived social connection, not only during a pandemic but also on trips away from home in the post-pandemic future. This study complements the extant hospitality literature on service robots by examining how service robots can forge social connections with customers. The service robots we are concerned with are those that can interact and communicate with humans; we broadly refer to them as social robots. We define a social robot as one that is equipped with interaction capabilities: it can either be one that directly interacts with the consumer or one through which the consumer can interact with other humans. Drawing on theories of mind perception, we propose that service robots can foster social connectedness and increase the perceived social competence of the robot, but that these effects will vary across cultures. By applying theories of mind perception and cultural dimensions to the hospitality setting, this study shows that service robots equipped with a social connection function receive a more favorable evaluation from consumers and enhance their intention to visit a hotel.
The more favorable reaction to social robots is stronger for collectivists (i.e., Asians) than for individualists (i.e., Westerners). To our knowledge, this is among the first studies to investigate the impact of culture on consumer reactions to social robots in the hospitality and tourism context. Moreover, this research extends the literature by examining whether people imbue non-human entities (i.e., telepresence social robots) with social competence. Because social robots that foster social connection with humans are still rare in hospitality and tourism, this is an underexplored research area. Our study is the first to propose that, just as with human counterparts possessing relevant social skills, social robots’ interaction capabilities (e.g., those of telepresence robots) are used to infer social competence. Further studies will examine consumer reactions to humanoid (vs. non-humanoid) robots in hospitality settings to generalize our research findings.

Keywords: service robots, COVID-19, social connection, cultures

Procedia PDF Downloads 103
555 Nonlinear Interaction of Free Surface Sloshing of Gaussian Hump with Its Container

Authors: Mohammad R. Jalali

Abstract:

The movement of liquid with a free surface in a container is known as slosh. For instance, slosh occurs when water in a closed tank is set in motion by a free surface displacement, or when liquefied natural gas in a container is vibrated by an external driving force, such as an earthquake or motion induced by transport. Slosh also arises from resonant seiching of a natural basin. During sloshing, different types of motion are produced by energy exchange between the liquid and its container. In the present study, a numerical model is developed to simulate the nonlinear even-harmonic oscillations of free surface sloshing arising from an initial disturbance to the free surface of a liquid in a closed square basin. The response of the liquid free surface is affected by the amplitude and motion frequencies of its container; sloshing therefore involves complex fluid-structure interactions. Here, the nonlinear interaction of free surface sloshing of an initial Gaussian hump with its uneven container is predicted numerically. For this purpose, the Green-Naghdi (GN) equations are applied as the governing equations of the fluid field to capture nonlinear second-order and higher-order wave interactions. These equations reduce the problem from three dimensions to two, yielding equations that can be solved efficiently. The GN approach assumes a particular flow kinematic structure in the vertical direction for shallow- and deep-water problems: the fluid velocity profile is a finite sum of coefficients, depending on space and time, multiplied by weighting functions. It should be noted that in GN theory the flow is rotational. In this study, GN numerical simulations of the initial Gaussian hump are compared with Fourier-series semi-analytical solutions of the linearized shallow water equations. The comparison reveals satisfactory agreement between the numerical simulation and the analytical solution for the overall free surface sloshing patterns.
The resonant free surface motions driven by an initial Gaussian disturbance are obtained by applying the Fast Fourier Transform (FFT) to the components of the free surface elevation time history. Numerically predicted velocity vectors and magnitude contours for the free surface patterns indicate that the interaction of the Gaussian hump with its container has a localized effect. The results of this sloshing study are applicable to the design of stable liquefied oil containers in tankers and on offshore platforms.
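The FFT post-processing step described above can be sketched as follows. The elevation record here is synthetic, built from two assumed sloshing modes; it stands in for the Green-Naghdi solver output, which is not reproduced in the abstract.

```python
# Minimal sketch: FFT of a free-surface elevation time history to identify
# resonant frequencies. The signal is synthetic (two assumed modes), not
# output from the Green-Naghdi simulation.
import numpy as np

dt = 0.01                      # s, sampling interval
t = np.arange(0.0, 20.0, dt)   # 20 s record -> 0.05 Hz frequency resolution

# Synthetic elevation: assumed modes at 0.5 Hz and 1.5 Hz
eta = 0.05 * np.sin(2 * np.pi * 0.5 * t) + 0.02 * np.sin(2 * np.pi * 1.5 * t)

spectrum = np.abs(np.fft.rfft(eta))          # one-sided amplitude spectrum
freqs = np.fft.rfftfreq(len(eta), d=dt)      # matching frequency axis
dominant = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
```

Peaks in the spectrum locate the resonant sloshing frequencies; the record length sets the frequency resolution (1/20 s = 0.05 Hz here), so longer time histories resolve closely spaced modes better.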

Keywords: fluid-structure interactions, free surface sloshing, Gaussian hump, Green-Naghdi equations, numerical predictions

Procedia PDF Downloads 400
554 The Flooding Management Strategy in Urban Areas: Reusing Public Facilities Land as Flood-Detention Space for Multi-Purpose

Authors: Hsiao-Ting Huang, Chang Hsueh-Sheng

Abstract:

Taiwan is an island country deeply affected by the monsoon. Under climate change, extreme rainstorms brought by typhoons have become more and more frequent since 2000. When an extreme rainstorm arrives, it causes serious damage in Taiwan, especially in urban areas, which suffer from flooding; the government treats this as an urgent issue. In the past, land use in urban planning did not take flood detention into consideration. With the development of cities, impermeable surfaces have increased, and most people live in urban areas. This means urban areas are highly vulnerable, yet they cannot cope with the surface runoff and flooding. However, building detention ponds by conventional hydraulic engineering is not feasible in urban areas: land expropriation makes constructing a detention pond there prohibitively expensive, and the government cannot afford it. Therefore, the flooding management strategy in urban areas should use an existing resource, public facilities land. Flood-detention performance can be achieved by providing public facilities land with a detention function. As multi-use public facilities land, it also demonstrates the combination of land use planning and water agency concerns. To this end, this research generalizes the factors governing the multi-use of public facilities land as flood-detention space through a literature review. The factors can be divided into two categories: environmental factors and conditions of the public facilities. The environmental factors comprise the terrain elevation, the inundation potential, and the distance from the drainage system. On the other hand, there are six factors for the conditions of public facilities, including area, building rate, the maximum available ratio, etc. Each factor is weighted according to its characteristics for the land use suitability analysis.
This research selects the rules of combination by logical combination of the factors. After this process, the land can be classified into three suitability levels. The three suitability levels are then input to a physiographic inundation model to simulate and evaluate the flood detention of each. This study addresses an urgent issue in urban areas and establishes a model of multi-use public facilities land as flood-detention space through a systematic research process. The result of this study can indicate which combination of suitability levels is more efficacious. Moreover, the model does not only stand on the side of urban planners but also incorporates the point of view of the water agency. These findings may serve as a basis for land use indicators and as decision-making references for the government agencies concerned.
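The weighted suitability scoring described above can be sketched as a simple weighted overlay. The factor names follow the two categories in the abstract, but the weights, normalization, and level thresholds below are illustrative assumptions, not the study's calibrated values.

```python
# Hypothetical sketch of weighted land-use suitability scoring for
# flood-detention reuse of public facilities land. Weights and thresholds
# are assumed for illustration; factor scores are normalized to 0-1.

WEIGHTS = {                      # assumed weights, summing to 1.0
    "terrain_elevation": 0.20,   # environmental factors
    "inundation_potential": 0.25,
    "distance_to_drainage": 0.15,
    "area": 0.15,                # conditions of the public facility
    "building_rate": 0.15,
    "available_ratio": 0.10,
}

def suitability(scores):
    """Weighted sum of normalized factor scores (0-1)."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

def suitability_level(total):
    """Classify into three suitability levels (thresholds are assumed)."""
    if total >= 0.7:
        return "high"
    if total >= 0.4:
        return "medium"
    return "low"

site = {k: 0.8 for k in WEIGHTS}              # a uniformly favorable parcel
level = suitability_level(suitability(site))  # -> "high"
```

Each parcel's level would then feed the physiographic inundation model, which simulates how much flood detention that combination of levels actually delivers.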

Keywords: flooding management strategy, land use suitability analysis, multi-use for public facilities land, physiographic inundation model

Procedia PDF Downloads 359
553 A Preliminary Study on the Effects of Lung Impact on Ballistic Thoracic Trauma

Authors: Amy Pullen, Samantha Rodrigues, David Kieser, Brian Shaw

Abstract:

The aim of the study was to determine whether a projectile interacting with the lungs increases the severity of injury in comparison to a projectile interacting with the ribs or intercostal muscle. This comparative study employed a 10% gelatine-based model with either porcine ribs or balloons embedded to represent a lung. Four sample groups containing five samples each were evaluated: control (plain gel), intercostal impact, rib impact, and lung impact. Two ammunition natures were evaluated at a range of 10 m: 5.56x45mm and 7.62x51mm. Aspects of projectile behavior were quantified, including exiting projectile weight, location of yawing, projectile fragmentation and distribution, location and area of the temporary cavity, permanent cavity formation, and overall energy deposition. Major findings included that, for the 5.56mm ammunition, the lung showed a higher percentage of the projectile weight exiting the block than the intercostal and rib groups, but a percentage similar to the control. For the 7.62mm ammunition, however, the lung showed a higher percentage of the projectile weight exiting the block than the control, intercostal, and rib groups. The total weight of projectile fragments as a function of penetration depth revealed large fluctuations and significant intra-group variation for both ammunition natures. Despite the lack of a clear trend, both plots show that the lung leads to greater projectile fragments exiting the model. The lung was shown to have a later center of the temporary cavity than the control, intercostal, and ribs for both ammunition types. It was also shown to have a temporary cavity volume similar to the control, intercostal, and ribs for the 5.56mm ammunition, and a temporary cavity similar to the intercostal for the 7.62mm ammunition. The lung was shown to leave a projectile tract similar to the control, intercostal, and ribs for both ammunition types.
It was also shown to have larger shear planes than the control and the intercostal, but similar to the ribs, for the 5.56mm ammunition, whereas it had smaller shear planes than the control but shear planes similar to the intercostal and ribs for the 7.62mm ammunition. The lung was shown to have less energy deposited than the control, intercostal, and ribs for both ammunition types. This comparative study provides insights into the influence of the lungs on thoracic gunshot trauma. It indicates that the lung limits projectile deformation, causes a later onset of yawing, and consequently limits the energy deposited along the wound tract, creating a deeper and smaller cavity. This suggests that lung impact creates an altered pattern of local energy deposition within the target, which will affect the severity of trauma.

Keywords: ballistics, lung, trauma, wounding

Procedia PDF Downloads 172
552 Frequency Interpretation of a Wave Function, and a Vertical Waveform Treated as A 'Quantum Leap'

Authors: Anthony Coogan

Abstract:

Born’s probability interpretation of wave functions would have led to nearly identical results had he chosen a frequency interpretation instead. Logically, Born may have assumed that only one electron was under consideration, making it nonsensical to propose a frequency wave. The author's suggestion is that the actual experimental results were not of a single electron; rather, they were of groups of reflected x-ray photons. The vertical waveform used by Schrödinger in his particle-in-a-box theory makes sense if it was intended to represent a quantum leap. The author extended the single vertical panel to form a bar chart: separate panels would represent different energy levels. The proposed bar chart would be populated by reflected photons. Expanding on these basic ideas: part of Schrödinger’s particle-in-a-box theory may be valid despite negative criticism. The waveform used in the diagram is vertical, which may seem absurd because real waves decay at a measurable rate rather than instantaneously. However, there may be one notable exception. Supposedly, the Uncertainty Principle was derived from the theory; may a quantum leap not be represented as an instantaneous waveform? The great Schrödinger must have had some reason to suggest a vertical waveform if the prevalent belief was that such waveforms did not exist. Complex wave forms representing a particle are usually assumed to be continuous. The actual observations made were of x-ray photons, some of which had struck an electron, been reflected, and then moved toward a detector. From Born’s perspective, doing similar work in the years in question, 1926-27, he would also have considered a single electron, leading him to choose a probability distribution. Probability distributions appear very similar to frequency distributions, but the former are considered to represent the likelihood of future events.
Born’s interpretation of the results of quantum experiments led (or perhaps misled) many researchers into claiming that humans can influence events just by looking at them, e.g. collapsing complex wave functions by 'looking at the electron to see which slit it emerged from', while in reality light reflected from the electron moves in the observer’s direction after the electron has moved away. Astronomers may say that they 'look out into the universe', but this logic is opposed to the views of Newton, Hooke, and many observers such as Romer, in that light carries information from a source or reflector to an observer, rather than the reverse. Conclusion: due to the controversial nature of these ideas, especially their implications for the nature of complex numbers used in applications in science and engineering, some time may pass before any consensus is reached.

Keywords: complex wave functions not necessary, frequency distributions instead of wave functions, information carried by light, sketch graph of uncertainty principle

Procedia PDF Downloads 200