962 Attention and Creative Problem-Solving: Cognitive Differences between Adults with and without Attention Deficit Hyperactivity Disorder
Authors: Lindsey Carruthers, Alexandra Willis, Rory MacLean
Abstract:
Introduction: It has been proposed that distractibility, a key diagnostic criterion of Attention Deficit Hyperactivity Disorder (ADHD), may be associated with higher creativity levels in some individuals. Anecdotal and empirical evidence has suggested that ADHD may therefore be beneficial to creative problem-solving and the generation of new ideas and products. Previous studies have used only one or two measures of attention, which is insufficient given that attention is a complex cognitive process. The current study aimed to determine in which ways performance on creative problem-solving tasks and a range of attention tests may be related, and whether performance differs between adults with and without ADHD. Methods: 150 adults, 47 males and 103 females (mean age=28.81 years, S.D.=12.05 years), were tested at Edinburgh Napier University. Of this set, 50 participants had ADHD, and 100 did not, forming the control group. Each participant completed seven attention tasks, assessing focussed, sustained, selective, and divided attention. Creative problem-solving was measured using divergent thinking tasks, which require multiple original solutions for one given problem. Two types of divergent thinking task were used: verbal (requires written responses) and figural (requires drawn responses). Each task is scored for idea originality, with higher scores indicating more creative responses. Correlational analyses were used to explore relationships between attention and creative problem-solving, and t-tests were used to study the between-group differences. Results: The control group scored higher on originality for figural divergent thinking (t(148)= 3.187, p< .01), whereas the ADHD group had more original ideas for the verbal divergent thinking task (t(148)= -2.490, p < .05). Within the control group, figural divergent thinking scores were significantly related to both selective (r= -.295 to -.285, p < .01) and divided attention (r= .206 to .290, p < .05).
In contrast, within the ADHD group, both selective (r= -.390 to -.356, p < .05) and divided (r= .328 to .347, p < .05) attention were related to verbal divergent thinking. Conclusions: Selective and divided attention are both related to divergent thinking; however, the performance patterns differ between the groups, which may point to cognitive variance in how these problems are processed and managed. The creative differences previously found between those with and without ADHD may depend on task type, a distinction which, to the authors' knowledge, has not been made previously. It appears that ADHD does not specifically lead to higher creativity, but may help explain creative differences when compared to those without the disorder.
Keywords: ADHD, attention, creativity, problem-solving
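The group comparison and correlational analyses described above can be sketched as follows; the score arrays are invented placeholders, not the study's data, and availability of `scipy` is assumed:

```python
# Hypothetical sketch of the analyses in the abstract: an independent-samples
# t-test between groups, and a Pearson correlation with an attention measure.
# All numbers below are invented for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=12.0, scale=3.0, size=100)  # control originality scores (n=100)
adhd = rng.normal(loc=10.5, scale=3.0, size=50)      # ADHD originality scores (n=50)

# Independent-samples t-test; df = 100 + 50 - 2 = 148, as in the reported t(148)
t, p = stats.ttest_ind(control, adhd)
print(f"t(148) = {t:.3f}, p = {p:.4f}")

# Pearson correlation between a (synthetic) attention measure and originality
attention = rng.normal(size=100)
r, p_r = stats.pearsonr(attention, control)
print(f"r = {r:.3f}, p = {p_r:.3f}")
```

A positive t here means the first group scored higher on average, matching the sign convention of the reported statistics.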
Procedia PDF Downloads 456
961 Evaluating the Feasibility of Chemical Dermal Exposure Assessment Model
Authors: P. S. Hsi, Y. F. Wang, Y. F. Ho, P. C. Hung
Abstract:
The aim of the present study was to explore the dermal exposure assessment models for chemicals that have been developed abroad and to evaluate their feasibility for the manufacturing industry in Taiwan. We analyzed six semi-quantitative risk management tools: the UK's Control of Substances Hazardous to Health (COSHH), Europe's Risk Assessment of Occupational Dermal Exposure (RISKOFDERM), the Netherlands' Dose-Related Effect Assessment Model (DREAM), the Netherlands' Stoffenmanager (STOFFEN), Nicaragua's Dermal Exposure Ranking Method (DERM), and the USA/Canada Pesticide Handlers Exposure Database (PHED). Five types of manufacturing industry were selected for evaluation. Monte Carlo simulation was used to analyze the sensitivity of each factor, and the correlation between the assessment results of each semi-quantitative model and the exposure factors used in the model was analyzed to identify the important evaluation indicators of each dermal exposure assessment model. To assess the effectiveness of the semi-quantitative assessment models, this study also generated quantitative dermal exposure estimates using a prediction model and verified the correlation via Pearson's test. Results show that COSHH was unable to determine the strength of its decision factors because the results for all industries fell into the same risk level. In the DERM model, the transmission process, the exposed area, and the clothing protection factor are all positively correlated. In the STOFFEN model, the fugitive emission, the operation, the near-field and far-field concentrations, and the operating time and frequency are positively correlated. In the DREAM model, there is a positive correlation among skin exposure, relative work time, and working environment. In the RISKOFDERM model, the actual exposure situation and exposure time are positively correlated.
We also found high correlations for the DERM and RISKOFDERM models, with correlation coefficients of 0.92 and 0.93 (p<0.05), respectively. The STOFFEN and DREAM models correlated poorly, with coefficients of 0.24 and 0.29 (p>0.05), respectively. According to the results, both the DERM and RISKOFDERM models are suitable for use in the selected manufacturing industries. However, considering the small sample size evaluated in this study, more categories of industries should be evaluated in the future to reduce uncertainty and enhance applicability.
Keywords: dermal exposure, risk management, quantitative estimation, feasibility evaluation
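As a minimal illustration of the Pearson's test used to compare semi-quantitative scores against quantitative predictions, with invented values standing in for the real assessments:

```python
# Illustrative sketch only: checking how well a semi-quantitative dermal
# exposure score tracks a quantitative prediction via Pearson's test,
# mirroring the r = 0.92/0.93 comparison in the abstract. Values invented.
from scipy.stats import pearsonr

semi_quant_score = [2, 3, 3, 4, 5]            # e.g. DERM-style risk scores
quant_estimate = [0.8, 1.4, 1.5, 2.2, 3.1]    # hypothetical predicted exposures

r, p = pearsonr(semi_quant_score, quant_estimate)
print(f"r = {r:.2f}, p = {p:.3f}")  # a high r suggests the ranking tool is informative
```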
Procedia PDF Downloads 169
960 Classification of Emotions in Emergency Call Center Conversations
Authors: Magdalena Igras, Joanna Grzybowska, Mariusz Ziółko
Abstract:
A study of emotions expressed in emergency phone calls is presented, covering both statistical analysis of emotion configurations and an attempt to classify emotions automatically. An emergency call is a situation usually accompanied by intense, authentic emotions. They influence (and may inhibit) the communication between caller and responder. In order to support responders in their responsible and psychologically exhausting work, we studied when and in which combinations emotions appeared in calls. A corpus of 45 hours of conversations (about 3300 calls) from an emergency call center was collected. Each recording was manually tagged with labels of emotion valence (positive, negative or neutral), type (sadness, tiredness, anxiety, surprise, stress, anger, fury, calm, relief, compassion, satisfaction, amusement, joy) and arousal (weak, typical, varying, high) on the basis of the perceptual judgment of two annotators. We concluded that basic emotions tend to appear in specific configurations depending on the overall situational context and the attitude of the speaker. After performing statistical analysis, we distinguished four main types of emotional behavior among callers: worry/helplessness (sadness, tiredness, compassion), alarm (anxiety, intense stress), mistake or neutral request for information (calm, surprise, sometimes with amusement) and pretension/insisting (anger, fury). The frequencies of these profiles were 51%, 21%, 18% and 8% of recordings, respectively. A model presenting the complex emotional profiles on a two-dimensional (tension-insecurity) plane was introduced. In the stage of acoustic analysis, a set of prosodic parameters, as well as Mel-Frequency Cepstral Coefficients (MFCC), were used. Using these parameters, complex emotional states were modeled with machine learning techniques including Gaussian mixture models, decision trees and discriminant analysis.
Results of classification with several methods will be presented and compared with state-of-the-art results obtained for the classification of basic emotions. Future work will include optimizing the algorithm to run in real time in order to track changes of emotions during a conversation.
Keywords: acoustic analysis, complex emotions, emotion recognition, machine learning
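A toy sketch of the Gaussian modelling step, using synthetic two-dimensional features standing in for the MFCC/prosodic parameters. This is a single-Gaussian-per-class simplification of the mixture models mentioned above, not the authors' pipeline:

```python
# Sketch (not the study's system): classify two emotion classes from
# acoustic feature vectors with one Gaussian per class -- a one-component
# version of Gaussian mixture modelling. Features are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(1)
calm = rng.normal([0.0, 0.0], 0.5, size=(50, 2))    # e.g. low energy/pitch variance
anger = rng.normal([2.0, 1.5], 0.5, size=(50, 2))   # e.g. high energy/pitch variance

def fit_gaussian(x):
    """Maximum-likelihood mean and diagonal variance for one class."""
    return x.mean(axis=0), x.var(axis=0) + 1e-6

def log_likelihood(x, mu, var):
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var, axis=-1)

params = {label: fit_gaussian(data)
          for label, data in {"calm": calm, "anger": anger}.items()}

def classify(x):
    """Pick the class whose Gaussian gives the feature vector the highest likelihood."""
    return max(params, key=lambda lbl: log_likelihood(x, *params[lbl]))

print(classify(np.array([1.9, 1.4])))  # a point near the anger cluster -> "anger"
```

A full system would use multi-component mixtures per class and many more feature dimensions; the decision rule is the same.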
Procedia PDF Downloads 398
959 Smart Construction Sites in KSA: Challenges and Prospects
Authors: Ahmad Mohammad Sharqi, Mohamed Hechmi El Ouni, Saleh Alsulamy
Abstract:
Due to the worldwide revolution in emerging technologies, the need to exploit and employ innovative technologies for new functions and purposes across different domains has become pressing. Saudi Arabia is considered one of the strongest economies in the world, and the construction sector contributes substantially to its economy. The construction sector in KSA should therefore keep pace with the rapid digital revolution and transformation and implement smart devices on sites. A Smart Construction Site (SCS) includes smart devices, artificial intelligence, the internet of things, augmented reality, building information modeling, geographical information systems, and cloud information systems. This paper aims to study the level of implementation of SCS in KSA, analyze the obstacles and challenges of adopting SCS, and identify critical success factors for its implementation. A survey of close-ended questions (scale and multiple-choice) was conducted among professionals in the construction sector of Saudi Arabia. A total of twenty-nine questions was prepared for respondents. Twenty-four were scale questions, categorized into several themes: quality, scheduling, cost, occupational safety and health, technologies and applications, and general perception; a 5-point Likert scale (very low to very high) was adopted for these. In addition, five close-ended multiple-choice questions were prepared; these were derived from a previous study conducted in the United Kingdom (UK) and the Dominican Republic (DR) and were rearranged and organized to fit the structured survey, allowing the Kingdom of Saudi Arabia to be compared with the UK and the DR.
A total of one hundred respondents participated in this survey, from all regions of the Kingdom of Saudi Arabia: southern, central, western, eastern, and northern. The drivers, obstacles, and success factors for implementing smart devices and technologies in KSA’s construction sector were investigated and analyzed. It was concluded that KSA is on the right path toward adopting smart construction sites, with results comparable to, and in some factors even better than, those of the UK.
Keywords: artificial intelligence, construction projects management, internet of things, smart construction sites, smart devices
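The 5-point Likert responses described above might be summarised per theme as in this hypothetical sketch. The ratings are invented, and the mean/maximum-scale "relative importance" summary is one common convention, not necessarily the one used in the study:

```python
# Hypothetical sketch of summarising 5-point Likert survey responses per
# theme. All ratings below are invented placeholders.
from statistics import mean

responses = {  # theme -> list of 1-5 ratings from respondents
    "quality":    [4, 5, 3, 4, 4],
    "scheduling": [3, 3, 4, 2, 3],
    "cost":       [2, 3, 2, 3, 2],
}

# Relative Importance Index-style summary: mean rating / maximum scale point
for theme, scores in responses.items():
    rii = mean(scores) / 5
    print(f"{theme}: mean={mean(scores):.2f}, RII={rii:.2f}")
```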
Procedia PDF Downloads 155
958 Modeling the Downstream Impacts of River Regulation on the Grand Lake Meadows Complex using Delft3D FM Suite
Authors: Jaime Leavitt, Katy Haralampides
Abstract:
Numerical modelling has been used to investigate the long-term impact of a large dam on downstream wetland areas, specifically in terms of changing sediment dynamics in the system. The Mactaquac Generating Station (MQGS) is a 672MW run-of-the-river hydroelectric facility, commissioned in 1968 on the mainstem of the Wolastoq|Saint John River in New Brunswick, Canada. New Brunswick Power owns and operates the dam and has been working closely with the Canadian Rivers Institute at UNB Fredericton on a multi-year, multi-disciplinary project investigating the impact the dam has on its surrounding environment. With focus on the downstream river, this research discusses the initialization, set-up, calibration, and preliminary results of a 2-D hydrodynamic model using the Delft3d Flexible Mesh Suite (successor of the Delft3d 4 Suite). The flexible mesh allows the model grid to be structured in the main channel and unstructured in the floodplains and other downstream regions with complex geometry. The combination of grid types improves computational time and output. As the movement of water governs the movement of sediment, the calibrated and validated hydrodynamic model was applied to sediment transport simulations, particularly of the fine suspended sediments. Several provincially significant Protected Natural Areas and federally significant National Wildlife Areas are located 60km downstream of the MQGS. These broad, low-lying floodplains and wetlands are known as the Grand Lake Meadows Complex (GLM Complex). There is added pressure to investigate the impacts of river regulation on these protected regions that rely heavily on natural river processes like sediment transport and flooding. It is hypothesized that the fine suspended sediment would naturally travel to the floodplains for nutrient deposition and replenishment, particularly during the freshet and large storms. 
The purpose of this research is to investigate the impacts of river regulation on downstream environments and to use the model as a tool for informed decision-making to protect and maintain biologically productive wetlands and floodplains.
Keywords: hydrodynamic modelling, national wildlife area, protected natural area, sediment transport
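As a hedged illustration of the suspended-sediment transport being modelled, here is a toy one-dimensional upwind advection of a sediment pulse moving downstream. All parameters are invented; the actual study uses the full 2-D Delft3D FM solver with coupled hydrodynamics:

```python
# Minimal 1-D advection sketch of a suspended-sediment pulse moving
# downstream -- a toy stand-in for the 2-D Delft3D FM transport runs
# described above (all parameter values invented for illustration).
import numpy as np

nx, dx, dt = 100, 100.0, 20.0    # grid cells, cell size (m), time step (s)
u = 0.5                          # depth-averaged velocity (m/s)
c = np.zeros(nx)
c[10:15] = 1.0                   # initial sediment pulse (kg/m^3)

courant = u * dt / dx            # must stay <= 1 for this explicit upwind scheme
for _ in range(500):
    c[1:] = c[1:] - courant * (c[1:] - c[:-1])  # first-order upwind update

# After 500 steps the pulse has advected u*t = 5000 m, i.e. about 50 cells
print(f"pulse peak is now at cell {int(np.argmax(c))}")
```

The first-order scheme smears the pulse numerically, which is one reason production models use higher-order transport schemes on unstructured meshes.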
Procedia PDF Downloads 6
957 The Response of Adaptive Mechanism of Fluorescent Proteins from Coral Species and Target Cell Properties on Signalling Capacity as Biosensor
Authors: Elif Tugce Aksun Tumerkan
Abstract:
Fluorescent proteins (FPs) have become very popular since green fluorescent protein (GFP) was discovered in the crystal jellyfish. Anthozoa species display a wide range of chromophores, and the initial crystal structure of a non-fluorescent chromoprotein obtained from a reef-building coral has been determined. Differently coloured pigments, frequently members of the GFP-like protein family, are also found in non-bioluminescent zooxanthellate and azooxanthellate Anthozoa. The development of FPs and their applications is an outstanding example of basic science leading to practical biotechnological and medical applications. Fluorescent proteins have several applications in science and are used as important indicators in molecular biology and cell-based research. With rising interest in cell biology, FPs have been used as biosensor indicators and probes in pharmacology and cell biology. Using fluorescent proteins in genetically encoded metabolite sensors has many advantages over chemical probes: they can be easily introduced into any cell or organism at any sub-cellular localization, and they can be engineered to fluoresce in different colours or with different characteristics. Different factors affect the signalling mechanism when FPs are used as biosensors. While a wide range of research has been done on the significance and applications of fluorescent proteins, the signalling response of FPs and target cells is less well understood. This study aimed to clarify the effects of the adaptive mechanisms of coral species (to pH, temperature, and symbiotic relationships) and of target cell properties on signalling capacity. Corals are a rich natural source of fluorescent proteins that change with environmental conditions such as light, heat stress and injury.
The adaptation mechanisms of coral species to these types of environmental variation are an important factor, because FP properties are affected by them. Since fluorescent proteins are obtained from nature, their ecological context, such as the symbiotic relationships observed very commonly in coral species, and their living conditions have an impact on FP efficiency. Target cell properties also affect signalling and visualization. The dynamic range of the detector used to read fluorescence and the level of background fluorescence are key parameters for the quality of the fluorescent signal. Among these factors, it can be concluded that the adaptive characteristics of coral species have the strongest effect on FP signalling capacity.
Keywords: biosensor, cell biology, environmental conditions, fluorescent protein, sea anemone
Procedia PDF Downloads 169
956 Spectrophotometric Detection of Histidine Using Enzyme Reaction and Examination of Reaction Conditions
Authors: Akimitsu Kugimiya, Kouhei Iwato, Toru Saito, Jiro Kohda, Yasuhisa Nakano, Yu Takano
Abstract:
The measurement of amino acid content is reported to be useful for the diagnosis of several types of diseases, including lung cancer, gastric cancer, colorectal cancer, breast cancer, prostate cancer, and diabetes. The conventional detection methods for amino acids are high-performance liquid chromatography (HPLC) and liquid chromatography-mass spectrometry (LC-MS), but they have several drawbacks: the equipment is cumbersome, and the techniques are costly in both time and money. In contrast, biosensors and biosensing methods provide more rapid and facile detection strategies that use simple equipment. The authors have reported a novel approach for the detection of each amino acid that involves the use of aminoacyl-tRNA synthetase (aaRS) as a molecular recognition element, because each aaRS is expected to bind selectively to its corresponding amino acid. The consecutive enzymatic reactions used in this study are as follows: aaRS binds its cognate amino acid and releases inorganic pyrophosphate; hydrogen peroxide (H₂O₂) is then produced by the enzyme reactions of inorganic pyrophosphatase and pyruvate oxidase. Trinder’s reagent was added to the reaction mixture, and the absorbance change at 556 nm was measured using a microplate reader. In this study, an amino acid-sensing method using histidyl-tRNA synthetase (HisRS; histidine-specific aaRS) as the molecular recognition element, in combination with the Trinder’s reagent spectrophotometric method, was developed. The quantitative performance and selectivity of the method were evaluated, and the optimal enzyme reaction and detection conditions were determined. The authors developed a simple and rapid method for detecting histidine with a combination of enzymatic reaction and spectrophotometric detection. In this study, HisRS was used to detect histidine, and the reaction and detection conditions were optimized for quantitation over the range of 1–100 µM histidine.
The detection limits are sufficient to analyze these amino acids in biological fluids. This work was partly supported by a Hiroshima City University Grant for Special Academic Research (General Studies).
Keywords: amino acid, aminoacyl-tRNA synthetase, biosensing, enzyme reaction
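The spectrophotometric quantitation step can be illustrated with a linear calibration curve relating absorbance at 556 nm to histidine concentration. The standards below are invented, not the study's measurements:

```python
# Hedged sketch: converting A556 readings into histidine concentrations via
# a linear calibration curve fitted to standards (values below are invented).
import numpy as np

conc_uM = np.array([1, 10, 25, 50, 100])               # histidine standards (uM)
absorbance = np.array([0.02, 0.11, 0.26, 0.51, 1.01])  # hypothetical A556 values

slope, intercept = np.polyfit(conc_uM, absorbance, 1)  # least-squares line

def to_concentration(a556):
    """Invert the calibration line for an unknown sample."""
    return (a556 - intercept) / slope

print(f"A556 = 0.40 -> {to_concentration(0.40):.1f} uM histidine")
```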
Procedia PDF Downloads 284
955 Prevalence of ESBL E. coli Susceptibility to Oral Antibiotics in Outpatient Urine Culture: Multicentric Analysis of Three Years' Data (2019-2021)
Authors: Mazoun Nasser Rashid Al Kharusi, Nada Al Siyabi
Abstract:
Objectives: The main aim of this study is to find the rate of susceptibility of ESBL E. coli causing UTI to oral antibiotics. Secondary objectives were to determine the prevalence of ESBL E. coli in community urine samples, identify the best empirical oral antibiotics with the lowest resistance rate for UTI, and identify alternative oral antibiotics for testing and utilization. Methods: This is a retrospective descriptive study of the last three years in five major hospitals in Oman (Khowla Hospital, AN’Nahdha Hospital, Rustaq Hospital, Nizwa Hospital, and Ibri Hospital), each staffed with a microbiologist. Inclusion criteria cover all eligible outpatient urine culture isolates, excluding isolates from admitted patients with hospital-acquired urinary tract infections. Data were collected through the MOH database. The MOH hospitals use different types of testing: automated methods such as Vitek 2, and manual methods. The Vitek 2 machine uses a fluorogenic method for organism identification and a turbidimetric method for susceptibility testing. The manual method uses double-disc diffusion to identify ESBL and disc diffusion for antibiotic susceptibility. All laboratories follow Clinical and Laboratory Standards Institute (CLSI) guidelines. Analysis was done with the SPSS statistical package. Results: There were 23,048 urine cultures in total. E. coli grew in 11,637 (49.6%) of them, and 2,199 (18.8%) of those were confirmed as ESBL. As expected, the resistance rate to amoxicillin and cefuroxime was 100%. Moreover, the susceptibility of these ESBL-producing E. coli to nitrofurantoin, trimethoprim-sulfamethoxazole, ciprofloxacin and amoxicillin-clavulanate improved over the years; however, it remained low. ESBL E. coli predominated in females and in those aged 66-74 years throughout the study period. Other oral antibiotic options need to be explored and tested so that we add to the pool of oral antibiotics for ESBL E. coli causing UTI in the community. Conclusion: There is a high rate of ESBL E. coli in urine from the community. The high resistance rates to oral antibiotics highlight the need for alternative treatment options for UTIs caused by these bacteria. Further research is needed to identify new and effective treatments for UTIs caused by ESBL E. coli.
Keywords: UTI, ESBL, oral antibiotics, E. coli, susceptibility
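The susceptibility-rate arithmetic underlying such surveillance reports can be sketched as follows. The susceptible counts are invented for illustration; only the ESBL denominator of 2,199 isolates comes from the abstract:

```python
# Hypothetical sketch of prevalence/susceptibility arithmetic for an
# antibiogram-style report. Susceptible counts are invented placeholders.
isolates = {  # drug -> (susceptible, tested)
    "nitrofurantoin": (1800, 2199),
    "ciprofloxacin":  (700, 2199),
    "amoxicillin":    (0, 2199),   # ESBL implies resistance to aminopenicillins
}

for drug, (s, n) in isolates.items():
    print(f"{drug}: {s / n:.1%} susceptible")
```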
Procedia PDF Downloads 93
954 Towards Sustainable Concrete: Maturity Method to Evaluate the Effect of Curing Conditions on the Strength Development in Concrete Structures under Kuwait Environmental Conditions
Authors: F. Al-Fahad, J. Chakkamalayath, A. Al-Aibani
Abstract:
Conventional methods of determining concrete strength under controlled laboratory conditions do not accurately represent the actual strength of concrete developed under site curing conditions. This difference in strength measurement is greater in the extreme environment of Kuwait, which is characterized by a hot marine climate with summer temperatures exceeding 50°C, accompanied by dry wind in desert areas and salt-laden wind in marine and onshore areas. Test methods that measure the in-place properties of concrete are therefore required for quality assurance and for the development of durable concrete structures. The maturity method, which defines the strength of a given concrete mix as a function of its age and temperature history, is an approach to quality control for the production of sustainable and durable concrete structures. The unique harsh environmental conditions in Kuwait make it impractical to adopt experience and empirical equations developed from maturity methods in other countries. Concrete curing, especially at an early age, plays an important role in developing and improving the strength of the structure. This paper investigates the use of the maturity method to assess the effectiveness of three different curing methods on the compressive and flexural strength development of one high-strength concrete mix of 60 MPa produced with silica fume. The maturity approach was used to accurately predict concrete compressive and flexural strength at later ages under different curing conditions. Maturity curves were developed for compressive and flexural strength for a commonly used concrete mix in Kuwait, cured under three different conditions: water curing, external spray coating, and the use of an internal curing compound during concrete mixing. It was observed that the maturity curve developed for the same mix depends on the type of curing conditions.
It can be used to predict concrete strength under different exposure and curing conditions. This study showed that the external spray curing method cannot be recommended, as it failed to help the concrete reach accepted strength values, especially for flexural strength. Using an internal curing compound led to acceptable strength levels compared with water curing. Utilization of the developed maturity curves will help contractors and engineers determine the in-place concrete strength at any time and under different curing conditions. This will help in deciding the appropriate time to remove formwork. The resulting reduction in construction time and cost has positive impacts on sustainable construction.
Keywords: curing, durability, maturity, strength
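The maturity method referred to above is often implemented as the Nurse-Saul temperature-time factor, M = Σ (T − T₀)Δt. A minimal sketch follows, assuming a 0 °C datum temperature and an invented temperature history; the study's own maturity functions and datum may differ:

```python
# Sketch of the Nurse-Saul maturity index M = sum((T - T0) * dt), one standard
# form of the maturity method. Temperature history and datum are illustrative.
def maturity_index(temps_c, dt_hours, datum_c=0.0):
    """Temperature-time factor in degC-hours; contributions below the
    datum temperature are clipped to zero."""
    return sum(max(t - datum_c, 0.0) * dt_hours for t in temps_c)

# Hypothetical hourly in-place temperatures over the first day of curing
# in a hot climate (degC)
temps = [35, 38, 42, 45, 48, 50, 49, 46] + [40] * 16
print(f"M = {maturity_index(temps, 1.0):.0f} degC-hours")
```

Two placements of the same mix with equal M are expected to have roughly equal strength, which is what makes in-place strength estimation from a lab-calibrated maturity curve possible.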
Procedia PDF Downloads 301
953 A Modest Proposal for Deep-Sixing Propositions in the Philosophy of Language
Authors: Patrick Duffley
Abstract:
Hanks (2021) identifies three Frege-inspired commitments concerning propositions that are widely shared across the philosophy of language: (1) propositions are the primary, inherent bearers of representational properties and truth-conditions; (2) propositions are neutral representations possessing a ‘content’ that is devoid of ‘force’; (3) propositions can be entertained or expressed without being asserted. Hanks then argues that the postulate of neutral content must be abandoned, and the primary bearers of truth-evaluable representation must be identified as the token acts of assertoric predication that people perform when they are thinking or speaking about the world. Propositions are ‘types of acts of predication, which derive their representational features from their tokens.’ Their role is that of ‘classificatory devices that we use for the purposes of identifying and individuating mental states and speech acts,’ so that ‘to say that Russell believes that Mont Blanc is over 4000 meters high is to classify Russell’s mental state under a certain type, and thereby distinguish that mental state from others that Russell might possess.’ It is argued in this paper that there is no need to classify an utterance of 'Russell believes that Mont Blanc is over 4000 meters high' as a token of some higher-order utterance-type in order to identify what Russell believes; the meanings of the words themselves and the syntactico-semantic relations between them are sufficient. In our view what Hanks has accomplished in effect is to build a convincing argument for dispensing with propositions completely in the philosophy of language.
By divesting propositions of the role of being the primary bearers of representational properties and truth-conditions and fittingly transferring this role to the token acts of predication that people perform when they are thinking or speaking about the world, he has situated truth in its proper place and obviated any need for abstractions like propositions to explain how language can express things that are true. This leaves propositions with the extremely modest role of classifying mental states and speech acts for the purposes of identifying and individuating them. It is demonstrated here, however, that there is no need whatsoever to posit such abstract entities to explain how people identify and individuate such states/acts. We therefore make the modest proposal that the term ‘proposition’ be stricken from the vocabulary of philosophers of language.
Keywords: propositions, truth-conditions, predication, Frege, truth-bearers
Procedia PDF Downloads 66
952 Optimum Method to Reduce the Natural Frequency for Steel Cantilever Beam
Authors: Eqqab Maree, Habil Jurgen Bast, Zana K. Shakir
Abstract:
Passive damping, once properly characterized and incorporated into the structure design, is an autonomous mechanism. Passive damping can be achieved by applying layers of a polymeric material, called viscoelastic material (VEM) layers, to the base structure. This type of configuration is known as a free or unconstrained layer damping treatment. A shear or constrained damping treatment uses the idea of adding a constraining layer, typically a metal, on top of the polymeric layer. Constrained treatment is a more efficient form of damping than the unconstrained treatment. In a constrained damping treatment, a sandwich is formed with the viscoelastic layer as the core. When the two outer layers experience bending, as they would if the structure were oscillating, they shear the viscoelastic layer and energy is dissipated in the form of heat. This form of energy dissipation allows the structural oscillations to attenuate much faster. The purpose of this study is to predict damping effects using two methods of passive viscoelastic constrained layer damping. The first method is Euler-Bernoulli beam theory, commonly used for predicting the vibratory response of beams. The second method is finite element analysis: results in this research were obtained using two-dimensional solid structural elements in ANSYS 14, specifically the eight-noded SOLID183 element, whose output gives the damped natural frequency values and mode shapes for the first five modes. This method of passive damping treatment is widely used for structural applications in many industries, such as aerospace and automotive. In this paper, a steel cantilever sandwich beam with a viscoelastic core of type 3M 468 is analysed using these methods of passive viscoelastic constrained layer damping.
It is also shown that the percentage reduction in modal frequency between the undamped and damped 8 mm thick steel sandwich cantilever beam is very high for each mode; this is due to the effect of the viscoelastic layer on the damped beam. Finally, this type of damped steel sandwich cantilever beam with a viscoelastic core (3M 468) is well suited to the automotive industry and many other mechanical applications, because of its high capability to reduce the modal vibration of structures.
Keywords: steel cantilever, sandwich beam, viscoelastic materials core type (3M468), ANSYS14, Euler-Bernoulli beam theory
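For context, the first undamped natural frequency of a plain uniform cantilever follows from Euler-Bernoulli theory as f1 = (lambda1^2/2*pi) * sqrt(E*I/(rho*A*L^4)) with lambda1^2 ≈ 3.516. A sketch with assumed dimensions follows; the sandwich beam with a viscoelastic core requires the layered analysis described above, so this is only the baseline single-layer case:

```python
# Illustrative baseline (not the paper's sandwich model): first natural
# frequency of a uniform steel cantilever from Euler-Bernoulli theory.
# All dimensions except the 8 mm thickness are assumed for the sketch.
import math

E = 200e9            # Young's modulus of steel (Pa)
rho = 7850           # density of steel (kg/m^3)
L = 0.5              # beam length (m), assumed
b, h = 0.03, 0.008   # width (assumed) and thickness (8 mm, as in the abstract)

A = b * h                     # cross-sectional area (m^2)
I = b * h**3 / 12             # second moment of area (m^4)
f1 = (3.516 / (2 * math.pi)) * math.sqrt(E * I / (rho * A * L**4))
print(f"first natural frequency ~ {f1:.1f} Hz")
```

The viscoelastic core both shifts these frequencies and, more importantly, adds the modal damping that the ANSYS SOLID183 analysis quantifies.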
Procedia PDF Downloads 318
951 Crafting of Paper Cutting Techniques for Embellishment of Fashion Textiles
Authors: A. Vaidya-Soocheta, K. M. Wong-Hon-Lang
Abstract:
Craft and fashion have always been interlinked. The combination of both often gives stunning results. The present study introduces paper cutting craft techniques, such as Japanese Kirigami, Mexican Papel Picado, German Scherenschnitte, and Polish Wycinanki, to textiles to develop innovative and novel design structures as embellishments and ornamentation. The project studies various ways of using these paper cutting techniques to obtain interesting features and delicate design patterns on fabrics. While paper has its advantages and related uses, it is fragile and rigid, and thus not appropriate for clothing. Fabric is sturdy, flexible, dimensionally stable and washable. In the present study, the cut-out techniques are used to develop creative design motifs and patterns that give an inventive and unique appeal to the fabrics. The beauty and fascination of lace in garments have always given them a nostalgic charm. Laces with their intricate and delicate complexity, in combination with other materials, add a feminine touch to a garment and give it a romantic, mysterious appeal. Various textured and decorative effects through fabric manipulation are experimented with, along with the use of paper cutting craft skills, as an innovative substitute for developing lace or "Broderie Anglaise" effects on textiles. A number of assorted fabric types with varied textures were selected for the study. Techniques to avoid fraying and unraveling of the design-cut fabrics were introduced. Fabrics were further manipulated by the use of interesting prints with embossed effects on cut-outs. Fabric layering in combination with assorted techniques such as cutting of folded fabric, printing, appliqué, embroidery, crochet, braiding, and weaving added a novel exclusivity to the fabrics. The fabrics developed by these innovative methods were then tailored into garments.
The study thus tested the feasibility and practicability of using these fabrics by designing a collection of evening wear garments based on the theme ‘Nostalgia’. The prototypes developed were complemented by fashion accessories designed from the crafted fabrics, which add interesting features to the study. The adaptation and application of this novel paper cutting craft technique on textiles can be an innovative start for a new trend in the textile and fashion industry. The study anticipates that this technique will open new avenues in the world of fashion for commercial use.
Keywords: collection, fabric cutouts, nostalgia, prototypes
Procedia PDF Downloads 357
950 Managing of Cobalt and Chromium Ions by Patients with Metal-on-Metal Hip Prosthesis
Authors: Alina Beraudi, Simona Catalani, Dalila De Pasquale, Eva Bianconi, Umberto Santoro, Susanna Stea, Pietro Apostoli
Abstract:
Recently the European Community, in line with the international scientific community (e.g., the Consensus Statement), has decided to stop the use of metal-on-metal big-head stemmed hip prostheses. Among the factors held responsible for the high failure rates of these hip implants are the release and accumulation of metal ions. Many studies have correlated the presence of these ions, among other factors, with the induction of an oxidative stress response. In our study of 12 subjects, we observed the patient-specific capability to eliminate metal ions after revision surgery. While all patients were able to completely excrete cobalt ions within 5-7 months after metal-on-metal bearing removal, this did not happen for chromium ions. If, on the one hand, the toxicokinetic differences between the two types of ions are confirmed by toxicological and occupational studies, on the other hand, this peculiar route of exposure represents a novel and important point of view. Thus, two different approaches were used to better understand the subject-specific capability to transport metal ions (albumin study) and to manage the response to them (heme-oxygenase-1 study): - a mutational screening of the ALBUMIN gene was conducted in 30 MoM prosthetic patients, resulting in the absence of nucleotide changes compared with the ALB reference sequence; an analysis of the expression of modified albumin protein was added to this study; - a gene and protein expression study of heme-oxygenase-1, one of the most important antioxidant enzymes induced by metallic ions, was performed on 44 patients. This study found no statistically significant differences in the expression of the heme-oxygenase-1 gene and protein between prosthetic and non-prosthetic patients, or between patients with high and low ion levels.
Our results show that the proteins studied (albumin and heme-oxygenase-1) do not seem to be involved in determining chromium and cobalt ion levels. On the other hand, chromium and cobalt elimination rates differ from each other but are similar across all patients analyzed, suggesting that this process may not be patient-related. We stress the importance of further research into how these ions are transported within the organism once released by a hip prosthesis, the chemical species involved, the districts where they accumulate and the mechanisms of elimination, without excluding the existence of a subjective susceptibility to these metal ions. Keywords: chromium, cobalt, hip prosthesis, individual susceptibility
Procedia PDF Downloads 383
949 Anaerobic Co-digestion in Two-Phase TPAD System of Sewage Sludge and Fish Waste
Authors: Rocio López, Miriam Tena, Montserrat Pérez, Rosario Solera
Abstract:
Biotransformation of organic waste into biogas is considered an interesting alternative for the production of clean energy from renewable sources, reducing the volume and organic content of the waste. Anaerobic digestion is considered one of the most efficient technologies to transform waste into fertilizer and biogas, in order to obtain electrical energy or biofuel within the concept of the circular economy. Currently, three types of anaerobic processes have been developed on a commercial scale: (1) the single-stage process, where sludge bioconversion is completed in a single chamber; (2) the two-stage process, where the acidogenic and methanogenic stages are separated into two chambers; and (3) the temperature-phased anaerobic digestion (TPAD) process, which combines a thermophilic pretreatment unit with subsequent mesophilic anaerobic digestion. Two-stage processes can provide hydrogen and methane with easier control of the first- and second-stage conditions, producing higher total energy recovery and substrate degradation than single-stage processes. Co-digestion, on the other hand, is the simultaneous anaerobic digestion of a mixture of two or more substrates. The technology is similar to anaerobic digestion but is a more attractive option as it produces increased methane yields due to the positive synergism of the mixtures in the digestion medium, thus increasing the economic viability of biogas plants. The present study focuses on energy recovery by anaerobic co-digestion of sewage sludge and waste from the aquaculture-fishing sector. Valorization is approached through the application of a temperature-phased process, or TPAD (Temperature-Phased Anaerobic Digestion) technology, with two phases of microorganisms. The selected process thus allows the development of a thermophilic acidogenic phase followed by a mesophilic methanogenic phase, obtaining hydrogen (H₂) in the first stage and methane (CH₄) in the second stage.
The combination of these technologies makes it possible to unify the individual advantages of each anaerobic digestion process. To achieve these objectives, a sequential study has been carried out in which a biochemical hydrogen potential (BHP) test is followed by a biochemical methane potential (BMP) test, which allows checking the feasibility of the two-stage process. The best results obtained were high total and soluble COD degradation (59.8% and 82.67%, respectively) as well as production rates of 12 L H₂/kg SVadded for hydrogen and 28.76 L CH₄/kg SVadded for methane with TPAD. Keywords: anaerobic co-digestion, TPAD, two-phase, BHP, BMP, sewage sludge, fish waste
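As a quick illustration of what the specific yields above imply, expected gas volumes scale linearly with the volatile solids fed. This sketch uses the yields reported in the abstract, but the 100 kg batch size and the function name are hypothetical, not quantities from the study:

```python
# Specific yields reported in the abstract for the TPAD system
H2_YIELD = 12.0     # L H2 per kg VS added
CH4_YIELD = 28.76   # L CH4 per kg VS added

def expected_gas_volumes(vs_added_kg):
    """Expected H2 and CH4 volumes (litres) for a given mass of volatile solids fed."""
    return vs_added_kg * H2_YIELD, vs_added_kg * CH4_YIELD

# Hypothetical batch of 100 kg volatile solids
h2_l, ch4_l = expected_gas_volumes(100.0)
print(round(h2_l), round(ch4_l))  # about 1200 L H2 and 2876 L CH4
```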
Procedia PDF Downloads 156
948 A Small-Scale Survey on Risk Factors of Musculoskeletal Disorders in Workers of Logistics Companies in Cyprus and on the Early Adoption of Industrial Exoskeletons as Mitigation Measure
Authors: Kyriacos Clerides, Panagiotis Herodotou, Constantina Polycarpou, Evagoras Xydas
Abstract:
Background: Musculoskeletal disorders (MSDs) in the workplace are a very common problem in Europe and are caused by multiple risk factors. In recent years, wearable devices and exoskeletons for the workplace have sought to address the various risk factors associated with strenuous tasks. The logistics sector is a huge sector that includes warehousing, storage, and transportation; however, the tasks associated with logistics are not well studied in terms of MSD risk. This study looked into the MSDs affecting workers of logistics companies. It compares the prevalence of MSDs among workers and evaluates multiple risk factors that contribute to the development of MSDs. Moreover, this study seeks user feedback on the adoption of exoskeletons in such a work environment. Materials and Methods: The study was conducted among workers in logistics companies in Nicosia, Cyprus, from July to September 2022. A set of standardized questionnaires was used to collect different types of data. Results: A high proportion of logistics professionals reported MSDs in one or more body regions, the lower back being the most commonly affected area. Working in the same position for long periods, working in awkward postures, and handling excessive loads were the most commonly reported job risk factors contributing to the development of MSDs in this study. A significant number of participants consider the back region the most likely to benefit from a wearable exoskeleton device. Half of the participants would like at least a 50% reduction in their daily effort. The most important characteristics for the adoption of exoskeleton devices were found to be how comfortable the device is and its weight. Conclusion: Lower back strain and posture were the highest risk factors among the logistics professionals assessed in this study.
A larger-scale study using quantitative analytical tools may give a more accurate estimate of MSDs, which would pave the way for more precise recommendations to eliminate the risk factors and thereby prevent MSDs. A follow-up study using exoskeletons in the workplace should be done to assess whether they assist in MSD prevention. Keywords: musculoskeletal disorders, occupational health, safety, occupational risk, logistic companies, workers, Cyprus, industrial exoskeletons, wearable devices
Procedia PDF Downloads 107
947 Cover Layer Evaluation in Soil Organic Matter of Mixing and Compressed Unsaturated
Authors: Nayara Torres B. Acioli, José Fernando T. Jucá
Abstract:
The uncontrolled emission of gases from urban waste landfills located near urban areas is a social and environmental problem common in Brazilian cities. Several environmental impacts, local and global in scope, may be generated by the contamination of atmospheric air with biogas from the decomposition of solid urban waste. In Brazil, small cities with populations under 50,000 inhabitants make up roughly 90% of all municipalities, according to the 2011 IBGE census, and in most of them the landfill cover layer is composed of pure clayey soil. Cover layers built with pure soil may retain up to 60% of the methane; the remaining 40% may be dispersed into the atmosphere. In view of these figures, the oxidative cover layer deserves study as a way to reduce this share released to the atmosphere, converting methane into carbon dioxide, which is almost 20 times less polluting than methane. This paper presents the results of studies on the characteristics of the soil used for the oxidative cover layer of the experimental landfill of Solid Urban Residues (SUR) built in Muribeca-PE, Brazil, supported by the Group of Solid Residues (GSR) at the Federal University of Pernambuco. Laboratory tests comprised suction (determining the characteristic curve), grain size, and permeability; in soil with saturation above 85%, air permeability drops dramatically with small increments of water, in accordance with the existing Brazilian standard for this procedure. Suction, like the other properties, was studied by dividing the 60 cm oxidative cover layer into an upper half (0.1 m to 0.3 m) and a lower half (0.4 m to 0.6 m). The results also show the consequences of the leaching of fine materials over the five years since completion of the landfill, which increased its permeability.
Moisture is retained mostly in the upper half of the layer, which contains the mixture, with a difference on the order of 8 percentage points between the upper and lower halves, and suction is lowest near the surface. These results reveal the efficiency of the oxidative cover layer in retaining rainwater; it also has a lower cost compared to other types of layer, making it more widely available as an alternative solution for the appropriate disposal of residues. Keywords: oxidative coverage layer, permeability, suction, saturation
Procedia PDF Downloads 289
946 Sharing Personal Information for Connection: The Effect of Social Exclusion on Consumer Self-Disclosure to Brands
Authors: Jiyoung Lee, Andrew D. Gershoff, Jerry Jisang Han
Abstract:
Most extant research on consumer privacy concerns and their willingness to share personal data has focused on contextual factors (e.g., types of information collected, type of compensation) that lead to consumers’ personal information disclosure. Unfortunately, the literature lacks a clear understanding of how consumers’ incidental psychological needs may influence consumers’ decisions to share their personal information with companies or brands. In this research, we investigate how social exclusion, which is an increasing societal problem, especially since the onset of the COVID-19 pandemic, leads to increased information disclosure intentions for consumers. Specifically, we propose and find that when consumers become socially excluded, their desire for social connection increases, and this desire leads to a greater willingness to disclose their personal information with firms. The motivation to form and maintain interpersonal relationships is one of the most fundamental human needs, and many researchers have found that deprivation of belongingness has negative consequences. Given the negative effects of social exclusion and the universal need to affiliate with others, people respond to exclusion with a motivation for social reconnection, resulting in various cognitive and behavioral consequences, such as paying greater attention to social cues and conforming to others. Here, we propose personal information disclosure as another form of behavior that can satisfy such social connection needs. As self-disclosure can serve as a strategic tool in creating and developing social relationships, those who have been socially excluded and thus have greater social connection desires may be more willing to engage in self-disclosure behavior to satisfy such needs. We conducted four experiments to test how feelings of social exclusion can influence the extent to which consumers share their personal information with brands. 
Various manipulations and measures were used to demonstrate the robustness of our effects. Through the four studies, we confirmed that (1) consumers who have been socially excluded show greater willingness to share their personal information with brands and that (2) such an effect is driven by the excluded individuals’ desire for social connection. Our findings shed light on how the desire for social connection arising from exclusion influences consumers’ decisions to disclose their personal information to brands. We contribute to the consumer disclosure literature by uncovering a psychological need that influences consumers’ disclosure behavior. We also extend the social exclusion literature by demonstrating that exclusion influences not only consumers’ choice of products but also their decision to disclose personal information to brands. Keywords: consumer-brand relationship, consumer information disclosure, consumer privacy, social exclusion
Procedia PDF Downloads 123
945 New Knowledge Co-Creation in Mobile Learning: A Classroom Action Research with Multiple Case Studies Using Mobile Instant Messaging
Authors: Genevieve Lim, Arthur Shelley, Dongcheol Heo
Abstract:
Mobile technologies can enhance the learning process as they enable social engagement around concepts beyond the classroom and the curriculum. Early results in this ongoing research show that when learning interventions are designed specifically to generate new insights, mobile devices support regulated learning and encourage learners to collaborate, socialize and co-create new knowledge. As students navigate across space and time boundaries, the fundamentally social nature of learning transforms into mobile computer-supported collaborative learning (mCSCL). The metacognitive interaction in mCSCL via mobile applications reflects the regulation of learning among the students. These metacognitive experiences, whether self-, co- or shared-regulated, are significant to the learning outcomes. Despite some insightful empirical studies, there has not yet been significant research investigating the actual practice and processes of new knowledge co-creation. This leads to the question of whether mobile learning merely provides a new channel to leverage learning, or whether mobile interaction creates new types of learning experiences, and how these experiences co-create new knowledge. The purpose of this research is to explore these questions and seek evidence to support one view or the other. This paper addresses these questions from the students’ perspective, to understand how students interact when constructing knowledge in mCSCL and how students’ self-regulated learning (SRL) strategies support the co-creation of new knowledge in mCSCL. A pilot study has been conducted among international undergraduates to understand students’ perspectives on mobile learning and, concurrently, to develop a definition in an appropriate context. Using classroom action research (CAR) with multiple case studies, this study is being carried out in a private university in Thailand to narrow the research gaps in mCSCL and SRL.
The findings will allow teachers to see the importance of social interaction for meaningful student engagement, to envisage learning outcomes from a knowledge management perspective, and to consider what role mobile devices can play in these. The findings will be important indicators for academics rethinking what is to be learned and how it should be learned. Ultimately, the study will shed new light on the co-creation of new knowledge in a socially interactive learning environment and challenge teachers to embrace the 21st century of learning with mobile technologies to deepen and extend learning opportunities. Keywords: mobile computer supported collaborative learning, mobile instant messaging, mobile learning, new knowledge co-creation, self-regulated learning
Procedia PDF Downloads 232
944 In vitro Establishment and Characterization of Oral Squamous Cell Carcinoma Derived Cancer Stem-Like Cells
Authors: Varsha Salian, Shama Rao, N. Narendra, B. Mohana Kumar
Abstract:
Evolving evidence proposes the existence of a highly tumorigenic subpopulation of undifferentiated, self-renewing cancer stem cells responsible for resistance to conventional anti-cancer therapy, recurrence, metastasis and heterogeneous tumor formation. Importantly, the mechanisms exploited by cancer stem cells to resist chemotherapy are poorly understood. Oral squamous cell carcinoma (OSCC) is one of the most frequently diagnosed cancer types in India and is commonly associated with alcohol and tobacco use. Therefore, the isolation and in vitro characterization of cancer stem-like cells from patients with OSCC is a critical step toward understanding chemoresistance processes and designing therapeutic strategies. Accordingly, the present study aimed to establish and characterize cancer stem-like cells in vitro from OSCC. Primary cultures of cancer stem-like cell lines were established from tissue biopsies of patients with clinical evidence of an ulceroproliferative lesion and histopathological confirmation of OSCC. The viability of cells, assessed by trypan blue exclusion assay, was above 95% at passage 1 (P1), P2 and P3. Replication rate was determined by plating cells in a 12-well plate and counting them at various time points of culture. Cells showed marked proliferative activity, and the average doubling time was less than 20 hrs. After being cultured for 10 to 14 days, cancer stem-like cells gradually aggregated and formed sphere-like bodies. More spheroid bodies were observed when cultured in DMEM/F-12 under low-serum conditions. Interestingly, cells with higher proliferative activity had a tendency to form more sphere-like bodies. Expression of specific markers, including membrane proteins and cell enzymes such as CD24, CD29, CD44, CD133, and aldehyde dehydrogenase 1 (ALDH1), is being explored for further characterization of the cancer stem-like cells.
To summarize the findings, the establishment of OSCC-derived cancer stem-like cells may provide scope for better understanding the causes of recurrence and metastasis in oral epithelial malignancies. In particular, identification and characterization studies of cancer stem-like cells in the Indian population are lacking, underscoring the need for such studies in a population where alcohol consumption and tobacco chewing are major risk habits. Keywords: cancer stem-like cells, characterization, in vitro, oral squamous cell carcinoma
Procedia PDF Downloads 221
943 Flora of Seaweeds and the Preliminary Screening of the Fungal Endophytes
Authors: Nur Farah Ain Zainee, Ahmad Ismail, Nazlina Ibrahim, Asmida Ismail
Abstract:
Seaweeds are economically important given their potential for utilization, the capabilities and opportunities for further expansion, and the availability of other species for future development. Hence, research on the diversity and distribution of seaweeds has to be expanded, as seaweeds are part of Malaysia's valuable marine heritage. The study on the distribution of seaweeds at Pengerang, Johor was carried out between February and November 2015 at Kampung Jawa Darat and Kampung Sungai Buntu. The study sites are located in the south-southeast of Peninsular Malaysia, where the Petronas Refinery and Petrochemicals Integrated Development (RAPID) project is in progress. In the future, the richness of seaweeds at Pengerang may vanish due to the loss of habitat to the RAPID project. The research was carried out to study the diversity of seaweeds and to determine the presence of fungal endophytes isolated from the seaweeds. Sampling used quadrats along a 25-meter line transect, with 3 replications for each site. The specimens were preserved, identified, processed in the laboratory and kept as herbarium specimens in the Algae Herbarium, Universiti Kebangsaan Malaysia. Complete thallus specimens for fungal endophyte screening were chosen meticulously, transferred into sterile zip-lock plastic bags and kept in the freezer for further processing. A total of 29 species has been identified, including 12 species of Chlorophyta, 2 species of Phaeophyta and 14 species of Rhodophyta. From February to November 2015, the number of species varied greatly, and there was a significant change in the community structure of the seaweeds. Kampung Sungai Buntu showed the highest diversity throughout the study compared to Kampung Jawa Darat. This can be related to its variety of habitats, such as rocky and sandy shores, a lagoon and a bay, which can enhance the existence of the seaweed community.
Eighteen seaweed species were selected and screened for the presence of fungal endophytes; Sargassum polycystum had the highest number of fungal endophytes compared with the other species. This evidence shows that seaweeds are capable of accommodating many species of fungal endophytes, and these promising results indicate that further research should be pursued. Keywords: diversity, fungal endophyte, macroalgae, screening, seaweed
Procedia PDF Downloads 229
942 Conceptualizing Personalized Learning: Review of Literature 2007-2017
Authors: Ruthanne Tobin
Abstract:
As our data-driven, cloud-based, knowledge-centric lives become ever more global, mobile, and digital, educational systems everywhere are struggling to keep pace. Schools need to prepare students to become critical-thinking, tech-savvy, life-long learners who are engaged and adaptable enough to find their unique calling in a post-industrial world of work. Recognizing that no nation can afford poor achievement or high dropout rates without jeopardizing its social and economic future, the thirty-two nations of the OECD are launching initiatives to redesign schools, generally under the banner of Personalized Learning or 21st Century Learning. Their intention is to transform education by situating students as co-enquirers and co-contributors with their teachers of what, when, and how learning happens for each individual. In this focused review of the 2007-2017 literature on personalized learning, the author sought answers to two main questions: “What are the theoretical frameworks that guide personalized learning?” and “What is the conceptual understanding of the model?” Ultimately, the review reveals that, although the research area is overly theorized and under-substantiated, it does provide a significant body of knowledge about this potentially transformative educational restructuring. For example, it addresses the following questions: a) What components comprise a PL model? b) How are teachers facilitating agency (voice & choice) in their students? c) What kinds of systems, processes and procedures are being used to guide the innovation? d) How is learning organized, monitored and assessed? e) What role do inquiry based models play? f) How do teachers integrate the three types of knowledge: Content, pedagogical and technological? g) Which kinds of forces enable, and which impede, personalizing learning? h) What is the nature of the collaboration among teachers? i) How do teachers co-regulate differentiated tasks? 
One finding of the review shows that while technology can dramatically expand access to information, expectations of its impact on teaching and learning are often disappointing unless the technologies are paired with excellent pedagogies in order to address students’ needs, interests and aspirations. This literature review fills a significant gap in this emerging field of research, as it serves to increase conceptual clarity that has hampered both the theorizing and the classroom implementation of a personalized learning model. Keywords: curriculum change, educational innovation, personalized learning, school reform
Procedia PDF Downloads 223
941 RA-Apriori: An Efficient and Faster MapReduce-Based Algorithm for Frequent Itemset Mining on Apache Flink
Authors: Sanjay Rathee, Arti Kashyap
Abstract:
Extraction of useful information from large datasets is one of the most important research problems, and association rule mining is one of the best methods for this purpose. Finding possible associations between items in large transaction-based datasets (finding frequent patterns) is the most important part of association rule mining. Many algorithms exist to find frequent patterns, but the Apriori algorithm remains a preferred choice due to its ease of implementation and natural tendency to be parallelized. Many single-machine Apriori variants exist, but the massive amount of data available these days is beyond the capacity of a single machine. Therefore, to meet the demands of this ever-growing data, a multi-machine Apriori algorithm is needed. For these types of distributed applications, MapReduce is a popular fault-tolerant framework. Hadoop is one of the best open-source software frameworks with the MapReduce approach for distributed storage and distributed processing of huge datasets using clusters built from commodity hardware. However, the heavy disk I/O at each iteration of a highly iterative algorithm like Apriori makes Hadoop inefficient. A number of MapReduce-based platforms have been developed for parallel computing in recent years. Among them, two platforms, Spark and Flink, have attracted a lot of attention because of their inbuilt support for distributed computation. Earlier, we proposed a Reduced-Apriori algorithm on the Spark platform which outperforms parallel Apriori, first because of the use of Spark and second because of the improvement we proposed to standard Apriori. This work is therefore a natural sequel to that work: it targets implementing, testing and benchmarking Apriori, Reduced-Apriori and our new algorithm, ReducedAll-Apriori, on Apache Flink, and comparing them with the Spark implementation.
Flink, a streaming dataflow engine, overcomes the disk I/O bottlenecks of MapReduce, providing an ideal platform for distributed Apriori. Flink's pipelined structure allows the next iteration to start as soon as partial results of the earlier iteration are available; there is no need to wait for all reducer results before starting the next iteration. We conduct in-depth experiments to gain insight into the effectiveness, efficiency and scalability of the Apriori and RA-Apriori algorithms on Flink. Keywords: Apriori, Apache Flink, MapReduce, Spark, Hadoop, R-Apriori, frequent itemset mining
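For readers unfamiliar with the candidate-generate-and-prune loop that makes Apriori so naturally parallelizable, a minimal single-machine sketch in Python may help. This is an illustration only, not the authors' Reduced-Apriori or Flink implementation; the function and variable names are ours:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return {frozenset itemset: count} for all itemsets with count >= min_support."""
    # Pass 1: count candidate 1-itemsets
    counts = {}
    for t in transactions:
        for item in set(t):
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    frequent = {s: n for s, n in counts.items() if n >= min_support}
    result = dict(frequent)
    k = 2
    while frequent:
        # Candidate generation: k-combinations of items seen in frequent (k-1)-itemsets,
        # pruned by the Apriori property (every (k-1)-subset must itself be frequent)
        items = sorted({i for s in frequent for i in s})
        candidates = [frozenset(c) for c in combinations(items, k)
                      if all(frozenset(sub) in frequent
                             for sub in combinations(c, k - 1))]
        # Support counting pass over the transactions (the map/reduce-friendly step)
        counts = {}
        for t in transactions:
            tset = set(t)
            for c in candidates:
                if c <= tset:
                    counts[c] = counts.get(c, 0) + 1
        frequent = {c: n for c, n in counts.items() if n >= min_support}
        result.update(frequent)
        k += 1
    return result

# Toy run: each pair occurs 3 times, the triple only twice, so with
# min_support=3 the pairs survive and the triple is pruned away.
freq = apriori([["a", "b", "c"], ["a", "b"], ["a", "c"],
                ["b", "c"], ["a", "b", "c"]], min_support=3)
```

The support-counting pass is the part distributed across workers in MapReduce-style implementations; each iteration of the `while` loop corresponds to one distributed round, which is why Flink's pipelined iterations matter for this workload.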
Procedia PDF Downloads 294
940 Self-Assembled ZnFeAl Layered Double Hydroxides as Highly Efficient Fenton-Like Catalysts
Authors: Marius Sebastian Secula, Mihaela Darie, Gabriela Carja
Abstract:
Ibuprofen is a non-steroidal anti-inflammatory drug (NSAID) and is among the most frequently detected pharmaceuticals in environmental samples, as well as one of the most widespread drugs in the world. Its concentration in the environment is reported to be between 10 and 160 ng L-1. In order to improve the abatement efficiency of this compound for water source prevention and reclamation, the development of innovative technologies is mandatory. AOPs (advanced oxidation processes) are known to be highly efficient in the oxidation of organic pollutants. Among promising combined treatments, photo-Fenton processes using layered double hydroxides (LDHs) have attracted significant consideration, especially due to their compositional flexibility, high surface area and tailored redox features. This work presents self-supported Fe, Mn or Ti on ZnFeAl LDHs, obtained by co-precipitation followed by the reconstruction method, as novel efficient photo-catalysts for Fenton-like catalysis. The Fe, Mn or Ti/ZnFeAl LDH nano-hybrids were tested for the degradation of a model pharmaceutical agent, the anti-inflammatory ibuprofen, by photocatalysis and photo-Fenton catalysis, respectively, in a lab-scale system consisting of a batch reactor equipped with a UV lamp (17 W). The study compares the degradation of ibuprofen in aqueous solution under UV light irradiation using the four different types of LDHs. The newly prepared Ti/ZnFeAl 4:1 catalyst gives the best degradation performance: after 60 minutes of light irradiation, the ibuprofen removal efficiency reaches 95%. The slowest degradation occurs with the Fe/ZnFeAl 4:1 LDH (67% removal efficiency after 60 minutes). The evolution of ibuprofen degradation during the photo-Fenton process is also studied using Ti/ZnFeAl 2:1 and 4:1 LDHs in the presence and absence of H2O2.
It is found that after 60 min the use of Ti/ZnFeAl 4:1 LDH in the presence of 100 mg/L H2O2 leads to the fastest degradation of the ibuprofen molecule. After 120 min, both the Ti/ZnFeAl 4:1 and 2:1 catalysts reach the same removal efficiency (98%). In the absence of H2O2, ibuprofen degradation reaches only 73% removal efficiency after 120 min. Acknowledgements: This work was supported by a grant of the Romanian National Authority for Scientific Research and Innovation, CNCS - UEFISCDI, project number PN-II-RU-TE-2014-4-0405. Keywords: layered double hydroxide, advanced oxidation process, micropollutant, heterogeneous Fenton
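The removal efficiencies quoted throughout follow the standard definition, the fraction of the initial pollutant concentration eliminated. A minimal helper makes the arithmetic explicit; the concentration values below are hypothetical, not measured data from this study:

```python
def removal_efficiency(c0, ct):
    """Percent of pollutant removed, from initial (c0) and current (ct) concentrations."""
    if c0 <= 0:
        raise ValueError("initial concentration must be positive")
    return (c0 - ct) / c0 * 100.0

# Hypothetical ibuprofen concentrations (mg/L): a drop from 20.0 to 1.0
# corresponds to the kind of 95% removal reported at 60 min.
print(removal_efficiency(20.0, 1.0))
```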
Procedia PDF Downloads 229
939 Identifying and Understand Pragmatic Failures in Portuguese Foreign Language by Chinese Learners in Macau
Authors: Carla Lopes
Abstract:
It is clear nowadays that the proper performance of different speech acts is one of the most difficult obstacles a foreign language learner has to overcome to be considered communicatively competent. This communication presents the results of an investigation into the pragmatic performance of Portuguese language students at the University of Macau. The research discussed herein is based on a survey consisting of fourteen speaking situations to which the participants must respond in writing, covering different types of speech acts: apology, response to a compliment, refusal, complaint, disagreement and the understanding of the illocutionary force of indirect speech acts. The responses were classified on a five-level Likert scale (quantified from 1 to 5) according to their suitability for the particular situation. In general terms, about 45% of the respondents' answers were pragmatically competent, 10% were acceptable and 45% showed weaknesses at the socio-pragmatic competence level. Given that linguistic deviations were not taken into account, we can conclude that the faults are of cultural origin. It is natural that between orthogonal cultures, such as the Chinese and the Portuguese, there are failures of this type, hardly resolved within the four years of the undergraduate program. The target population, native speakers of Cantonese or Mandarin, make their first contact with the English language before joining the Bachelor of Portuguese Language. An analysis of the socio-pragmatic failures in the respondents’ answers suggests that many are due to a lack of cultural knowledge. The students try to compensate for this either by drawing on their native culture or by resorting to a Western culture they consider close to the Portuguese, namely English or US culture, previously studied and widely present in the media and on the internet.
This phenomenon, known as 'pragmatic transfer', can result in linguistic behavior that may be considered inauthentic or pragmatically awkward. The resulting speech act is grammatically correct but not pragmatically feasible, since it is not suited to the culture of the target language, either because it does not exist there or because the conditions of its use are different. Analysis of the responses also supports the conclusion that these students deviate considerably from the expected, stereotyped behavior of Chinese students. We can speculate whether this linguistic behavior is a consequence of Macao's globalization, which culturally shapes the students, makes them more open, and distinguishes them from typical Chinese students.
Keywords: Portuguese foreign language, pragmatic failures, pragmatic transfer, pragmatic competence
Procedia PDF Downloads 210938 The Textual Criticism on the Age of ‘Wan Li’ Shipwreck Porcelain and Its Comparison with ‘Witte Leeuw’ and Hatcher Shipwreck Porcelain
Authors: Yang Liu, Dongliang Lyu
Abstract:
After the Wanli shipwreck was discovered 60 miles off the east coast of Tanjong Jara in Malaysia, numerous marvelous ceramic shards were salvaged from the seabed. Remarkable pieces of Jingdezhen blue-and-white porcelain recovered from the site represent the essential part of this fascinating research. The porcelain cargo of the Wanli shipwreck is significant to studies of exported porcelain and the Jingdezhen porcelain manufacturing industry of the late Ming dynasty. Using ceramic shard categorization and the study of Chinese and Western historical documents as its research strategy, the paper sheds new light on the classification of the Wanli shipwreck wares, with Jingdezhen kiln ceramics as its main focus. The article also discusses Jingdezhen blue-and-white porcelain from the perspective of domestic versus export markets, and proceeds to the systematization and analysis of the Wanli shipwreck porcelain, which bears witness to the forms, styles, and types of decoration that were being traded in this period. The porcelain data from two other shipwreck projects, Witte Leeuw and Hatcher, were chosen as comparative case studies, and the Wanli shipwreck Jingdezhen blue-and-white porcelain is reinterpreted in the context of the art history and archaeology of the region. The marine archaeologist Sten Sjostrand named the ship the 'Wanli shipwreck' because its porcelain cargoes are typical of those made during the reign of Emperor Wanli of the Ming dynasty. Though some scholars question the appropriateness of the name, the final verdict of history is still to be made. Building on previous historical argumentation, the article uses a comparative approach to review the Wanli shipwreck blue-and-white porcelain against porcelains unearthed from tombs, or abandoned in towns, that carry time-specific reign marks.
All these materials provide very strong evidence that the porcelain recovered from the Wanli ship can be dated to as early as the second year of the Tianqi era (1622) and the early Chongzhen reign. Lastly, some blue-and-white porcelain intended for the domestic market and some blue-and-white bowls from the Jingdezhen kilns recovered from the Wanli shipwreck all carry at the bottom a specific residue from the firing process. The author provides a corresponding analysis of these two interesting phenomena.
Keywords: blue-and-white porcelain, Ming dynasty, Jingdezhen kiln, Wanli shipwreck
Procedia PDF Downloads 189937 Laser Powder Bed Fusion Awareness for Engineering Students in France and Qatar
Authors: Hiba Naccache, Rima Hleiss
Abstract:
Additive manufacturing (AM), or 3D printing, is one of the pillars of Industry 4.0. Compared to traditional manufacturing, AM provides a prototype before production in order to optimize the design, avoids excess inventory, and uses only the strictly necessary material, which can be recyclable, favoring local production and saving money, time, and resources. Several types of AM exist, and the technology has a broad range of applications across industries such as aerospace, automotive, medicine, and education. Laser Powder Bed Fusion (LPBF) is a metal AM technique that uses a laser to melt metal powder, layer by layer, to build a three-dimensional (3D) object. In Industry 4.0, and aligned with Goals 9 (Industry, Innovation and Infrastructure) and 12 (Responsible Consumption and Production) of the 2030 Agenda's Sustainable Development Goals, AM manufacturers have committed to minimizing environmental impact by making production sustainable. LPBF has several environmental advantages, such as reduced waste production, lower energy consumption, and greater flexibility in creating lightweight components with complex geometries. However, LPBF also has environmental drawbacks, including energy consumption, gas consumption, and emissions. It is critical to recognize the environmental impacts of LPBF in order to mitigate them. To increase awareness and promote sustainable practices regarding LPBF, the researchers draw on Elaboration Likelihood Model (ELM) theory, whereby people from multiple universities in France and Qatar process information in two ways: peripherally and centrally. Peripheral campaigns use superficial cues to attract attention, while central campaigns provide clear and concise information. The authors created a seminar including a video showing LPBF production and a website with educational resources.
Data were collected using a questionnaire administered before and after the seminar to test attitudes and public awareness. The results reflected a marked shift in awareness of LPBF and its impact on the environment. To the best of our knowledge, no similar research exists, and this study will therefore add to the literature on the sustainability of the LPBF production technique.
Keywords: additive manufacturing, laser powder bed fusion, elaboration likelihood model theory, sustainable development goals, education-awareness, France, Qatar, specific energy consumption, environmental impact, lightweight components
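The abstract does not state how the pre- and post-seminar questionnaire responses were compared; a common choice for this design is a paired-samples t-test. A minimal sketch on hypothetical 5-point Likert scores (the data, sample size, and variable names are illustrative, not the study's):

```python
import statistics
from math import sqrt

# Hypothetical pre/post seminar awareness scores (5-point Likert),
# one pair per participant; illustrative values only.
pre = [2, 3, 2, 1, 3, 2, 2, 3, 1, 2]
post = [4, 4, 3, 3, 5, 4, 3, 4, 3, 4]

# Paired-samples t statistic: t = mean(d) / (stdev(d) / sqrt(n)),
# where d is the per-participant difference (post - pre).
diffs = [b - a for a, b in zip(pre, post)]
d_bar = statistics.mean(diffs)
s_d = statistics.stdev(diffs)  # sample standard deviation of the differences
n = len(diffs)
t = d_bar / (s_d / sqrt(n))

print(f"mean shift = {d_bar:.2f}, t({n - 1}) = {t:.2f}")
```

The resulting t statistic, with n - 1 degrees of freedom, quantifies whether the observed shift in awareness is larger than would be expected from chance variation between the two administrations.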
Procedia PDF Downloads 87936 Safety Climate Assessment and Its Impact on the Productivity of Construction Enterprises
Authors: Krzysztof J. Czarnocki, F. Silveira, E. Czarnocka, K. Szaniawska
Abstract:
Research background: Problems related to occupational health and declining safety levels are common in the construction industry. An important factor in occupational safety in construction is scaffold use. All scaffolds used in construction, renovation, and demolition must be erected, dismantled, and maintained in accordance with safety procedures. Unfortunately, the increasing demand for new construction projects is still linked to a high level of occupational accidents. It is therefore crucial to implement concrete actions when dealing with scaffolds and risk assessment in the construction industry; how an assessment is carried out, and how reliable it is, are critical for both construction workers and the regulatory framework. Unfortunately, professionals, who tend to rely heavily on their own experience and knowledge when taking risk-assessment decisions, may fail to reliably check the results of those decisions. Purpose of the article: The aim was to identify crucial parameters that could be modeled with a Risk Assessment Model (RAM) to improve both building-enterprise productivity and development potential, and the safety climate. The developed RAM could help predict high-risk construction activities and thus prevent accidents, based on a set of historical accident data. Methodology/Methods: A RAM has been developed for assessing risk levels at various construction process stages, with various work trades impacting different spheres of enterprise activity. The project includes research carried out by teams of researchers on over 60 construction sites in Poland and Portugal, comprising over 450 individual research cycles. The research trials covered variable conditions of employee exposure to harmful physical and chemical factors, variable levels of employee stress, and differences in staff behaviors and habits.
A genetic modeling tool was used to develop the RAM. Findings and value added: Common types of trades, accidents, and accident causes were explored, in addition to suitable risk assessment methods and criteria. We found that the initial worker stress level is a more direct predictor of the unsafe chain of events leading to an accident than workload, the concentration of harmful factors at the workplace, or even training frequency and management involvement.
Keywords: safety climate, occupational health, civil engineering, productivity
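The abstract mentions a genetic modeling tool without detailing it. Purely as an illustration of the idea, a toy genetic search could evolve the weights of a linear risk score over historical records; everything below (the records, the threshold, the fitness function) is hypothetical and not the study's RAM:

```python
import random

random.seed(1)

# Hypothetical historical records: (stress, workload, harmful_factors, accident?)
# Values are illustrative only, not data from the 60 construction sites.
records = [
    (0.9, 0.4, 0.3, 1), (0.8, 0.6, 0.2, 1), (0.2, 0.7, 0.4, 0),
    (0.3, 0.5, 0.6, 0), (0.7, 0.3, 0.5, 1), (0.1, 0.8, 0.2, 0),
]

def fitness(weights):
    # Count records where the weighted risk score (threshold 0.5)
    # agrees with the observed accident outcome.
    correct = 0
    for s, w, h, accident in records:
        score = weights[0] * s + weights[1] * w + weights[2] * h
        correct += int((score > 0.5) == bool(accident))
    return correct

# Minimal genetic loop: keep the fitter half, mutate it to refill the population.
population = [[random.random() for _ in range(3)] for _ in range(20)]
for _ in range(30):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    children = [[max(0.0, g + random.gauss(0, 0.1)) for g in p] for p in survivors]
    population = survivors + children

best = max(population, key=fitness)
print(f"best weights = {best}, fitness = {fitness(best)}/{len(records)}")
```

On this toy data the search tends to put most weight on the stress feature, mirroring the abstract's finding that initial worker stress is the most direct predictor.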
Procedia PDF Downloads 318935 A Comparative Study of Sampling-Based Uncertainty Propagation with First Order Error Analysis and Percentile-Based Optimization
Authors: M. Gulam Kibria, Shourav Ahmed, Kais Zaman
Abstract:
In system analysis, uncertainty in the input variables causes uncertainty in the system responses. Different probabilistic approaches for uncertainty representation and propagation in such cases exist in the literature, and different representation approaches yield different outputs; some approaches may estimate the system response better than others. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge (MUQC) has posed challenges about uncertainty quantification. Subproblem A of the challenge, the uncertainty characterization subproblem, is addressed in this study. In this subproblem, the task is to gather knowledge about unknown model inputs, which carry inherent aleatory and epistemic uncertainties, from the responses (outputs) of a given computational model. We use two different methodologies to approach the problem. In the first, we use sampling-based uncertainty propagation with first-order error analysis; in the second, we place emphasis on Percentile-Based Optimization (PBO). The NASA Langley MUQC subproblem A is constructed so that both aleatory and epistemic uncertainties need to be managed. The challenge problem classifies each uncertain parameter as belonging to one of the following three types: (i) an aleatory uncertainty modeled as a random variable with a fixed functional form and known coefficients; this uncertainty cannot be reduced. (ii) An epistemic uncertainty modeled as a fixed but poorly known physical quantity that lies within a given interval; this uncertainty is reducible. (iii) A parameter that may be aleatory but for which sufficient data are not available to model it adequately as a single random variable. For example, the parameters of a normal variable, e.g., the mean and standard deviation, might not be precisely known but could be assumed to lie within some intervals.
This results in a distributional p-box: the physical parameter has an aleatory uncertainty, but the parameters prescribing its mathematical model are subject to epistemic uncertainties. Each parameter of the random variable is an unknown element of a known interval, and this uncertainty is reducible. From the study, it is observed that, due to practical limitations and computational expense, the sampling in the sampling-based methodology is not exhaustive; that is why this methodology has a high probability of underestimating the output bounds. Therefore, an optimization-based strategy for converting uncertainty described by interval data into a probabilistic framework is necessary. This is achieved in this study by using PBO.
Keywords: aleatory uncertainty, epistemic uncertainty, first order error analysis, uncertainty quantification, percentile-based optimization
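A distributional p-box of type (iii) can be propagated with a double-loop Monte Carlo scheme: the outer loop samples the epistemic intervals of the distribution parameters, and the inner loop propagates the resulting aleatory distribution through the model. A minimal sketch with a hypothetical model and intervals (none of these numbers come from the challenge problem):

```python
import random

random.seed(0)

def model(x):
    # Hypothetical computational model; the challenge model is not public here.
    return x ** 2

# x ~ Normal(mu, sigma), with epistemically uncertain parameters:
# mu lies in [0.4, 0.6] and sigma lies in [0.1, 0.2] (illustrative intervals).
p95_values = []
for _ in range(50):  # outer loop: epistemic samples of the parameters
    mu = random.uniform(0.4, 0.6)
    sigma = random.uniform(0.1, 0.2)
    # inner loop: aleatory samples from the now-fixed distribution
    ys = sorted(model(random.gauss(mu, sigma)) for _ in range(2000))
    p95_values.append(ys[int(0.95 * len(ys))])  # 95th percentile of the response

# Across epistemic samples the percentile is an interval, not a number,
# which is exactly the p-box structure the abstract describes.
print(f"95th percentile bounds: [{min(p95_values):.3f}, {max(p95_values):.3f}]")
```

The sketch also shows the limitation the abstract notes: with finitely many outer samples, the reported interval can only underestimate the true output bounds, which motivates the optimization-based (PBO) alternative.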
Procedia PDF Downloads 240934 Modeling and Simulation of Primary Atomization and Its Effects on Internal Flow Dynamics in a High Torque Low Speed Diesel Engine
Authors: Muteeb Ulhaq, Rizwan Latif, Sayed Adnan Qasim, Imran Shafi
Abstract:
Diesel engines are among the best internal combustion engines in terms of efficiency, reliability, and adaptability. Most research and development to date has been directed towards high-speed diesel engines for commercial use, where the objective is to maximize acceleration while reducing exhaust emissions to meet international standards. In high-torque, low-speed engines the requirements are altogether different: these engines are mostly used in the maritime and agriculture industries, in static engines, compressors, and the like. Unfortunately, due to a lack of research and development, these engines have low efficiency and high soot emissions. One of the most effective ways to overcome these issues is efficient combustion in the engine cylinder, where the fuel spray atomization process plays a vital role in defining mixture formation, fuel consumption, combustion efficiency, and soot emissions. A comprehensive understanding of fuel spray characteristics and the atomization process is therefore of great importance. In this research, we examine the effects of primary breakup modeling on spray characteristics under diesel engine conditions. The KH-ACT model is applied to account for aerodynamic effects in the engine cylinder as well as cavitation and turbulence generated inside the injector. It is a modified form of the most commonly used KH model, which considers only the aerodynamically induced breakup based on the Kelvin-Helmholtz instability. Our model is extensively evaluated by performing 3-D time-dependent simulations in OpenFOAM, an open-source flow solver. Spray characteristics such as spray penetration, liquid length, spray cone angle, and Sauter mean diameter (SMD) were validated by comparing the OpenFOAM results against MATLAB. Including the effects of cavitation and turbulence enhances primary breakup, leading to smaller droplet sizes, a decrease in liquid penetration, and an increase in the radial dispersion of the spray.
All these properties favor early fuel evaporation, which enhances engine efficiency.
Keywords: Kelvin-Helmholtz instability, OpenFOAM, primary breakup, Sauter mean diameter, turbulence
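The Sauter mean diameter used to characterize the spray is the standard D32 measure: the diameter of a droplet whose volume-to-surface-area ratio matches that of the whole spray, D32 = sum(d_i^3) / sum(d_i^2). A minimal sketch on hypothetical droplet diameters (illustrative values, not simulation output):

```python
# Hypothetical droplet diameters in micrometres, as might be extracted
# from a spray simulation snapshot; not data from the study.
droplet_diameters_um = [12.0, 18.5, 9.3, 25.1, 14.7, 20.2, 11.8]

# Sauter mean diameter: D32 = sum(d^3) / sum(d^2).
d32 = (sum(d ** 3 for d in droplet_diameters_um)
       / sum(d ** 2 for d in droplet_diameters_um))
print(f"SMD (D32) = {d32:.2f} um")
```

A lower D32 after adding the cavitation and turbulence terms of KH-ACT would indicate finer atomization, consistent with the smaller droplet sizes the abstract reports.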
Procedia PDF Downloads 212933 “I” on the Web: Social Penetration Theory Revised
Authors: Dionysis Panos, Dept. of Communication and Internet Studies, Cyprus University of Technology
Abstract:
The widespread use of new media, and particularly social media, through fixed or mobile devices, has changed in a staggering way our perception of what is 'intimate' and 'safe' and what is not in interpersonal communication and social relationships. The distribution of self- and identity-related information in communication now evolves under new and different conditions and contexts. Consequently, this new framework forces us to rethink processes and mechanisms, such as what 'exposure' means in interpersonal communication contexts, how the distinction between the 'private' and the 'public' nature of information is negotiated online, and how the 'audiences' we interact with are understood and constructed. Drawing on an interdisciplinary perspective that combines sociology, communication psychology, media theory, and new media and social networks research, as well as on the empirical findings of a longitudinal comparative study, this work proposes an integrative model for comprehending the mechanisms of personal information management in interpersonal communication, applicable to both online (computer-mediated) and offline (face-to-face) communication. The presentation is based on conclusions drawn from a longitudinal qualitative research study with 458 new media users from 24 countries over almost a decade. The main conclusions include: (1) There is a clear, evidenced shift in users' perception of the degree of 'security' and 'familiarity' of the Web between the pre- and post-Web 2.0 eras; the role of social media in this shift was catalytic. (2) Basic Web 2.0 applications dramatically changed the nature of the Internet itself, transforming it from a place reserved for 'elite users / technical knowledge keepers' into a place of 'open sociability' for anyone. (3) Web 2.0 and social media brought about a significant change in the concept of the 'audience' we address in interpersonal communication.
The previous "general and unknown audience" of personal home pages, converted into an "individual & personal" audience chosen by the user under various criteria. (4) The way we negotiate the nature of 'private' and 'public' of the Personal Information, has changed in a fundamental way. (5) The different features of the mediated environment of online communication and the critical changes occurred since the Web 2.0 advance, lead to the need of reconsideration and updating the theoretical models and analysis tools we use in our effort to comprehend the mechanisms of interpersonal communication and personal information management. Therefore, is proposed here a new model for understanding the way interpersonal communication evolves, based on a revision of social penetration theory.Keywords: new media, interpersonal communication, social penetration theory, communication exposure, private information, public information
Procedia PDF Downloads 371