Search results for: Russell George Thompson
140 Women’s Experience of Managing Pre-Existing Lymphoedema during Pregnancy and the Early Postnatal Period
Authors: Kim Toyer, Belinda Thompson, Louise Koelmeyer
Abstract:
Lymphoedema is a chronic condition caused by dysfunction of the lymphatic system, which limits the drainage of fluid and tissue waste from the interstitial space of the affected body part. The normal physiological changes of pregnancy place an increased load on a normal lymphatic system, which can result in a transient lymphatic overload (oedema). The interaction between lymphoedema and pregnancy oedema is unclear. Women with pre-existing lymphoedema require accurate information and additional strategies to manage their lymphoedema during pregnancy. Currently, no resources are available to guide women or their healthcare providers with accurate advice and additional management strategies for coping with lymphoedema during pregnancy until they have recovered postnatally. This study explored the experiences of Australian women with pre-existing lymphoedema during a recent pregnancy and the early postnatal period to determine how their usual lymphoedema management strategies were adapted and what additional or unmet needs they had. Interactions with their obstetric care providers, the hospital maternity services, and usual lymphoedema therapy services were detailed. Participants were sourced from several Australian lymphoedema community groups, including therapist networks. Opportunistic sampling was considered appropriate for exploring this topic in a small target population, as lymphoedema in women of childbearing age is uncommon, with prevalence data unavailable. Inclusion criteria were being aged over 18 years, diagnosed with primary or secondary lymphoedema of the arm or leg, pregnant within the preceding ten years (since 2012), and having received pregnancy and postnatal care in Australia. Exclusion criteria were a diagnosis of lipoedema and inability to read or understand a reasonable level of English. A mixed-method qualitative design was used in two phases.
This involved an online survey (REDCap platform) of the participants, followed by online semi-structured interviews or focus groups that provided the transcript data for inductive thematic analysis, to gain an in-depth understanding of the issues raised. Women with well-managed pre-existing lymphoedema coped well with the additional oedema load of pregnancy; however, those with limited access to quality conservative care prior to pregnancy were significantly impacted by pregnancy, with many reporting deterioration of their chronic lymphoedema. Misinformation and a lack of support increased fear and apprehension in planning and enjoying the pregnancy experience. Collaboration between maternity and lymphoedema therapy services did not occur, despite study participants suggesting it. Helpful resources and unmet needs were identified in the recent Australian context to inform further research and the development of resources to assist women with lymphoedema who are considering pregnancy or are pregnant, as well as their supporters, including healthcare providers.
Keywords: lymphoedema, management strategies, pregnancy, qualitative
Procedia PDF Downloads 85
139 Removal of Problematic Organic Compounds from Water and Wastewater Using the Arvia™ Process
Authors: Akmez Nabeerasool, Michaelis Massaros, Nigel Brown, David Sanderson, David Parocki, Charlotte Thompson, Mike Lodge, Mikael Khan
Abstract:
The provision of clean and safe drinking water is of paramount importance and a basic human need. Water scarcity, coupled with the tightening of regulations and the inability of current treatment technologies to deal with emerging contaminants and pharmaceuticals and personal care products, means that alternative treatment technologies that are viable and cost-effective are required to meet the demand and regulations for clean water supplies. Logistically, the application of water treatment in rural areas presents unique challenges due to the decentralisation of abstraction points arising from low population density, the resultant lack of infrastructure, and the need to treat water at the site of use. This makes it costly to centralise treatment facilities and hence provide potable water direct to the consumer. Furthermore, across the UK there are segments of the population that rely on a private water supply, which means that the owner or user(s) of these supplies, which can serve from one household to hundreds, are responsible for their maintenance. The treatment of these private water supplies falls on the private owners, and it is imperative that a chemical-free technological solution that can operate unattended and does not produce any waste is employed. Arvia's patented advanced oxidation technology combines the advantages of adsorption and electrochemical regeneration within a single unit: the Organics Destruction Cell (ODC). The ODC uniquely uses a combination of adsorption and electrochemical regeneration to destroy organics. Key to this innovative process is an alternative approach to adsorption. The conventional approach is to use high-capacity adsorbents (e.g. activated carbons with high porosities and surface areas) that are excellent adsorbents but require complex and costly regeneration.
Arvia's technology uses a patent-protected adsorbent, Nyex™, a non-porous, highly conductive, graphite-based material that enables it to act both as the adsorbent and as a 3D electrode. Adsorbed organics are oxidised, and the surface of the Nyex™ is regenerated in situ for further adsorption without interruption or replacement. Treated water flows from the bottom of the cell, where it can either be re-used or safely discharged. Arvia™ Technology Ltd. has trialled the application of its tertiary water treatment technology in treating reservoir water abstracted near Glasgow, Scotland, with promising results. Several other pilot plants have also been successfully deployed at various locations in the UK, showing the suitability and effectiveness of the technology in removing recalcitrant organics (including pharmaceuticals, steroids and hormones), COD and colour.
Keywords: Arvia™ process, adsorption, water treatment, electrochemical oxidation
Procedia PDF Downloads 263
138 A Comparison of Methods for Estimating Dichotomous Treatment Effects: A Simulation Study
Authors: Jacqueline Y. Thompson, Sam Watson, Lee Middleton, Karla Hemming
Abstract:
Introduction: The odds ratio (estimated via logistic regression) is a well-established and common approach for estimating covariate-adjusted binary treatment effects when comparing a treatment and a control group with dichotomous outcomes. Its popularity is primarily due to its stability and robustness to model misspecification. However, the situation is different for the relative risk and the risk difference, which are arguably easier to interpret and better suited to specific designs such as non-inferiority studies. So far, there is no equivalent, widely accepted approach for estimating an adjusted relative risk or risk difference when conducting clinical trials. This is partly due to the lack of a comprehensive evaluation of the available candidate methods. Methods/Approach: A simulation study is designed to evaluate the performance of relevant candidate methods for estimating relative risks, representing both conditional and marginal estimation approaches. We consider the log-binomial generalised linear model (GLM) with iteratively weighted least squares (IWLS) and model-based standard errors (SEs); log-binomial GLM with convex optimisation and model-based SEs; log-binomial GLM with convex optimisation and permutation tests; modified-Poisson GLM with IWLS and robust SEs; log-binomial generalised estimating equations (GEE) with robust SEs; marginal standardisation with delta-method SEs; and marginal standardisation with permutation-test SEs. Independent and identically distributed datasets are simulated from a randomised controlled trial design to evaluate these candidate methods. Simulations are replicated 10,000 times for each scenario across all possible combinations of sample sizes (200, 1,000, and 5,000), outcome event rates (10%, 50%, and 80%), and covariate effects (ranging from -0.05 to 0.7) representing weak, moderate, or strong relationships.
Treatment effects (0, -0.5, and 1 on the log scale) will cover null (H0) and alternative (H1) hypotheses to evaluate coverage and power in realistic scenarios. Performance measures (bias, mean square error (MSE), relative efficiency, and convergence rates) are evaluated across scenarios covering a range of sample sizes, event rates, covariate prognostic strengths, and model misspecifications. Potential Results, Relevance & Impact: There are several methods for estimating unadjusted and adjusted relative risks. However, it is unclear which method(s) is the most efficient, preserves the type-I error rate, is robust to model misspecification, or is the most powerful when adjusting for non-prognostic and prognostic covariates. GEE estimates may be biased when the outcome distributions do not arise from marginal binary data. Also, it seems that marginal standardisation and convex optimisation may perform better than the GLM IWLS log-binomial approach.
Keywords: binary outcomes, statistical methods, clinical trials, simulation study
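As a minimal illustration of the unadjusted end of this comparison, the relative risk and risk difference with a delta-method standard error can be computed directly from two-arm trial data. This is a sketch on simulated data, not the study's code, and the sample sizes and true effect are made up:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000
treat = rng.integers(0, 2, n)              # randomised treatment indicator
true_rr = 1.5                              # hypothetical true relative risk
p = 0.2 * np.where(treat == 1, true_rr, 1.0)
y = rng.binomial(1, p)                     # binary outcome

# Arm-specific risks and sizes
p1, p0 = y[treat == 1].mean(), y[treat == 0].mean()
n1, n0 = (treat == 1).sum(), (treat == 0).sum()

rr = p1 / p0                               # relative risk
rd = p1 - p0                               # risk difference
# Delta-method SE on the log scale: sqrt((1-p1)/(n1*p1) + (1-p0)/(n0*p0))
se_log_rr = np.sqrt((1 - p1) / (n1 * p1) + (1 - p0) / (n0 * p0))
ci = np.exp(np.log(rr) + np.array([-1.96, 1.96]) * se_log_rr)
print(f"RR = {rr:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f}), RD = {rd:.3f}")
```

The adjusted estimators compared in the abstract (modified Poisson, log-binomial GLM, marginal standardisation) generalise this calculation to include covariates.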
Procedia PDF Downloads 114
137 Status of Alien Invasive Trees on the Grassland Plateau in Nyika National Park
Authors: Andrew Kanzunguze, Sopani Sichinga, Paston Simkoko, George Nxumayo, Cosmas V. B. Dambo
Abstract:
Early detection of plant invasions is a necessary prerequisite for effective invasive plant management in protected areas. This study was conducted to determine the distribution and abundance of alien invasive trees in Nyika National Park (NNP). Data on species presence and abundance were collected from belt transects (n=31) in a 100 square kilometer area on the central plateau. The data were tested for normality using the Shapiro-Wilk test; the Mann-Whitney test was carried out to compare frequencies and abundances between the species, and geographical information systems were used for spatial analyses. Results revealed that Black Wattle (Acacia mearnsii), Mexican Pine (Pinus patula) and Himalayan Raspberry (Rubus ellipticus) were the main alien invasive trees on the plateau. A. mearnsii was localized in the areas where it was first introduced, whereas P. patula and R. ellipticus had spread beyond their original points of introduction. R. ellipticus occurred as dense, extensive (up to 50 meters) thickets on the margins of forest patches and pine stands, whilst P. patula trees were frequent in the valleys, occurring most densely (up to 39 stems per 100 square meters) south-west of Chelinda camp on the central plateau, with high variation in tree heights. Additionally, there was no significant difference in abundance between R. ellipticus (48) and P. patula (48) in the study area (p > 0.05). It was concluded that R. ellipticus and P. patula require more attention than A. mearnsii. However, further studies into the invasion ecology of both P. patula and R. ellipticus on the Nyika plateau are highly recommended, so as to assess the threat posed by these species to biodiversity and to recommend appropriate conservation measures in the national park.
Keywords: alien-invasive trees, Himalayan raspberry, Nyika National Park, Mexican pine
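The statistical workflow named above (a Shapiro-Wilk normality check followed by a Mann-Whitney comparison of per-transect abundances) can be sketched with `scipy.stats`. The counts below are simulated stand-ins, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical per-transect abundance counts for two species (n=31 transects)
rubus = rng.poisson(1.5, 31)   # stand-in for R. ellipticus counts
pinus = rng.poisson(1.5, 31)   # stand-in for P. patula counts

# Shapiro-Wilk: count data are typically non-normal, motivating a
# non-parametric between-species comparison
w, p_norm = stats.shapiro(rubus)

# Mann-Whitney U: do the two species differ in per-transect abundance?
u, p_val = stats.mannwhitneyu(rubus, pinus, alternative="two-sided")
print(f"Shapiro-Wilk p = {p_norm:.3f}, Mann-Whitney p = {p_val:.3f}")
```

A p-value above 0.05 in the Mann-Whitney test corresponds to the abstract's conclusion of no significant abundance difference between the two species.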
Procedia PDF Downloads 204
136 Advanced Exergetic Analysis: Decomposition Method Applied to a Membrane-Based Hard Coal Oxyfuel Power Plant
Authors: Renzo Castillo, George Tsatsaronis
Abstract:
High-temperature ceramic membranes for air separation represent an important option for reducing the significant efficiency drops incurred in state-of-the-art cryogenic air separation for the high-tonnage oxygen production required in oxyfuel power stations. This study focuses on the thermodynamic analysis of two power plant designs: the state-of-the-art supercritical 600°C hard coal plant (reference power plant Nordrhein-Westfalen) and the membrane-based oxyfuel concept implemented in this reference plant. In the latter case, the oxygen is separated through a mixed-conducting hollow fiber perovskite membrane unit in the three-end operation mode, which has been simulated under vacuum conditions on the permeate side and high-pressure conditions on the feed side. The thermodynamic performance of each plant concept is assessed by conventional exergetic analysis, which determines the location, magnitude and sources of efficiency losses, and by advanced exergetic analysis, where the endogenous/exogenous and avoidable/unavoidable parts of exergy destruction are calculated at the component and full-process levels. These calculations identify thermodynamic interdependencies among components and reveal the real potential for efficiency improvements. The endogenous and exogenous exergy destruction portions are calculated by the decomposition method, a recently developed straightforward methodology suitable for complex power stations with a large number of process components. Lastly, an improvement priority ranking for relevant components, as well as suggested changes in process layouts, is presented for both power stations.
Keywords: exergy, carbon capture and storage, ceramic membranes, perovskite, oxyfuel combustion
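The per-component bookkeeping behind the avoidable/unavoidable split can be sketched in a few lines. All numbers below are illustrative placeholders, not plant data, and the unavoidable ratio is an assumed value of the kind obtained from a separate unavoidable-conditions simulation:

```python
# Advanced exergetic analysis bookkeeping for one component k (illustrative).
# E_D = E_F - E_P: exergy destruction is fuel exergy minus product exergy.
e_fuel, e_product = 100.0, 72.0        # MW, hypothetical component streams
e_d = e_fuel - e_product               # total exergy destruction of component k

# (E_D / E_P)^UN, taken from a run of the component under its best
# technically feasible ("unavoidable") conditions -- assumed here.
ratio_un = 0.15
e_d_un = e_product * ratio_un          # unavoidable part of E_D
e_d_av = e_d - e_d_un                  # avoidable part: real improvement potential

print(f"E_D = {e_d:.1f} MW, avoidable = {e_d_av:.1f} MW")
```

The endogenous/exogenous split is computed analogously, comparing runs in which the other components operate ideally versus really, which is what the decomposition method organises for plants with many components.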
Procedia PDF Downloads 185
135 Concepts of Modern Design: A Study of Art and Architecture Synergies in Early 20th Century Europe
Authors: Stanley Russell
Abstract:
Until the end of the 19th century, European painting dealt almost exclusively with the realistic representation of objects and landscapes, as can be seen in the work of realist artists like Gustave Courbet. Architects of the day typically made reference to and recreated historical precedents in their designs. The curriculum of the first architecture school in Europe, the École des Beaux-Arts, based on the study of classical buildings, had a profound effect on the profession. Painting exhibited an increasing level of abstraction from the late 19th century, beginning with Impressionism, and the trend continued into the early 20th century, when Cubism had an explosive effect, sending shock waves through the art world that also extended into the realm of architectural design. The architect and painter Le Corbusier, with "Purism," was one of the first to integrate abstract painting and building design theory in works that were equally shocking to the architecture world. The interrelationship of the arts, including architecture, was institutionalized in the Bauhaus curriculum, which sought to find commonality between diverse art disciplines. The renowned painter and Bauhaus instructor Wassily Kandinsky was one of the first artists to make a semi-scientific analysis of the elements of "non-objective" painting while also drawing parallels between painting and architecture in his book Point and Line to Plane. Russian Constructivists made abstract compositions with simple geometric forms, and like the De Stijl group of the Netherlands, they also experimented with full-scale constructions and spatial explorations. Based on the study of historical accounts and original artworks of Impressionism, Cubism, the Bauhaus, De Stijl, and Russian Constructivism, this paper begins with a thorough explanation of the art theory and several key works from these important art movements of the late 19th and early 20th centuries.
Similarly, based on written histories and first-hand experience of built and drawn works, the author continues with an analysis of the theories and architectural works generated by the same groups, all of which actively pursued continuity between their art and architectural concepts. With images of specific works, the author shows how the trend toward abstraction and geometric purity in painting coincided with a similar trend in architecture that favored simple, unornamented geometries. Using examples like the Villa Savoye, the Schröder House, the Dessau Bauhaus, and unbuilt designs by the Russian architect Chernikhov, the author gives detailed examples of how the intersection of trends in art and architecture led to a unique and fruitful period of creative synergy, when the same concepts that were used by artists to generate paintings were also used by architects in the making of objects, space, and buildings. In conclusion, this article examines the pivotal period in art and architecture history from the late 19th to the early 20th century, when the confluence of art and architectural theory led to many painted, drawn, and built works that continue to inspire architects and artists to this day.
Keywords: modern art, architecture, design methodologies, modern architecture
Procedia PDF Downloads 127
134 Reducing the Computational Cost of a Two-way Coupling CFD-FEA Model via a Multi-scale Approach for Fire Determination
Authors: Daniel Martin Fellows, Sean P. Walton, Jennifer Thompson, Oubay Hassan, Kevin Tinkham, Ella Quigley
Abstract:
Structural integrity is a key performance parameter for cladding products, especially concerning fire performance. Cladding products such as PIR-based sandwich panels are tested rigorously, in line with industrial standards. Physical fire tests are necessary to ensure customer safety but give little information about the critical behaviours that can help develop new materials. Numerical modelling is a tool that can help investigate fire behaviour further by replicating the fire test. However, fire is an interdisciplinary problem: it is a chemical reaction that behaves fluidly and impacts structural integrity. An analysis using Computational Fluid Dynamics (CFD) and Finite Element Analysis (FEA) is needed to capture all aspects of a fire performance test. One method is a two-way coupling analysis that imports the updated changes in thermal data, due to the fire's behaviour, to the FEA solver in a series of iterations. In light of our recent work with Tata Steel U.K using a two-way coupling methodology to determine fire performance, it has been shown that a program called FDS-2-Abaqus can predict a BS 476-22 furnace test with a degree of accuracy. The test demonstrated the fire performance of the Tata Steel U.K Trisomet product, a polyisocyanurate (PIR) based sandwich panel used for cladding. Previous work demonstrated the limitations of the current version of the program, the main limitation being the computational cost of modelling three Trisomet panels, totalling an area of 9 m². The computational cost increases substantially with the intention to scale up to an LPS 1181-1 test, which includes a total panel surface area of 200 m². The FDS-2-Abaqus program is developed further within this paper to overcome this obstacle and better accommodate Tata Steel U.K PIR sandwich panels. The new developments aim to reduce the computational cost and the error margin compared to experimental data.
One avenue explored is a multi-scale approach in the form of Reduced Order Modeling (ROM). This approach allows the user to include refined details of the sandwich panels, such as the overlapping joints, without a computationally costly mesh size. Comparative studies will be made between the new implementations and the previous study completed using the original FDS-2-ABAQUS program. Validation of the study will come from physical experiments in line with governing-body standards such as BS 476-22 and LPS 1181-1. The physical experimental data include the panels' gas and surface temperatures and mechanical deformation. Conclusions are drawn, noting the impact of the new implementations and discussing the feasibility of scaling up further to a whole warehouse.
Keywords: fire testing, numerical coupling, sandwich panels, thermo fluids
Procedia PDF Downloads 79
133 The Impact of a Model's Skin Tone and Ethnic Identification on Consumer Decision Making
Authors: Shanika Y. Koreshi
Abstract:
Sri Lanka has housed the lingerie product development and manufacturing subsidiaries of renowned brands such as La Senza, Marks & Spencer, H&M, Etam, Lane Bryant, and George. Over the last few years, these have produced local brands such as Amante to cater to local and regional customers. Past research has identified factors such as quality, price, and design as vital when marketing lingerie to consumers. However, there has been minimal research that looks into the ethnically targeted market and skin colour within the Asian population. Therefore, the main aim of the research was to identify whether consumer preference for lingerie is influenced by the skin tone of the model wearing it. The secondary aim was to investigate whether consumer preference for lingerie is influenced by the consumer's ethnic identification with the skin tone of the model. An experimental design was used to explore these aims. The participants constituted 66 females residing in the Western Province of Sri Lanka, gathered via convenience sampling. Six computerized images of a real model were used in the study, and her skin tone was digitally manipulated to express three different skin tones (light, tan, and dark). Consumer preferences were measured through a rank-order scale constructed via a focus group discussion, and ethnic identity was measured by the Multigroup Ethnic Identity Measure-Revised. The Wilcoxon signed-rank test, Friedman test, and chi-square test of independence were carried out using SPSS version 20. The results indicated that the majority of the consumers ethnically identified with and preferred the tan skin tone over the light and dark skin tones. The findings support the existing literature stating that consumers prefer models with a medium skin tone over a lighter skin tone. The preference for a tan skin tone in a model is consistent with the ethnic identification of the Sri Lankan sample.
The study implies that lingerie brands should consider the model's skin tone when marketing the brand to different ethnic backgrounds.
Keywords: consumer preference, ethnic identification, lingerie, skin tone
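The repeated-measures comparison described above (the same 66 respondents rating the same model at three skin tones) is what the Friedman test handles. The abstract's analysis was run in SPSS; the sketch below reproduces the test shape in `scipy.stats` on made-up preference scores, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 66  # respondents, matching the study's sample size

# Hypothetical 1-3 preference scores per respondent for the same model
# shown with light, tan, and dark skin tones (illustrative only)
light = rng.integers(1, 4, n)
tan = rng.integers(1, 4, n)
dark = rng.integers(1, 4, n)

# Friedman test: do the three within-subject ratings differ?
chi2, p = stats.friedmanchisquare(light, tan, dark)
print(f"Friedman chi2 = {chi2:.2f}, p = {p:.3f}")
```

A significant result would then typically be followed by pairwise Wilcoxon signed-rank tests, as in the abstract.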
Procedia PDF Downloads 259
132 The Mediating Role of Artificial Intelligence (AI) Driven Customer Experience in the Relationship Between AI Voice Assistants and Brand Usage Continuance
Authors: George Cudjoe Agbemabiese, John Paul Kosiba, Michael Boadi Nyamekye, Vanessa Narkie Tetteh, Caleb Nunoo, Mohammed Muniru Husseini
Abstract:
The smartphone industry continues to experience massive growth, evidenced by expanding markets and an increasing number of brands, models, and manufacturers. As technology advances rapidly, smartphone manufacturers are consistently introducing new innovations to keep up with evolving industry trends and customer demand for more modern devices. This study aimed to assess the influence of artificial intelligence voice assistants (AIVA) on improving customer experience, resulting in the continued use of mobile brands. Specifically, this article assesses the role of the hedonic, utilitarian, and social benefits provided by AIVA in customer experience and the continuance intention to use mobile phone brands. Using a primary data collection instrument, a quantitative approach was adopted to examine the study's variables. Data from 348 valid responses were used for the analysis, based on structural equation modeling (SEM) with AMOS version 23. Three main factors were identified as influencing customer experience, which results in continued usage of mobile phone brands: social benefits, hedonic benefits, and utilitarian benefits. In conclusion, a significant and positive relationship exists between the factors influencing customer experience and the continued usage of mobile phone brands. The study concludes that mobile brands that invest in delivering positive user experiences are in a better position to improve usage and increase preference for their brands. The study recommends that mobile brands consider and research their prospects' and customers' social, hedonic, and utilitarian needs in order to provide them with the desired products and experiences.
Keywords: artificial intelligence, continuance usage, customer experience, smartphone industry
Procedia PDF Downloads 80
131 Prevalence of Oral Tori in Malaysia: A Teaching Hospital Based Cross Sectional Study
Authors: Preethy Mary Donald, Renjith George
Abstract:
Oral tori are localized non-neoplastic protuberances of the maxilla and mandible. Torus palatinus (TP) is found on the midline of the roof of the mouth, existing as a single growth or in clusters. Torus mandibularis (TM) is located on the lingual aspect of the mandible, commonly between the canine and premolar region. The etiology of oral tori is unclear and has been found to be multifactorial. Their variation in relation to age, gender, and ethnicity, as well as the characteristics of TP and TM, have become the interest of multiple studies. The objective of this study was to determine the prevalence of torus palatinus (TP) and torus mandibularis (TM) among patients who visited the outpatient department, Faculty of Dentistry, Melaka-Manipal Medical College. 108 patients were examined for the presence of oral tori at the outpatient department. Factors such as the age, gender, and ethnicity of the patients and the size, shape, and location of the oral tori were studied. For TP, Malays (62.96%) were found to have the highest prevalence, compared with Chinese (43.3%) and Indians (35.71%). For TM, Chinese (7.46%) predominated, compared with Malays (7.41%) and Indians (0%). For torus palatinus, the most common size was Grade 1 (1-3 mm), the most common location was the molar region, and the most common shape was spindle. For torus mandibularis, the most frequent location was the canine-premolar region, existing in a unilateral single or bilateral single fashion. The overall prevalence rates were 47.2% for TP and 6.48% for TM. However, there was no significant association between the occurrence of TP or TM and age, gender, or ethnicity. The results showed variations in clinical characteristics and support the findings that the occurrence of tori is a dynamic, multifactorial phenomenon owing to environmental factors such as stress from occlusion and dietary habits.
It could also be due to the genetic make-up of the individual.
Keywords: torus palatinus, torus mandibularis, age, gender
Procedia PDF Downloads 279
130 Evaluation of Coupled CFD-FEA Simulation for Fire Determination
Authors: Daniel Martin Fellows, Sean P. Walton, Jennifer Thompson, Oubay Hassan, Ella Quigley, Kevin Tinkham
Abstract:
Fire performance is a crucial aspect to consider when designing cladding products, and testing this performance is extremely expensive. Appropriate use of numerical simulation of fire performance has the potential to reduce the total number of fire tests required when designing a product by eliminating poor-performing design ideas early in the design phase. Due to the complexity of fire and the large spectrum of failures it can cause, multi-disciplinary models are needed to capture the complex fire behavior and its structural effects on the surroundings. Working alongside Tata Steel U.K., the authors have focused on completing a coupled CFD-FEA simulation model suited to testing polyisocyanurate (PIR) based sandwich panel products, to gain confidence before costly experimental standards testing. The sandwich panels are part of a thermally insulating façade system primarily for large non-domestic buildings. The work presented in this paper compares two coupling methodologies in a replication of the physical experimental standards test LPS 1181-1, carried out by Tata Steel U.K.: one-way and two-way coupling. A one-way coupled analysis consists of importing thermal data from the CFD solver into the FEA solver. A two-way coupled analysis consists of continuously importing the updated changes in thermal data, due to the fire's behavior, into the FEA solver throughout the simulation. Likewise, the mechanical changes are updated back to the CFD solver to include geometric changes within the solution. For the CFD calculations, a solver called Fire Dynamics Simulator (FDS) has been chosen due to its numerical scheme adapted to focus solely on fire problems. Validation of FDS applicability has been achieved in past benchmark cases.
In addition, an FEA solver called ABAQUS has been chosen to model the structural response to the fire due to its crushable-foam plasticity model, which can accurately model the compressibility of PIR foam. An open-source code called FDS-2-ABAQUS is used to couple the two solvers, using several Python modules to complete the process, including failure checks. The coupling methodologies and the experimental data acquired from Tata Steel U.K are compared using several variables, including gas temperatures, surface temperatures, and mechanical deformation of the panels. Conclusions are drawn, noting improvements to be made to the current open-source coupling code FDS-2-ABAQUS to make it more applicable to Tata Steel U.K sandwich panel products. Future directions for reducing the computational cost of the simulation are also considered.
Keywords: fire engineering, numerical coupling, sandwich panels, thermo fluids
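The structural difference between the one-way and two-way schemes described above can be sketched as a driver loop. The functions below are placeholders standing in for the FDS and ABAQUS steps, not the FDS-2-ABAQUS API, and the physics is deliberately toy-scale:

```python
# Sketch of one-way vs. two-way CFD-FEA coupling (all names hypothetical).
def run_cfd_step(t, geometry):
    # Stand-in for an FDS step: fake surface temperature growing with time
    return {"T_surface": 20.0 + 50.0 * t * geometry["scale"]}

def run_fea_step(thermal):
    # Stand-in for an ABAQUS step: deflection grows with temperature
    return {"deflection": 0.001 * (thermal["T_surface"] - 20.0)}

def couple(n_steps, two_way):
    geometry = {"scale": 1.0}
    history = []
    for t in range(1, n_steps + 1):
        thermal = run_cfd_step(t, geometry)   # CFD -> thermal field
        mech = run_fea_step(thermal)          # thermal field -> FEA
        if two_way:
            # Feed deformation back so the CFD domain sees the new geometry
            geometry["scale"] = 1.0 + mech["deflection"]
        history.append(mech["deflection"])
    return history

one_way = couple(5, two_way=False)
two_way = couple(5, two_way=True)
print(one_way[-1], two_way[-1])
```

In this toy setup the two-way feedback amplifies the predicted deflection, which illustrates why the two schemes can diverge on a deforming panel, at very different computational cost.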
Procedia PDF Downloads 89
129 Exhaust Gas Cleaning Systems on Board Ships and Impact on Crews’ Health: A Feasibility Study Protocol
Authors: Despoina Andrioti Bygvraa, Ida-Maja Hassellöv, George Charalambous
Abstract:
Exhaust gas cleaning systems, also known as scrubbers, are today widely used to allow the use of high-sulphur heavy fuel oil while still complying with the regulations limiting the sulphur content of marine fuels. There are extensive concerns about the environmental consequences, especially in the Baltic Sea, of the wide-scale use of scrubbers, as the wash water is acidic (ca. pH 3) and contains high concentrations of toxic, carcinogenic, and mutagenic substances. The aim of this feasibility study is to investigate the potential adverse effects on seafarers' health, with the ultimate goal of raising awareness of chemical-related health and safety issues in the shipping environment. The project received funding from the Swedish Foundation. The team will extend previously compiled data on scrubber wash water concentrations of hazardous substances and pH to include the use of strong base in closed-loop scrubbers, along with a scoping assessment of handling and disposal practices. Based on the findings, (a) a systematic review of risk assessment will follow to show the risk of exposures, establish the hazard levels for human health, and identify the respective prevention practices. In addition, the researchers will perform (b) a systematic review to identify facilitators of and barriers to crew compliance with the safe handling of chemicals. The study will run for 12 months, delivering (a) a risk assessment inventory of risk exposures and (b) a course description of safe handling practices. This feasibility study could provide valuable knowledge on how the pollutants found in scrubbers should be considered from a human health perspective, to facilitate evidence-based, informed decisions in future technology and policy development and to make shipping a safer, healthier, and more attractive workplace.
Keywords: health and safety, seafarers, scrubbers, chemicals, risk exposures
Procedia PDF Downloads 58
128 Algae Growth and Biofilm Control by Ultrasonic Technology
Authors: Vojtech Stejskal, Hana Skalova, Petr Kvapil, George Hutchinson
Abstract:
Algae growth has been an important issue in the water management of water plants, ponds and lakes, swimming pools, aquaculture and fish farms, gardens, and golf courses for the last few decades. There are solutions based on chemical or biological principles. Apart from these traditional principles for inhibiting algae growth and biofilm production, there are also physical methods, which are very competitive compared to the traditional ones. Ultrasonic technology is one of these alternatives. An ultrasonic emitter is able to eliminate the biofilm that acts as a host and attachment point for algae and is the original reason for the algae growth. The ultrasound waves prevent the majority of bacteria in planktonic form from becoming strongly attached sessile bacteria that create a welcoming layer for biofilm production. Biofilm creation is very fast: in still water it takes between 30 minutes and 4 hours, depending on temperature and other parameters. The ultrasound device does not kill bacteria. Ultrasound waves pass through the bacteria, which retract as if they were in very turbulent water, even though the water is visually completely still. Under these conditions, the bacteria do not excrete the polysaccharide glue they use to attach to the surface of the pool or pond where the ultrasonic technology is used. Ultrasonic waves thus decrease the production of biofilm on surfaces in the treated area. If the inner surfaces of a pond or basin are already clean at the start of the application of ultrasonic technology, biofilm production is almost completely inhibited. This paper discusses two different pilot applications, one in the Czech Republic and one in the United States of America, where the ultrasonic technology used (AlgaeControl) originates. On both sites, the Mezzo Ultrasonic Algae Control System was used, with very positive results not only for biofilm production but also for algae growth in the surrounding area.
The technology has been successfully tested in two different environments. The poster describes the differences between them and their influence on the efficiency of the ultrasonic technology application. The conclusions and lessons learned can potentially be applied to other sites within Europe and beyond.
Keywords: algae growth, biofilm production, ultrasonic solution, ultrasound
Procedia PDF Downloads 269
127 Infrared Spectroscopy in Tandem with Machine Learning for Simultaneous Rapid Identification of Bacteria Isolated Directly from Patients' Urine Samples and Determination of Their Susceptibility to Antibiotics
Authors: Mahmoud Huleihel, George Abu-Aqil, Manal Suleiman, Klaris Riesenberg, Itshak Lapidot, Ahmad Salman
Abstract:
Urinary tract infections (UTIs) are considered the most common bacterial infections worldwide, caused mainly by Escherichia (E.) coli (about 80%), Klebsiella pneumoniae (about 10%), and Pseudomonas aeruginosa (about 6%). Although antibiotics are considered the most effective treatment for bacterial infectious diseases, most bacteria have unfortunately already developed resistance to the majority of commonly available antibiotics. It is therefore crucial to identify the infecting bacteria and determine their susceptibility to antibiotics in order to prescribe effective treatment. Classical methods are time-consuming, requiring ~48 hours to determine bacterial susceptibility. Thus, it is highly urgent to develop a new method that can significantly reduce the time required both to identify the infecting bacterium at the species level and to diagnose its susceptibility to antibiotics. Fourier-transform infrared (FTIR) spectroscopy is well known as a sensitive and rapid method that can detect minor molecular changes in the bacterial genome associated with the development of antibiotic resistance. The main goal of this study is to examine the potential of FTIR spectroscopy, in tandem with machine learning algorithms, to identify the infecting bacteria at the species level and to determine E. coli susceptibility to different antibiotics directly from patients' urine in about 30 minutes. For this goal, 1600 different E. coli isolates were obtained from different patients' urine samples, measured by FTIR, and analyzed using machine learning algorithms such as Random Forest, XGBoost, and CNNs. We achieved 98% success in isolate-level identification and 89% accuracy in susceptibility determination.
Keywords: urinary tract infections (UTIs), E. coli, Klebsiella pneumoniae, Pseudomonas aeruginosa, bacteria, susceptibility to antibiotics, infrared microscopy, machine learning
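The spectral classification step can be sketched in miniature. The study's actual models were Random Forest, XGBoost, and CNNs trained on measured FTIR spectra; the stand-in below is a minimal nearest-centroid classifier on synthetic Gaussian "spectra", and the peak positions, counts, and noise levels are all invented for illustration:

```python
import math
import random

random.seed(0)

def synthetic_spectrum(peak, n=200, noise=0.05):
    # A single Gaussian absorbance band plus noise, standing in for an FTIR spectrum
    return [math.exp(-((i - peak) ** 2) / 200.0) + random.uniform(-noise, noise)
            for i in range(n)]

# Invented training spectra for two of the species named in the abstract
train = {
    "E. coli": [synthetic_spectrum(60) for _ in range(20)],
    "K. pneumoniae": [synthetic_spectrum(120) for _ in range(20)],
}

def centroid(spectra):
    length = len(spectra[0])
    return [sum(s[i] for s in spectra) / len(spectra) for i in range(length)]

centroids = {label: centroid(specs) for label, specs in train.items()}

def classify(spectrum):
    # Assign the label of the nearest class centroid (squared Euclidean distance)
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(spectrum, centroids[label]))
```

A held-out spectrum peaked near 60 is then assigned to the E. coli class; the real pipeline replaces both the synthetic data and this toy classifier with measured spectra and the ensemble/deep models named above.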
Procedia PDF Downloads 170
126 River Network Delineation from Sentinel 1 Synthetic Aperture Radar Data
Authors: Christopher B. Obida, George A. Blackburn, James D. Whyatt, Kirk T. Semple
Abstract:
In many regions of the world, especially in developing countries, river network data are outdated or completely absent, yet such information is critical for supporting important functions such as flood mitigation efforts, land use and transportation planning, and the management of water resources. In this study, a method was developed for delineating river networks using Sentinel 1 imagery. Unsupervised classification was applied to multi-temporal Sentinel 1 data to discriminate water bodies from other land covers, and the outputs were combined to generate a single persistent water bodies product. A thinning algorithm was then used to delineate river centre lines, which were converted into vector features and built into a topologically structured geometric network. The complex river system of the Niger Delta was used to compare the performance of the Sentinel-based method against alternative freely available water body products from the United States Geological Survey, the European Space Agency and OpenStreetMap, and against a river network derived from a Shuttle Radar Topography Mission Digital Elevation Model. From both raster-based and vector-based accuracy assessments, it was found that the Sentinel-based river network products were superior to the comparator data sets by a substantial margin. The geometric river network that was constructed permitted a flow routing analysis, which is important for a variety of environmental management and planning applications. The extracted network will potentially be applied to modelling the dispersion of hydrocarbon pollutants in Ogoniland, a part of the Niger Delta. The approach developed in this study holds considerable potential for generating up-to-date, detailed river network data for the many countries where such data are deficient.
Keywords: Sentinel 1, image processing, river delineation, large scale mapping, data comparison, geometric network
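The step that combines the multi-temporal classification outputs into a single persistent water bodies product can be illustrated with a per-pixel frequency threshold. The grids, dates, and the 80% threshold below are invented for illustration and are not taken from the study:

```python
# Combine per-date binary water masks into one persistent-water product:
# a pixel is kept as water only if it is classified as water in at least
# min_frac of the acquisition dates, which suppresses transient flooding.
def persistent_water(masks, min_frac=0.8):
    n_dates = len(masks)
    rows, cols = len(masks[0]), len(masks[0][0])
    return [[1 if sum(m[r][c] for m in masks) / n_dates >= min_frac else 0
             for c in range(cols)]
            for r in range(rows)]

# Three invented 4x4 masks: a stable channel plus one transient flood pixel
dates = [
    [[1, 1, 0, 0], [1, 1, 0, 0], [0, 1, 0, 0], [0, 0, 0, 0]],
    [[1, 1, 0, 0], [1, 1, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0]],
    [[1, 1, 0, 0], [1, 1, 0, 0], [0, 1, 0, 0], [0, 0, 0, 0]],
]
product = persistent_water(dates)
```

The stable channel pixels survive the threshold while the single-date flood pixel is removed; the study's subsequent thinning and vectorization steps then operate on a product of this kind.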
Procedia PDF Downloads 139
125 Phytoremediation-A Plant Based Cleansing Method to Obtain Quality Medicinal Plants and Natural Products
Authors: Hannah S. Elizabeth, D. Gnanasekaran, M. R. Manju Gowda, Antony George
Abstract:
Phytoremediation is a new technology for remediating contaminated soil, water and air using plants, and serves as a green technology with an environmentally friendly approach. The main aim of this technique is the cleansing and detoxifying of organic compounds, organophosphorus pesticides, heavy metals such as arsenic, iron, cadmium and gold, and radioactive elements, which cause teratogenic and life-threatening diseases in humans and animals that consume food crops, vegetables, fruits, cereals, and millets obtained from contaminated soil. These contaminants may also directly damage genetic material, thereby altering the biosynthetic pathways of secondary metabolites and other phytoconstituents with pharmacological activities, causing them to lose their efficacy and potency. This may be reflected in mutagenicity, drug resistance, and disruption of other antagonistic properties of normal metabolism. Phytoremediation is a technology for the real clean-up of contaminated soils and of contaminants that are potentially toxic. It reduces the risks posed by a contaminated soil by decreasing contaminant levels using plants. Its advantages are cost-effectiveness and less ecosystem disruption. Plants may also help to stabilize contaminants by accumulating and precipitating toxic trace elements in their roots. Organic pollutants can potentially be chemically degraded and ultimately mineralized into harmless biological compounds. Hence, the use of plants to revitalize contaminated sites is gaining more attention and is preferred for its cost-effectiveness compared with chemical methods. The introduction of harmful substances into the environment has been shown to have many adverse effects on human health, agricultural productivity, and natural ecosystems. Because the costs of growing a crop are minimal compared to those of soil removal and replacement, the use of plants to remediate hazardous soils is seen as having great promise.
Keywords: cost effective, eco-friendly, phytoremediation, secondary metabolites
Procedia PDF Downloads 281
124 Assessing Future Offshore Wind Farms in the Gulf of Roses: Insights from Weather Research and Forecasting Model Version 4.2
Authors: Kurias George, Ildefonso Cuesta Romeo, Clara Salueña Pérez, Jordi Sole Olle
Abstract:
With the growing prevalence of wind energy, there is a need for modeling techniques to evaluate the impact of wind farms on meteorology and oceanography. This study presents an approach that uses the WRF (Weather Research and Forecasting) model with a Wind Farm Parametrization to simulate the dynamics around the Parc Tramuntana project, an offshore wind farm to be located near the Gulf of Roses off the coast of Barcelona, Catalonia. The model incorporates parameterizations for wind turbines, enabling a representation of the wind field and how it interacts with the infrastructure of the wind farm. Current results demonstrate that the model effectively captures variations in temperature, pressure, and wind speed and direction over time, along with their resulting effects on the power output of the wind farm. These findings are crucial for optimizing turbine placement and operation, thus improving the efficiency and sustainability of the wind farm. In addition to focusing on atmospheric interactions, this study delves into the wake effects among the turbines in the farm. A range of meteorological parameters was also considered to offer a comprehensive understanding of the farm's microclimate. The model was tested under different horizontal resolutions and farm layouts to scrutinize the wind farm's effects more closely. These experimental configurations allow a nuanced understanding of how turbine wakes interact with each other and with the broader atmospheric and oceanic conditions. This modified approach serves as a potent tool for stakeholders in renewable energy, environmental protection, and marine spatial planning, providing a range of information regarding the environmental and socio-economic impacts of offshore wind energy projects.
Keywords: weather research and forecasting, wind turbine wake effects, environmental impact, wind farm parametrization, sustainability analysis
Procedia PDF Downloads 72
123 Large-Scale Screening for Membrane Protein Interactions Involved in Platelet-Monocyte Interactions
Authors: Yi Sun, George Ed Rainger, Steve P. Watson
Abstract:
Background: Beyond their classical roles in haemostasis and thrombosis, platelets are important in the initiation and development of various thrombo-inflammatory diseases. In atherosclerosis and deep vein thrombosis, for example, platelets bridge monocytes with the endothelium and form heterotypic aggregates with monocytes in the circulation. This can alter the monocyte phenotype by inducing activation and stimulating adhesion and migration. These interactions involve cell surface receptor-ligand pairs on both cells. The known list is likely incomplete, as new interactions of importance to platelet biology continue to be discovered, as illustrated by our discovery of PEAR-1 binding to FcεR1α. Results: We have developed a highly sensitive avidity-based assay to identify novel extracellular interactions among 126 recombinantly expressed platelet cell surface and secreted proteins involved in platelet aggregation. In this study, we will use this method to identify novel platelet-monocyte interactions. We aim to identify ligands for orphan receptors and novel partners of well-known proteins. Identified interactions will be studied in preliminary functional assays to demonstrate relevance to the inflammatory processes supporting atherogenesis. Conclusions: Platelet-monocyte interactions are essential for the development of thrombo-inflammatory disease. Until relatively recently, available technologies limited studies to one individual protein interaction at a time. This project proposes, for the first time, to study cell surface platelet-monocyte interactions with a systematic, large-scale approach using a reliable screening method we have developed. If successful, it is likely to identify previously unknown ligands for important receptors that will then be investigated in detail, and to provide a list of novel interactions for the field. This should stimulate studies on developing alternative therapeutic strategies to treat vascular inflammatory disorders such as atherosclerosis, DVT, and sepsis, and other clinically important inflammatory conditions.
Keywords: membrane proteins, large-scale screening, platelets, recombinant expression
Procedia PDF Downloads 151
122 A Comparison of Two and Three Dimensional Motion Capture Methodologies in the Analysis of Underwater Fly Kicking Kinematics
Authors: Isobel M. Thompson, Dorian Audot, Dominic Hudson, Martin Warner, Joseph Banks
Abstract:
Underwater fly kick is an essential skill in swimming, which can have a considerable impact upon overall race performance in competition, especially in sprint events. Reduced wave drag acting upon the body under the surface means that the underwater fly kick is potentially the fastest the swimmer travels throughout the race. It is therefore critical to understand fly kicking techniques and to determine the biomechanical factors involved in performance. Most previous studies assessing fly kick kinematics have focused on two-dimensional analysis; therefore, the three-dimensional elements of underwater fly kick techniques are not well understood. Those studies that have investigated fly kicking techniques using three-dimensional methodologies have not reported full three-dimensional kinematics for the techniques observed, choosing instead to focus on one or two joints. No direct comparison has been completed between the results obtained using two-dimensional and three-dimensional analysis, or of how these different approaches might affect the interpretation of subsequent results. The aim of this research is to quantify the differences in kinematics observed in underwater fly kicks obtained from both two- and three-dimensional analyses of the same test conditions. To achieve this, a six-camera underwater Qualisys system was used to develop an experimental methodology suitable for assessing the kinematics of swimmers' starts and turns. The cameras, capturing at a frequency of 100 Hz, were arranged along the side of the pool, spaced equally up to 20 m, creating a capture volume of 7 m x 2 m x 1.5 m. Within the measurement volume, error levels were estimated at 0.8%. Prior to the pool trials, participants completed a landside calibration in order to define joint centre locations, as certain markers became occluded once the swimmer assumed the underwater fly kick position in the pool.
Thirty-four reflective markers were placed on key anatomical landmarks, 9 of which were then removed for the pool-based trials. The fly kick swimming conditions included in the analysis were as follows: maximum effort prone, 100 m pace prone, 200 m pace prone, 400 m pace prone, and maximum pace supine. All trials were completed from a push start to 15 m to ensure consistent kick cycles were captured. Both two-dimensional and three-dimensional kinematics were calculated from joint locations, and the results are compared. Key variables reported include kick frequency and kick amplitude, as well as full angular kinematics of the lower body. Key differences in these variables obtained from two-dimensional and three-dimensional analysis are identified. Internal rotation (up to 15º) and external rotation (up to -28º) were observed using three-dimensional methods. Abduction (5º) and adduction (15º) were also reported. These motions are not observable in the two-dimensional analysis. The results also give an indication of the different techniques adopted by swimmers at various paces and orientations. This research provides evidence of the strengths of both two-dimensional and three-dimensional motion capture methods in underwater fly kick, highlighting limitations that could affect the interpretation of results from both methods.
Keywords: swimming, underwater fly kick, performance, motion capture
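Why 2D and 3D analyses of the same kick can disagree is easy to demonstrate numerically: projecting markers onto the sagittal plane discards any lateral (rotational or ab/adduction) motion. The sketch below uses invented hip, knee, and ankle coordinates, not data from the study:

```python
import math

def joint_angle(a, b, c):
    # Angle at point b (degrees) between segments b->a and b->c, any dimension
    u = [ai - bi for ai, bi in zip(a, b)]
    v = [ci - bi for ci, bi in zip(c, b)]
    dot = sum(ui * vi for ui, vi in zip(u, v))
    norm = math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(x * x for x in v))
    return math.degrees(math.acos(dot / norm))

# Hypothetical marker positions in metres; y is the lateral axis, so keeping
# only (x, z) projects the leg onto the sagittal plane as a 2D analysis would.
hip, knee, ankle = (0.0, 0.0, 1.0), (0.10, 0.05, 0.55), (0.05, 0.20, 0.10)

knee_3d = joint_angle(hip, knee, ankle)
knee_2d = joint_angle((hip[0], hip[2]), (knee[0], knee[2]), (ankle[0], ankle[2]))
```

With this much lateral offset, the projected knee angle differs from the true 3D angle by a few degrees, and internal/external rotation and ab/adduction vanish entirely from the 2D view.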
Procedia PDF Downloads 133
121 Simulation of Turbulent Flow in Channel Using Generalized Hydrodynamic Equations
Authors: Alex Fedoseyev
Abstract:
This study explores the Generalized Hydrodynamic Equations (GHE) for the simulation of turbulent flows. The GHE were derived from the Generalized Boltzmann Equation (GBE) by Alexeev (1994). The GBE was obtained from first principles from the chain of Bogolyubov kinetic equations and considers particles of finite dimensions (Alexeev, 1994). The GHE contain new terms, temporal and spatial fluctuations, compared to the Navier-Stokes equations (NSE). These new terms have a timescale multiplier τ, and the GHE reduce to the NSE when τ is zero. The non-dimensional τ is a product of the Reynolds number and the squared length scale ratio, τ = Re·(l/L)², where l is the apparent Kolmogorov length scale and L is a hydrodynamic length scale. The turbulence phenomenon is not well understood and is not described by the NSE. One or two additional equations are required for a turbulence model, which may have to be tuned for specific problems. We show that, in the case of the GHE, no additional turbulence model is needed, and the turbulent velocity profile is obtained from the GHE. Two-dimensional turbulent channel and circular pipe flows were investigated using a numerical solution of the GHE for several cases. The solutions are compared with experimental data for circular pipes and 2D channels by Nikuradse (1932, Prandtl Lab), Hussain and Reynolds (1975), Wei and Willmarth (1989), and Van Doorne (2007), with the theory of Wosnik, Castillo and George (2000), and with the relevant experiments on the Superpipe setup at Princeton (data by Zagarola (1996) and Zagarola and Smits (1998)), for Reynolds numbers from Re = 7200 to Re = 960000. The numerical solution data compared well with the experimental data, as well as with the approximate analytical solution for turbulent flow in a channel of Fedoseyev (2023). The obtained results confirm that the Alexeev generalized hydrodynamic theory (GHE) is in good agreement with the experiments for turbulent flows. The proposed approach is limited to 2D and 3D axisymmetric channel geometries. Further work will extend this approach to channels with square and rectangular cross-sections.
Keywords: comparison with experimental data, generalized hydrodynamic equations, numerical solution, turbulent boundary layer, turbulent flow in channel
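The scaling of the extra GHE terms can be checked directly from the definition quoted above, τ = Re·(l/L)². The numerical values below are illustrative only, not scales taken from the study:

```python
# tau = Re * (l / L)^2, with l the apparent Kolmogorov length scale and
# L a hydrodynamic length scale; tau -> 0 recovers the Navier-Stokes limit.
def timescale_multiplier(reynolds, l_small, l_hydro):
    return reynolds * (l_small / l_hydro) ** 2

tau_pipe = timescale_multiplier(7200, 1e-3, 0.1)  # illustrative scales: 0.72
tau_nse = timescale_multiplier(7200, 0.0, 0.1)    # l -> 0: GHE reduce to NSE
```

This makes the limiting behaviour explicit: the fluctuation terms matter only when the resolved length scale ratio l/L is not negligible relative to 1/√Re.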
Procedia PDF Downloads 65
120 The Retinoprotective Effects and Mechanisms of Fungal Ingredient 3,4-Dihydroxybenzalacetone through Inhibition of Retinal Müller and Microglial Activation
Authors: Yu-Wen Cheng, Jau-Der Ho, Liang-Huan Wu, Fan-Li Lin, Li-Huei Chen, Hung-Ming Chang, Yueh-Hsiung Kuo, George Hsiao
Abstract:
Retinal glial activation and neuroinflammation have been confirmed to cause devastating responses in retinodegenerative diseases. The expression and activation of matrix metalloproteinase (MMP)-9 and inducible nitric oxide synthase (iNOS) can act as crucial pathological factors in glaucoma- and blue light-induced retinal injuries. The present study aimed to investigate the retinoprotective effects and mechanisms of the fungal ingredient 3,4-dihydroxybenzalacetone (DBL), isolated from Phellinus linteus, in retinal glial activation and retinodegenerative animal models. In cellular studies, DBL significantly and concentration-dependently abrogated MMP-9 activation and expression in TNFα-stimulated retinal Müller (rMC-1) cells. We found that the inhibitory activities of DBL acted strongly through STAT- and ERK-dependent pathways. Furthermore, DBL dramatically attenuated MMP-9 activation in stimulated Müller cells exposed to conditioned media from LPS-stimulated BV-2 microglial cells. On the other hand, DBL strongly suppressed LPS-induced production of NO and ROS and expression of iNOS in BV-2 microglial cells. Consistently, the phosphorylation of STAT was substantially blocked by DBL in LPS-stimulated BV-2 microglial cells. To evaluate retinoprotective function, the high IOP-induced scotopic electroretinographic (ERG) deficit and the blue light-induced abnormal pupillary light response (PLR) were assessed. The deficient scotopic ERG responses were markedly recovered by DBL in a rat model of glaucoma-like ischemia/reperfusion (I/R) injury. DBL also reduced aqueous gelatinolytic activity and retinal MMP-9 expression under high-IOP injury conditions. Additionally, DBL restored the abnormal PLR and reduced retinal MMP-9 activation.
In summary, DBL could ameliorate retinal neuroinflammation and MMP-9 activation by predominantly inhibiting STAT3 activation in the retinal Müller cells and microglia, which exhibits therapeutic potential for glaucoma and other retinal degenerative diseases.
Keywords: glaucoma, blue light, DBL, retinal Müller cell, MMP-9, STAT, microglia, iNOS, ERG, PLR
Procedia PDF Downloads 139
119 A Step Towards Circular Economy: Assessing the Efficacy of Ion Exchange Resins in the Recycling of Automotive Engine Coolants
Authors: George Madalin Danila, Mihaiella Cretu, Cristian Puscasu
Abstract:
The recycling of used antifreeze/coolant is a widely discussed and intricate issue. Complying with government regulations for the proper disposal of hazardous waste poses a significant challenge for today's automotive and industrial sectors. In recent years, global focus has shifted toward Earth's fragile ecology, emphasizing the need to restore and preserve the natural environment. The business and industrial sectors have undergone substantial changes to adapt and offer products tailored to these evolving markets. The global antifreeze market was valued at USD 5.4 billion in 2020 and is expected to reach USD 5.9 billion by 2025, due to the increased number of vehicles worldwide and to the growth of HVAC systems. This study presents the evaluation of an ion exchange resin-based installation designed for the recycling of engine coolants, specifically ethylene glycol (EG) and propylene glycol (PG). The recycling process aims to restore the coolant to meet the stringent ASTM standards for both new and recycled coolants. A combination of physical-chemical methods, gas chromatography-mass spectrometry (GC-MS), and inductively coupled plasma mass spectrometry (ICP-MS) was employed to analyze and validate the purity and performance of the recycled product. The experimental setup included performance tests, namely corrosion in glassware and the foaming tendency of the coolant, to assess the efficacy of the recycled coolants against new coolant standards. The results demonstrate that the recycled EG coolants exhibit quality comparable to new coolants, with all critical parameters falling within the acceptable ASTM limits. This indicates that the ion exchange resin method is a viable and efficient solution for the recycling of engine coolants, offering an environmentally friendly alternative to the disposal of used coolants while ensuring compliance with industry standards.
Keywords: engine coolant, glycols, recycling, ion exchange resin, circular economy
Procedia PDF Downloads 42
118 Flotation of Rare Earth Oxides from Iron-Oxide Silicate Rich Tailings Using Fatty Acids
Authors: George B. Abaka-Wood, Massimiliano Zanin, Jonas Addai-Mensah, William Skinner
Abstract:
The versatility of froth flotation has made it vital in the beneficiation of rare earth element minerals from either high- or low-grade ores. There has been a significant increase in the quantity of iron oxide silicate-rich tailings generated from the extraction of primary commodities such as copper and gold in Australia, and these tailings have been identified to contain very low-grade rare earth oxides (≤ 1%). There is a vast knowledge gap in the beneficiation of rare earth oxides from such tailings. The aim of this research is to investigate the feasibility of using fatty acids as collectors for the flotation recovery and upgrade of rare earth oxides from selected iron oxide silicate-rich tailings. Two fatty acid collectors (oleic acid and sodium oleate) were tested in this investigation. Flotation tests were carried out using a 1.2 L Denver D-12 cell. The effects of pulp pH, fatty acid dosage, particle size distribution (-150 +75 µm, -75 +38 µm and -38 µm), and conventional depressant (sodium silicate and starch) dosage on the flotation recovery of rare earth oxides were investigated. A comparison of the flotation results indicated that sodium oleate was the more efficient fatty acid for rare earth oxide flotation at all pulp pH values investigated. The flotation performance was found to be particle size-dependent. Both sodium silicate and starch were unselective: decreases in the recovery of iron oxides and silicate minerals, respectively, were accompanied by corresponding decreases in rare earth oxide recovery. Generally, iron oxides and silicate minerals formed a substantial fraction of the flotation concentrates obtained, both in the absence and presence of depressants, resulting in a generally low rare earth oxide upgrade, even though rare earth oxide recoveries were high. The flotation tests carried out on the tailings sample suggest the feasibility of rare earth oxide recovery using fatty acids, although particle size distribution and mineral liberation are key limiting factors in achieving a selective rare earth oxide upgrade.
Keywords: depressants, flotation, oleic acid, sodium oleate
Procedia PDF Downloads 189
117 Bioethanol Production from Marine Algae Ulva Lactuca and Sargassum Swartzii: Saccharification and Process Optimization
Authors: M. Jerold, V. Sivasubramanian, A. George, B.S. Ashik, S. S. Kumar
Abstract:
Bioethanol is a sustainable biofuel that can be used as an alternative to fossil fuels. Today, third-generation (3G) biofuels are gaining more attention than first- and second-generation biofuels. The high lignin content of lignocellulosic biomass is the major drawback of second-generation biofuels. Algae are a renewable feedstock used in third-generation biofuel production. Algae contain large amounts of carbohydrates and can therefore be used for fermentation after hydrolysis. There are two groups of algae: microalgae and macroalgae. In the present investigation, macroalgae were chosen as the raw material for the production of bioethanol. Two marine algae, Ulva lactuca and Sargassum swartzii, were used for the experimental studies. The algal biomass was characterized using various analytical techniques, including elemental analysis, scanning electron microscopy, and Fourier-transform infrared spectroscopy, to understand its physico-chemical characteristics. Batch experiments were done to study hydrolysis and operating parameters such as pH, agitation, fermentation time, and inoculum size. Saccharification was done with acid and alkali treatment. The experimental results showed that NaOH treatment enhanced the bioethanol yield. From the hydrolysis study, it was found that 0.5 M alkali treatment serves as the optimum concentration for the saccharification of polysaccharide sugars to monomeric sugars. The maximum yield of bioethanol was attained at a fermentation time of 9 days. An inoculum volume of 1 mL was found to be the lowest suitable for the ethanol fermentation. The agitation studies showed that fermentation was higher during the process. The percentage yields of bioethanol were found to be 22.752% and 14.23%. The elemental analysis showed that S. swartzii contains a higher carbon content. The results confirmed that hydrolysis was not complete in recovering the sugar from the biomass. The specific gravity of the ethanol was found to be 0.8047 and 0.808 for Ulva lactuca and Sargassum swartzii, respectively. The purity of the bioethanol was also studied and found to be 92.55%. Therefore, marine algae can be used as a most promising renewable feedstock for the production of bioethanol.
Keywords: algae, biomass, bioethanol, biofuel, pretreatment
Procedia PDF Downloads 159
116 Cognitivism in Classical Japanese Art and Literature: The Cognitive Value of Haiku and Zen Painting
Authors: Benito Garcia-Valero
Abstract:
This paper analyses the cognitivist value of traditional Japanese theories of aesthetics, art, and literature. These reflections were developed several centuries before cognitive studies proper, which began in the 1970s. A comparative methodology is employed to shed light on the similarities between traditional Japanese conceptions of art and current cognitivist principles. The Japanese texts to be compared are Zeami's treatise on noh art, Okura Toraaki's Waranbe-gusa on kabuki theatre, and several Buddhist canonical texts about wisdom and knowledge, such as the Prajnaparamitahrdaya or Heart Sutra. Contemporary Japanese critical sources on these works are also referenced, such as Nishida Kitaro's reflections on Zen painting or Ichikawa Hiroshi's analysis of body/mind dualism in Japanese physical practices. Their ideas are compared with those of cognitivist authors such as George Lakoff, Mark Johnson, Mark Turner, and Margaret Freeman. This comparative review reveals the anticipatory ideas of Japanese thinking on the body/mind interrelationship, which agrees with the cognitivist critique of dualism, since both elucidate the physical grounds acting upon the formation of concepts and schemes during the production of knowledge. It also highlights the necessity of recovering ancient Japanese treatises on cognition to continue enlightening current research on art and literature. The artistic examples used to illustrate the theory are Sesshu's Zen paintings and Basho's classical haiku poetry. Zen painting is an excellent field for demonstrating how monk artists conceived human perception and intuited the active role of beholders during the contemplation of art. On the other hand, some haikus by Matsuo Basho aim at factoring subjectivity out of artistic praxis, an ideal of illumination that cannot be achieved through art, due to the embodied nature of perception; a constraint consciously explored by the poet himself. These ideas consolidate the conclusions drawn today by cognitivism about the interrelation between subject and object and the concept of intersubjectivity.
Keywords: cognitivism, dualism, haiku, Zen painting
Procedia PDF Downloads 143
115 Representations of Race and Social Movement Strategies in the US
Authors: Lee Artz
Abstract:
Based on content analyses of major US media, immediately following the George Floyd killing in May 2020, some mayors and local, state, and national officials offered favorable representations of protests against police violence. As the protest movement grew to historic proportions, with 26 million joining actions in large cities and small towns, dominant representations of racism by elected officials and leading media shifted, replacing both the voices and demands of protestors with representations by elected officials. Major media quoted Black mayors and Congressional representatives who emphasized concerns about looting and the disruption of public safety. Media coverage privileged elected officials who criticized movement demands for defunding police and deplored isolated instances of property damage by protestors. Subsequently, public opinion polls saw an increase in concern with law-and-order tropes and a decrease in support for protests against police violence. Black Lives Matter and local organizations had no coordinated response and no effective means of communication to counter the dominant representations voiced by politicians and disseminated globally by major media. Politician- and media-instigated shifts in public opinion indicate that social movements need their own means of communication and collective decision-making, both of which were largely missing from Black Lives Matter leadership, leading to disaffection and a political split by more than 20 local affiliates. By itself, social media activity by myriad individuals and groups had limited purchase as a means of social movement communication and organization. Lacking a collaborative, coordinated strategy, organization, and independent media, the loose network of Black Lives Matter groups was unable to offer more accurate, democratic, and favorable representations of the protests and their demands for greater justice and equality.
The fight for equality was diverted by the fight for representation.
Keywords: black lives matter, public opinion, racism, representations, social movements
Procedia PDF Downloads 179
114 Effectiveness of an Attachment-Based Intervention on Child Cognitive Development: Preliminary Analyses of a 12-Month Follow-Up
Authors: Claire Baudry, Jessica Pearson, Laura-Emilie Savage, George Tarbulsy
Abstract:
Introduction: Over the last decade, researchers have implemented attachment-based interventions to promote parental interactive sensitivity and child development among vulnerable families. In the context of the present study, these interventions have been shown to be effective to enhance cognitive development when child outcome was measured shortly after the intervention. Objectives: The goal of the study was to investigate the effects of an attachment-based intervention on child cognitive development one year post-intervention. Methods: Thirty-five mother-child dyads referred by Child Protective Services in the province of Québec, Canada, were included in this study: 21 dyads who received 6 to 8 intervention sessions and 14 dyads not exposed to the intervention and matched for the following variables: duration of child protective services, reason for involvement with child protection, age, sex and family status. Child cognitive development was measured using the WPPSI-IV, 12 months after the end of the intervention when the average age of children was 54 months old. Findings: An independent-samples t-test was conducted to compare the scores obtained on the WPPSI-IV for the two groups. In general, no differences were observed between the two groups. There was a significant difference on the fluid reasoning scale between children exposed to the intervention (M = 95,13, SD = 16,67) and children not exposed (M = 81, SD = 9,90). T (23) = -2,657; p= .014 (IC :-25.13;3.12). This difference was found only for children aged between 48 and 92 months old. Other results did not show any significant difference between the two groups (Global IQ or subscales). Conclusions: This first set of analyses suggest that relatively little effects of attachment-based intervention remain on the level of cognitive functioning 12-months post-intervention. 
It is possible that the significant finding concerning fluid reasoning is pertinent, in that fluid reasoning is linked to the capacity to analyse, to solve problems, and to remember information, which may be important for promoting school readiness. As the study is completed and more information is gained from other assessments of cognitive and socioemotional outcomes, a clearer picture of the potential moderate-term impact of attachment-based intervention will emerge.
Keywords: attachment-based intervention, child development, child protective services, cognitive development
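For readers unfamiliar with the statistics reported above, the independent-samples (Student's) t-test the authors describe can be sketched in a few lines of pure Python. This is a minimal illustration of the pooled-variance calculation only; the scores passed in below are hypothetical placeholders, not the study data.

```python
import math

def students_t(a, b):
    """Independent-samples t statistic with pooled variance (equal-variance assumption).

    Returns the t statistic and the degrees of freedom (n_a + n_b - 2)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    # Unbiased sample variances of each group
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    # Pooled variance and standard error of the difference in means
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    se = math.sqrt(sp2 * (1 / na + 1 / nb))
    return (ma - mb) / se, na + nb - 2

# Hypothetical example (not the study data):
t, df = students_t([1, 2, 3], [4, 5, 6])
```

In practice a library routine such as SciPy's `ttest_ind` would also return the p-value; the sketch above only exposes the arithmetic behind the reported t statistic.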
Procedia PDF Downloads 173113 Problem Solving: Process or Product? A Mathematics Approach to Problem Solving in Knowledge Management
Authors: A. Giannakopoulos, S. B. Buckley
Abstract:
Problem solving in any field is recognised as a prerequisite for any advancement in knowledge; in South Africa, for example, it is one of the seven critical outcomes of education, together with critical thinking. Since a systematic approach to problem solving was initiated in mathematics by the great mathematician George Polya (the father of problem solving), more detailed and comprehensive approaches to problem solving have been developed. This paper is based on the findings of the authors and subsequent recommendations for further research in problem solving and critical thinking. Although the study was done in mathematics, there is little doubt by now that mathematics is involved to a greater or lesser extent in all fields, from symbols, to variables, to equations, to logic, to critical thinking. It therefore stands to reason that mathematical principles and learning cannot be divorced from any field. In knowledge management situations, the types of problems are similar to mathematical problems, varying from simple to analogical to complex, and from well-structured to ill-structured. While simple problems can be solved by employees by adhering to prescribed sequential steps (the process), analogical and complex problems cannot be proceduralised, which diminishes the organisation's capacity for knowledge creation and innovation. The low efficiency in some organisations and the low pass rates in mathematics prompted the authors to view problem solving as a product. The authors argue that using mathematical approaches to knowledge management problem solving, and treating problem solving as a product, will empower employees through further training to tackle analogical and complex problems.
The question the authors asked was: if problem solving and critical thinking are indeed basic skills necessary for the advancement of knowledge, why is there so little knowledge management (KM) literature about them, how they are connected, and how they advance KM? This paper concludes with a conceptual model based on generally accepted principles of knowledge acquisition (developing a learning organisation) and the creation, sharing, dissemination, and storage of knowledge: the five pillars of KM. This model also expands on Gray's framework on KM practices and problem solving, and opens the door to a new approach to training employees in general and domain-specific problem areas, which can be adapted in any type of organisation.
Keywords: critical thinking, knowledge management, mathematics, problem solving
112 Central Vascular Function and Relaxability in Beta-Thalassemia Major Patients vs. Sickle Cell Anemia Patients by Abdominal Aorta and Aortic Root Speckle Tracking Echocardiography
Authors: Gehan Hussein, Hala Agha, Rasha Abdelraof, Marina George, Antoine Fakhri
Abstract:
Background: β-thalassemia major (TM) and sickle cell disease (SCD) are inherited hemoglobin disorders resulting in chronic hemolytic anemia. Cardiovascular involvement is an important cause of morbidity and mortality in these groups of patients, and the border between overt myocardial dysfunction and clinically silent left ventricular (LV) and/or right ventricular (RV) dysfunction is narrow. Three-dimensional speckle tracking echocardiography (3D STE) is a novel method for the detection of subclinical myocardial involvement. We aimed to study myocardial involvement in SCD and TM using 3D STE, comparing it with conventional echocardiography, and to correlate it with serum ferritin level and lactate dehydrogenase (LDH). Methodology: Thirty SCD and thirty β-TM patients, age range 4-18 years, were compared to 30 healthy age- and sex-matched controls. Cases were subjected to clinical examination and laboratory measurement of hemoglobin level, serum ferritin, and LDH. Transthoracic color Doppler echocardiography, 3D STE, tissue Doppler echocardiography, and aortic speckle tracking were performed. Results: Global longitudinal strain (GLS), global circumferential strain (GCS), and global area strain (GAS) were significantly reduced in SCD and TM compared with controls (p < 0.001), and aortic speckle tracking was significantly lower in TM and SCD patients than in controls (p < 0.001). LDH was significantly higher in SCD than in both TM and controls, and in SCD (but not TM) it correlated significantly and positively with mitral inflow E (p = 0.022, r = 0.416 in SCD; p = 0.072, r = -0.333 in TM), lateral E/E' (p < 0.001, r = 0.618 in SCD; p = 0.818, r = -0.044 in TM), and septal E/E' (p = 0.007, r = 0.485 in SCD; p = 0.753, r = -0.060 in TM). The correlation between LDH and aortic root speckle tracking was negative but did not reach significance (p = 0.681, r = -0.078).
LDH showed potential diagnostic accuracy in predicting vascular dysfunction, as represented by aortic root GCS, with a sensitivity of 74%, and aortic root GCS was predictive of LV dysfunction in SCD patients with a sensitivity of 100%. Conclusion: 3D STE detected LV and RV systolic dysfunction despite normal values on conventional echocardiography. SCD patients showed significantly lower right ventricular function and aortic root GCS than TM patients and controls. LDH can be used to screen patients for cardiac dysfunction in SCD, but not in TM.
Keywords: thalassemia major, sickle cell disease, 3D speckle tracking echocardiography, LDH
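The correlations reported above (the r and p values relating LDH to the Doppler indices) are Pearson correlation coefficients. As a minimal illustration of how such an r value is computed, here is a pure-Python sketch; the sequences passed in are hypothetical placeholders, not the study measurements.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Covariance numerator and the two standard-deviation terms
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical example (not the study data): a perfectly linear relationship
r = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])
```

A library routine such as SciPy's `pearsonr` would also return the associated p-value; the sketch only shows the coefficient itself.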
111 Citizen Becoming: 'In-between' State and Tibetan Self-Fashioning (1946-1986)
Authors: Noel Mariam George
Abstract:
This paper explores the history of Tibetan citizenship, one of the primary non-partition refugee communities, and their negotiation of 'in-betweenness' as a mode of political and legal belonging in India. While South Asian citizenship histories have primarily centered around the 1947 and 1971 Partitions, this paper uncovers an often-overlooked period, spanning the 1950s, 60s, and 70s, when Tibetans began to assert their claims within the Indian state. This paper challenges the conventional teleological narrative of partition by highlighting a distinct period when the Indian state negotiated boundaries of belonging for non-partition refugees differently. It explores how Tibetans occupied an 'in-between' status, existing as both foreigners and potential citizens, thereby complicating the traditional citizen-refugee binary. Moreover, it underscores that citizenship during this era was not solely determined by legal frameworks. Instead, it was a dynamic process shaped by historical contexts, practices, and relationships. Tibetans pursued citizen-like claims through legal battles, lobbying, protests, volunteering, and collective solidarity, revealing citizenship as an 'act' embedded in their daily lives. Tibetan liminality is characterized by their simultaneous maintenance of exile identity and pursuit of citizen-like claims in India. The cautious Indian state, reluctant to label Tibetans as either 'refugees' or 'citizens,' has contributed to this liminal status. This duality has intensified Tibetans' precarity but has also led to creative and transformative practices that have expanded the boundaries of democracy and citizenship in India. Beyond traditional narratives of Indian benevolence, this paper scrutinizes the geopolitical factors driving Indian support for Tibetans. Additionally, it challenges 'common-sensical' narratives by demonstrating how Tibetans strategically navigated Indian citizenship. 
Using archival sources from the British Library and the National Archives in London and Delhi, along with digitized materials, the paper reveals citizenship as a multi-faceted historical process. It examines how Tibetans exercised agency within the Indian state despite their liminal status.
Keywords: citizenship, borderlands, forced displacement, refugees in India