Search results for: spatial patterns
89 Intercultural Initiatives and Canadian Bilingualism
Authors: Muna Shafiq
Abstract:
Growth in international immigration reflects increased migration patterns in Canada and in other parts of the world. Canada continues to promote itself as a bilingual country, yet its bilingual French and English population numbers do not reflect this platform. Each province's integration policies focus only on second-language learning of either English or French. Moreover, since English Canadians outnumber French Canadians, maintaining, much less increasing, English-French bilingualism appears unrealistic. One solution to increasing Canadian bilingualism requires creating intercultural communication initiatives between youth in Quebec and the rest of Canada. Specifically, the focus is on active, experiential learning, where intercultural competencies develop outside traditional classroom settings. The target groups are Generation Y Millennials and Generation Z Linksters, the next generations in the career and parenthood lines. Today, Canada's education system, like many others, must continually renegotiate lines between programs it offers its immigrant and native communities. While some purists or right-wing nationalists would disagree, the survival of bilingualism in Canada has little to do with reducing immigration. Children and youth immigrants play a valuable role in increasing Canada's French- and English-speaking communities. For instance, a focus on immersion over core French education programs for immigrant children and youth would not only increase bilingual rates; it would develop meaningful intercultural attachments between Canadians. Moreover, a vigilant increase of funding in French immersion programs is critical, as are new initiatives that focus on experiential language learning for students in French and English language programs. A favorable argument supports the premise that, alongside French-speaking students in Québec and elsewhere in Canada, second- and third-generation immigrant students are excellent ambassadors to promote bilingualism in Canada. Most already speak another language at home and understand the value of speaking more than one language in their adopted communities. Their dialogue and participation in experiential language exchange workshops are necessary. If the proposed exchanges take place inter-provincially, the momentum to increase collective regional voices grows. This regional collectivity can unite Canadians differently than nation-targeted initiatives. The results from an experiential youth exchange organized in 2017 between students at the crossroads of Generation Y and Generation Z in Vancouver and Quebec City, respectively, offer a promising starting point in assessing the strength of bringing together different regional voices to promote bilingualism. Code-switching between the standard, international French that Vancouver students learn in the classroom and the more regional forms of Quebec French spoken locally created regional connectivity between students. The exchange was equally rewarding for both groups. Increasing their appreciation for each other's regional differences allowed them to contribute actively to their social and emotional development. Within a sociolinguistic frame, this proposed model of experiential learning does not focus on hands-on work experience. However, the benefits of such exchanges are as valuable as work-experience initiatives developed in experiential education.
Students who actively code-switch between French and English in real, not simulated, contexts appreciate bilingualism more meaningfully and experience its value in concrete terms.
Keywords: experiential learning, intercultural communication, social and emotional learning, sociolinguistic code-switching
Procedia PDF Downloads 138
88 A Parallel Cellular Automaton Model of Tumor Growth for Multicore and GPU Programming
Authors: Manuel I. Capel, Antonio Tomeu, Alberto Salguero
Abstract:
Tumor growth from a single transformed cancer cell up to a clinically apparent mass spans a range of spatial and temporal magnitudes. Through computer simulations, Cellular Automata (CA) can accurately describe the complexity of the development of tumors. Tumor development prognosis can now be made -without making patients undergo annoying medical examinations or painful invasive procedures- if we develop appropriate CA-based software tools. In silico testing mainly refers to Computational Biology research studies applicable to clinical actions in Medicine. Establishing sound computer-based models of cellular behavior certainly reduces costs and saves precious time with respect to carrying out experiments in vitro at labs or in vivo with living cells and organisms. These aim to produce scientifically relevant results compared to traditional in vitro testing, which is slow, expensive, and does not generally have acceptable reproducibility under the same conditions. For speeding up computer simulations of cellular models, specific literature shows recent proposals based on the CA approach that include advanced techniques, such as the clever use of efficient supporting data structures when modeling with deterministic and stochastic cellular automata. Multiparadigm and multiscale simulation of tumor dynamics is just beginning to be developed by the concerned research community. The use of stochastic cellular automata (SCA), whose parallel programming implementations can yield high computational performance, is of much interest to be explored up to its computational limits. There have been some approaches based on optimizations to advance multiparadigm models of tumor growth, which mainly pursue improved performance of these models through guaranteed efficient memory accesses, or by considering the dynamic evolution of the memory space (grids, trees, ...) that holds crucial data in simulations. In our opinion, the different optimizations mentioned above are not decisive enough to achieve the high-performance computing power that cell-behavior simulation programs actually need. The possibility of using multicore and GPU parallelism as a promising multiplatform framework to develop new programming techniques to speed up the computation time of simulations has only started to be explored in the last few years. This paper presents a model that incorporates parallel processing, identifying the synchronization necessary for speeding up tumor growth simulations implemented in Java and C++ programming environments. The speed-up achieved by specific parallel syntactic constructs, such as executors (thread pools) in Java, is studied. The new tumor growth parallel model is validated using implementations in the Java and C++ languages on two different platforms: an Intel Core i-X chipset and an HPC cluster of processors at our university. The parallelization of the Poleszczuk and Enderling model (normally used by researchers in mathematical oncology) proposed here is analyzed with respect to performance gain. We intend to apply the model and overall parallelization technique presented here to solid tumors of specific affiliation such as prostate, breast, or colon.
Our final objective is to set up a multiparadigm model capable of modelling angiogenesis, or the growth inhibition induced by chemotaxis, as well as the effect of therapies based on the presence of cytotoxic/cytostatic drugs.
Keywords: cellular automaton, tumor growth model, simulation, multicore and manycore programming, parallel programming, high performance computing, speed up
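To make the synchronization pattern concrete, the following is a minimal Python sketch of one way to parallelize a stochastic CA generation with a thread pool, analogous to the Java executors studied in the paper; the grid size, division probability, and strip decomposition are illustrative assumptions, and a production implementation in Java or C++ (or on a GPU) would avoid Python's interpreter lock.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def step_strip(current, nxt, row_lo, row_hi, p_divide, rng):
    # Update one horizontal strip for a single generation: an occupied cell
    # persists, and with probability p_divide it divides into an empty
    # von Neumann neighbor
    rows, cols = current.shape
    for i in range(row_lo, row_hi):
        for j in range(cols):
            if current[i, j] == 1:
                nxt[i, j] = 1
                if rng.random() < p_divide:
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < rows and 0 <= nj < cols and current[ni, nj] == 0:
                            nxt[ni, nj] = 1
                            break

def simulate(size=200, generations=100, p_divide=0.3, workers=4):
    grid = np.zeros((size, size), dtype=np.uint8)
    grid[size // 2, size // 2] = 1                       # one transformed cell
    rngs = [np.random.default_rng(k) for k in range(workers)]  # one RNG per thread
    bounds = np.linspace(0, size, workers + 1, dtype=int)      # strip boundaries
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for _ in range(generations):
            nxt = np.zeros_like(grid)                    # double buffer: reads of
            futures = [pool.submit(step_strip, grid, nxt,  # `grid` never mix with
                                   bounds[k], bounds[k + 1],  # writes to `nxt`
                                   p_divide, rngs[k])
                       for k in range(workers)]
            for f in futures:
                f.result()                               # barrier: generation ends
            grid = nxt                                   # swap buffers
    return grid

print(simulate().sum(), "occupied cells after 100 generations")
```

The double buffer plus per-generation barrier is exactly the synchronization the abstract refers to: every worker reads the previous generation and writes the next one, so strips can be updated concurrently without ordering effects.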
Procedia PDF Downloads 244
87 A Postmodern Framework for Quranic Hermeneutics
Authors: Christiane Paulus
Abstract:
Post-Islamism assumes that the Quran should not be viewed in terms of what Lyotard identifies as a 'meta-narrative'. However, its socio-ethical content can be viewed as critical of power discourse (Foucault). Practicing religion seems to be limited to rites and individual spirituality, taqwa. Alternatively, can we build on Muhammad Abduh's classic-modern reform and develop it through a postmodernist frame? This is the main question of this study. Through his general and vague remarks on the context of the Quran, Abduh was the first to refer to the historical and cultural distance of the text as an obstacle for interpretation. His application, however, corresponded to the modern absolute idea of authentic sharia. He was followed by Amin al-Khuli, who hermeneutically linked the content of the Quran to the theory of evolution. Fazlur Rahman and Nasr Hamid Abu Zeid remain reluctant to go beyond the general level in terms of context. The hermeneutic circle therefore persists as a challenge: how to break out of it and overcome one's own assumptions. The insight into, and acceptance of, the lasting ambivalence of understanding can be grasped as a postmodern approach; it is documented in Derrida's discovery of the shift in text meanings, différance, and also in Lyotard's theory of the différend. The resulting mixture of meanings (Wolfgang Welsch) can be read together with the classic ambiguity of the premodern interpreters of the Quran (Thomas Bauer). Confronting hermeneutic difficulties in general, Niklas Luhmann shows every description to be an attribution, a tautology, i.e., remaining in the circle. 'De-tautologization' is possible, namely by analyzing the distinctions in the sense of objective, temporal, and social information that every text contains. This could be expanded with the Kantian aesthetic dimension of reason (Critique of Judgment) corresponding to the iʽjaz of the Quran. Luhmann asks, 'What distinction does the observer/author make?' The Quran, as speech from God to its first listeners, could be seen as a discourse responding to the problems of everyday life of that time, which can be viewed as the general goal of the entire Quran. Through reconstructing Quranic lifeworlds (Alfred Schütz) in detail, the social structure crystallizes: the socio-economic differences, the enormous poverty. The Quranic instruction to provide for the basic needs of the neglected groups, which often intersect (old, poor, slaves, women, children), can be seen immediately in the text. First, the references to lifeworlds/social problems and discourses in longer Quranic passages should be hypothesized. Subsequently, information from the classic commentaries could be extracted; the classical Tafseer, in particular, contains rich narrative material for such reconstruction. By selecting and assigning suitable, specific context information, the meaning of the description becomes condensed (Clifford Geertz). In this manner, the text necessarily acquires an alienation and becomes newly accessible. The socio-ethical implications can thus be grasped from the difference between the original problem and the revealed/improved order/procedure; this small step can be materialized as such, not as an absolute solution but as offering plausible patterns for today's challenges, such as the Agenda 2030.
Keywords: postmodern hermeneutics, condensed description, sociological approach, small steps of reform
Procedia PDF Downloads 219
86 Medicompills Architecture: A Mathematically Precise Tool to Reduce the Risk of Diagnosis Errors in Precise Medicine
Authors: Adriana Haulica
Abstract:
Powered by Machine Learning, Precise Medicine is by now tailored to use genetic and molecular profiling, with the aim of optimizing the therapeutic benefits for cohorts of patients. As the majority of Machine Learning algorithms come from heuristics, the outputs have contextual validity. This is not very restrictive in the sense that medicine itself is not an exact science. Meanwhile, the progress made in Molecular Biology, Bioinformatics, Computational Biology, and Precise Medicine, correlated with the huge amount of human biology data and the increase in computational power, opens new healthcare challenges. A more accurate diagnosis is needed, along with real-time treatments, by processing as much as possible of the available information. The purpose of this paper is to present a deeper vision for the future of Artificial Intelligence in Precise Medicine. In fact, current Machine Learning algorithms use standard mathematical knowledge, mostly Euclidean metrics and standard computation rules. The loss of information arising from the classical methods prevents obtaining 100% evidence in the diagnosis process. To overcome these problems, we introduce MEDICOMPILLS, a new architectural concept tool for information processing in Precise Medicine that delivers diagnosis and therapy advice. This tool processes poly-field digital resources: global knowledge related to biomedicine in a direct or indirect manner, but also technical databases, Natural Language Processing algorithms, and strong class optimization functions. As the name suggests, the heart of this tool is a compiler. The approach is completely new, tailored for omics and clinical data. Firstly, the intrinsic biological intuition is different from the well-known "a needle in a haystack" approach usually used when Machine Learning algorithms have to process differential genomic or molecular data to find biomarkers. Also, even if the input is seized from various types of data, the working engine inside MEDICOMPILLS does not search for patterns as an integrative tool. This approach deciphers the biological meaning of input data down to the metabolic and physiologic mechanisms, based on a compiler with grammars issued from bio-algebra-inspired mathematics. It translates input data into bio-semantic units with the help of contextual information, iteratively, until Bio-Logical operations can be performed on the basis of the "common denominator" rule. The rigorousness of MEDICOMPILLS comes from the structure of the contextual information on functions, built to be analogous to mathematical "proofs". The major impact of this architecture is expressed by the high accuracy of the diagnosis. Delivered as a multiple-condition diagnostic, constituted by some main diseases along with unhealthy biological states, this format is highly suitable for therapy proposals and disease prevention. The use of the MEDICOMPILLS architecture is highly beneficial for the healthcare industry. The expectation is to generate a strategic trend in Precise Medicine, making medicine more like an exact science and reducing the considerable risk of errors in diagnostics and therapies. The tool can be used by pharmaceutical laboratories for the discovery of new cures. It will also contribute to better design of clinical trials and speed them up.
Keywords: bio-semantic units, multiple conditions diagnosis, NLP, omics
Procedia PDF Downloads 70
85 Experience in Caring for a Patient with Terminal Aortic Dissection of Lung Cancer and Paralysis of the Lower Limbs after Surgery
Authors: Pei-Shan Liang
Abstract:
Objective: This article explores the care experience of a terminal lung cancer patient who developed lower limb paralysis after surgery for aortic dissection. The patient, diagnosed with aortic dissection during chemotherapy for lung cancer, faced post-surgical lower limb paralysis, leading to feelings of helplessness and hopelessness as they approached death with reduced mobility. Methods: The nursing period was from July 19 to July 27, during which the author, alongside the intensive care team and palliative care specialists, conducted a comprehensive assessment through observation, direct care, conversations, physical assessments, and medical record review. Gordon's eleven functional health patterns were used for a holistic evaluation, identifying four nursing health issues: "pain related to terminal lung cancer and invasive procedures," "decreased cardiac tissue perfusion due to hemodynamic instability," "impaired physical mobility related to lower limb paralysis," and "hopelessness due to the unpredictable prognosis of terminal lung cancer." Results: The medical team initially focused on symptom relief, administering Morphine 5 mg in 0.9% N/S 50 ml IVD q6h for pain management and continuing chemotherapy as prescribed. Open communication was employed to address the patient's physical, psychological, and spiritual concerns. Non-pharmacological interventions, including listening, caring, companionship, and distraction techniques such as comfortable positioning and warm foot baths, complemented the opioid medication and helped alleviate pain, reducing the pain score to 3 on the numeric rating scale and easing respiratory discomfort. The palliative care team was also involved, guiding the patient and family through the "Four Paths of Life," helping the patient achieve a good end-of-life experience and the family a peaceful one. This process also served to promote the concept of palliative care, enabling more patients and families to receive high-quality and dignified care. The patient was encouraged to express inner anxiety through drawing or writing, which helped reduce the hopelessness caused by psychological distress and uncertainty about the disease's prognosis; anxiety, as assessed by the Hospital Anxiety and Depression Scale, reached a mild but acceptable level that did not affect sleep. Conclusion: What left a deep impression during the care process was the need for intensive care providers to consider the patient's psychological state, not just their physical condition, when the patient's situation changes. Family support and involvement often provide the greatest solace for the patient, emphasizing the importance of comfort and dignity. This includes oral care to maintain cleanliness and comfort, frequent repositioning to alleviate pressure and discomfort, and timely removal of invasive devices and unnecessary medications to avoid unnecessary suffering. The nursing process should also address the patient's psychological needs, offering comfort and support to ensure that they can face the end of life with peace and dignity.
Keywords: intensive care, lung cancer, aortic dissection, lower limb paralysis
Procedia PDF Downloads 25
84 Finite Element Method (FEM) Simulation, Design, and 3D Printing of a Novel Highly Integrated PV-TEG Device with Improved Solar Energy Harvest Efficiency
Abstract:
Despite the remarkable advancement of solar cell technology, the challenge of optimizing total solar energy harvest efficiency persists, primarily due to significant heat loss. This excess heat not only diminishes solar panel output efficiency but also curtails its operational lifespan. A promising approach to address this issue is the conversion of surplus heat into electricity. In recent years, there has been growing interest in the use of thermoelectric generators (TEG) as a potential solution. The integration of efficient TEG devices holds the promise of augmenting overall energy harvest efficiency while prolonging the longevity of solar panels. While certain research groups have proposed the integration of solar cells and TEG devices, a substantial gap between conceptualization and practical implementation remains, largely attributed to the low thermal energy conversion efficiency of TEG devices. To bridge this gap and meet the requisites of practical application, a feasible strategy involves the incorporation of a substantial number of p-n junctions within a confined unit volume. However, the manufacturing of high-density TEG p-n junctions presents a formidable challenge. The prevalent solution often leads to large device sizes to accommodate enough p-n junctions, consequently complicating integration with solar cells. Recently, the adoption of 3D printing technology has emerged as a promising way to address this challenge by fabricating high-density p-n arrays. Despite this, further developmental efforts are necessary. Presently, the primary focus is on the 3D printing of vertically layered TEG devices, wherein p-n junction density remains constrained by spatial limitations and the constraints of 3D printing techniques. This study proposes a novel device configuration featuring horizontally arrayed p-n junctions of Bi2Te3. The structural design of the device is simulated using the Finite Element Method (FEM) within COMSOL Multiphysics software. Various device configurations are simulated to identify the optimal device structure. Based on the simulation results, a new TEG device is fabricated utilizing selective laser melting (SLM) 3D printing technology. Fusion 360 facilitates the translation of the COMSOL device structure into a 3D print file. The horizontal design offers a unique advantage, enabling the fabrication of densely packed, three-dimensional p-n junction arrays. The fabrication process entails printing a single row of horizontal p-n junctions using the SLM technique in a single layer. Subsequently, successive rows of p-n junction arrays are printed within the same layer, interconnected by thermally conductive copper. This sequence is replicated across multiple layers, separated by thermally insulating glass. This integration results in a highly compact three-dimensional TEG device with high-density p-n junctions. The fabricated TEG device is then attached to the bottom of the solar cell using thermal glue. The whole device is characterized, with output data closely matching COMSOL simulation results. Future research endeavors will encompass the refinement of thermoelectric materials. This includes the advancement of high-resolution 3D printing techniques tailored to diverse thermoelectric materials, along with the optimization of material microstructures such as porosity and doping.
The objective is to achieve an optimal and highly integrated PV-TEG device that can substantially increase solar energy harvest efficiency.
Keywords: thermoelectric, finite element method, 3D print, energy conversion
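As background for the energy-conversion figures at stake, the following is a minimal Python sketch of the standard first-order estimate of a TEG module's electrical output; the junction count, Seebeck coefficient, temperature difference, and internal resistance are illustrative assumptions, not the fabricated device's measured parameters.

```python
# First-order TEG output estimate: V_oc = N * S_pn * dT, and the maximum
# power delivered to a matched load is P = V_oc^2 / (4 * R_internal)
def teg_power_w(n_junctions: int, seebeck_pn_v_per_k: float,
                delta_t_k: float, r_internal_ohm: float) -> float:
    v_oc = n_junctions * seebeck_pn_v_per_k * delta_t_k  # open-circuit voltage [V]
    return v_oc ** 2 / (4.0 * r_internal_ohm)            # matched-load power [W]

# Assumed values: 100 Bi2Te3 p-n junctions, S_pn ~ 400 uV/K per junction,
# a 30 K gradient between the panel's hot side and ambient, 2 ohm internal
print(f"{teg_power_w(100, 400e-6, 30.0, 2.0):.3f} W")  # -> 0.180 W
```

The quadratic dependence on N and dT is why packing more junctions into a confined volume, the central goal of the horizontal-array design, matters so much for harvest efficiency.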
Procedia PDF Downloads 67
83 Observation on the Performance of Heritage Structures in Kathmandu Valley, Nepal during the 2015 Gorkha Earthquake
Authors: K. C. Apil, Keshab Sharma, Bigul Pokharel
Abstract:
Kathmandu Valley, the capital region of Nepal, houses numerous historical monuments and religious structures, some as old as the 4th century A.D. The city alone is home to seven of UNESCO's world heritage sites, including various public squares and religious sanctums which are often regarded as living heritages by historians and archaeological explorers. On April 25, 2015, the capital city and other nearby locations were struck by the Gorkha earthquake of moment magnitude (Mw) 7.8, followed by the strongest aftershock of moment magnitude (Mw) 7.3 on May 12. This study reports structural failures and collapses of heritage structures in Kathmandu Valley during the earthquake and presents preliminary findings as to their causes. Field reconnaissance was carried out immediately after the main shock and the aftershock in major heritage sites: UNESCO world heritage sites and a number of temples and historic buildings in Kathmandu Durbar Square, Patan Durbar Square, and Bhaktapur Durbar Square. Despite such catastrophe, a significant number of heritage structures stood high, performing very well during the earthquake. Preliminary reports from the archaeological department suggest that 721 such structures were severely affected, of which 444 were within the valley alone, including 76 structures that completely collapsed. This study presents recorded accelerograms and the geology of Kathmandu Valley. The structural typology and architecture of the heritage structures in Kathmandu Valley are briefly described. Case histories of damaged heritage structures, the damage patterns, and the failure mechanisms are also discussed in this paper. It was observed that the performance of heritage structures was influenced by multiple factors such as structural and architectural typology, configuration and structural deficiency, local ground site effects and ground motion characteristics, age and maintenance level, material quality, etc. Most such heritage structures are of masonry type, using bricks and earth mortar as a bonding agent. The walls' resistance is mainly compressive, thus capable of withstanding vertical static gravitational load but not horizontal dynamic seismic load. There was no definitive pattern of damage to heritage structures, as most of them behaved as composite structures. Some structures were extensively damaged in some locations, while structures with similar configuration at nearby locations had little or no damage. Of the major heritage structures, Dome, Pagoda (2-, 3- or 5-tiered temples), and Shikhara structures were studied with similar variables. Studying the varying degrees of damage in such structures, it was found that Shikhara structures were the most vulnerable, whereas Dome structures were found to be the most stable, followed by Pagoda structures. The seismic performance of the masonry-timber and stone masonry structures was slightly better than that of the masonry structures. Regular maintenance and periodic seismic retrofitting seem to have played a pivotal role in strengthening the seismic performance of these structures. The study also recommends some key measures to strengthen the seismic performance of such structures, based on structural analysis, building material behavior, and retrofitting details.
The results also recognise the importance of documenting traditional knowledge and transforming it for use with modern technology.
Keywords: Gorkha earthquake, field observation, heritage structure, seismic performance, masonry building
Procedia PDF Downloads 151
82 Prevalence, Median Time, and Associated Factors with the Likelihood of Initial Antidepressant Change: A Cross-Sectional Study
Authors: Nervana Elbakary, Sami Ouanes, Sadaf Riaz, Oraib Abdallah, Islam Mahran, Noriya Al-Khuzaei, Yassin Eltorki
Abstract:
Major Depressive Disorder (MDD) requires therapeutic interventions during the initial month after diagnosis for better disease outcomes. International guidelines recommend a duration of 4–12 weeks for an initial antidepressant (IAD) trial at an optimized dose to obtain a response. If depressive symptoms persist after this duration, guidelines recommend switching, augmenting, or combining strategies as the next step. Most patients with MDD in the mental health setting have been labeled incorrectly as treatment-resistant when in fact they have not been subjected to an adequate trial of guideline-recommended therapy. Premature discontinuation of the IAD due to ineffectiveness can cause unfavorable consequences. Avoiding irrational practices such as subtherapeutic doses of the IAD, premature switching between antidepressants, and refraining from unjustified polypharmacy can help the disease go into a remission phase. We aimed to determine the prevalence and the patterns of strategies applied after an IAD was changed because of a suboptimal response as a primary outcome. Secondary outcomes included the median survival time on the IAD before any change, and the predictors associated with IAD change. This was a retrospective cross-sectional study conducted in the Mental Health Services in Qatar. A dataset between January 1, 2018, and December 31, 2019, was extracted from the electronic health records. Inclusion and exclusion criteria were defined and applied. The sample size was calculated to be at least 379 patients. Descriptive statistics were reported as frequencies and percentages, in addition to means and standard deviations. The median time from IAD initiation to any change strategy was calculated using survival analysis. Associated predictors were examined using unadjusted and adjusted Cox regression models. A total of 487 patients met the inclusion criteria of the study. The average age of participants was 39.1 ± 12.3 years. Patients experiencing a first MDD episode (255; 52%) constituted the major part of our sample, compared to the relapse group (206; 42%). About 431 (88%) of the patients had an occurrence of IAD change to any strategy before the end of the study. Almost half of the sample (212 (49%); 95% CI [44–53%]) had their IAD changed within 30 days or less. Switching was consistently more common than combination or augmentation at any time point. The median time to IAD change was 43 days, with 95% CI [33.2–52.7]. Five independent variables (age, bothersome side effects, un-optimization of the dose before any change, comorbid anxiety, and first-onset episode) were significantly associated with the likelihood of IAD change in the unadjusted analysis. The factors statistically associated with a higher hazard of IAD change in the adjusted analysis were younger age, un-optimization of the IAD dose before any change, and comorbid anxiety. Because almost half of the patients in this study changed their IAD as early as within the first month, efforts to avoid treatment failure are needed to ensure patient-treatment targets are met. The findings of this study can give direct clinical guidance to health care professionals, since optimized, evidence-based use of AD medication can improve the clinical outcomes of patients with MDD, and can help identify high-risk factors that could worsen the survival time on the IAD, such as young age and comorbid anxiety.
Keywords: initial antidepressant, dose optimization, major depressive disorder, comorbid anxiety, combination, augmentation, switching, premature discontinuation
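For readers unfamiliar with the survival-analysis workflow described, below is a minimal Python sketch of how the median time to IAD change and the adjusted Cox model could be computed with the lifelines library; the data file and column names are hypothetical placeholders, not the study's actual dataset.

```python
# Illustrative sketch of the survival analysis described above, using lifelines;
# "iad_cohort.csv" and all column names are hypothetical EHR-extract fields.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.read_csv("iad_cohort.csv")

# Median time on the initial antidepressant before any change strategy
km = KaplanMeierFitter()
km.fit(durations=df["days_on_iad"], event_observed=df["iad_changed"])
print("Median time to IAD change:", km.median_survival_time_, "days")

# Adjusted Cox regression for predictors of IAD change
cph = CoxPHFitter()
cph.fit(df[["days_on_iad", "iad_changed", "age", "dose_optimized",
            "comorbid_anxiety", "first_episode", "side_effects"]],
        duration_col="days_on_iad", event_col="iad_changed")
cph.print_summary()  # hazard ratios, 95% CIs, p-values
```

Patients still on their IAD at the end of the observation window enter the model as censored observations (event indicator 0), which is what distinguishes this from a naive comparison of mean durations.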
Procedia PDF Downloads 150
81 User-Controlled Color-Changing Textiles: From Prototype to Mass Production
Authors: Joshua Kaufman, Felix Tan, Morgan Monroe, Ayman Abouraddy
Abstract:
Textiles and clothing have been a staple of human existence for millennia, yet the basic structure and functionality of textile fibers and yarns have remained unchanged. While color and appearance are essential characteristics of a textile, an advancement in the fabrication of yarns that allows for user-controlled dynamic changes to the color or appearance of a garment has been lacking. Touch-activated and photosensitive pigments have been used in textiles, but these technologies are passive and cannot be controlled by the user. The technology described here allows the owner to control both when and in what pattern the fabric color change takes place. In addition, the manufacturing process is compatible with mass production of the user-controlled, color-changing yarns. The yarn fabrication utilizes a fiber-spinning system that can produce either monofilament or multifilament yarns. For products requiring a more robust fabric (backpacks, purses, upholstery, etc.), larger-diameter monofilament yarns with a coarser weave are suitable. Such yarns are produced using a thread-coater attachment to encapsulate a 38-40 AWG metal wire inside a polymer sheath impregnated with thermochromic pigment. Conversely, products such as shirts and pants, requiring yarns that are flexible and soft against the skin, comprise multifilament yarns of much smaller-diameter individual fibers. Embedding a metal wire in a multifilament fiber-spinning process has not been realized to date. This research has required collaboration with Hills, Inc., to design a liquid-metal-injection system combined with fiber spinning. The new system injects molten tin into each of 19 filaments being spun simultaneously into a single yarn. The resulting yarn contains 19 filaments, each with a tin core surrounded by a polymer sheath impregnated with thermochromic pigment. The color change we demonstrate is distinct from garments containing LEDs that emit light in various colors: the pigment itself changes its optical absorption spectrum to appear a different color. The thermochromic color change is induced by a temperature change in the inner metal wire within each filament when current is applied from a small battery pack. The temperature necessary to induce the color change is near body temperature and not noticeable by touch. The prototypes already developed either use a simple push button to activate the battery pack or are wirelessly activated via a smartphone app over Wi-Fi. The app allows the user to choose from different activation patterns of stripes that appear in the fabric continuously. The power requirements are mitigated by a large hysteresis between the activation temperature of the pigment and the temperature at which there is full color return. This was made possible by a collaboration with Chameleon International to develop a new, customized pigment. This technology enables a never-before-seen capability: user-controlled, dynamic color and pattern change in large-area woven and sewn textiles and fabrics, with wide-ranging applications from clothing and accessories to furniture and fixed-installation housing and business décor. The ability to activate through Wi-Fi opens up possibilities for the textiles to be part of the 'Internet of Things.' Furthermore, this technology is scalable to mass-production levels for wide-scale market adoption.
Keywords: activation, appearance, color, manufacturing
Procedia PDF Downloads 278
80 Intelligent Materials and Functional Aspects of Shape Memory Alloys
Authors: Osman Adiguzel
Abstract:
Shape-memory alloys are a class of functional materials with a peculiar property known as the shape memory effect. These alloys return to a previously defined shape on heating after deformation in the low-temperature product phase region, and they belong to this class of functional materials due to this property. The origin of this phenomenon lies in the fact that the material changes its internal crystalline structure with changing temperature. The shape memory effect is based on martensitic transitions, which govern the remarkable changes in the internal crystalline structure of materials. Martensitic transformation, a solid-state phase transformation, occurs thermally in the material on cooling from the high-temperature parent phase region. This transformation is governed by changes in the crystalline structure of the material. Shape memory alloys cycle between original and deformed shapes at the bulk level on heating and cooling, and can be used as thermal actuators or temperature-sensitive elements due to this property. Martensitic transformations usually occur with the cooperative movement of atoms by means of lattice-invariant shears. The ordered parent phase structures turn into twinned structures with this movement, in a crystallographic manner, in the thermally induced case. The twinned martensites turn into twinned or oriented martensite on stressing the material in the low-temperature martensitic phase condition. The detwinned martensite turns into the parent phase structure on first heating (the first cycle), and the parent phase structures turn into the twinned and detwinned structures, respectively, in the irreversible and reversible memory cases. On the other hand, shape memory materials are very important and useful in many interdisciplinary fields such as medicine, pharmacy, bioengineering, metallurgy, and many engineering fields. The choice of material, as well as of the actuator and sensor to combine with the host structure, is essential to developing the main materials and structures. Copper-based alloys exhibit this property in the metastable beta-phase region, which has bcc-based structures in the high-temperature parent phase field, and these structures martensitically turn into layered complex structures with lattice twinning following two ordered reactions on cooling. The martensitic transition occurs as self-accommodated martensite with inhomogeneous shears: lattice-invariant shears which occur in two opposite <110>-type directions on the {110}-type plane of the austenite matrix, which is the basal plane of the martensite. This kind of shear can be called a {110}<110>-type mode and gives rise to the formation of layered structures, like 3R, 9R, or 18R, depending on the stacking sequences on the close-packed planes of the ordered lattice. In the present contribution, X-ray diffraction and transmission electron microscopy (TEM) studies were carried out on two copper-based alloys with the following chemical compositions by weight: Cu-26.1%Zn-4%Al and Cu-11%Al-6%Mn. X-ray diffraction profiles and electron diffraction patterns reveal that both alloys exhibit superlattice reflections inherited from the parent phase due to the displacive character of the martensitic transformation. X-ray diffractograms taken over a long time interval show that the locations and intensities of diffraction peaks change with aging time at room temperature.
In particular, some of the successive peak pairs providing a special relation between Miller indices come close to each other.
Keywords: shape memory effect, martensite, twinning, detwinning, self-accommodation, layered structures
Procedia PDF Downloads 426
79 Application of the Pattern Method to Form the Stable Neural Structures in the Learning Process as a Way of Solving Modern Problems in Education
Authors: Liudmyla Vesper
Abstract:
The problems of modern education are large-scale and diverse. The aspirations of parents, teachers, and experts converge: everyone is interested in raising a generation of well-rounded, well-educated persons. Both the family and society expect the future generation to be self-sufficient, desirable in the labor market, and capable of lifelong learning. Today's children have a powerful potential that is difficult to realize under traditional school approaches. Focusing on STEM education in practice often ends with the simple use of computers and gadgets during class. "Science," "technology," "engineering," and "mathematics" are difficult to combine within school and university curricula, which have not changed much during the last 10 years. Solving the problems of modern education largely depends on teacher-innovators and teacher-practitioners who develop and implement effective educational methods and programs, and who propose innovative pedagogical practices that allow students to master large-scale knowledge and apply it on the practical plane. Effective education considers the creation of stable neural structures during the learning process, which allow knowledge to be preserved and increased throughout life. The author proposes a method of integrated lesson-cases based on math patterns for forming a holistic perception of the world. This method and program are scientifically substantiated and have more than 15 years of practical application experience in school and university classrooms. The first results of the practical application of the author's methodology and curriculum were announced at the International Conference "Teaching and Learning Strategies to Promote Elementary School Success," April 22-23, 2006, Yerevan, Armenia, under the IREX-administered 2004-2006 Multiple Component Education Project. The program is based on the concept of interdisciplinary connections and its implementation in the process of continuous learning. This allows students to preserve and increase knowledge throughout life according to a single pattern. The pattern principle stores information on different subjects according to one scheme (pattern), using long-term memory. This is how neural structures are created. The author also suggests that a similar method could be successfully applied to the training of artificial neural networks; however, this assumption requires further research and verification. The educational method and program proposed by the author meet the modern requirements for education, which involve mastering various areas of knowledge starting from an early age. This approach makes it possible to engage the child's cognitive potential as much as possible and direct it to the preservation and development of individual talents. According to the methodology, at the early stages of learning, students understand the connections between school subjects (the so-called "sciences" and "humanities") and real life, and apply the knowledge gained in practice. This approach allows students to realize their natural creative abilities and talents, which makes it easier to navigate professional choices and find their place in life.
Keywords: science education, maths education, AI, neuroplasticity, innovative education problem, creativity development, modern education problem
Procedia PDF Downloads 62
78 Big Data Applications for Transportation Planning
Authors: Antonella Falanga, Armando Cartenì
Abstract:
"Big data" refers to extremely vast and complex sets of data, encompassing extraordinarily large and intricate datasets that require specific tools for meaningful analysis and processing. These datasets can stem from diverse origins like sensors, mobile devices, online transactions, social media platforms, and more. The utilization of big data is pivotal, offering the chance to leverage vast information for substantial advantages across diverse fields, thereby enhancing comprehension, decision-making, efficiency, and fostering innovation in various domains. Big data, distinguished by its remarkable attributes of enormous volume, high velocity, diverse variety, and significant value, represent a transformative force reshaping the industry worldwide. Their pervasive impact continues to unlock new possibilities, driving innovation and advancements in technology, decision-making processes, and societal progress in an increasingly data-centric world. The use of these technologies is becoming more widespread, facilitating and accelerating operations that were once much more complicated. In particular, big data impacts across multiple sectors such as business and commerce, healthcare and science, finance, education, geography, agriculture, media and entertainment and also mobility and logistics. Within the transportation sector, which is the focus of this study, big data applications encompass a wide variety, spanning across optimization in vehicle routing, real-time traffic management and monitoring, logistics efficiency, reduction of travel times and congestion, enhancement of the overall transportation systems, but also mitigation of pollutant emissions contributing to environmental sustainability. Meanwhile, in public administration and the development of smart cities, big data aids in improving public services, urban planning, and decision-making processes, leading to more efficient and sustainable urban environments. Access to vast data reservoirs enables deeper insights, revealing hidden patterns and facilitating more precise and timely decision-making. Additionally, advancements in cloud computing and artificial intelligence (AI) have further amplified the potential of big data, enabling more sophisticated and comprehensive analyses. Certainly, utilizing big data presents various advantages but also entails several challenges regarding data privacy and security, ensuring data quality, managing and storing large volumes of data effectively, integrating data from diverse sources, the need for specialized skills to interpret analysis results, ethical considerations in data use, and evaluating costs against benefits. Addressing these difficulties requires well-structured strategies and policies to balance the benefits of big data with privacy, security, and efficient data management concerns. Building upon these premises, the current research investigates the efficacy and influence of big data by conducting an overview of the primary and recent implementations of big data in transportation systems. Overall, this research allows us to conclude that big data better provide to enhance rational decision-making for mobility choices and is imperative for adeptly planning and allocating investments in transportation infrastructures and services.Keywords: big data, public transport, sustainable mobility, transport demand, transportation planning
Procedia PDF Downloads 60
77 Fold and Thrust Belts Seismic Imaging and Interpretation
Authors: Sunjay
Abstract:
Plate tectonics is of great significance, as it represents the spatial relationships of volcanic rock suites at plate margins, the distribution in space and time of the conditions of different metamorphic facies, the scheme of deformation in mountain belts, or orogens, and the association of different types of economic deposit. Orogenic belts are characterized by extensive thrust faulting, movements along large strike-slip fault zones, and extensional deformation that occurs deep within continental interiors. Within oceanic areas there are also regions of crustal extension and accretion in the backarc basins located on the landward sides of many destructive plate margins. Collisional orogens develop where a continent or island arc collides with a continental margin as a result of subduction. Collisional and non-collisional orogens can be explained by differences in the strength and rheology of the continental lithosphere and by processes that influence these properties during orogenesis. Seismic imaging difficulties: in triangle zones, several factors reduce the effectiveness of seismic methods. The topography in the central part of the triangle zone is usually rugged and is associated with near-surface velocity inversions which degrade the quality of the seismic image. These characteristics lead to low signal-to-noise ratio, inadequate penetration of energy through the overburden, poor geophone coupling with the surface, and wave scattering. Depth seismic imaging techniques: seismic processing alters the seismic data to suppress noise, enhance the desired signal (higher signal-to-noise ratio), and migrate seismic events to their appropriate locations in space and depth. Processing steps generally include analysis of velocities, static corrections, moveout corrections, stacking, and migration. Bow-tie effects and shadow zones in exploration seismology: areas with no reflections (dead areas) are called shadow zones and are common in the vicinity of faults and other discontinuous areas in the subsurface. Shadow zones result when energy from a reflector is focused on receivers that produce other traces; as a result, reflectors are not shown in their true positions. Subsurface discontinuities: diffractions occur at discontinuities in the subsurface, such as faults and velocity discontinuities (as at "bright spot" terminations), and the bow-tie effect is caused by deep-seated synclines. Further topics include seismic imaging of thrust faults and structural damage in deepwater thrust belts; imaging deformation in submarine thrust belts using seismic attributes; imaging thrust and fault zones using 3D seismic image-processing techniques; and checking balanced structural cross-sections against seismic-interpretation pitfalls, which can originate from any or all of the limitations of data acquisition, processing, and interpretation of the subsurface geology. Regarding pitfalls and limitations in seismic-attribute interpretation of tectonic features: seismic attributes are routinely used to accelerate and quantify the interpretation of tectonic features in 3D seismic data. Coherence (or variance) cubes delineate the edges of megablocks and faulted strata, curvature delineates folds and flexures, while spectral components delineate lateral changes in thickness and lithology. These methods also support carbon capture and geological storage leakage surveillance, because a fault can behave as a seal or as a conduit for hydrocarbon migration to a trap.
Keywords: tectonics, seismic imaging, fold and thrust belts, seismic interpretation
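Since moveout correction is one of the processing steps listed above, here is a minimal Python sketch of the normal moveout (NMO) travel-time relation for a flat reflector; the zero-offset time, stacking velocity, and offsets are illustrative values, not from any dataset discussed.

```python
# NMO travel time for a flat reflector: t(x) = sqrt(t0^2 + x^2 / v^2).
# Subtracting t0 gives the correction applied to flatten the reflection
# hyperbola before stacking.
import numpy as np

def nmo_time(t0_s, offset_m, velocity_m_s):
    return np.sqrt(t0_s ** 2 + (offset_m / velocity_m_s) ** 2)

t0 = 1.0                                           # zero-offset two-way time [s]
v = 2500.0                                         # stacking velocity [m/s]
offsets = np.array([0.0, 500.0, 1000.0, 2000.0])   # source-receiver offsets [m]

t_x = nmo_time(t0, offsets, v)
print("moveout correction [s]:", t_x - t0)
```

The strong lateral velocity variations typical of triangle zones are exactly what break the single-velocity assumption in this relation, which is one reason imaging there is difficult.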
Procedia PDF Downloads 70
76 Health and Climate Changes: "Ippocrate" a New Alert System to Monitor and Identify High Risk
Authors: A. Calabrese, V. F. Uricchio, D. di Noia, S. Favale, C. Caiati, G. P. Maggi, G. Donvito, D. Diacono, S. Tangaro, A. Italiano, E. Riezzo, M. Zippitelli, M. Toriello, E. Celiberti, D. Festa, A. Colaianni
Abstract:
Climate change has a severe impact on human health. There is a vast literature demonstrating that temperature increase is causally related to cardiovascular problems and represents a high risk for human health, but there are few studies that propose a solution. In this work, we study how climate influences human health parameters through the analysis of climatic conditions in an area of the Apulia Region: the Capurso Municipality. At the same time, the medical personnel involved identified a set of variables useful for defining an index describing health condition. These scientific studies are the basis of an innovative alert system, IPPOCRATE, whose aim is to assess climate risk and share information with the population at risk to support prevention and mitigation actions. IPPOCRATE is an e-health system designed to provide technological support for the analysis of health risk related to climate and to provide tools for the prevention and management of critical events. It is the first integrated system for the prevention of human risk caused by climate change. IPPOCRATE calculates risk by weighting meteorological data with the vulnerability of monitored subjects, and uses mobile and cloud technologies to acquire and share information on different data channels. It is composed of four components. Multichannel Hub: the ICT infrastructure used to feed the IPPOCRATE cloud with different types of data coming from remote monitoring devices or imported from meteorological databases. Such data are ingested, transformed, and elaborated in order to be dispatched towards the mobile app and VoIP phone systems. The IPPOCRATE Multichannel Hub uses open communication protocols to create a set of APIs for interfacing IPPOCRATE with third-party applications; internally, it uses a non-relational paradigm to create a flexible and highly scalable database. WeHeart and Smart Application: the wearable device WeHeart is equipped with sensors designed to measure the following biometric variables: heart rate, systolic and diastolic blood pressure, blood oxygen saturation, body temperature, and blood glucose for diabetic subjects. WeHeart is designed to be easy to use and non-invasive; for data acquisition, users need only wear it and connect it to the Smart Application via the Bluetooth protocol. EasyBox: designed to take advantage of new technologies related to e-health care, EasyBox allows the user to fully exploit all IPPOCRATE features; its name reveals its purpose as a container for the various devices that may be included depending on user needs. Territorial Registry: the IPPOCRATE web module reserved for medical personnel for monitoring, research, and analysis activities. The Territorial Registry gives access to all information gathered by IPPOCRATE, using a GIS system to execute spatial analyses combining geographical data (climatological information and monitored data) with information regarding the clinical history of users and their personal details. The Territorial Registry was designed for different types of users: control rooms managed by wide-area health facilities, single health care centers, or single doctors; it manages this hierarchy by diversifying access to system functionalities. IPPOCRATE is the first e-health system focused on climate risk prevention.
Keywords: climate change, health risk, new technological system
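To illustrate the "hazard weighted by subject vulnerability" computation described above, here is a minimal Python sketch; the hazard proxy, weights, thresholds, and subject record are assumptions made for illustration, not IPPOCRATE's actual parameters.

```python
# Illustrative risk index: a simplified heat-hazard proxy scaled by a
# per-subject vulnerability score in [0, 1]. All constants are assumed.
def heat_risk_index(temperature_c, humidity_pct, vulnerability):
    # Hot and humid conditions raise the hazard; below ~27 C the hazard is zero
    hazard = max(0.0, (temperature_c - 27.0) / 10.0) * (0.5 + humidity_pct / 200.0)
    return min(1.0, hazard * (0.5 + vulnerability))

# Hypothetical monitored subject, e.g., elderly with cardiovascular history
subject = {"id": "P001", "vulnerability": 0.8}
risk = heat_risk_index(temperature_c=36.0, humidity_pct=70.0,
                       vulnerability=subject["vulnerability"])
if risk > 0.6:  # assumed alert threshold
    print(f"ALERT for {subject['id']}: risk={risk:.2f}")  # dispatched via app/VoIP
```

In the real system, the meteorological inputs would arrive through the Multichannel Hub and the vulnerability score would be derived from the clinical variables identified by the medical personnel.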
Procedia PDF Downloads 867
75 Argos-Linked Fastloc GPS Reveals the Resting Activity of Migrating Sea Turtles
Authors: Gail Schofield, Antoine M. Dujon, Nicole Esteban, Rebecca M. Lester, Graeme C. Hays
Abstract:
Variation in diel movement patterns during migration provides information on the strategies used by animals to maximize energy efficiency and ensure the successful completion of migration. For instance, many flying and land-based terrestrial species stop to rest and refuel at regular intervals along the migratory route, or at transitory 'stopover' sites, depending on resource availability. However, in cases where stopping is not possible (such as over or through deep, open oceans, or over deserts and mountains), non-stop travel is required, and animals need to develop strategies to rest while actively traveling. Recent advances in biologging technologies have identified mid-flight micro-sleeps by swifts in Africa during the 10-month non-breeding period, and the use of lateralized sleep behavior in orca and bottlenose dolphins during migration. Here, highly accurate locations obtained by Argos-linked Fastloc-GPS transmitters on adult green (n=8 turtles, 9,487 locations) and loggerhead (n=46 turtles, 47,588 locations) sea turtles migrating around a thousand kilometers (over several weeks) from breeding to foraging grounds across the Indian and Mediterranean oceans were used to identify potential resting strategies. Stopovers were documented for only seven turtles, lasting up to 6 days; thus, this strategy was not commonly used, possibly due to the lack of potential 'shallow' (<100 m seabed depth) sites along the routes. However, observations of the day versus night speed of travel indicated that turtles might use other mechanisms to rest. For instance, turtles traveled an average of 31% slower at night compared to day during oceanic crossings. Slower travel speeds at night might be explained by turtles swimming in a less direct line at night and/or by deeper dives reducing their forward motion, as indicated by studies using Argos-linked transmitters and accelerometers. Furthermore, within the first 24 h of entering waters shallower than 100 m towards the end of migration (the depth at which sea turtles can swim to and rest on the seabed), some individuals traveled 72% slower at night, repeating this behavior intermittently (each time for a one-night duration at 3-6-day intervals) until reaching the foraging grounds. If the turtles were, in fact, resting on the seabed at this point, they could be inactive for up to 8 hours, facilitating protracted periods of rest after several weeks of constant swimming. Turtles might not rest every night once within these shallower depths, due to the time constraints of reaching the foraging grounds and restoring depleted energetic reserves (as sea turtles are capital breeders, they tend not to feed for several months during migration to and from the breeding grounds and while breeding). In conclusion, access to data-rich, highly accurate Argos-linked Fastloc-GPS provided information about differences in day versus night activity at different stages of migration, allowing us, for the first time, to compare the strategies used by a marine vertebrate with those of terrestrial land-based and flying species. The question of what resting strategies are used by individuals that remain in oceanic waters to forage remains open, however, with combinations of highly accurate Argos-linked Fastloc-GPS transmitters and accelerometry or time-depth recorders on sufficient numbers of individuals being required to answer it.
Keywords: argos-linked fastloc GPS, data loggers, migration, resting strategy, telemetry
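As an illustration of how day-versus-night travel speeds can be derived from such GPS fixes, here is a minimal Python sketch; the input file, column names, and the fixed 06:00-18:00 daylight window are hypothetical simplifications (a field analysis would use local sunrise and sunset times).

```python
# Compute per-segment travel speeds from consecutive GPS fixes and compare
# day vs. night means; "fastloc_fixes.csv" with lat/lon/timestamp columns
# is a hypothetical stand-in for one turtle's track.
import numpy as np
import pandas as pd

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between consecutive fixes [km]
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

fixes = pd.read_csv("fastloc_fixes.csv", parse_dates=["timestamp"])
fixes = fixes.sort_values("timestamp")

d_km = haversine_km(fixes["lat"].shift(), fixes["lon"].shift(),
                    fixes["lat"], fixes["lon"])
dt_h = fixes["timestamp"].diff().dt.total_seconds() / 3600.0
fixes["speed_kmh"] = d_km / dt_h

# Crude diel split; real analyses would use local sunrise/sunset times
fixes["night"] = ~fixes["timestamp"].dt.hour.between(6, 18)
print(fixes.groupby("night")["speed_kmh"].mean())
```

A night mean around 31% below the day mean during an oceanic crossing would reproduce the headline pattern reported in the abstract.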
Procedia PDF Downloads 156
74 Digital Twin for a Floating Solar Energy System with Experimental Data Mining and AI Modelling
Authors: Danlei Yang, Luofeng Huang
Abstract:
The integration of digital twin technology with renewable energy systems offers an innovative approach to predicting and optimising performance throughout the entire lifecycle. A digital twin is a continuously updated virtual replica of a real-world entity, synchronised with data from its physical counterpart and environment. Many digital twin companies today claim to have mature digital twin products, but their focus is primarily on equipment visualisation. However, the core of a digital twin should be its model, which can mirror, shadow, and thread with the real-world entity, and this remains underdeveloped. For a floating solar energy system, a digital twin model can be defined in three aspects: (a) the physical floating solar energy system along with environmental factors such as solar irradiance and wave dynamics, (b) a digital model powered by artificial intelligence (AI) algorithms, and (c) the integration of real system data with the AI-driven model and a user interface. The experimental setup for the floating solar energy system is designed to replicate the real-ocean conditions of floating solar installations within a controlled laboratory environment. The system consists of a water tank that simulates an aquatic surface, where a floating catamaran structure supports a solar panel. The solar simulator is set up in three positions: one directly above and two inclined at a 45° angle in front of and behind the solar panel. This arrangement allows the simulation of different sun angles, such as sunrise, midday, and sunset. The solar simulator is positioned 400 mm away from the solar panel to maintain consistent solar irradiance on its surface. Stability of the floating structure is achieved through ropes attached to anchors at the bottom of the tank, which simulate the mooring systems used in real-world floating solar applications. The floating solar energy system's sensor setup includes various devices to monitor environmental and operational parameters. An irradiance sensor measures solar irradiance on the photovoltaic (PV) panel. Temperature sensors monitor ambient air and water temperatures, as well as the PV panel temperature. Wave gauges measure wave height, while load cells capture mooring force. Inclinometers and ultrasonic sensors record the heave and pitch amplitudes of the floating system's motions. An electric load measures the voltage and current output from the solar panel. All sensors collect data simultaneously. Artificial neural network (ANN) algorithms are central to developing the digital model, which processes historical and real-time data, identifies patterns, and predicts the system's performance in real time. The data collected from the various sensors are partly used to train the digital model, with the remaining data reserved for validation and testing. The digital twin model combines the experimental setup with the ANN model, enabling monitoring, analysis, and prediction of the floating solar energy system's operation. The digital model mirrors the functionality of the physical setup, running in sync with the experiment to provide real-time insights and predictions. It provides useful industrial benefits, such as informing maintenance plans as well as design and control strategies for optimal energy efficiency. In the long term, this digital twin will help improve overall solar energy yield whilst minimising operational costs and risks.
Keywords: digital twin, floating solar energy system, experiment setup, artificial intelligence
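The following is a minimal Python sketch of the kind of ANN-based digital model described: a regressor trained on part of the logged sensor data and validated on the held-out remainder. The data file, feature names, and network size are hypothetical placeholders, not the study's actual configuration.

```python
# Train a small ANN to predict PV power output from the sensor streams the
# abstract lists; "floating_pv_sensors.csv" and its columns are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

data = pd.read_csv("floating_pv_sensors.csv")
features = ["irradiance", "air_temp", "water_temp", "panel_temp",
            "wave_height", "pitch", "heave"]

# Part of the data trains the model; the rest is held out for validation and
# testing, mirroring the split described in the abstract
X_train, X_test, y_train, y_test = train_test_split(
    data[features], data["power_w"], test_size=0.3, random_state=42)

ann = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=42)
ann.fit(X_train, y_train)
print("held-out R^2:", r2_score(y_test, ann.predict(X_test)))
```

Once trained, the same model can be fed the live sensor stream, which is what lets the digital model run in sync with the physical experiment and flag deviations between predicted and measured output.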
Procedia PDF Downloads 77
73 Probability Modeling and Genetic Algorithms in Small Wind Turbine Design Optimization: Mentored Interdisciplinary Undergraduate Research at LaGuardia Community College
Authors: Marina Nechayeva, Malgorzata Marciniak, Vladimir Przhebelskiy, A. Dragutan, S. Lamichhane, S. Oikawa
Abstract:
This presentation is a progress report on a faculty-student research collaboration at CUNY LaGuardia Community College (LaGCC) aimed at designing a small horizontal axis wind turbine optimized for the wind patterns on the roof of our campus. Our project combines statistical and engineering research. Our wind modeling protocol is based upon a recent wind study by a faculty-student research group at MIT, and some of our blade design methods are adopted from a senior engineering project at CUNY City College. Our use of genetic algorithms has been inspired by David Wood's work on small wind turbine design. We combine these diverse approaches in our interdisciplinary project in a way that has not been done before and improve upon certain techniques used by our predecessors. We employ several estimation methods to determine the best-fitting parametric probability distribution model for the local wind speed data, obtained by correlating short-term on-site measurements with a long-term time series at the nearby airport. The model serves as a foundation for engineering research that focuses on adapting and implementing genetic algorithms (GAs) for engineering optimization of the wind turbine design using Blade Element Momentum Theory. GAs are used to create new airfoils with desirable aerodynamic specifications. Small-scale models of the best-performing designs are 3D printed and tested in the wind tunnel to verify the accuracy of relevant calculations. Genetic algorithms are applied to selected airfoils to determine the blade design (radial chord and pitch distribution) that would optimize the coefficient-of-power profile of the turbine. Our approach improves upon traditional blade design methods in that it lets us dispense with assumptions necessary to simplify the system of Blade Element Momentum Theory equations, thus resulting in more accurate aerodynamic performance calculations. Furthermore, it enables us to design blades optimized for a whole range of wind speeds rather than a single value. Lastly, we improve upon known GA-based methods in that our algorithms are constructed to work with XFoil-generated airfoil data, which enables us to optimize blades using our own high-glide-ratio airfoil designs, without having to rely upon available empirical data from existing airfoils, such as the NACA series. Beyond its immediate goal, this ongoing project serves as a training and selection platform for the CUNY Research Scholars Program (CRSP) through its annual Aerodynamics and Wind Energy Research Seminar (AWERS), an undergraduate summer research boot camp designed to introduce prospective researchers to the relevant theoretical background and methodology, get them up to speed with the current state of our research, and test their abilities and commitment to the program. Furthermore, several aspects of the research (e.g., writing code for 3D printing of airfoils) are adapted in the form of classroom research activities to enhance Calculus sequence instruction at LaGCC.
Keywords: engineering design optimization, genetic algorithms, horizontal axis wind turbine, wind modeling
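A minimal sketch of the distribution-fitting step in Python, assuming a Weibull model (a common choice for wind-speed data; the abstract does not name the distribution it ultimately selects) and synthetic measurements standing in for the rooftop/airport series:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
wind_speeds = rng.weibull(2.0, 5000) * 6.0   # stand-in for the correlated wind data (m/s)

# Maximum-likelihood fit with the location parameter fixed at zero.
shape, loc, scale = stats.weibull_min.fit(wind_speeds, floc=0)
print(f"shape k = {shape:.2f}, scale c = {scale:.2f} m/s")

# One simple goodness-of-fit check: a Kolmogorov-Smirnov test.
ks = stats.kstest(wind_speeds, "weibull_min", args=(shape, loc, scale))
print(f"KS statistic = {ks.statistic:.4f}, p-value = {ks.pvalue:.3f}")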
Procedia PDF Downloads 231
72 Mental Health Promotion for Children of Mentally Ill Parents in Schools. Assessment and Promotion of Teacher Mental Health Literacy in Order to Promote Child Related Mental Health (Teacher-MHL)
Authors: Dirk Bruland, Paulo Pinheiro, Ullrich Bauer
Abstract:
Introduction: Over 3 million children, about one quarter of all students, experience at least one parent with a mental disorder in Germany every year. Children of mentally ill parents are at considerably higher risk of developing serious mental health problems. The different burden patterns and coping attempts often become manifest in children's school lives. In this context, schools can have an important protective function, but can also create risk potentials. Following Jorm, pupil-related teachers' mental health literacy (Teacher-MHL) includes the ability to recognize behavioural change, knowledge of risk factors, the implementation of first-aid interventions, and seeking professional help (the teacher as gatekeeper). Although teachers' knowledge and increased awareness of this topic are essential, the literature provides little information on the extent of teachers' abilities. As part of a Germany-wide research consortium on health literacy, this project, launched in March for 3 years, will conduct evidence-based mental health literacy research. The primary objective is to measure Teacher-MHL in the context of pupil-related psychosocial factors at primary and secondary schools (grades 5 & 6), while also focussing on children's social living conditions. Methods: (1) A systematic literature review in different databases to identify papers with regard to Teacher-MHL (completed). (2) Based on these results, an interview guide was developed. This research step includes a qualitative pre-study to inductively survey the general profiles of teachers (n=24); the evaluation will be presented at the conference. (3) These findings will be translated into a quantitative teacher survey (n=2500) in order to assess the extent of teachers' socio-analytical skills in relation to institutional and individual characteristics. (4) Based on results 1-3, a training program for teachers will be developed. Results: The review highlights a lack of information on Teacher-MHL and the associated skills, especially in relation to high-risk groups such as children of mentally ill parents. The literature is limited to a few studies only. According to these, teachers are not good at identifying burdened children, and when they do identify such children, they do not know how to handle the situation in school. They are not sufficiently trained to deal with these children, and there are great uncertainties in particular in dealing with the teaching situation. Institutional means and resources are missing as well. Such a mismatch can result in insufficient support and missed opportunities for children at risk. First impressions from the interviews confirm these results and allow greater insight into everyday school life in relation to critical life events in families. Conclusions: For the first time, schools will be addressed as a setting where children are especially "accessible" for measures of health promotion. Addressing Teacher-MHL gives reason to expect high effectiveness. Targeting professionals' abilities to deal with this high-risk group relieves teachers themselves in handling such situations and strengthens school health promotion. In view of the fact that only 10-30% of such high-risk families accept offers of therapy and assistance, this will be the first primary preventive and health-promoting approach to protect the health of a yet unaffected, but particularly burdened, high-risk group.
Keywords: children of mentally ill parents, health promotion, mental health literacy, school
Procedia PDF Downloads 544
71 Physical Aspects of Shape Memory and Reversibility in Shape Memory Alloys
Authors: Osman Adiguzel
Abstract:
Shape memory alloys belong to a class of smart materials, exhibiting a peculiar property called the shape memory effect. This property is characterized by the recoverability of two particular shapes of the material at different temperatures. These materials are often called smart materials due to their functionality and their capacity to respond to changes in the environment. Shape memory materials are used as shape memory devices in many interdisciplinary fields such as medicine, bioengineering, metallurgy, the building industry and many engineering fields. The shape memory effect is performed thermally by heating and cooling after first cooling and stressing treatments, and this behavior is called thermoelasticity. This effect is based on martensitic transformations characterized by changes in the crystal structure of the material. The shape memory effect is the result of successive thermally and stress-induced martensitic transformations. Shape memory alloys exhibit thermoelasticity and superelasticity by means of deformation in the low-temperature product phase and the high-temperature parent phase region, respectively. Superelasticity is performed by stressing and releasing the material in the parent phase region. Loading and unloading paths are different in the stress-strain diagram, and the cycling loop reveals energy dissipation. The strain energy is stored after releasing, and these alloys are mainly used as deformation-absorbent materials in the control of civil structures subjected to seismic events, due to the absorption of strain energy during any disaster or earthquake. Thermally induced martensitic transformation occurs on cooling, along with lattice twinning through cooperative movements of atoms by means of lattice invariant shears; ordered parent phase structures turn into twinned martensite structures, and twinned structures turn into detwinned structures by means of stress-induced martensitic transformation when the material is stressed in the martensitic condition. The thermally induced transformation occurs with the cooperative movements of atoms in two opposite <110>-type directions on the {110}-type planes of the austenite matrix, which form the basal plane of martensite. Copper-based alloys exhibit this property in the metastable β-phase region, which has bcc-based structures in the high-temperature parent phase field. Lattice invariant shear and twinning are not uniform in copper-based ternary alloys and give rise to the formation of complex layered structures, depending on the stacking sequences on the close-packed planes of the ordered parent phase lattice. In the present contribution, x-ray diffraction and transmission electron microscopy (TEM) studies were carried out on two copper-based CuAlMn and CuZnAl alloys. X-ray diffraction profiles and electron diffraction patterns reveal that both alloys exhibit superlattice reflections inherited from the parent phase due to the displacive character of the martensitic transformation. X-ray diffractograms taken over a long time interval show that the diffraction angles and intensities of the diffraction peaks change with aging duration at room temperature. In particular, some of the successive peak pairs providing a special relation between Miller indices come close to each other. This result points to a rearrangement of atoms in a diffusive manner.
Keywords: shape memory effect, martensitic transformation, reversibility, superelasticity, twinning, detwinning
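The peak shifts described above can be read through Bragg's law; as a reminder (standard crystallography, not a result specific to these alloys), each diffraction angle is tied to an interplanar spacing, and for a cubic parent lattice that spacing follows directly from the Miller indices:

\[
n\lambda = 2 d_{hkl} \sin\theta, \qquad d_{hkl} = \frac{a}{\sqrt{h^{2}+k^{2}+l^{2}}} \quad \text{(cubic lattice)},
\]

so two peaks moving toward each other during aging correspond to their interplanar spacings d_{hkl} converging, consistent with a diffusive rearrangement of atoms.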
Procedia PDF Downloads 181
70 Investigating Links in Achievement and Deprivation (ILiAD): A Case Study Approach to Community Differences
Authors: Ruth Leitch, Joanne Hughes
Abstract:
This paper presents the findings of a three-year government-funded study (ILiAD) that aimed to understand the reasons for differential educational achievement within and between socially and economically deprived areas in Northern Ireland. Previous international studies have concluded that there is a positive correlation between deprivation and underachievement. Our preliminary secondary data analysis suggested that the factors involved in educational achievement within areas of multiple deprivation may be more complex than this, with some areas of high multiple deprivation having high levels of student attainment, whereas other less deprived areas demonstrated much lower levels of student attainment, as measured by outcomes on high-stakes national tests. The study proposed that no single explanation or disparate set of explanations could easily account for the linkage between levels of deprivation and patterns of educational achievement. Using a social capital perspective that centralizes the connections within and between individuals and social networks in a community as a valuable resource for educational achievement, the ILiAD study involved a multi-level case study analysis of seven community sites in Northern Ireland, selected on the basis of religious composition (housing areas are largely segregated by religious affiliation), measures of multiple deprivation and differentials in educational achievement. The case study approach involved three (interconnecting) levels of qualitative data collection and analysis - what we have termed Micro (or community/grassroots level) understandings, Meso (or school level) explanations and Macro (or policy/structural) factors. The analysis combines a statistical mapping of factors with qualitative, in-depth data interpretation which, together, allow for deeper understandings of the dynamics and contributory factors within and between the case study sites. Thematic analysis of the qualitative data reveals cross-cutting factors (e.g. demographic shifts and loss of community, the place of the school in the community, parental capacity), while analytic case studies of the explanatory factors associated with each community site also permit a comparative element. Issues arising from the qualitative analysis are classified either as drivers or inhibitors of educational achievement within and between communities. Key issues emerging as inhibitors/drivers of attainment include: the legacy of the community conflict in Northern Ireland, not least in terms of inter-generational stress associated with substance abuse and mental health issues; differing discourses on notions of ‘community’ and ‘achievement’ within/between community sites; inter-agency and intra-agency levels of collaboration and joined-up working; the relationship within the home/school/community triad; and school leadership and school ethos. At this stage, the balance of these factors can be conceptualized in terms of bonding social capital (or the lack of it) within families, within schools, within each community and within agencies, and also bridging social capital between the home/school/community, between different communities and between key statutory and voluntary organisations.
The presentation will outline the study rationale and methodology, present some cross-cutting findings, and use an illustrative case study of the findings from one community site to underscore the importance of attending to community differences when trying to engage in research to understand and improve educational attainment for all.
Keywords: educational achievement, multiple deprivation, community case studies, social capital
Procedia PDF Downloads 387
69 Application of Large Eddy Simulation-Immersed Boundary Volume Penalization Method for Heat and Mass Transfer in Granular Layers
Authors: Artur Tyliszczak, Ewa Szymanek, Maciej Marek
Abstract:
Flow through granular materials is important to a vast array of industries, for instance in the construction industry, where granular layers are used for bulkheads and isolators; in chemical engineering and catalytic reactors, where the large surfaces of packed granular beds intensify chemical reactions; and in energy production systems, where granulates are promising materials for heat storage and as heat transfer media. Despite the common usage of granulates and the extensive research performed in this field, the phenomena occurring between granular solid elements or between solids and fluid are still not fully understood. In the present work, we analyze the heat exchange process between the flowing medium (gas, liquid) and the solid material inside granular layers. We consider them as a composite of isolated solid elements and inter-granular spaces in which a gas or liquid can flow. The structure of the layer is controlled by the shapes of particular granular elements (e.g., spheres, cylinders, cubes, Raschig rings), their spatial distribution, or an effective characteristic dimension (total volume or surface area). We analyze to what extent alteration of these parameters influences flow characteristics (turbulence intensity, mixing efficiency, heat transfer) inside the layer and behind it. Analysis of flow inside granular layers is very complicated because the use of classical experimental techniques (LDA, PIV, fiber probes) inside the layers is practically impossible, whereas the use of probes (e.g., thermocouples, Pitot tubes) requires drilling holes in the solid material. Hence, measurements of the flow inside granular layers are usually performed using, for instance, advanced X-ray tomography. In this respect, theoretical or numerical analyses of flow inside granulates seem crucial. Application of discrete element methods in combination with classical finite volume/finite difference approaches is problematic, as the mesh generation process for complex granular material can be very arduous. A good alternative for the simulation of flow in complex domains is the immersed boundary-volume penalization (IB-VP) approach, in which the computational meshes have a simple Cartesian structure and the impact of solid objects on the fluid is mimicked by source terms added to the Navier-Stokes and energy equations. The present paper focuses on the application of the IB-VP method combined with large eddy simulation (LES). The flow solver used in this work is a high-order code (SAILOR), which was used previously in various studies, including laminar/turbulent transition in free flows, and also for flows in wavy channels, wavy pipes and over obstacles of various shapes. In these cases, the formal order of approximation turned out to be between 1 and 2, depending on the test case. The current research concentrates on analyses of the flows in dense granular layers with elements distributed in a deterministic, regular manner, and on validation of the results obtained using the LES-IB-VP method against a body-fitted approach. The comparisons are very promising and show very good agreement. It is found that the size, number and distribution of the elements have a huge impact on the obtained results. Ordering of the granular elements (or the lack of it) affects both the pressure drop and the efficiency of the heat transfer, as it significantly changes the mixing process.
Keywords: granular layers, heat transfer, immersed boundary method, numerical simulations
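A minimal 1-D sketch of the volume penalization idea in Python, assuming Brinkman-type penalization of a diffusing velocity field around a single solid "grain" (the grid, viscosity, penalization parameter and mask are all illustrative, not the SAILOR setup):

import numpy as np

nx, L = 200, 1.0
dx = L / nx
x = np.linspace(0.0, L, nx)
nu = 1e-3                      # kinematic viscosity
eta = 1e-4                     # penalization parameter (small => nearly solid)
dt = 0.2 * dx**2 / nu          # explicit diffusion stability limit

chi = ((x > 0.45) & (x < 0.55)).astype(float)   # mask: 1 inside the solid grain
u = np.ones(nx)

for _ in range(5000):
    lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    u = u + dt * nu * lap
    # Penalization source term -(chi/eta) * u, treated implicitly so the
    # update stays stable even for very small eta.
    u = u / (1.0 + dt * chi / eta)
    u[0] = u[-1] = 1.0         # fixed "free-stream" values at the ends

print("velocity inside the grain:", u[nx // 2])   # driven to ~0, as intended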
Procedia PDF Downloads 137
68 Human-Carnivore Interaction: Patterns, Causes and Perceptions of Local Herders of Hoper Valley in Central Karakoram National Park, Pakistan
Authors: Saeed Abbas, Rahilla Tabassum, Haider Abbas, Babar Khan, Shahid Hussain, Muhammad Zafar Khan, Fazal Karim, Yawar Abbas, Rizwan Karim
Abstract:
Human-carnivore conflict is considered a major conservation and rural livelihood concern because many carnivore species have been heavily victimized due to elevated levels of conflict with communities. As in other snow leopard range countries, this situation prevails in Pakistan, where WWF is currently working under the Asia High Mountain Project (AHMP) in Gilgit-Baltistan. Mitigating such conflicts requires a firm understanding of grazing and predation patterns, including human-carnivore interaction. For this purpose, we conducted a survey in the Hoper valley (one of the AHMP project sites in Pakistan) during August 2013, using a questionnaire and unstructured interviews covering 647 of the total 900 households permanently residing in the project area. The valley, spread over 409 km2 between 36°7'46" N and 74°49'2"E at 2900 m asl in the Karakoram ranges, is considered an important habitat of the snow leopard and associated prey species such as the Himalayan ibex. The valley is home to 8100 Brusho people (an ancient tribe of northern Pakistan) dependent on agro-pastoral livelihoods, including farming and livestock rearing. The total number of livestock reported was 15,481, of which 8346 (53.91%) were sheep, 3546 (22.91%) goats, 2193 (14.16%) cows, 903 (5.83%) yaks, 508 (3.28%) bulls, 28 (0.18%) donkeys, 27 (0.17%) zo/zomo (a cross-breed of yak and cow), and 4 (0.03%) horses. 83 percent of respondents (n=542 households) confirmed the loss of livestock during the last year, July 2012 to June 2013, accounting for 2246 (14.51%) animals. The major causes of livestock loss were predation by large carnivores such as snow leopards and wolves (1710, 76.14%), followed by disease (536, 23.86%). Of the total predation cases, the snow leopard is suspected of killing 1478 animals (86.43%). Among livestock, sheep were the major prey of the snow leopard (810, 55%), followed by goats (484, 32.7%), cows (151, 10.21%), yaks (15, 1.015%), zo/zomo (7, 0.5%) and donkeys (1, 0.07%). The reason for the heavy depredation of sheep and goats is that they tend to browse on twigs of bushes and graze on soft grass near cliffs. They are also more active than other species, moving quickly and covering a larger grazing area, which makes them more vulnerable to snow leopard attack. The majority (1283, 75%) of livestock kills by predators occurred during the warm season (May-September) in alpine and sub-alpine pastures, and the remainder (427, 25%) occurred in the winter season near settlements in the valley. It was evident from the study that snow leopard kills outside the pen (1351, 79.76%) far outnumbered those inside the pen (359, 20.24%). Assessing the economic loss from livestock predation, we found that the total loss in the study area equals PKR 11,230,000 (USD 105,797), which is about PKR 17,357 (USD 163.51) per household per year. The economic loss incurred by locals due to predation is quite significant, given that the average cash income per household per year is PKR 85,000 (USD 800.75).
Keywords: carnivores, conflict, predation, livelihood, conservation, rural, snow leopard, livestock
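The per-household figure quoted above follows directly from the abstract's own totals; a one-line check in Python (assuming the division is taken over the 647 surveyed households):

total_loss_pkr = 11_230_000
households = 647
print(round(total_loss_pkr / households))   # 17357, matching the quoted PKR 17,357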
Procedia PDF Downloads 347
67 Synthesis and Properties of Poly(N-(sulfophenyl)aniline) Nanoflowers and Poly(N-(sulfophenyl)aniline) Nanofibers/Titanium dioxide Nanoparticles by Solid Phase Mechanochemical and Their Application in Hybrid Solar Cell
Authors: Mazaher Yarmohamadi-Vasel, Ali Reza Modarresi-Alama, Sahar Shabzendedara
Abstract:
Purpose/Objectives: The first objective was to synthesize poly(N-(sulfophenyl)aniline) nanoflowers (PSANFLs) and a poly(N-(sulfophenyl)aniline) nanofiber/titanium dioxide nanoparticle nanocomposite (PSANFs/TiO2NPs) by a solid-state mechanochemical reaction and template-free method and to use them in hybrid solar cells. Our second aim was to increase the solubility and processability of conjugated nanomaterials in water through polar functionalized materials. Poly[N-(4-sulfophenyl)aniline] is easily soluble in water because of the polar sulfonic acid groups in the polymer chain. Materials/Methods: Iron (III) chloride hexahydrate (FeCl3∙6H2O) was bought from Merck Millipore Company. Titanium oxide nanoparticles (TiO2, <20 nm, anatase) and sodium diphenylamine-4-sulfonate (99%) were bought from Sigma-Aldrich Company. Titanium dioxide nanoparticle paste (PST-20T) was obtained from Sharifsolar Co. Conductive glasses coated with indium tin oxide (ITO) were bought from Xinyan Technology Co (China). For the first time, we used the solid-state mechanochemical reaction and template-free method to synthesize poly(N-(sulfophenyl)aniline) nanoflowers. Moreover, we used the same technique to synthesize, for the first time, the nanocomposite of poly(N-(sulfophenyl)aniline) nanofibers and titanium dioxide nanoparticles (PSANFs/TiO2NPs). The results of electrochemical energy-gap calculations obtained from CV curves and UV-vis spectra demonstrate that the PSANFs/TiO2NPs nanocomposite is a p-n type material that can be used in photovoltaic cells. The doctor blade method was used to create films for three kinds of hybrid solar cells with patterns such as ITO│TiO2NPs│semiconductor sample│Al. Hybrid photovoltaic cells in bilayer and bulk heterojunction structures were then fabricated as ITO│TiO2NPs│PSANFLs│Al and ITO│TiO2NPs│PSANFs/TiO2NPs│Al, respectively. Fourier-transform infrared spectra, field emission scanning electron microscopy (FE-SEM), ultraviolet-visible spectra, cyclic voltammetry (CV) and electrical conductivity were the analyses used to characterize the synthesized samples. Results and Conclusions: FE-SEM images clearly demonstrate that the morphologies of the synthesized samples are nanostructured (nanoflowers and nanofibers). Electrochemical band-gap calculations from the CV curves demonstrate that the forbidden band gaps of the PSANFLs and the PSANFs/TiO2NPs nanocomposite are 2.95 and 2.23 eV, respectively. The I-V characteristics of the hybrid solar cells and their power conversion efficiency (PCE) under 100 mWcm−2 irradiation (AM 1.5 global conditions) were measured; the PCEs of the two cell types were 0.30% and 0.62%, respectively. All the results of the solar cell analyses are discussed. To sum up, PSANFLs and PSANFs/TiO2NPs were successfully synthesized by an affordable and straightforward mechanochemical reaction in the solid state under green conditions. The solubility and processability of the synthesized compounds have been improved compared with previous work. We successfully fabricated hybrid photovoltaic cells of the synthesized semiconducting nanostructured polymers and TiO2NPs in different architectures. We believe that the synthesized compounds can open inventive pathways for the development of other poly(N-(sulfophenyl)aniline)-based hybrid materials (nanocomposites) suitable for preparing new-generation solar cells.
Keywords: mechanochemical synthesis, PSANFLs, PSANFs/TiO2NPs, solar cell
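A minimal sketch of how PCE is extracted from an I-V sweep in Python, assuming a toy diode-like J-V curve and the standard 100 mW/cm^2 (AM 1.5) input power mentioned above (jsc, voc and the curve shape are invented for illustration, not the measured cells):

import numpy as np

p_in = 100.0                      # incident power density, mW/cm^2 (AM 1.5)
jsc, voc, vt = 12.0, 0.75, 0.05   # toy short-circuit current density and open-circuit voltage

v = np.linspace(0.0, voc, 500)                            # voltage sweep (V)
j = jsc * (1.0 - np.expm1(v / vt) / np.expm1(voc / vt))   # current density, mA/cm^2

p = j * v                         # output power density, mW/cm^2
ff = p.max() / (jsc * voc)        # fill factor
pce = p.max() / p_in * 100.0      # power conversion efficiency, %
print(f"FF = {ff:.2f}, PCE = {pce:.2f} %")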
Procedia PDF Downloads 67
66 Use of Machine Learning Algorithms to Pediatric MR Images for Tumor Classification
Authors: I. Stathopoulos, V. Syrgiamiotis, E. Karavasilis, A. Ploussi, I. Nikas, C. Hatzigiorgi, K. Platoni, E. P. Efstathopoulos
Abstract:
Introduction: Brain and central nervous system (CNS) tumors form the second most common group of cancers in children, accounting for 30% of all childhood cancers. MRI is the key imaging technique used for the visualization and management of pediatric brain tumors. Initial characterization of tumors from MRI scans is usually performed via a radiologist's visual assessment. However, different brain tumor types do not always demonstrate clear differences in visual appearance. Using only conventional MRI to provide a definite diagnosis could potentially lead to inaccurate results, and so histopathological examination of biopsy samples is currently considered to be the gold standard for obtaining a definite diagnosis. Machine learning is the study of computational algorithms that can use mathematical relationships and patterns, complex or not, from empirical and scientific data to make reliable decisions. Accordingly, machine learning techniques could provide effective and accurate ways to automate and speed up the analysis and diagnosis of medical images. Machine learning applications in radiology are, or could potentially be, useful in practice for medical image segmentation and registration, computer-aided detection and diagnosis systems for CT, MR or radiography images, and functional MR (fMRI) images for brain activity analysis and neurological disease diagnosis. Purpose: The objective of this study is to provide an automated tool, which may assist in the imaging evaluation and classification of brain neoplasms in pediatric patients by determining the glioma type and grade and differentiating between different brain tissue types. A further purpose is to present an alternative way of quick and accurate diagnosis in order to save time and resources in the daily medical workflow. Materials and Methods: A cohort of 80 pediatric patients with a diagnosis of posterior fossa tumor was used: 20 ependymomas, 20 astrocytomas, 20 medulloblastomas and 20 healthy children. The MR sequences used for every patient were the following: axial T1-weighted (T1), axial T2-weighted (T2), Fluid-Attenuated Inversion Recovery (FLAIR), axial diffusion-weighted images (DWI), and axial contrast-enhanced T1-weighted (T1ce). From every sequence, only a principal slice was used, manually traced by two expert radiologists. Image acquisition was carried out on a GE HDxt 1.5-T scanner. The images were preprocessed in a number of steps, including noise reduction, bias-field correction, thresholding, coregistration of all sequences (T1, T2, T1ce, FLAIR, DWI), skull stripping, and histogram matching. A large number of features were chosen for investigation, including age, tumor shape characteristics, image intensity characteristics and texture features. After selecting the features that achieve the highest accuracy using the least number of variables, four machine learning classification algorithms were used: k-Nearest Neighbour, Support Vector Machines, C4.5 Decision Tree and Convolutional Neural Network. The machine learning schemes and the image analysis are implemented in the WEKA and MATLAB platforms, respectively. Results-Conclusions: The results and the accuracy of image classification for each type of glioma under the four different algorithms are still in progress.
Keywords: image classification, machine learning algorithms, pediatric MRI, pediatric oncology
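A minimal sketch of the classification stage in Python, assuming radiomic-style feature vectors have already been extracted from the traced slices (the synthetic arrays below are stand-ins for patient data, and scikit-learn replaces WEKA/MATLAB purely for illustration):

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_per_class, n_features = 20, 12   # 4 groups of 20 patients, as in the cohort
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, n_features))
               for c in range(4)])
y = np.repeat(["ependymoma", "astrocytoma", "medulloblastoma", "healthy"],
              n_per_class)

for name, clf in [("k-NN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM", SVC(kernel="rbf", C=1.0))]:
    scores = cross_val_score(make_pipeline(StandardScaler(), clf), X, y, cv=5)
    print(f"{name}: cross-validated accuracy = {scores.mean():.2f} +/- {scores.std():.2f}")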
Procedia PDF Downloads 149
65 Origin of the Eocene Volcanic Rocks in Muradlu Village, Azerbaijan Province, Northwest of Iran
Authors: A. Shahriari, M. Khalatbari Jafari, M. Faridi
Abstract:
The Muradlu volcanic area is located in Azerbaijan province, NW Iran. The volcanic rocks are exposed within a vast region that includes the Lesser Caucasus, southeastern Turkey, and northwestern Iran and comprises Cenozoic volcanic and plutonic massifs. The geology of this extended region was shaped by the Alpine-Himalayan orogeny. Cenozoic magmatic activity in this vast region evolved through the northward subduction of the Neotethyan slab and the subsequent collision of the Arabian and Eurasian plates. Based on stratigraphic and paleontological data, most of the volcanic activity in the Muradlu area occurred in the Eocene. The studied volcanic rocks disconformably overlie Late Cretaceous limestone. The volcanic sequence includes thick epiclastic and hyaloclastite breccia at the base, laterally changing to pillow lava, and continues with hyaloclastite and lava flows at the top of the series. The lava flows display textures ranging from megaporphyric-phyric to fluidal and microlithic. The studied samples comprise picrobasalt, basalt, tephrite-basanite, trachybasalt, basaltic trachyandesite, phonotephrite, tephriphonolite, trachyandesite, and trachyte in composition. Some xenoliths of lherzolitic composition are found in the picrobasalt. These xenoliths are made of olivine, cpx (diopside), and opx (enstatite), and are probably remnants of a mantle source. The presence of feldspathoid minerals such as sodalite in the phonotephrite confirms an alkaline trend. Two types of augite phenocrysts are found in the picrobasalt, basalt and trachybasalt. The first type is shapeless, with disharmonious zoning and a spongy texture with reaction rims, probably resulting from a sodic magma that was affected by a potassic magma. The second type occurs as glomerocrysts. In discrimination diagrams, the volcanic rocks show alkaline-shoshonitic trends. They have K2O values of 0.5-7.7 and plot in the shoshonitic field. Most of the samples display transitional to potassic alkaline trends, and some samples reveal sodic alkaline trends. The transitional trend probably results from the mixing of the sodic alkaline and potassic magmas. The Rare Earth Element (REE) patterns and spider diagrams indicate enrichment in Large-Ion Lithophile Elements (LILE) and depletion in High Field Strength Elements (HFSE) relative to Heavy Rare Earth Elements (HREE). Enrichment in K, Rb, Sr, Ba, Zr, Th, and U, and the enrichment of Light Rare Earth Elements (LREE) relative to Heavy Rare Earth Elements (HREE), indicate the effect of subduction-related fluids on the mantle source, as reported in arc and continental collision zones. The studied samples show low Nb/La ratios and plot in the lithosphere and lithosphere-asthenosphere fields of the Nb/La versus La/Yb diagram. These geochemical characteristics allow us to conclude that a lithospheric mantle source previously metasomatized by subduction components was the origin of the Muradlu volcanic rocks.
Keywords: alkaline, asthenosphere, lherzolite, lithosphere, Muradlu, potassic, shoshonitic, sodic, volcanism
Procedia PDF Downloads 171
64 Ensemble Methods in Machine Learning: An Algorithmic Approach to Derive Distinctive Behaviors of Criminal Activity Applied to the Poaching Domain
Authors: Zachary Blanks, Solomon Sonya
Abstract:
Poaching presents a serious threat to endangered animal species, environmental conservation, and human life. Additionally, some poaching activity has even been linked to supplying funds to support terrorist networks elsewhere around the world. Consequently, agencies dedicated to protecting wildlife habitats face a near-intractable task of adequately patrolling an entire area (spanning several thousand kilometers) given the limited resources, funds, and personnel at their disposal. Thus, agencies need predictive tools that are both high-performing and easily implementable by the user to help in learning how the significant features (e.g., animal population densities, topography, behavior patterns of the criminals within the area, etc.) interact with each other, in hopes of abating poaching. This research develops a classification model using machine learning algorithms to aid in forecasting future attacks, one that is both easy to train and performs well when compared to other models. We demonstrate how data imputation methods (specifically predictive mean matching, gradient boosting, and random forest multiple imputation) can be applied to analyze data and create significant predictions across a varied data set. Specifically, we apply these methods to improve the accuracy of the adopted prediction models (Logistic Regression, Support Vector Machine, etc.). Finally, we assess the performance of the model and the accuracy of our data imputation methods by training on a real-world data set constituting four years of imputed data and testing on one year of non-imputed data. This paper provides three main contributions. First, we extend work done by the Teamcore and CREATE (Center for Risk and Economic Analysis of Terrorism Events) research group at the University of Southern California (USC), working in conjunction with the Department of Homeland Security, to apply game theory and machine learning algorithms to develop more efficient ways of reducing poaching. This research introduces ensemble methods (Random Forests and Stochastic Gradient Boosting) and applies them to real-world poaching data gathered from the Ugandan rain forest park rangers. Next, we consider the effect of data imputation on both the performance of various algorithms and the general accuracy of the method itself when applied to a dependent variable where a large number of observations are missing. Third, we provide an alternate approach to predicting the probability of observing poaching, both by season and by month. The results from this research are very promising. We conclude that by using Stochastic Gradient Boosting to predict observations of non-commercial poaching by season, we are able to produce statistically equivalent results while being orders of magnitude faster in computation time and complexity. Additionally, when predicting potential poaching incidents by individual month rather than by entire season, boosting techniques produce a mean area-under-the-curve increase of approximately 3% relative to previous season-level prediction schedules.
Keywords: ensemble methods, imputation, machine learning, random forests, statistical analysis, stochastic gradient boosting, wildlife protection
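A minimal sketch in Python combining random-forest-based imputation with gradient boosting, assuming a synthetic stand-in for the patrol data (the feature set, missingness rate and labels are illustrative, not the Ugandan data set):

import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestRegressor
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 6))                    # e.g., density, slope, distance to roads...
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, n) > 0).astype(int)
X[rng.random(X.shape) < 0.2] = np.nan          # knock out 20% of the observations

model = make_pipeline(
    # Iterative imputation with a random-forest estimator, in the spirit of
    # the random forest multiple imputation named in the abstract.
    IterativeImputer(estimator=RandomForestRegressor(n_estimators=25, random_state=0),
                     random_state=0),
    GradientBoostingClassifier(random_state=0),
)
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC: {auc:.2f}")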
Procedia PDF Downloads 292
63 The Roots of Amazonia’s Droughts and Floods: Complex Interactions of Pacific and Atlantic Sea-Surface Temperatures
Authors: Rosimeire Araújo Silva, Philip Martin Fearnside
Abstract:
Extreme droughts and floods in the Amazon have serious consequences for natural ecosystems and the human population of the region. The frequency of these events has increased in recent years, and projections of climate change predict greater frequency and intensity of these events. Understanding the links between these extreme events and different patterns of sea surface temperature in the Atlantic and Pacific Oceans is essential, both to improve the modeling of climate change and its consequences and to support adaptation efforts in the region. The relationship between sea temperatures and events in the Amazon is much more complex than is usually assumed in climatic models. Warming and cooling of different parts of the oceans, as well as the interaction between simultaneous temperature changes in different parts of each ocean and between the two oceans, have specific consequences for the Amazon, with effects on precipitation that vary in different parts of the region. Simplistic generalities, such as the association between El Niño events and droughts in the Amazon, do not capture this complexity. We investigated the variability of Sea Surface Temperature (SST) in the Tropical Pacific Ocean during the period 1950-2022, using Empirical Orthogonal Functions (EOF), spectral analysis, coherence, and wavelet phase analysis. Two main modes of variability were identified, explaining about 53.9% and 13.3%, respectively, of the total variance of the data. The spectral, coherence, and wavelet phase analyses showed that the first mode represents warming in the central part of the Pacific Ocean (the “Central El Niño”), while the second mode represents warming in the eastern part of the Pacific (the “Eastern El Niño”). Although both the 1982-1983 and 1976-1977 El Niño events were characterized by an increase in sea surface temperatures in the Equatorial Pacific, their impacts on rainfall in the Amazon were distinct. In the rainy season, from December to March, the sub-basins of the Japurá, Jutaí, Jatapu, Tapajós, Trombetas and Xingu rivers showed the greatest reductions in rainfall associated with the Central El Niño (1982-1983), while the sub-basins of the Javari, Purus, Negro and Madeira rivers had the most pronounced reductions in the Eastern El Niño year (1976-1977). In the transition to the dry season, in April, the greatest reductions were associated with the Eastern El Niño year for the majority of the study region, with the exception only of the sub-basins of the Madeira, Trombetas and Xingu rivers, whose reductions were associated with the Central El Niño. In the dry season, from July to September, the sub-basins of the Japurá, Jutaí, Jatapu, Javari, Trombetas and Madeira rivers showed the greatest reductions in rainfall associated with the Central El Niño, while the sub-basins of the Tapajós, Purus, Negro and Xingu rivers had the most pronounced reductions in the Eastern El Niño year for this season. In this way, it is possible to conclude that the Central (Eastern) El Niño controlled the reductions in soil moisture in the dry (rainy) season for all sub-basins examined in this study. Extreme drought events associated with these meteorological phenomena can lead to a significant increase in the occurrence of forest fires. These fires have a devastating impact on Amazonian vegetation, resulting in the irreparable loss of biodiversity and the release of large amounts of carbon stored in the forest, contributing to the increase in the greenhouse effect and global climate change.
Keywords: sea surface temperature, variability, climate, Amazon
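A minimal sketch of an EOF decomposition via singular value decomposition in Python, assuming a synthetic (time x space) SST-anomaly matrix with two planted modes standing in for the 1950-2022 tropical Pacific fields (the grid size, periods and spatial patterns are invented for illustration):

import numpy as np

rng = np.random.default_rng(0)
nt, nspace = 876, 500                      # 73 years x 12 months, toy spatial grid
t = np.arange(nt)
pat1 = np.sin(np.linspace(0, 3 * np.pi, nspace))   # stand-in "central" pattern
pat2 = np.cos(np.linspace(0, 2 * np.pi, nspace))   # stand-in "eastern" pattern
sst_anom = (2.0 * np.outer(np.sin(2 * np.pi * t / 60), pat1)
            + np.outer(np.sin(2 * np.pi * t / 32), pat2)
            + 0.3 * rng.normal(size=(nt, nspace)))
sst_anom -= sst_anom.mean(axis=0)          # anomalies: remove the time mean per grid point

u, s, vt = np.linalg.svd(sst_anom, full_matrices=False)
variance_frac = s**2 / np.sum(s**2)        # fraction of variance per mode
eofs = vt                                  # rows are the spatial patterns (EOF1, EOF2, ...)
pcs = u * s                                # columns are the PC time series

print("variance explained by EOF1 and EOF2 (%):",
      np.round(variance_frac[:2] * 100, 1))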
Procedia PDF Downloads 63
62 Internet of Things, Edge and Cloud Computing in Rock Mechanical Investigation for Underground Surveys
Authors: Esmael Makarian, Ayub Elyasi, Fatemeh Saberi, Olusegun Stanley Tomomewo
Abstract:
Rock mechanical investigation is one of the most crucial activities in underground operations, especially in surveys related to hydrocarbon exploration and production, geothermal reservoirs, energy storage, mining, and geotechnics. There is a wide range of traditional methods for deriving, collecting, and analyzing rock mechanics data. However, these approaches may not be suitable or work perfectly in some situations, such as fractured zones. Cutting-edge technologies have been developed to solve and optimize these issues. Internet of Things (IoT), Edge Computing, and Cloud Computing technologies (ECt and CCt, respectively) are among the newest and most widely used digital methods employed in geomechanical studies. IoT devices act as sensors and cameras for real-time monitoring and for the collection of mechanical-geological data on rocks, such as temperature, movement, pressure, or stress levels. Assessment of structural integrity, especially for cap rocks within hydrocarbon systems, and of rock mass behavior, supporting further activities such as enhanced oil recovery (EOR) and underground gas storage (UGS) or improving safety risk management (SRM) and potential hazard identification (PHI), are other benefits of IoT technologies. ECt can process, aggregate, and analyze data collected by IoT immediately, on a real-time scale, providing detailed insights into the behavior of rocks under various conditions (e.g., stress, temperature, and pressure), establishing patterns quickly, and detecting trends. Therefore, this state-of-the-art and useful technology can support autonomous systems in rock mechanical surveys, such as drilling and production (in hydrocarbon wells) or excavation (in the mining and geotechnical industries). Besides, ECt allows all rock-related operations to be controlled remotely and enables operators to apply changes or make adjustments, a feature that is very important for environmental goals. More often than not, rock mechanical studies consist of different kinds of data, such as laboratory tests, field operations, and indirect information like seismic or well-logging data. CCt provides a useful platform for storing and managing large volumes of heterogeneous information, which can be very useful in fractured zones. Additionally, CCt supplies powerful tools for predicting, modeling, and simulating rock mechanical behavior, especially in fractured zones within vast areas. It is also a suitable platform for sharing extensive information on rock mechanics, such as the direction and size of fractures in a large oil field or mine. The comprehensive review findings demonstrate that digital transformation through integrated IoT, Edge, and Cloud solutions is revolutionizing traditional rock mechanical investigation. These advanced technologies have enabled real-time monitoring, predictive analysis, and data-driven decision-making, culminating in noteworthy enhancements in safety, efficiency, and sustainability. By employing IoT, ECt, and CCt, underground operations have experienced a significant boost, allowing timely and informed actions based on real-time data insights, and their successful implementation has led to optimized, safer processes and environmentally conscious approaches in underground geological endeavors.
Keywords: rock mechanical studies, internet of things, edge computing, cloud computing, underground surveys, geological operations
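A minimal sketch of the edge-side monitoring idea in Python, assuming a streamed scalar channel (e.g., mooring load or pore pressure) and a simple rolling-statistics alarm; the window size, threshold, and injected spike are all invented for illustration:

import random
import statistics
from collections import deque

def make_detector(window=60, n_sigma=4.0):
    history = deque(maxlen=window)
    def check(value):
        alarm = False
        if len(history) == window:
            mu = statistics.fmean(history)
            sigma = statistics.stdev(history)
            # Flag readings far outside the recent operating envelope.
            alarm = sigma > 0 and abs(value - mu) > n_sigma * sigma
        history.append(value)
        return alarm
    return check

check = make_detector()
random.seed(0)
for t in range(300):
    reading = random.gauss(10.0, 0.5) + (8.0 if t == 250 else 0.0)  # injected anomaly
    if check(reading):
        print(f"t={t}: anomalous reading {reading:.2f} -> flag for the operator")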
Procedia PDF Downloads 62
61 Lack of Regulation Leads to Complexity: A Case Study of the Free Range Chicken Meat Sector in the Western Cape, South Africa
Authors: A. Coetzee, C. F. Kelly, E. Even-Zahav
Abstract:
Dominant approaches to livestock production are harmful to the environment, human health and animal welfare, yet global meat consumption is rising. Sustainable alternative production approaches are therefore urgently required, and ‘free range’ is the main alternative for chicken meat offered in South Africa (and globally). Although the South African Poultry Association provides non-binding guidelines, there is a lack of formal definition and regulation of free range chicken production, meaning it is unclear what this alternative entails and if it is consistently practised (a trend observed globally). The objective of this exploratory qualitative case study is therefore to investigate who and what determines free range chicken. The case study, conducted from a social constructivist worldview, uses semi-structured interviews, photographs and document analysis to collect data. Interviews are conducted with those involved with bringing free range chicken to the market - farmers, chefs, retailers, and regulators. Data is analysed using thematic analysis to establish dominant patterns in the data. The five major themes identified (based on prevalence in data and on achieving the research objective) are: 1) free range means a bird reared with good animal welfare in mind, 2) free range means quality meat, 3) free range means a profitable business, 4) free range is determined by decision makers or by access to markets, and 5) free range is coupled with concerns about the lack of regulation. Unpacking the findings in the context of the literature reveals who and what determines free range. The research uncovers wide-ranging interpretations of ‘free range’, driven by the absence of formal regulation for free range chicken practices and the lack of independent private certification. This means that the term ‘free range’ is socially constructed, thus varied and complex. The case study also shows that whether chicken meat is free range is generally determined by those who have access to markets. Large retailers claim adherence to the internationally recognised Five Freedoms, also included in the South African Poultry Association Code of Good Practice, which others in the sector say are too broad to be meaningful. Producers describe animal welfare concerns as the main driver for how they practise/view free range production, yet these interpretations vary. An additional driver is a focus on human health, which participants achieve mainly through the use of antibiotic-free feed, resulting in what participants regard as higher quality meat. The participants are also strongly driven by business imperatives, with most stating that free range chicken should carry a higher price than conventionally-reared chicken due to increased production costs. Recommendations from this study focus on, inter alia, the need to understand consumers’ perspectives on free range chicken, given that those in the sector claim they are responding to consumer demand, and on conducting environmental research such as life cycle assessment studies to establish the true (environmental) sustainability of free range production. At present, it seems the sector mostly responds to social sustainability: human health and animal welfare.
Keywords: chicken meat production, free range, socially constructed, sustainability
Procedia PDF Downloads 157
60 Facies, Diagenetic Analysis and Sequence Stratigraphy of Habib Rahi Formation Dwelling in the Vicinity of Jacobabad Khairpur High, Southern Indus Basin, Pakistan
Authors: Muhammad Haris, Syed Kamran Ali, Mubeen Islam, Tariq Mehmood, Faisal Shah
Abstract:
Jacobabad Khairpur High, part of the Sukkur rift zone, is the boundary separating the Central and Southern Indus Basins, formed as a result of post-Jurassic uplift after the deposition of the Middle Jurassic Chiltan Formation. The Habib Rahi Formation of Middle to Late Eocene age outcrops in the vicinity of the Jacobabad Khairpur High; a section at Rohri near Sukkur was measured in detail for lithofacies, microfacies, diagenetic analysis and sequence stratigraphy. The Habib Rahi Formation is richly fossiliferous and consists mostly of limestone with subordinate clays and marl. The total thickness of the formation in this section is 28.8 m. The bottom of the formation is not exposed, while the upper contact with the Sirki Shale of Middle Eocene age is unconformable in places. The section was measured using the Jacob's staff method, with traverses made perpendicular to strike. Four lithofacies were identified based on outcrop geology: coarse-grained limestone facies (HR-1 to HR-5), massive-bedded limestone facies (HR-6 to HR-7), micritic limestone facies (HR-8 to HR-13) and algal dolomitic limestone facies (HR-14). A total of 14 rock samples were collected from the outcrop for detailed petrographic studies, and thin sections of the respective samples were prepared and analyzed under the microscope. Four microfacies were identified on the basis of Dunham's (1962) classification system, after studying textures, grain size and fossil content, and Folk's (1959) classification system, after reviewing allochem types. These microfacies are HR-MF 1: Benthonic Foraminiferal Wackstone/Biomicrite Microfacies, HR-MF 2: Foraminiferal Nummulites Wackstone-Packstone/Biomicrite Microfacies, HR-MF 3: Benthonic Foraminiferal Packstone/Biomicrite Microfacies, and HR-MF 4: Bioclasts Carbonate Mudstone/Micrite Microfacies. The abundance of larger benthic foraminifera (LBF), including Assilina sp., A. spiral abrade, A. granulosa, A. dandotica, A. laminosa, Nummulites sp., N. fabiani, N. striatus, N. globulus and Textularia, together with bioclasts and red algae, indicates a shallow marine (tidal flat) environment of deposition. Based on variations in rock types, grain size and marine fauna, the Habib Rahi Formation shows progradational stacking patterns, indicating coarsening-upward cycles. A second-order sea-level rise is identified (spanning the Ypresian to Bartonian), representing a Transgressive System Tract (TST), together with a third-order Regressive System Tract (RST) (spanning the Bartonian to Priabonian). Diagenetic processes include replacement of fossils by mud, dolomitization, pressure-dissolution-associated stylolite features and filling with dark organic matter. The presence of microfossils including Nummulites striatus, N. fabiani and Assilina dandotica signifies a Bartonian to Priabonian age for the Habib Rahi Formation.
Keywords: Jacobabad Khairpur High, Habib Rahi Formation, lithofacies, microfacies, sequence stratigraphy, diagenetic history
Procedia PDF Downloads 472