Search results for: Gauss point numerical integration
1912 Migrant Entrepreneurs and Their Spark for Entrepreneurial Exploration
Authors: Adesuwa Omorede, Karin Axelsson
Abstract:
The war and violence around the world today have brought a massive increase of forcibly displaced individuals seeking refuge in the European Union, where they have to leave their homes and restart a new life built on cultural, social, economic, and legal premises other than those they are used to. Since 2014, the EU has agreed to help with the crisis by providing protection and refuge, and countries such as Germany, Hungary, Austria, and Sweden have accepted around two-thirds of the EU’s asylum seekers. In 2015, for instance, Sweden harbored large numbers of refugees, which led to a drastic rise in population. This drastic rise posed an overwhelming challenge for Sweden, which needed to find quick and suitable solutions to accommodate these thousands of refugees. Further, it posed a challenge for Sweden to immediately tackle the problem of integrating the new arrivals into the labor market. With unstable societal integration and little or no skills to connect them to the workforce, these immigrants faced a shaky beginning, as they had to struggle not just with integrating into a new society but also with finding suitable jobs. These uncertainties put pressure on the immigrants, which drove a number of them to move from city to city seeking a place and alternatives for their well-being, safe haven, and self-provision. As a result, they brought their own skills, experiences, and cultural orientation into exploring and exploiting new opportunities and filling the gaps in their new environment. In so doing, immigrants contribute multidisciplinary collaborations, insights, international relations, and national growth through the exploitation of entrepreneurial opportunities. The study seeks to understand how these uncertainties led migrant entrepreneurs towards entrepreneurial activities. Furthermore, it contributes to understanding their processes of exploring and exploiting opportunities for entrepreneurship as well as their role in contributing to local and national growth. To reach these aims, an inductive qualitative study was conducted using semi-structured interviews with several migrant entrepreneurs – both female and male – who took part in two different entrepreneurial projects in mid-Sweden. The first project was a business program for African women; the other was an entrepreneurship hub for immigrants. Both were focused on inspiring and coaching immigrants during their entrepreneurial process. An integrated part was to work with the participants’ entrepreneurial skills and abilities. In addition, archival documents were collected. The data were analyzed using content analysis for qualitative research. The study aims to contribute to the entrepreneurship literature by understanding the influences of cognitive and environmental factors on entrepreneurial activities. This study also provides several suggestions for policymakers on how they can better integrate migrants into becoming contributors to society.
Keywords: entrepreneurial intentions, entrepreneurial processes, migrant entrepreneurship, uncertainty
Procedia PDF Downloads 203
1911 Assessment of Hypersaline Outfalls via Computational Fluid Dynamics Simulations: A Case Study of the Gold Coast Desalination Plant Offshore Multiport Brine Diffuser
Authors: Mitchell J. Baum, Badin Gibbes, Greg Collecutt
Abstract:
This study details a three-dimensional field-scale numerical investigation conducted for the Gold Coast Desalination Plant (GCDP) offshore multiport brine diffuser. Quantitative assessment of diffuser performance with regard to trajectory, dilution and mapping of seafloor concentration distributions was conducted for 100% plant operation. The quasi-steady Computational Fluid Dynamics (CFD) simulations were performed using the Reynolds averaged Navier-Stokes equations with a k-ω shear stress transport turbulence closure scheme. The study compliments a field investigation, which measured brine plume characteristics under similar conditions. CFD models used an iterative mesh in a domain with dimensions 400 m long, 200 m wide and an average depth of 24.2 m. Acoustic Doppler current profiler measurements conducted in the companion field study exhibited considerable variability over the water column. The effect of this vertical variability on simulated discharge outcomes was examined. Seafloor slope was also accommodated into the model. Ambient currents varied predominantly in the longshore direction – perpendicular to the diffuser structure. Under these conditions, the alternating port orientation of the GCDP diffuser resulted in simultaneous subjection to co-propagating and counter-propagating ambient regimes. Results from quiescent ambient simulations suggest broad agreement with empirical scaling arguments traditionally employed in design and regulatory assessments. Simulated dynamic ambient regimes showed the influence of ambient crossflow upon jet trajectory, dilution and seafloor concentration is significant. The effect of ambient flow structure and the subsequent influence on jet dynamics is discussed, along with the implications for using these different simulation approaches to inform regulatory decisions.Keywords: computational fluid dynamics, desalination, field-scale simulation, multiport brine diffuser, negatively buoyant jet
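For context, the "empirical scaling arguments traditionally employed in design and regulatory assessments" mentioned above are usually expressed through the densimetric Froude number of the negatively buoyant jet. A minimal sketch of their generic form is given below; the inclination-dependent constants C1-C3 are left symbolic because the specific values used for the GCDP diffuser are not stated in the abstract.

```latex
F = \frac{u_0}{\sqrt{g'_0\, d}}, \qquad g'_0 = g\,\frac{\rho_0 - \rho_a}{\rho_a},
\qquad \frac{z_t}{d\,F} \approx C_1, \qquad \frac{x_i}{d\,F} \approx C_2, \qquad \frac{S_i}{F} \approx C_3
```

Here u_0 and d are the port discharge velocity and diameter, rho_0 and rho_a the brine and ambient densities, z_t the terminal rise height, x_i the seafloor impact distance, and S_i the impact-point dilution.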
Procedia PDF Downloads 215
1910 Experimental Investigation on Performance of Beam Column Frames with Column Kickers
Authors: Saiada Fuadi Fancy, Fahim Ahmed, Shofiq Ahmed, Raquib Ahsan
Abstract:
The worldwide use of reinforced concrete construction stems from the wide availability of reinforcing steel as well as concrete ingredients. However, concrete construction requires a certain level of technology, expertise, and workmanship, particularly in the field during construction. As a supporting technology for concrete column or wall construction, a kicker is cast as part of the slab or foundation to provide a convenient starting point for a wall or column, ensuring integrity at this important junction. For that reason, a comprehensive study was carried out here to investigate the behavior of reinforced concrete frames with different kicker parameters. To achieve this objective, six half-scale specimens of reinforced concrete portal frames with kickers and one portal frame without a kicker were constructed according to common practice in the industry and subjected to cyclic incremental horizontal loading with a sustained gravity load. In this study, the experimental data, obtained over four deflection-controlled cycles, were used to evaluate the behavior of the kickers. Load-displacement characteristics were obtained; maximum loads and deflections were measured and assessed. Finally, the test results of frames constructed with three different kicker thicknesses were compared with those of the kickerless frame. Similar crack patterns were observed for all the specimens. From this investigation, specimens with a 3″ kicker thickness showed better results than specimens with a 1.5″ kicker thickness, as indicated by maximum load, stiffness, initiation of the first crack, and residual displacement. Despite its better performance, it could not be firmly concluded that a 4.5″ kicker thickness is the most appropriate one, because the dial gauge had to be detached during the test of that specimen. Finally, in comparison with the kicker specimens, the kickerless specimen was observed to perform relatively better.
Keywords: crack, cyclic, kicker, load-displacement
Procedia PDF Downloads 322
1909 Myosin-Driven Movement of Nanoparticles – An Approach to High-Speed Tracking
Authors: Sneha Kumari, Ravi Krishnan Elangovan
Abstract:
This abstract describes the development of a high-speed tracking method based on modifying motor components for nanoparticle attachment. Myosin motors are nano-sized protein machines powering the movement that defines life. These miniature molecular devices serve as engines that utilize the chemical energy stored in ATP to produce useful mechanical energy in the form of displacement events of a few nanometres, generating the force required for cargo transport, cell division, and cell locomotion, which translates into macroscopic movements such as running. With the advent of the in vitro motility assay (IVMA), detailed functional studies of the actomyosin system could be performed. The major challenge with the currently available IVMA for tracking actin filaments is a resolution limitation of ±50 nm. To overcome this, we are trying to develop a single-molecule IVMA in which nanoparticles (GNP/QD) will be attached along, or at the barbed end of, actin filaments using CapZ protein, with visualization by a compact TIRF module called ‘cTIRF’. The waveguide-based illumination by cTIRF offers a unique separation of excitation and collection optics, enabling imaging by scattering without emission filters. This technology is therefore well equipped to perform tracking with high precision at a temporal resolution of 2 ms, with a significantly improved SNR (100-fold) compared to conventional TIRF. Also, the nanoparticles (QD/GNP) attached to the actin filament act as point sources of light, conferring ease of filament tracking compared to conventional manual tracking. Moreover, the attachment of cargo (QD/GNP) to the thin filament paves the way for various nanotechnological applications through their transportation to different predetermined locations on the chip.
Keywords: actin, cargo, IVMA, myosin motors and single-molecule system
Procedia PDF Downloads 89
1908 Applying Cognitive Psychology to Education: Translational Educational Science
Authors: Hammache Nadir
Abstract:
The scientific study of human learning and memory is now more than 125 years old. Psychologists have conducted thousands of experiments, correlational analyses, and field studies during this time, in addition to other research conducted by those from neighboring fields. A huge knowledge base has been carefully built up over the decades. Given this backdrop, we may ask ourselves: What great changes in education have resulted from this huge research base? How has the scientific study of learning and memory changed practices in education from those of, say, a century ago? Have we succeeded in building a translational educational science to rival medical science (in which biological knowledge is translated into medical practice) or types of engineering (in which, e.g., basic knowledge in chemistry is translated into products through chemical engineering)? The answer, I am afraid, is rather mixed. Psychologists and psychological research have influenced educational practice, but in fits and starts. After all, some of the great founders of American psychology—William James, Edward L. Thorndike, John Dewey, and others—are also revered as important figures in the history of education. And some psychological research and ideas have made their way into education—for instance, computer-based cognitive tutors for some specific topics have been developed in recent years—and in years past, such practices as teaching machines, programmed learning, and, in higher education, the Keller Plan were all important. These older practices have not been sustained. Was that because they failed or because of a lack of systematic research showing they were effective? At any rate, in 2012, we cannot point to a well-developed translational educational science in which research about learning and memory, thinking and reasoning, and related topics is moved from the lab into controlled field trials (like clinical trials in medicine) and the tested techniques, if they succeed, are introduced into broad educational practice. We are just not there yet, and one question that arises is how we could achieve a translational educational science.Keywords: affective, education, cognition, pshychology
Procedia PDF Downloads 347
1907 Engineered Bio-Coal from Pressed Seed Cake for Removal of 2, 4, 6-Trichlorophenol with Parametric Optimization Using Box–Behnken Method
Authors: Harsha Nagar, Vineet Aniya, Alka Kumari, Satyavathi B.
Abstract:
In the present study, engineered bio-coal was produced from pressed seed cake, which is otherwise non-edible in origin. The production process involves slow pyrolysis wherein, based on the optimization of process parameters, a substantial reduction of 77% in the H/C and O/C ratios was achieved with respect to the original ratios of 1.67 and 0.8, respectively. The bio-coal so produced was found to have a higher heating value of 29,899 kJ/kg, with a surface area of 17 m²/g and a pore volume of 0.002 cc/g. The functional characterization of the bio-coal and its subsequent modification were carried out to enhance its active sites, and it was then used as an adsorbent material for removal of the herbicide 2,4,6-trichlorophenol (2,4,6-TCP) from the aqueous stream. The point of zero charge for the bio-coal was found to be at pH < 3, where its surface is positively charged and attracts anions, resulting in maximum 2,4,6-TCP adsorption at pH 2.0. The parametric optimization of the adsorption process was studied based on the Box-Behnken design with the desirability approach. The results showed optimum values of adsorption efficiency of 74.04% and uptake capacity of 118.336 mg/g for an initial adsorbate concentration of 250 mg/L and a particle size of 0.12 mm at pH 2.0 and 1 g/L of bio-coal loading. Negative Gibbs free energy change values indicated the feasibility of 2,4,6-TCP adsorption on the bio-coal. Decreasing ΔG values with the rise in temperature indicated high favourability at low temperatures. The equilibrium modeling results showed that both isotherms (Langmuir and Freundlich) accurately predicted the equilibrium data, which may be attributed to the different affinities of the functional groups of the bio-coal for 2,4,6-TCP removal. Based on the kinetic data modeling, the possible mechanisms for 2,4,6-TCP adsorption are physisorption (pore diffusion, π-π electron donor-acceptor interaction, H-bonding, and van der Waals dispersion forces) and chemisorption (chemical bonding with phenolic and amine groups).
Keywords: engineered biocoal, 2, 4, 6-trichlorophenol, box behnken design, biosorption
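As an illustration of the equilibrium-modeling step described above, the Langmuir and Freundlich fits and the Gibbs free-energy check can be sketched as follows. This is a generic sketch only: the equilibrium data, the initial guesses, and the qe/Ce definition of the distribution coefficient used for ΔG are assumptions, not values from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

R = 8.314  # universal gas constant, J/(mol K)

def langmuir(Ce, qmax, KL):
    """Langmuir isotherm: qe = qmax*KL*Ce / (1 + KL*Ce)."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

def freundlich(Ce, KF, n):
    """Freundlich isotherm: qe = KF * Ce**(1/n)."""
    return KF * Ce ** (1.0 / n)

# Hypothetical equilibrium data (Ce in mg/L, qe in mg/g) -- placeholders only
Ce = np.array([5.0, 20.0, 60.0, 120.0, 200.0])
qe = np.array([30.0, 70.0, 100.0, 112.0, 118.0])

(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=[120.0, 0.05])
(KF, n), _ = curve_fit(freundlich, Ce, qe, p0=[20.0, 2.0])

# Thermodynamic check: deltaG = -R*T*ln(Kd); Kd is taken here as qe/Ce (an assumption)
T = 298.15  # K
Kd = qe[0] / Ce[0]
dG = -R * T * np.log(Kd)  # J/mol; a negative value indicates spontaneous adsorption

print(f"Langmuir:   qmax = {qmax:.1f} mg/g, KL = {KL:.3f} L/mg")
print(f"Freundlich: KF = {KF:.1f}, n = {n:.2f}")
print(f"deltaG({T:.0f} K) = {dG / 1000:.2f} kJ/mol")
```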
Procedia PDF Downloads 118
1906 Sustainable Crop Mechanization among Small Scale Rural Farmers in Nigeria: The Hurdles
Authors: Charles Iledun Oyewole
Abstract:
The daunting challenges that the ‘man with the hoe’ is going to face in the coming decades will be complex and interwoven. With the global population already above 7 billion people, it has been estimated that food (crop) production must more than double by 2050 to meet the world’s food requirements. Nigeria’s population is also expected to reach over 240 million people by 2050 at the current annual population growth rate of 2.61 per cent. The country’s farming population is estimated at over 65 per cent, but the country still depends on food importation to complement production. The small-scale farmer, who depends on simple hand tools such as hoes and cutlasses, remains the centre of agricultural production, accounting for 90 per cent of the total agricultural output and 80 per cent of the market flow. While the hoe may have been a tool for sustainable development at a time in human history, this role has been smothered by population growth, which has brought too many mouths to be fed (over 170 million) as well as many industries to fuel with raw materials. It may then be argued that the hoe is unfortunately not a tool for the coming challenges and that agricultural mechanization should be the focus. However, agriculture as an enterprise is a ‘complete wheel’ which does not work when broken, particularly with respect to mechanization. Generally, mechanization will prompt increased production where land is readily available; increased production will require post-harvest handling mechanisms, crop processing, and subsequent storage. An important aspect of this is readily available and favourable markets for such produce, fuelled by good agricultural policies. A break in this wheel will lead to the process of mechanization crashing back to subsistence production, and probably a reversal to the hoe. The focus of any agricultural policy should be to chart a course for sustainable mechanization that is environmentally friendly and that may ameliorate Nigeria’s food and raw material gaps. This is the focal point of this article.
Keywords: Crop production, Farmer, Hoes, Mechanization, Policy framework, Population, Growth, Rural areas
Procedia PDF Downloads 227
1905 Epstein-Barr Virus Alters ATM-Dependent DNA Damage Responses in Germinal Centre B-Cells during Early Infection
Authors: Esther N. Maina, Anna Skowronska, Sridhar Chaganti, Malcolm A. Taylor, Paul G. Murray, Tatjana Stankovic
Abstract:
Epstein-Barr virus (EBV) has been implicated in the pathogenesis of human tumours of B-cell origin. The demonstration that a proportion of Hodgkin lymphomas and all Burkitt’s lymphomas harbour EBV suggests that the virus contributes to the development of these malignancies. However, the mechanisms of lymphomagenesis remain largely unknown. To determine whether EBV causes DNA damage and alters DNA damage response in cells of EBV-driven lymphoma origin, Germinal Centre (GC) B cells were infected with EBV and DNA damage responses to gamma ionising radiation (IR) assessed at early time points (12hr – 72hr) after infection and prior to establishment of lymphoblastoid (LCL) cell lines. In the presence of EBV, we observed induction of spontaneous DNA DSBs and downregulation of ATM-dependent phosphorylation in response to IR. This downregulation coincided with reduced ability of infected cells to repair IR-induced DNA double-strand breaks, as measured by the kinetics of gamma H2AX, a marker of double-strand breaks, and by the tail moment of the comet assay. Furthermore, we found that alteration of DNA damage responses coincided with the expression of LMP-1 protein. The presence of the EBV virus did not affect the localization of the ATM-dependent DNA repair proteins to sites of damage but instead lead to an increased expression of PP5, a phosphatase that regulates ATM function. The impact of the virus on DNA repair was most prominent 24h after infection, suggesting that this time point is crucial for the viral establishment in B cells. Our results suggest that during an early infection EBV virus dampens crucial cellular responses to DNA double-strand breaks which facilitate successful viral infection, but at the same time might provide the mechanism for tumor development.Keywords: EBV, ATM, DNA damage, germinal center cells
Procedia PDF Downloads 351
1904 Artificial Intelligence and Governance in Relevance to Satellites in Space
Authors: Anwesha Pathak
Abstract:
With the increasing number of satellites and space debris, space traffic management (STM) becomes crucial. AI can aid in STM by predicting and preventing potential collisions, optimizing satellite trajectories, and managing orbital slots. Governance frameworks need to address the integration of AI algorithms in STM to ensure safe and sustainable satellite activities. AI and governance play significant roles in the context of satellite activities in space. Artificial intelligence (AI) technologies, such as machine learning and computer vision, can be utilized to process vast amounts of data received from satellites. AI algorithms can analyse satellite imagery, detect patterns, and extract valuable information for applications like weather forecasting, urban planning, agriculture, disaster management, and environmental monitoring. AI can assist in automating and optimizing satellite operations. Autonomous decision-making systems can be developed using AI to handle routine tasks like orbit control, collision avoidance, and antenna pointing. These systems can improve efficiency, reduce human error, and enable real-time responsiveness in satellite operations. AI technologies can be leveraged to enhance the security of satellite systems. AI algorithms can analyze satellite telemetry data to detect anomalies, identify potential cyber threats, and mitigate vulnerabilities. Governance frameworks should encompass regulations and standards for securing satellite systems against cyberattacks and ensuring data privacy. AI can optimize resource allocation and utilization in satellite constellations. By analyzing user demands, traffic patterns, and satellite performance data, AI algorithms can dynamically adjust the deployment and routing of satellites to maximize coverage and minimize latency. Governance frameworks need to address fair and efficient resource allocation among satellite operators to avoid monopolistic practices. Satellite activities involve multiple countries and organizations. Governance frameworks should encourage international cooperation, information sharing, and standardization to address common challenges, ensure interoperability, and prevent conflicts. AI can facilitate cross-border collaborations by providing data analytics and decision support tools for shared satellite missions and data sharing initiatives. AI and governance are critical aspects of satellite activities in space. They enable efficient and secure operations, ensure responsible and ethical use of AI technologies, and promote international cooperation for the benefit of all stakeholders involved in the satellite industry.Keywords: satellite, space debris, traffic, threats, cyber security.
Procedia PDF Downloads 79
1903 Isolation of Clitorin and Manghaslin from Carica papaya L. Leaves by CPC and Its Quantitative Analysis by QNMR
Authors: Norazlan Mohmad Misnan, Maizatul Hasyima Omar, Mohd Isa Wasiman
Abstract:
Papaya (Carica papaya L., Caricaceae) is a tree which mainly cultivated for its fruits in many tropical regions including Australia, Brazil, China, Hawaii, and Malaysia. Beside of fruits, its leaves, seeds, and latex have also been traditionally used for treating diseases, which also reported to possess anti-cancer and anti- malaria properties. Its leaves have been reported to consist of various chemical compounds such as alkaloids, flavonoids and phenolics. Clitorin and manghaslin are among major flavonoids presence. Thus, the aim of this study is to quantify the purity of these isolated compounds (clitorin and manghsalin) by using quantitative Nuclear Magnetic Resonance (qNMR) analysis. Only fresh C. papaya leaves were used for juice extraction procedure and subsequently was freeze-dried to obtain a dark green powdered form of the extract prior to Centrifugal Partition Chromatography (CPC) separation. The CPC experiments were performed using a two-phase solvent system comprising ethyl acetate/butanol/water (1:4:5, v/v/v/v) solvent. The upper organic phase was used as the stationary phase, and the lower aqueous phase was employed as the mobile phase. Ten fractions were obtained after an hour runtime analysis. Fraction 6 and fraction 8 has been identified as clitorin (m/z 739.21 [M-H]-) and manghaslin (m/z 755.21 [M-H]-), respectively, based on LCMS data and full analysis of NMR (1H NMR, 13C NMR, HMBC, and HSQC). The 1H-qNMR measurements were carried out using a 400 MHz NMR spectrometer (JEOL ECS 400MHz, Japan) and deuterated methanol was used as a solvent. Quantification was performed using the AQARI method (Accurate Quantitative NMR) with deuterated 1,4-Bis(trimethylsilyl)benzene (BTMSB) as an internal reference substances. This AQARI protocol includes not only NMR measurement but also sample preparation that provide highest precision and accuracy than other qNMR methods. The 90° pulse length and the T1 relaxation times for compounds and BTMSB were determined prior to the quantification to give the best signal-to-noise ratio. Regions containing the two downfield signals from aromatic part (6.00–6.89 ppm), and the singlet signal, (18H) arising from BTMSB (0.63-1.05ppm) were selected for integration. The purity of clitorin and manghaslin were calculated to be 52.22% and 43.36%, respectively. Further purification is needed in order to increase its purity. This finding has demonstrated the use of qNMR for quality control and standardization of various plant extracts and which can be applied for NMR fingerprinting of other plant-based products with good reproducibility and in the case where commercial standards is not readily available.Keywords: Carica papaya, clitorin, manghaslin, quantitative Nuclear Magnetic Resonance, Centrifugal Partition Chromatography
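The AQARI-style purity calculation described above follows the standard 1H-qNMR relation between the analyte and the internal reference (BTMSB). The sketch below shows the arithmetic only; the integrals, weighed masses, and the clitorin molar mass (taken as roughly 740 g/mol from the [M-H]- ion at m/z 739.21) are placeholders, not the values measured in this work.

```python
def qnmr_purity(I_a, I_std, N_a, N_std, M_a, M_std, m_a, m_std, P_std):
    """Standard 1H-qNMR purity equation:
    P_a = (I_a/I_std) * (N_std/N_a) * (M_a/M_std) * (m_std/m_a) * P_std
    I: integrated peak area, N: number of protons in the integrated signal,
    M: molar mass (g/mol), m: weighed mass (mg), P: purity (%).
    """
    return (I_a / I_std) * (N_std / N_a) * (M_a / M_std) * (m_std / m_a) * P_std

# Hypothetical example: clitorin quantified against BTMSB
# (BTMSB: M = 222.5 g/mol, 18 equivalent TMS protons; all other numbers are invented)
purity = qnmr_purity(I_a=0.70, I_std=10.0, N_a=2, N_std=18,
                     M_a=740.2, M_std=222.5, m_a=20.0, m_std=5.0, P_std=99.0)
print(f"Estimated clitorin purity: {purity:.1f} %")
```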
Procedia PDF Downloads 499
1902 Expression Level of Dehydration-Responsive Element Binding/DREB Gene of Some Local Corn Cultivars from Kisar Island-Maluku Indonesia Using Quantitative Real-Time PCR
Authors: Hermalina Sinay, Estri L. Arumingtyas
Abstract:
The research objective was to determine the expression level of the dehydration-responsive element binding (DREB) gene in local corn cultivars from Kisar Island, Maluku. The study used a randomized block design with a single factor consisting of six local corn cultivars obtained from farmers on Kisar Island and one reference variety, which has been released by the government as a drought-tolerant variety and was obtained from the Cereal Crops Research Institute (ICERI), Maros, South Sulawesi. Leaf samples were taken from the second leaf after the flag leaf at 65 days after planting. Total RNA was isolated from the leaf samples according to the protocol of the R & A-Blue™ Total RNA Extraction Kit and used as a template for cDNA synthesis. cDNA was synthesized from the total RNA according to the protocol of the One-Step Reverse Transcriptase PCR Premix Kit. Real-time PCR was performed on the cDNA from reverse transcription following the procedure of the Real MOD™ Green Real-Time PCR Master Mix Kit. The data obtained from the real-time PCR were analyzed using the relative quantification method based on the critical point/cycle threshold (CP/CT). The DREB gene expression analysis showed that the expression level was highest in the Deep Yellow local corn cultivar and lowest in the Rubby Brown Cob cultivar. It can be concluded that the DREB gene expression level of the Deep Yellow local corn cultivar was higher than that of the other local corn cultivars and of the Srikandi variety used as a reference.
Keywords: expression, level, DREB gene, local corn cultivars, Kisar Island, Maluku
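The CP/CT-based relative quantification mentioned above is most commonly carried out with the 2^-ΔΔCt (Livak) scheme, normalizing the target gene to a housekeeping gene and expressing each cultivar relative to a calibrator such as the Srikandi reference variety. A minimal sketch follows; the Ct values, the choice of housekeeping gene, and the assumption of roughly 100% amplification efficiency are illustrative, not taken from the study.

```python
def ddct_relative_expression(ct_target_sample, ct_ref_sample,
                             ct_target_calibrator, ct_ref_calibrator):
    """Livak 2^-ddCt relative quantification.
    dCt  = Ct(target) - Ct(reference gene), per sample
    ddCt = dCt(sample) - dCt(calibrator)
    fold change = 2 ** (-ddCt); assumes amplification efficiency close to 2.0.
    """
    dct_sample = ct_target_sample - ct_ref_sample
    dct_calibrator = ct_target_calibrator - ct_ref_calibrator
    return 2.0 ** (-(dct_sample - dct_calibrator))

# Hypothetical Ct values: DREB in a local cultivar vs. the reference variety,
# normalized to a housekeeping gene such as actin (all numbers are invented).
fold = ddct_relative_expression(ct_target_sample=24.1, ct_ref_sample=18.3,
                                ct_target_calibrator=26.0, ct_ref_calibrator=18.5)
print(f"DREB expression relative to the reference variety: {fold:.2f}-fold")
```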
Procedia PDF Downloads 301
1901 The Association of Slope Failure and Lineament Density along the Ranau-Tambunan Road, Sabah, Malaysia
Authors: Norbert Simon, Rodeano Roslee, Abdul Ghani Rafek, Goh Thian Lai, Azimah Hussein, Lee Khai Ern
Abstract:
The 54 km stretch of the Ranau-Tambunan (RTM) road in Sabah is subject to slope failures almost every year. This study focuses on identifying sections of the road that are susceptible to failure based on temporal landslide density and lineament density analyses. In addition to these analyses, the rock slopes in several sections of the road were assessed using the geological strength index (GSI) technique. The analysis involved 148 landslides recorded in 1978, 1994, 2009 and 2011. The landslides were digitized as points, and the point density was calculated for every 1 km² of the road. The lineaments of the area were interpreted from the Landsat 7 15 m panchromatic band. The lineament density was later calculated for every 1 km² of the area using the same technique as the slope failure density calculation. The landslide and lineament densities were classified into three classes that indicate the level of susceptibility (low, moderate, high). Subsequently, the two density maps were overlaid to produce the final susceptibility map. The combination of the two high-susceptibility classes from these maps signifies a high potential for slope failure in those locations in the future. The final susceptibility map indicates that there are 22 sections of the road that are highly susceptible. Seven rock slopes along the RTM road were assessed using the GSI technique. The assessment found that the rock slopes along this road are highly fractured and weathered and can be classified into the fair to poor categories. The poor condition of the rock slopes can be attributed to the high lineament density present in the study area. Six of the rock slopes are located in the high-susceptibility zones. A detailed investigation of the 22 highly susceptible sections of the RTM road should be conducted, given their higher susceptibility to failure, in order to prevent untoward incidents to road users in the future.
Keywords: GSI, landslide, landslide density, landslide susceptibility, lineament density
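The density overlay described above can be reproduced as a simple raster reclassification and Boolean combination. The sketch below is generic: the grid, the gamma-distributed placeholder densities, and the low/moderate/high class breaks are assumptions, not the values used in the study.

```python
import numpy as np

def classify(density, breaks):
    """Reclassify a density grid into 1 = low, 2 = moderate, 3 = high using two breaks."""
    return np.digitize(density, bins=breaks) + 1

# Hypothetical 1 km x 1 km density grids along the RTM road corridor
landslide_density = np.random.default_rng(0).gamma(2.0, 1.0, size=(10, 54))  # slides / km^2
lineament_density = np.random.default_rng(1).gamma(2.0, 0.5, size=(10, 54))  # km of lineament / km^2

ls_class = classify(landslide_density, breaks=[1.5, 3.0])   # assumed class breaks
ln_class = classify(lineament_density, breaks=[0.8, 1.6])

# Final susceptibility: cells that fall in the "high" class (3) in BOTH layers
high_susceptibility = (ls_class == 3) & (ln_class == 3)
print(f"{high_susceptibility.sum()} of {high_susceptibility.size} km^2 cells flagged as highly susceptible")
```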
Procedia PDF Downloads 399
1900 Trump’s COVID-19 Discourse: Downgrading the Fundamentals of the Political Fair Play
Authors: Gustavo Naranjo Maroto, Dolores Fernandez Martinez
Abstract:
Context has always been essential to understand any reaction from every human being, and words, whether written or spoken, are definitely a powerful representative sample of human reaction. This study starts with an accurate breakdown of the context in which the current president of the US, Mr. Donald J. Trump is conveying his discourses in order to be able to judge them from a critical discourse analysis point of view. The present world’s scenario with a pandemic disease in form of Covid-19 that is threatening the world and certainly putting at risk the so called 'Welfare State', the role of the United States as the first superpower on earth nowadays, the very peculiar profile of President Trump not only as a politician but as a persona, and the fact of being on the verge of a very controversial presidential elections are without doubt a great and undeniable opportunity for the implementation of the critical discourse analysis methodology. Hence, this research will primarily analyze in detail some of the most interesting discourses delivered by Trump in different media since the very beginning of the outbreak of the coronavirus pandemic in the United States of America (February, 2020), sadly very often downplayed by President Trump, until the final result of the upcoming presidential election scheduled for Tuesday, November 3, 2020, where the political discourse has been dramatically downgraded to a very dangerous state, putting in jeopardy the fundamentals of the political fair play in terms of speech. Finally, the study will hopefully conclude with the final outcome of the data analyzed, allowing to picture how significant the context can be concerning linguistics on the one hand, in terms of shaping or altering the message that the issuer thought to convey in the first place, and on the other hand, generously assessing to what extend the recipients of the message are influenced by the message in terms of receptiveness.Keywords: Covid-19, critical discourse analysis, Donald J. Trump, political discourse
Procedia PDF Downloads 133
1899 Digital Transformation of Lean Production: Systematic Approach for the Determination of Digitally Pervasive Value Chains
Authors: Peter Burggräf, Matthias Dannapfel, Hanno Voet, Patrick-Benjamin Bök, Jérôme Uelpenich, Julian Hoppe
Abstract:
The increasing digitalization of value chains can help companies to handle rising complexity in their processes and thereby reduce the steadily increasing planning and control effort in order to raise performance limits. Due to technological advances, companies face the challenge of smart value chains for the purpose of improvements in productivity, handling the increasing time and cost pressure and the need of individualized production. Therefore, companies need to ensure quick and flexible decisions to create self-optimizing processes and, consequently, to make their production more efficient. Lean production, as the most commonly used paradigm for complexity reduction, reaches its limits when it comes to variant flexible production and constantly changing market and environmental conditions. To lift performance limits, which are inbuilt in current value chains, new methods and tools must be applied. Digitalization provides the potential to derive these new methods and tools. However, companies lack the experience to harmonize different digital technologies. There is no practicable framework, which instructs the transformation of current value chains into digital pervasive value chains. Current research shows that a connection between lean production and digitalization exists. This link is based on factors such as people, technology and organization. In this paper, the introduced method for the determination of digitally pervasive value chains takes the factors people, technology and organization into account and extends existing approaches by a new dimension. It is the first systematic approach for the digital transformation of lean production and consists of four steps: The first step of ‘target definition’ describes the target situation and defines the depth of the analysis with regards to the inspection area and the level of detail. The second step of ‘analysis of the value chain’ verifies the lean-ability of processes and lies in a special focus on the integration capacity of digital technologies in order to raise the limits of lean production. Furthermore, the ‘digital evaluation process’ ensures the usefulness of digital adaptions regarding their practicability and their integrability into the existing production system. Finally, the method defines actions to be performed based on the evaluation process and in accordance with the target situation. As a result, the validation and optimization of the proposed method in a German company from the electronics industry shows that the digital transformation of current value chains based on lean production achieves a raise of their inbuilt performance limits.Keywords: digitalization, digital transformation, Industrie 4.0, lean production, value chain
Procedia PDF Downloads 314
1898 Review and Analysis of Parkinson's Tremor Genesis Using Mathematical Model
Authors: Pawan Kumar Gupta, Sumana Ghosh
Abstract:
Parkinson's Disease (PD) is a long-term neurodegenerative movement disorder of the central nervous system with vast symptoms related to the motor system. The common symptoms of PD are tremor, rigidity, bradykinesia/akinesia, and postural instability, but the clinical symptom includes other motor and non‐motor issues. The motor symptoms of the disease are consequence of death of the neurons in a region of the midbrain known as substantia nigra pars compacta, leading to decreased level of a neurotransmitter known as dopamine. The cause of this neuron death is not clearly known but involves formation of Lewy bodies, an abnormal aggregation or clumping of the protein alpha-synuclein in the neurons. Unfortunately, there is no cure for PD, and the management of this disease is challenging. Therefore, it is critical for a patient to be diagnosed at early stages. A limited choice of drugs is available to improve the symptoms, but those become less and less effective over time. Apart from that, with rapid growth in the field of science and technology, other methods such as multi-area brain stimulation are used to treat patients. In order to develop advanced techniques and to support drug development for treating PD patients, an accurate mathematical model is needed to explain the underlying relationship of dopamine secretion in the brain with the hand tremors. There has been a lot of effort in the past few decades on modeling PD tremors and treatment effects from a computational point of view. These models can effectively save time as well as the cost of drug development for the pharmaceutical industry and be helpful for selecting appropriate treatment mechanisms among all possible options. In this review paper, an effort is made to investigate studies on PD modeling and analysis and to highlight some of the key advances in the field over the past centuries with discussion on the current challenges.Keywords: Parkinson's disease, deep brain stimulation, tremor, modeling
Procedia PDF Downloads 141
1897 Sensitivity and Uncertainty Analysis of One Dimensional Shape Memory Alloy Constitutive Models
Authors: A. B. M. Rezaul Islam, Ernur Karadogan
Abstract:
Shape memory alloys (SMAs) are known for their shape memory effect and pseudoelasticity behavior. Their thermomechanical behaviors are modeled by numerous researchers using microscopic thermodynamic and macroscopic phenomenological point of view. Tanaka, Liang-Rogers and Ivshin-Pence models are some of the most popular SMA macroscopic phenomenological constitutive models. They describe SMA behavior in terms of stress, strain and temperature. These models involve material parameters and they have associated uncertainty present in them. At different operating temperatures, the uncertainty propagates to the output when the material is subjected to loading followed by unloading. The propagation of uncertainty while utilizing these models in real-life application can result in performance discrepancies or failure at extreme conditions. To resolve this, we used probabilistic approach to perform the sensitivity and uncertainty analysis of Tanaka, Liang-Rogers, and Ivshin-Pence models. Sobol and extended Fourier Amplitude Sensitivity Testing (eFAST) methods have been used to perform the sensitivity analysis for simulated isothermal loading/unloading at various operating temperatures. As per the results, it is evident that the models vary due to the change in operating temperature and loading condition. The average and stress-dependent sensitivity indices present the most significant parameters at several temperatures. This work highlights the sensitivity and uncertainty analysis results and shows comparison of them at different temperatures and loading conditions for all these models. The analysis presented will aid in designing engineering applications by eliminating the probability of model failure due to the uncertainty in the input parameters. Thus, it is recommended to have a proper understanding of sensitive parameters and the uncertainty propagation at several operating temperatures and loading conditions as per Tanaka, Liang-Rogers, and Ivshin-Pence model.Keywords: constitutive models, FAST sensitivity analysis, sensitivity analysis, sobol, shape memory alloy, uncertainty analysis
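A Sobol analysis of the kind reported above can be set up, for example, with the SALib package (an eFAST analyzer is available in the same library). In the sketch below, a highly simplified Tanaka-type martensite-fraction expression stands in for the full constitutive models, and the parameter ranges, fixed stress, and temperature are placeholders rather than the material values used in the study; note also that SALib's sampling interface differs slightly between versions.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Simplified Tanaka-type martensite fraction at a fixed stress and temperature:
# xi = 1 - exp(aM*(Ms - T) + bM*sigma), with aM = ln(0.01)/(Ms - Mf), bM = aM/CM.
def martensite_fraction(Ms, Mf, CM, sigma=100.0e6, T=290.0):
    aM = np.log(0.01) / (Ms - Mf)
    bM = aM / CM
    return np.clip(1.0 - np.exp(aM * (Ms - T) + bM * sigma), 0.0, 1.0)

problem = {
    "num_vars": 3,
    "names": ["Ms", "Mf", "CM"],
    "bounds": [[285.0, 295.0],    # K, assumed uncertainty ranges only
               [270.0, 280.0],    # K
               [6.0e6, 10.0e6]],  # Pa/K
}

X = saltelli.sample(problem, 1024)                      # Saltelli sampling for Sobol indices
Y = np.array([martensite_fraction(*row) for row in X])  # model evaluations
Si = sobol.analyze(problem, Y)
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name}: first-order S1 = {s1:.3f}, total-order ST = {st:.3f}")
```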
Procedia PDF Downloads 147
1896 Construction of a Dynamic Model of Cerebral Blood Circulation for Future Integrated Control of Brain State
Authors: Tomohiko Utsuki
Abstract:
Brain resuscitation is currently becoming increasingly important due to the revision of various clinical guidelines pertinent to emergency care. In brain resuscitation, the control of brain temperature (BT), intracranial pressure (ICP), and cerebral blood flow (CBF) is required for stabilizing the physiological state of the brain and is described as an essential treatment point in many guidelines for disorders and diseases such as brain injury, stroke, and encephalopathy. Thus, an integrated control system for BT, ICP, and CBF will greatly contribute to alleviating the burden on medical staff and improving treatment effects in brain resuscitation. In order to develop such a control system, models related to BT, ICP, and CBF are required for control simulation, because trial-and-error experiments using patients are not ethically allowed. A static model of cerebral blood circulation from the intracranial arteries and the vertebral artery to the jugular veins has already been constructed and verified. However, it is impossible in this model to represent the pooling of blood in blood vessels, which is one cause of cerebral hypertension. It is also impossible to represent the pulsing motion of blood vessels caused by blood pressure changes, which can have an effect on changes in cerebral tissue pressure. Thus, a dynamic model of cerebral blood circulation was constructed in consideration of the elasticity of the blood vessels and the inertia of the blood vessel wall. The constructed dynamic model was numerically analyzed using normal data, in which each arterial blood flow in the cerebral blood circulation, the distribution of blood pressure in the Circle of Willis, and the change of blood pressure along the blood flow were calculated for verification against physiological knowledge. As each calculated numerical value fell within the generally known normal range, this model has no problem in representing at least the normal physiological state of the brain. The next task is to verify the accuracy of the present model in cases of disease or disorder. Currently, the construction of a migration model of extracellular fluid and a model of heat transfer in cerebral tissue is in progress, to make them parts of an integrated model of the brain's physiological state, which is necessary for developing a future integrated control system of BT, ICP and CBF. The present model is applicable to constructing an integrated model representing at least the normal condition of the brain's physiological state by uniting it with such models.
Keywords: dynamic model, cerebral blood circulation, brain resuscitation, automatic control
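One common way to represent vessel elasticity and blood/wall inertia of the kind described above is a lumped-parameter (Windkessel-type) compartment combining resistance, compliance, and inertance. The sketch below is a generic illustration of that modeling idea, not the authors' actual model, and every parameter value is invented.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Lumped arterial segment: R (resistance), C (compliance, "elasticity"),
# L (inertance, "inertia"). States: compartment pressure P and inflow Q.
# Units are arbitrary and purely illustrative.
R, C, L = 1.0, 1.2, 0.05
P_out = 5.0  # downstream (venous) pressure

def inlet_pressure(t):
    """Crude pulsatile driving pressure (mean 100, pulse amplitude 20, 1.2 Hz)."""
    return 100.0 + 20.0 * np.sin(2.0 * np.pi * 1.2 * t)

def rhs(t, y):
    P, Q = y
    dP = (Q - (P - P_out) / R) / C    # compliance stores and releases volume
    dQ = (inlet_pressure(t) - P) / L  # inertance delays flow changes
    return [dP, dQ]

sol = solve_ivp(rhs, (0.0, 10.0), y0=[90.0, 0.0], max_step=1e-3)
print(f"compartment pressure range: {sol.y[0].min():.1f} .. {sol.y[0].max():.1f}")
```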
Procedia PDF Downloads 155
1895 An Early Attempt of Artificial Intelligence-Assisted Language Oral Practice and Assessment
Authors: Paul Lam, Kevin Wong, Chi Him Chan
Abstract:
Constant practicing and accurate, immediate feedback are the keys to improving students’ speaking skills. However, traditional oral examination often fails to provide such opportunities to students. The traditional, face-to-face oral assessment is often time consuming – attending the oral needs of one student often leads to the negligence of others. Hence, teachers can only provide limited opportunities and feedback to students. Moreover, students’ incentive to practice is also reduced by their anxiety and shyness in speaking the new language. A mobile app was developed to use artificial intelligence (AI) to provide immediate feedback to students’ speaking performance as an attempt to solve the above-mentioned problems. Firstly, it was thought that online exercises would greatly increase the learning opportunities of students as they can now practice more without the needs of teachers’ presence. Secondly, the automatic feedback provided by the AI would enhance students’ motivation to practice as there is an instant evaluation of their performance. Lastly, students should feel less anxious and shy compared to directly practicing oral in front of teachers. Technically, the program made use of speech-to-text functions to generate feedback to students. To be specific, the software analyzes students’ oral input through certain speech-to-text AI engine and then cleans up the results further to the point that can be compared with the targeted text. The mobile app has invited English teachers for the pilot use and asked for their feedback. Preliminary trials indicated that the approach has limitations. Many of the users’ pronunciation were automatically corrected by the speech recognition function as wise guessing is already integrated into many of such systems. Nevertheless, teachers have confidence that the app can be further improved for accuracy. It has the potential to significantly improve oral drilling by giving students more chances to practice. Moreover, they believe that the success of this mobile app confirms the potential to extend the AI-assisted assessment to other language skills, such as writing, reading, and listening.Keywords: artificial Intelligence, mobile learning, oral assessment, oral practice, speech-to-text function
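The comparison between the cleaned speech-to-text output and the target text described above is typically scored with an edit-distance measure such as the word error rate (WER). A minimal sketch is given below; the example sentences and the feedback threshold are assumptions, not the app's actual logic.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / number of reference words,
    computed with a standard Levenshtein dynamic program over words."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

target = "I would like to book a table for two people"
recognised = "I would like to book a table for two"
wer = word_error_rate(target, recognised)
print(f"WER = {wer:.2f} -> feedback: {'good' if wer < 0.2 else 'keep practising'}")
```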
Procedia PDF Downloads 105
1894 Decisional Regret in Men with Localized Prostate Cancer among Various Treatment Options and the Association with Erectile Functioning and Depressive Symptoms: A Moderation Analysis
Authors: Caren Hilger, Silke Burkert, Friederike Kendel
Abstract:
Men with localized prostate cancer (PCa) have to choose among different treatment options, such as active surveillance (AS) and radical prostatectomy (RP). All available treatment options may be accompanied by specific psychological or physiological side effects. Depending on the nature and extent of these side effects, patients are more or less likely to be satisfied or to struggle with their treatment decision in the long term. Therefore, the aim of this study was to assess and explain decisional regret in men with localized PCa. The role of erectile functioning as one of the main physiological side effects of invasive PCa treatment, depressive symptoms as a common psychological side effect, and the association of erectile functioning and depressive symptoms with decisional regret were investigated. Men with localized PCa initially managed with AS or RP (N=292) were matched according to length of therapy (mean 47.9±15.4 months). Subjects completed mailed questionnaires assessing decisional regret, changes in erectile functioning, depressive symptoms, and sociodemographic variables. Clinical data were obtained from case report forms. Differences among the two treatment groups (AS and RP) were calculated using t-tests and χ²-tests, relationships of decisional regret with erectile functioning and depressive symptoms were computed using multiple regression. Men were on average 70±7.2 years old. The two treatment groups differed markedly regarding decisional regret (p<.001, d=.50), changes in erectile functioning (p<.001, d=1.2), and depressive symptoms (p=.01, d=.30), with men after RP reporting higher values, respectively. Regression analyses showed that after adjustment for age, tumor risk category, and changes in erectile functioning, depressive symptoms were still significantly associated with decisional regret (B=0.52, p<.001). Additionally, when predicting decisional regret, the interaction of changes in erectile functioning and depressive symptoms reached significance for men after RP (B=0.52, p<.001), but not for men under AS (B=-0.16, p=.14). With increased changes in erectile functioning, the association of depressive symptoms with decisional regret became stronger in men after RP. Decisional regret is a phenomenon more prominent in men after RP than in men under AS. Erectile functioning and depressive symptoms interact in their prediction of decisional regret. Screening and treating depressive symptoms might constitute a starting point for interventions aiming to reduce decisional regret in this target group.Keywords: active surveillance, decisional regret, depressive symptoms, erectile functioning, prostate cancer, radical prostatectomy
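The moderation analysis reported above corresponds to an ordinary least-squares model with an interaction term between change in erectile functioning and depressive symptoms, adjusted for age and tumor risk category. The sketch below uses statsmodels on simulated data; the variable names, coefficients, and data are placeholders, not the study data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 292
df = pd.DataFrame({
    "age": rng.normal(70, 7, n),
    "risk": rng.integers(0, 2, n),     # tumour risk category (dummy-coded)
    "ef_change": rng.normal(0, 1, n),  # change in erectile functioning (z-scored)
    "depress": rng.normal(0, 1, n),    # depressive symptoms (z-scored)
})
# Simulated outcome with a built-in interaction, for illustration only
df["regret"] = (0.5 * df["depress"] + 0.3 * df["ef_change"]
                + 0.4 * df["ef_change"] * df["depress"] + rng.normal(0, 1, n))

# Moderation model: decisional regret ~ covariates + main effects + interaction
model = smf.ols("regret ~ age + risk + ef_change * depress", data=df).fit()
print(model.params[["ef_change", "depress", "ef_change:depress"]])
```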
Procedia PDF Downloads 218
1893 An Elaborated Software Solution: The Tennis Ranking System
Authors: Dionysios Kakaroumpas, Jesseka Farago, Stephen Webber
Abstract:
Athletes and spectators depend on the tennis ranking system to represent the truest caliber of athletic prowess; a careful look at the current ranking system though, reveals its main weakness: it undermines expectations of fans and players. Our study proposes several key changes to the existing ranking formula that provide a fair and accurate approach to measure player performance. The study proposes a modification of the system to value: participation, continued advancement, and overall achievement. The new ranking formula facilitates closing the trust gap, encouraging competition equality, engaging the fan base, attracting investment, and promoting tennis involvement worldwide. To probe the crux of our main contention we performed week-by-week comparisons between results procured from the current and proposed formulae. After performing this rigorous case-study of top players of each gender, the findings strongly indicated that there is identifiable inflation in the ranks and enhanced the conviction that the current system should be updated. The new system is accompanied by a web-based software package freely available to anyone involved or interested in tennis rankings. The software package is designed to automatically calculate new player rankings based on a responsive, multi-faceted formula that also generates projected point scenarios and provides separate rankings for the three different court surfaces. By taking a critical look at the current tennis ranking system with consideration to the perspective of fans, players, and businesses involved, an upgrade is in order for it to maintain the balance of trust between fans and the evaluation process. In closure, this proposed solution increases fair play competition, eliminates rank inflation, and better engages fans, players, and sponsors by bringing in a new era of professional tennis.Keywords: measurement and evaluation, rules and regulations, sports management and marketing, tennis ranking system
Procedia PDF Downloads 272
1892 Immediate Effect of Transcutaneous Electrical Nerve Stimulation on Flexibility and Health Status in Patients with Chronic Nonspecific Low Back Pain (A Pilot Study)
Authors: Narupon Kunbootsri, Patpiya Sirasaporn
Abstract:
Low back pain is among the most common chief complaints in chronic pain. Low back pain directly affects activities of daily living and also has high socioeconomic costs. The prevalence of low back pain is high in both genders in all populations. The symptoms of low back pain include pain in the low back area, muscle spasm, tender points, and a stiff back. Transcutaneous Electrical Nerve Stimulation (TENS) is one of the modalities mainly used for pain control. TENS is widely used in low back pain, but there are no scientific data about muscle flexibility after TENS in low back pain. Thus, the aim of this study was to investigate the immediate effect of TENS on flexibility and health status in patients with chronic nonspecific low back pain. Eight patients with chronic nonspecific low back pain (1 male and 7 female) were enrolled in this study. Participants were diagnosed by a doctor based on history and physical examination. Each participant received treatment at a physiotherapy unit. Participants completed the Roland Morris Disability Questionnaire (RMDQ), the numeric rating scale (NRS), and a trunk flexibility measurement before treatment. Each participant received low-frequency TENS (asymmetrical waveform, 10 Hz, 20 minutes per point). Immediately after treatment, participants completed the NRS, the RMDQ, and the trunk flexibility measurement again. All participants were treated by only one physiotherapist. There was a statistically significant increase in flexibility immediately after low-frequency TENS [mean difference -6.37, 95% CI (-8.35) to (-4.39)]. There was a statistically significant decrease in the numeric rating scale [mean difference 2.13, 95% CI 1.08-3.16]. The Roland Morris Disability Questionnaire showed an average improvement in health status of 44.8% immediately after treatment. In conclusion, the results of the present study indicate that low-frequency TENS can immediately decrease pain and improve back muscle flexibility in patients with chronic nonspecific low back pain.
Keywords: low back pain, flexibility, TENS, chronic
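The pre/post comparisons above (mean differences with 95% confidence intervals) follow from a standard paired analysis. A sketch with invented flexibility measurements for eight participants:

```python
import numpy as np
from scipy import stats

# Hypothetical pre/post trunk flexibility scores (cm) for n = 8 participants
pre = np.array([18.0, 22.5, 15.0, 20.0, 17.5, 19.0, 21.0, 16.5])
post = np.array([24.0, 29.5, 22.0, 26.5, 23.0, 25.5, 28.0, 23.5])

diff = pre - post  # negative values indicate improvement after TENS
n = len(diff)
mean_d = diff.mean()
se = diff.std(ddof=1) / np.sqrt(n)
t_crit = stats.t.ppf(0.975, df=n - 1)
ci = (mean_d - t_crit * se, mean_d + t_crit * se)

t_stat, p_value = stats.ttest_rel(pre, post)
print(f"mean difference = {mean_d:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f}), p = {p_value:.4f}")
```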
Procedia PDF Downloads 559
1891 The Impact of Access to Finances on Survival of Small and Medium Enterprises: The South African Perspective in a COVID-19 Era
Authors: Thabiso Sthembiso Msomi
Abstract:
SMEs are the main engine of growth in most developing economies. One of the main factors that hinder the development of SME is access to finance. In this study, we explored the factors that hinder the growth and survival of SMEs in South Africa. The capital structure theory formed the theoretical underpinning for the study. The quantitative research design was adopted and data was collected from retail, construction, manufacturing and agriculture sectors of SMEs within the KwaZulu-Natal province of South Africa. The modified version of the Cochran formula was used to determine the sample size as 321 SMEs and analysed using the five-point Likert scale. The purposive sampling technique was used to select owners of SME. Statistical Package for the Social Sciences (SPSS) was used for the data analysis through Exploratory Factor Analysis (EFA) to determine the factor structures of items employed to measure each of the constructs in this study. Then, the Cronbach’s alpha test was conducted to determine the reliability of each construct. Kaiser-Meyer-Olkin (KMO) was used to determine the adequacy of the sample size. Linear regression was done to determine the effect of the independent variables on the dependent variable. The findings suggest that the main constraints facing South African SMEs were the lack of experienced management. Furthermore, the SMEs would fail to raise customer awareness of their products and services, which in turn affects their market access and monthly turnover. The study recommends that SMEs keep up-to-date records of business transactions to enable the business to keep track of its operations. The study recommends that South African banks adopt an SME accounting and bookkeeping program. The finding of this study benefits policymakers in both the private and public sectors.Keywords: small businesses, access to finances, COVID-19, SMEs survival
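The reliability step described above (Cronbach's alpha per construct) can be computed directly from the item-response matrix. A minimal sketch on simulated 5-point Likert responses follows; the item structure and data are invented, not the survey data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert scores.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of the total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(7)
latent = rng.normal(0, 1, size=(321, 1))                 # one underlying construct
noise = rng.normal(0, 0.8, size=(321, 5))
responses = np.clip(np.round(3 + latent + noise), 1, 5)  # five 5-point Likert items

print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```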
Procedia PDF Downloads 178
1890 1D/3D Modeling of a Liquid-Liquid Two-Phase Flow in a Milli-Structured Heat Exchanger/Reactor
Authors: Antoinette Maarawi, Zoe Anxionnaz-Minvielle, Pierre Coste, Nathalie Di Miceli Raimondi, Michel Cabassud
Abstract:
Milli-structured heat exchanger/reactors have been recently widely used, especially in the chemical industry, due to their enhanced performances in heat and mass transfer compared to conventional apparatuses. In our work, the ‘DeanHex’ heat exchanger/reactor with a 2D-meandering channel is investigated both experimentally and numerically. The square cross-sectioned channel has a hydraulic diameter of 2mm. The aim of our study is to model local physico-chemical phenomena (heat and mass transfer, axial dispersion, etc.) for a liquid-liquid two-phase flow in our lab-scale meandering channel, which represents the central part of the heat exchanger/reactor design. The numerical approach of the reactor is based on a 1D model for the flow channel encapsulated in a 3D model for the surrounding solid, using COMSOL Multiphysics V5.5. The use of the 1D approach to model the milli-channel reduces significantly the calculation time compared to 3D approaches, which are generally focused on local effects. Our 1D/3D approach intends to bridge the gap between the simulation at a small scale and the simulation at the reactor scale at a reasonable CPU cost. The heat transfer process between the 1D milli-channel and its 3D surrounding is modeled. The feasibility of this 1D/3D coupling was verified by comparing simulation results to experimental ones originated from two previous works. Temperature profiles along the channel axis obtained by simulation fit the experimental profiles for both cases. The next step is to integrate the liquid-liquid mass transfer model and to validate it with our experimental results. The hydrodynamics of the liquid-liquid two-phase system is modeled using the ‘mixture model approach’. The mass transfer behavior is represented by an overall volumetric mass transfer coefficient ‘kLa’ correlation obtained from our experimental results in the millimetric size meandering channel. The present work is a first step towards the scale-up of our ‘DeanHex’ expecting future industrialization of such equipment. Therefore, a generalized scaled-up model of the reactor comprising all the transfer processes will be built in order to predict the performance of the reactor in terms of conversion rate and energy efficiency at an industrial scale.Keywords: liquid-liquid mass transfer, milli-structured reactor, 1D/3D model, process intensification
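The 1D channel side of the coupled approach described above amounts to marching balance equations along the channel axis, with the 3D solid supplying the wall condition. A generic sketch of such a 1D plug-flow energy balance is given below; the flow rate, fluid properties, heat-transfer coefficient, wall temperature, and channel length are assumptions standing in for the 'DeanHex' values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# 1D plug-flow energy balance along the channel axis x:
#   m_dot * cp * dT/dx = h * P_w * (T_wall - T(x))
d_h = 2.0e-3     # hydraulic diameter, m (square 2 mm channel)
P_w = 4 * d_h    # wetted perimeter of the square section, m
m_dot = 1.0e-3   # kg/s       (assumed)
cp = 4180.0      # J/(kg K)   (water-like fluid, assumed)
h = 2500.0       # W/(m2 K)   (assumed heat-transfer coefficient)
T_wall = 60.0    # degC, imposed by the surrounding 3D solid in the coupled model

def dTdx(x, T):
    return h * P_w * (T_wall - T) / (m_dot * cp)

L = 1.0          # developed channel length, m (assumed)
sol = solve_ivp(dTdx, (0.0, L), y0=[20.0], max_step=1e-3)
print(f"outlet temperature ~ {sol.y[0, -1]:.1f} degC")
```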
Procedia PDF Downloads 131
1889 Impact of Transitioning to Renewable Energy Sources on Key Performance Indicators and Artificial Intelligence Modules of Data Center
Authors: Ahmed Hossam ElMolla, Mohamed Hatem Saleh, Hamza Mostafa, Lara Mamdouh, Yassin Wael
Abstract:
Artificial intelligence (AI) is reshaping industries, and its potential to revolutionize renewable energy and data center operations is immense. By harnessing AI's capabilities, we can optimize energy consumption, predict fluctuations in renewable energy generation, and improve the efficiency of data center infrastructure. This convergence of technologies promises a future where energy is managed more intelligently, sustainably, and cost-effectively. The integration of AI into renewable energy systems unlocks a wealth of opportunities. Machine learning algorithms can analyze vast amounts of data to forecast weather patterns, solar irradiance, and wind speeds, enabling more accurate energy production planning. AI-powered systems can optimize energy storage and grid management, ensuring a stable power supply even during intermittent renewable generation. Moreover, AI can identify maintenance needs for renewable energy infrastructure, preventing costly breakdowns and maximizing system lifespan. Data centers, which consume substantial amounts of energy, are prime candidates for AI-driven optimization. AI can analyze energy consumption patterns, identify inefficiencies, and recommend adjustments to cooling systems, server utilization, and power distribution. Predictive maintenance using AI can prevent equipment failures, reducing energy waste and downtime. Additionally, AI can optimize data placement and retrieval, minimizing energy consumption associated with data transfer. As AI transforms renewable energy and data center operations, modified Key Performance Indicators (KPIs) will emerge. Traditional metrics like energy efficiency and cost-per-megawatt-hour will continue to be relevant, but additional KPIs focused on AI's impact will be essential. These might include AI-driven cost savings, predictive accuracy of energy generation and consumption, and the reduction of carbon emissions attributed to AI-optimized operations. By tracking these KPIs, organizations can measure the success of their AI initiatives and identify areas for improvement. Ultimately, the synergy between AI, renewable energy, and data centers holds the potential to create a more sustainable and resilient future. By embracing these technologies, we can build smarter, greener, and more efficient systems that benefit both the environment and the economy.Keywords: data center, artificial intelligence, renewable energy, energy efficiency, sustainability, optimization, predictive analytics, energy consumption, energy storage, grid management, data center optimization, key performance indicators, carbon emissions, resiliency
Procedia PDF Downloads 36
1888 Using the Ecological Analysis Method to Justify the Environmental Feasibility of Biohydrogen Production from Cassava Wastewater Biogas
Authors: Jonni Guiller Madeira, Angel Sanchez Delgado, Ronney Mancebo Boloy
Abstract:
The use of bioenergy has, in recent years, become a good alternative for reducing the emission of polluting gases. Several Brazilian and foreign companies are conducting studies on waste management as an essential tool in the search for energy efficiency, also taking the ecological aspect into consideration. Brazil is one of the largest cassava producers in the world; cassava by-products are the food base of millions of Brazilians. The body of results on the ecological impact of producing biohydrogen, by steam reforming, from cassava wastewater biogas is very limited because, in general, this commodity is more common in underdeveloped countries. This hydrogen, produced from cassava wastewater, appears as an alternative to fossil fuels, since cassava wastewater is a low-cost carbon source. This paper evaluates the environmental impact of biohydrogen production, by steam reforming, from cassava wastewater biogas. The ecological efficiency methodology developed by Cardu and Baica was used as a benchmark in this study. The methodology mainly assesses emissions in terms of equivalent carbon dioxide (accounting for CO₂, SOₓ, CH₄, and particulate matter). As a result, environmental parameters such as equivalent carbon dioxide emissions, the pollution indicator, and the ecological efficiency are evaluated, since they are important to energy production. The average values of these environmental parameters over different biogas compositions (different concentrations of methane) were calculated: the average pollution indicator was 10.11 kgCO₂e/kgH₂, with an average ecological efficiency of 93.37%. In conclusion, bioenergy production using biohydrogen from a cassava wastewater treatment plant is a good option from the environmental feasibility point of view. This is justified by determining these environmental parameters and comparing them with those of hydrogen production via steam reforming from other types of fuels. Keywords: biohydrogen, ecological efficiency, cassava, pollution indicator
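The sketch below illustrates, under stated assumptions, the equivalent-CO₂ bookkeeping behind a pollution indicator expressed in kg CO₂e per kg H₂; the emission amounts and equivalence factors are placeholders, not the authors’ data or the exact Cardu-Baica weighting.

```python
# A minimal sketch of an equivalent-CO2 pollution indicator for hydrogen production.
# Emission amounts and CO2-equivalence factors are hypothetical placeholders.
emissions_kg_per_kg_h2 = {   # emissions attributed to producing 1 kg of H2
    "CO2": 8.5,
    "CH4": 0.02,
    "SOx": 0.005,
    "PM":  0.002,
}
co2_equivalence = {          # illustrative CO2-equivalence factors per kg of pollutant
    "CO2": 1.0,
    "CH4": 28.0,             # e.g., a 100-year global warming potential
    "SOx": 80.0,
    "PM":  67.0,
}

pollution_indicator = sum(
    mass * co2_equivalence[species]
    for species, mass in emissions_kg_per_kg_h2.items()
)
print(f"Pollution indicator: {pollution_indicator:.2f} kg CO2e per kg H2")
# The ecological efficiency then combines this indicator with the conversion
# efficiency of the process, following the Cardu-Baica methodology (not reproduced here).
```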
Procedia PDF Downloads 199
1887 The Implication of Disaster Risk Identification to Cultural Heritage-The Scenarios of Flood Risk in Taiwan
Authors: Jieh-Jiuh Wang
Abstract:
Disasters happen frequently today due to global climate change. Cultural heritage conservation should therefore be considered from the perspective of the surrounding environment and of large-scale disasters. Most current thinking about disaster prevention for cultural heritage in Taiwan is single-point thinking that emphasizes firefighting, decay prevention, and structural reinforcement while ignoring the environment as a whole. Traditional conservation cannot defend against the increasingly severe and frequent natural disasters caused by climate change, and more and more cultural heritage sites face a high risk of disaster. This study adopts the perspective of risk identification and takes flooding as the main disaster category. It analyzes the number and categories of cultural heritage sites that might suffer from disasters using a geographic information system that integrates the latest flood potential data from the National Fire Agency and the Water Resources Agency with basic data on cultural heritage sites. It examines the actual flood risk confronting cultural heritage and serves as a basis for future risk measures and disaster-reduction preparations. The study finds a positive relationship between the extent to which national cultural heritage sites are affected and rainfall intensity. In order of flood impact, the categories rank as follows: historical buildings, historical sites designated by municipalities and counties, and national historical sites and relics. However, traditional settlements and cultural landscapes are not impacted, which may be related to taboo spaces in the traditional culture of site selection (concepts of disaster avoidance). As for the regional distribution, cultural heritage sites in central and northern Taiwan are exposed to more severe flooding, while those in northern and eastern Taiwan face greater flooding depths. Keywords: cultural heritage, flood, preventive conservation, risk management
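A minimal sketch of the GIS overlay step described above is shown below, assuming a recent version of geopandas; the file names, layer contents, and column names are hypothetical, not the agencies’ actual datasets.

```python
# A minimal sketch of overlaying heritage sites with flood-potential zones.
# File and column names are hypothetical illustrative choices.
import geopandas as gpd

heritage = gpd.read_file("heritage_sites.shp")      # point layer with a 'category' column
flood = gpd.read_file("flood_potential_350mm.shp")  # polygons for one rainfall scenario

# Reproject the flood layer and keep only sites inside a flood-potential polygon
flood = flood.to_crs(heritage.crs)
at_risk = gpd.sjoin(heritage, flood, how="inner", predicate="within")

# Count affected sites per heritage category
print(at_risk.groupby("category").size().sort_values(ascending=False))
```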
Procedia PDF Downloads 339
1886 A Corpus Output Error Analysis of Chinese L2 Learners From America, Myanmar, and Singapore
Authors: Qiao-Yu Warren Cai
Abstract:
Due to the rise of big data, building corpora and using them to analyze Chinese L2 learners’ language output has become a trend. Various empirical research has been conducted using Chinese corpora built by different academic institutes. However, most of this research analyzed the data in the Chinese corpora using corpus-based qualitative content analysis with descriptive statistics. Descriptive statistics can summarize the subjects or samples that a study has actually measured, but the collected data cannot be generalized to the population. Comte, a French positivist, argued as early as the 19th century that human knowledge, whether in the humanities and social sciences or the natural sciences, should be verified in a scientific way in order to construct a universal theory that explains the truth and human behavior. Inferential statistics, which can judge the probability that a difference observed between groups is dependable rather than caused by chance (Free Geography Notes, 2015) and can infer from the subjects or samples how the population might think or behave, is just the right method to support Comte’s argument in the field of TCSOL. Inferential statistics is also at the core of quantitative research, yet little research has been conducted by combining corpora with inferential statistics. Little research analyzes the differences in Chinese L2 learners’ corpus output errors using one-way ANOVA, so the findings of previous research are limited to inferring the population’s Chinese errors from the given samples’ Chinese corpora. To fill this knowledge gap in the professional development of Taiwanese TCSOL, the present study uses one-way ANOVA to analyze the corpus output errors of Chinese L2 learners from America, Myanmar, and Singapore. The results show that no significant difference exists in ‘shì (是) sentence’ and word-order errors, but compared with the American and Singaporean learners, learners from Myanmar are significantly more likely to produce ‘sentence blends.’ Based on these results, the present study provides an instructional approach and contributes to further exploration of how Chinese L2 learners can develop and use learning strategies to reduce errors. Keywords: Chinese corpus, error analysis, one-way analysis of variance, Chinese L2 learners, Americans, Myanmar, Singaporeans
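A minimal sketch of the one-way ANOVA comparison described above follows; the per-learner error counts are invented illustrative values, not the study’s corpus data.

```python
# A minimal sketch of a one-way ANOVA across three learner groups.
# The 'sentence blend' error counts per learner are hypothetical values.
from scipy import stats

usa = [2, 1, 3, 0, 2, 1]
myanmar = [5, 4, 6, 5, 3, 7]
singapore = [1, 2, 1, 3, 0, 2]

f_stat, p_value = stats.f_oneway(usa, myanmar, singapore)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A p-value below 0.05 would indicate a significant between-group difference,
# which a post-hoc test (e.g., Tukey HSD) could then localize.
```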
Procedia PDF Downloads 107
1885 Application of Neutron Activation Analysis Technique for the Analysis of Soil Samples from Farmlands of Yebrage Hawariat, East Gojjam, Ethiopia
Authors: Yihunie Hibstie Asres, Manny Mathuthu
Abstract:
Farmers may not be conscious of their farmland’s nutrients, soil organic matter, water, and air because they are concerned only with labor availability and soil fertility losses. The composition and proportion of these components greatly influence soil physical properties, including texture, structure, and porosity (the fraction of pore space in a soil). The soil of this farmland must be able to supply adequate amounts of plant nutrients, in forms that can be absorbed by the crop, within its lifespan. Deficiencies or imbalances in the supply of any of the essential elements can compromise growth, affecting root development, cell division, crop quality, crop yield, and resistance to disease and drought. This study was conducted to fill this knowledge gap in order to develop economically vital and environmentally acceptable nutrient management strategies for the use of soils in agricultural lands. The objective of this study is to assess the elemental contents and concentrations of soil samples collected from the farmlands of ‘Yebrage’ using Neutron Activation Analysis (NAA), a technique that works regardless of oxidation state, chemical form, or physical location. NAA is used to determine the elemental composition and concentrations present in a soil. Macro/micronutrient and organic matter deficiencies have been verified in agricultural soils through the increased use of soil testing and plant analysis. The challenge for agriculture over the coming decades will be to meet the world’s increasing demand for food in a sustainable way. Current issues and future challenges indicate that, as long as agriculture remains a soil-based industry, major decreases in productivity are likely unless plants have an adequate and balanced supply of nutrients. Keywords: NAA, Yebrage, Chemoga, macro/micronutrient
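For illustration, the sketch below applies the relative (comparator) method commonly used in NAA to convert gamma-peak counts into an element concentration; the counts, masses, and standard concentration are hypothetical, and identical irradiation, decay, and counting conditions for sample and standard are assumed.

```python
# A minimal sketch of the NAA comparator method, assuming the sample and standard
# are co-irradiated and counted under identical conditions (no decay correction shown).
def concentration_comparator(counts_sample, mass_sample_g,
                             counts_standard, mass_standard_g,
                             conc_standard_ppm):
    """Element concentration in the sample from specific activities (counts per gram)."""
    specific_activity_sample = counts_sample / mass_sample_g
    specific_activity_standard = counts_standard / mass_standard_g
    return conc_standard_ppm * specific_activity_sample / specific_activity_standard

# Example with hypothetical gamma-peak net counts for one element
c = concentration_comparator(counts_sample=15400, mass_sample_g=0.250,
                             counts_standard=22100, mass_standard_g=0.100,
                             conc_standard_ppm=50.0)
print(f"Estimated concentration: {c:.1f} ppm")
```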
Procedia PDF Downloads 178
1884 Healing the Scars of the Past: The Great Challenge and Failed Attempt of European Union to Create a Supranational Identity
Authors: David Martínez Rico, Juan Pablo Farid Cuéllar Martínez
Abstract:
More than half a century after the first treaty of European cooperation was created, the final result of a long and difficult historical process, the current European Union, is facing economic and social challenges. The barriers of policy differences and national sovereignties have seemed to be falling in recent decades. However, the crisis of 2008 brought back problems such as xenophobia and nationalism. In the realm of identity, the European Union has made many efforts to reinforce a European identity and leave behind the radical nationalisms that generated the World Wars. Nevertheless, these social problems are increasing and becoming more present in the lives of many Europeans. Even in the most recent European Parliament elections, held in 2014, extreme-right parties in favor of xenophobic and anti-European ideals won more seats and are increasing their presence in the European Parliament. This essay approaches this controversial topic of European identity. Taking as a starting point the nationalist divisions that are causing internal divergences in Europe, the authors study the role and contribution of the memorials to the fallen soldiers and heroes of the World Wars, present in many cities such as Amsterdam, Brussels, and Paris, in the failure to reach a European identity, that is, a situation in which Europeans feel part of Europe before feeling part of a nation. The objective of this essay is to reaffirm the thesis that the European Union will not reach the longed-for supranational identity with the current strategies alone, because there are still many cultural elements in its member states’ societies that exalt the heroes and soldiers of past wars, increasing nationalist feelings. The essay also promotes some interesting ideas that could change the course of this quest for a European social identity. Keywords: identity, memorials, European identity, nationalism, proposals
Procedia PDF Downloads 425
1883 A Design Framework for an Open Market Platform of Enriched Card-Based Transactional Data for Big Data Analytics and Open Banking
Authors: Trevor Toy, Josef Langerman
Abstract:
Around a quarter of the world’s data is generated by the financial industry, with an estimated 708.5 billion global non-cash transactions reached between 2018 and …. With Open Banking still a rapidly developing concept within the financial industry, there is an opportunity to create a secure mechanism for connecting its stakeholders to openly, legitimately, and consensually share the data required to enable it. The integration and sharing of anonymised transactional data still operate in silos, centralised among the large corporate entities in the ecosystem that have the resources to do so. Smaller fintechs generating data and businesses looking to consume data are largely excluded from the process. There is therefore a growing demand for accessible transactional data, both for analytical purposes and to support the rapid global adoption of Open Banking. The following research provides a solution framework that aims to deliver a secure, decentralised marketplace for 1) data providers to list their transactional data, 2) data consumers to find and access that data, and 3) data subjects (the individuals making the transactions that generate the data) to manage and sell the data that relates to them. The platform also provides an integrated system for downstream transaction-related data from merchants, enriching the available data product to build a comprehensive view of a data subject’s spending habits. A robust and sustainable data market can be developed by providing a more accessible mechanism for data producers to monetise their data investments and by encouraging data subjects to share their data through the same financial incentives. At the centre of the platform is the market mechanism that connects the data providers and their data subjects to the data consumers. This core component is built on a decentralised blockchain contract with a market layer that manages the transaction, user, pricing, payment, tagging, contract, control, and lineage features pertaining to user interactions on the platform. One of the platform’s key features is enabling individuals to participate in, and manage, the personal data generated about them. The framework was demonstrated with a proof-of-concept on the Ethereum blockchain in which an individual can securely manage access to their own personal data and to their identifiable relationship with the card-based transaction data provided by financial institutions. This gives data consumers access to a complete view of transactional spending behaviour in correlation with key demographic information. This platform solution can ultimately support the growth, prosperity, and development of economies, businesses, communities, and individuals by providing accessible and relevant transactional data for big data analytics and Open Banking. Keywords: big data markets, open banking, blockchain, personal data management
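As a conceptual illustration only, the Python sketch below models the consent-gated listing and purchase flow described above in memory; it is not the authors’ Ethereum proof-of-concept, and all class, method, and account names are hypothetical.

```python
# A minimal in-memory sketch of a consent-gated data marketplace: a provider lists
# data, the data subject grants consent, and only then can a consumer purchase access.
from dataclasses import dataclass, field

@dataclass
class Listing:
    listing_id: str
    provider: str                 # financial institution listing the data
    subject: str                  # individual whose transactions generated the data
    price: float
    consented: bool = False
    buyers: list = field(default_factory=list)

class DataMarket:
    def __init__(self):
        self.listings = {}

    def list_data(self, listing_id, provider, subject, price):
        self.listings[listing_id] = Listing(listing_id, provider, subject, price)

    def grant_consent(self, listing_id, subject):
        lst = self.listings[listing_id]
        if lst.subject != subject:
            raise PermissionError("only the data subject can grant consent")
        lst.consented = True

    def purchase(self, listing_id, consumer):
        lst = self.listings[listing_id]
        if not lst.consented:
            raise PermissionError("data subject has not consented to sharing")
        lst.buyers.append(consumer)   # payment would be split between provider and subject
        return f"{consumer} granted access to {listing_id} at price {lst.price}"

# Example flow: provider lists, subject consents, consumer purchases access
market = DataMarket()
market.list_data("txn-2023-001", provider="BankA", subject="alice", price=10.0)
market.grant_consent("txn-2023-001", subject="alice")
print(market.purchase("txn-2023-001", consumer="fintechX"))
```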
Procedia PDF Downloads 75