Search results for: game making
1468 Improvement of the Q-System Using the Rock Engineering System: A Case Study of Water Conveyor Tunnel of Azad Dam
Authors: Sahand Golmohammadi, Sana Hosseini Shirazi
Abstract:
Because the status and mechanical parameters of discontinuities in the rock mass are included in the calculations, various methods of rock engineering classification are often used as a starting point for the design of different types of structures. The Q-system is one of the most frequently used methods for stability analysis and determination of support systems of underground structures in rock, including tunnels. This method requires six main parameters of the rock mass: the rock quality designation (RQD), joint set number (Jn), joint roughness number (Jr), joint alteration number (Ja), joint water parameter (Jw) and stress reduction factor (SRF). Achieving a reasonable and optimal design therefore depends on identifying the parameters that govern the stability of such structures, one of the most important goals in rock engineering. It is thus necessary to study the relationships between the parameters of a system, how they interact with each other and, ultimately, how they shape the whole system. This research attempts to determine the most effective (key) parameters among the six rock mass parameters of the Q-system, using the rock engineering system (RES) method, in order to improve the relationships between the parameters in the calculation of the Q value. The RES method determines the degree of cause and effect of a system's parameters by constructing an interaction matrix. In this research, geomechanical data collected from the water conveyor tunnel of Azad Dam were used to build the interaction matrix of the Q-system. For this purpose, instead of the conventional coding methods, which are prone to uncertainty, the Q-system interaction matrix was coded using a statistical analysis of the data, determining the correlation coefficients between the parameters.
In this way, the effect of each parameter on the system is evaluated with greater certainty. The results of this study show that the resulting interaction matrix provides a reasonable estimate of the effective parameters in the Q-system. Among the six parameters of the Q-system, the SRF and Jr parameters have the maximum and minimum impact on the system, respectively; the RQD and Jw parameters likewise have the maximum and minimum impact, respectively. Therefore, by developing this method, a more accurate relation for rock mass classification can be obtained by weighting the required parameters in the Q-system.

Keywords: Q-system, rock engineering system, statistical analysis, rock mass, tunnel
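For reference, the baseline Q value that the abstract's weighting scheme would refine is Barton's standard product of three quotients (block size, inter-block shear strength, and active stress). A minimal sketch of that standard calculation, with illustrative parameter values (not data from the Azad Dam study):

```python
def q_value(rqd, jn, jr, ja, jw, srf):
    """Barton Q-system rating: Q = (RQD/Jn) * (Jr/Ja) * (Jw/SRF).
    The three quotients represent relative block size, inter-block
    shear strength, and active stress, respectively."""
    return (rqd / jn) * (jr / ja) * (jw / srf)

# Hypothetical example: RQD 75%, three joint sets, rough planar joints,
# unaltered walls, dry conditions, medium stress
q = q_value(rqd=75, jn=9, jr=1.5, ja=1.0, jw=1.0, srf=2.5)
print(round(q, 3))  # 5.0 -> "fair" rock on Barton's scale
```

The RES-derived weighting proposed in the abstract would modify how each of the six inputs contributes, rather than this unweighted baseline form.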
Procedia PDF Downloads 73

1467 Intentions and Willingness of Marketing Professionals to Adopt Neuromarketing
Authors: Anka Gorgiev, Chris Martin, Nikolaos Dimitriadis, Dimitrios V. Nikolaidis
Abstract:
This paper is part of a doctoral research study aimed at identifying behavioral indicators for the existence of a new marketing paradigm. Neuromarketing is a growing trend in the marketing industry worldwide, and it is capturing considerable interest among academics and practitioners. However, it is still not clear how big an impact neuromarketing might have in the coming years. To get closer to an answer, this study investigates the behavioral intentions and willingness of marketing professionals to adopt neuromarketing and its practices, including academics, practitioners, students, researchers, experts and journal editors. The participants are marketing professionals at different levels of neuromarketing fluency residing in the United States of America and South East Europe. A total of 19 participants took part in the interviews, all of whom belong to more than one group of marketing professionals. The authors use a qualitative research approach and open-ended interview questions specifically developed to assess the ideas, beliefs and opinions that marketing professionals hold towards neuromarketing. In constructing the interview questions, the authors used the theory of planned behavior, the prototype willingness model and the technology acceptance model as a theoretical framework. Previous studies have not explicitly investigated the behavioral intentions of marketing professionals to engage in neuromarketing behavior, described here as a tendency to apply neuromarketing assumptions and tools in usual marketing practices. This study suggests that marketing professionals believe neuromarketing can contribute to business in a positive way, and it outlines the main advantages and disadvantages of adopting neuromarketing as identified by the participants.
In addition, the study reveals an emerging image of an exemplar company perceived to be using neuromarketing, including its most common characteristics and attributes. These findings are believed to be crucial in enabling the neuromarketing field to have a broader impact than it currently does, by recognizing and understanding the limitations such exemplars imply and how they affect the decision-making of marketing professionals.

Keywords: behavioral intentions, marketing paradigm, neuromarketing adoption, theory of planned behavior
Procedia PDF Downloads 172

1466 Wet Processing of Algae for Protein and Carbohydrate Recovery as Co-Product of Algal Oil
Authors: Sahil Kumar, Rajaram Ghadge, Ramesh Bhujade
Abstract:
Historically, lipid extraction from dried algal biomass has been a focus area of algal research. It has been realized over the past few years that the lipid-centric approach, and conversion technologies that require dry algal biomass, face several challenges. Algal culture in cultivation systems contains more than 99% water, with algal concentrations of just a few hundred milligrams per liter (< 0.05 wt%), which makes harvesting and drying energy intensive. Drying the algal biomass followed by extraction also entails the loss of water and nutrients. In view of these challenges, focus has shifted toward developing processes that enable oil production from wet algal biomass without drying. Hydrothermal liquefaction (HTL), an emerging technology, is a thermo-chemical conversion process that converts wet biomass to oil and gas using water as a solvent at high temperature and high pressure. HTL processes wet algal slurry containing more than 80% water and significantly reduces the adverse cost impact of drying the algal biomass. HTL is inherently feedstock agnostic: it can also convert carbohydrates and proteins to fuels, and it recovers water and nutrients. It is most effective with low-lipid (10--30%) algal biomass, for which bio-crude yield is two to four times higher than the lipid content of the feedstock. In the early 2010s, research remained focused on increasing the oil yield by optimizing the process conditions of HTL. However, various techno-economic studies showed that simply converting algal biomass to oil alone does not make economic sense, particularly in view of low crude oil prices. Making the best use of every component of algae is key to the economic viability of the algae-to-oil process. Investigation of HTL reactions at the molecular level shows that sequential HTL has the potential to recover value-added products along with biocrude and improve the overall economics of the process.
This potential makes sequential HTL a most promising technology for converting wet waste to wealth. In this presentation, we share our experience of the techno-economic and engineering aspects of sequential HTL for conversion of algal biomass to algal bio-oil and co-products.

Keywords: algae, biomass, lipid, protein
Procedia PDF Downloads 214

1465 Improving Subjective Bias Detection Using Bidirectional Encoder Representations from Transformers and Bidirectional Long Short-Term Memory
Authors: Ebipatei Victoria Tunyan, T. A. Cao, Cheol Young Ock
Abstract:
Detecting subjectively biased statements is a vital task. When this kind of bias is present in text or other information dissemination media such as news, social media, scientific texts, and encyclopedias, it can weaken trust in the information and stir conflicts amongst consumers. Subjective bias detection is also critical for many Natural Language Processing (NLP) tasks like sentiment analysis, opinion identification, and bias neutralization. A system that can adequately detect subjectivity in text would boost research in the above-mentioned areas significantly. It would also be useful for platforms like Wikipedia, where the use of neutral language is important. The goal of this work is to identify subjectively biased language in text at the sentence level. With machine learning, we can solve complex AI problems, making it a good fit for the problem of subjective bias detection. A key step in this approach is to train a classifier based on BERT (Bidirectional Encoder Representations from Transformers) as the upstream model. BERT by itself can be used as a classifier; however, in this study, we use BERT as a data preprocessor and embedding generator for a Bi-LSTM (Bidirectional Long Short-Term Memory) network incorporating an attention mechanism. This approach produces a deeper and better classifier. We evaluate the effectiveness of our model using the Wiki Neutrality Corpus (WNC), compiled from Wikipedia edits that removed various biased instances from sentences, as a benchmark dataset, against which we also compare our model to existing approaches. Experimental analysis indicates improved performance, as our model achieved state-of-the-art accuracy in detecting subjective bias. This study focuses on the English language, but the model can be fine-tuned to accommodate other languages.

Keywords: subjective bias detection, machine learning, BERT–BiLSTM–Attention, text classification, natural language processing
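As an illustration of the attention step described above (not the authors' exact implementation), the pooling that collapses per-token Bi-LSTM outputs into a single sentence vector for the classifier can be sketched in plain Python, with hypothetical toy vectors standing in for real hidden states:

```python
import math

def attention_pool(hidden_states, scores):
    """Softmax-weighted sum of per-token hidden states.

    hidden_states: list of equal-length vectors (e.g. Bi-LSTM outputs)
    scores: one unnormalized attention score per token
    Returns a single pooled vector for the downstream classifier.
    """
    m = max(scores)                                # stabilize the softmax
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(hidden_states[0])
    return [sum(w * h[d] for w, h in zip(weights, hidden_states))
            for d in range(dim)]

# Two tokens with 2-dim states; equal scores reduce to a plain average.
print(attention_pool([[1.0, 0.0], [3.0, 2.0]], [0.0, 0.0]))  # [2.0, 1.0]
```

In the full model, the scores themselves would be learned from the hidden states; here they are supplied directly to keep the pooling mechanics visible.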
Procedia PDF Downloads 130

1464 Experimental and Theoretical Characterization of Supramolecular Complexes between 7-(Diethylamino)Quinoline-2(1H)-One and Cucurbit[7]uril
Authors: Kevin A. Droguett, Edwin G. Pérez, Denis Fuentealba, Margarita E. Aliaga, Angélica M. Fierro
Abstract:
Supramolecular chemistry is a field of growing interest. Studying the formation of host-guest complexes between macrocycles and dyes is highly attractive due to their potential applications, for example in drug delivery, catalytic processes, and sensing. Among the many dyes of interest in the literature are the quinolinone derivatives. These molecules have good optical properties and chemical and thermal stability, making them suitable for developing fluorescent probes. Among the many macrocycles in the literature are the cucurbiturils, a water-soluble macromolecule family with a hydrophobic cavity and two identical carbonyl portals. Thermodynamic analysis of such supramolecular systems can help in understanding the affinity between host and guest, their interaction, and the main stabilization energy of the complex. In this work, two 7-(diethylamino)quinoline-2(1H)-one derivatives (QD1-2) and their interaction with cucurbit[7]uril (CB[7]) were studied from an experimental and in-silico point of view. Experimentally, the complexes showed a 1:1 stoichiometry by HRMS-ESI and isothermal titration calorimetry (ITC). Inclusion of the derivatives in the macrocycle leads to an increase in fluorescence intensity, and the pKa value of QD1-2 exhibits almost no variation after formation of the complex. The thermodynamics of the inclusion complexes was investigated using ITC; the results demonstrate a non-classical hydrophobic effect with a minimal contribution from the entropy term and binding constants on the order of 10⁶ for both ligands. Additionally, molecular dynamics studies were carried out over 300 ns in an explicit solvent under NPT conditions. Our findings show that the complex remains stable during the simulation (RMSD ~1 Å) and that hydrogen bonds contribute to the stabilization of the systems.
Finally, thermodynamic parameters from MMPBSA calculations were obtained to generate new computational insights for comparison with the experimental results.

Keywords: host-guest complexes, molecular dynamics, quinolin-2(1H)-one derivative dyes, thermodynamics
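A binding constant on the order of 10⁶, as reported from the ITC data, translates into a binding free energy via the standard relation ΔG = −RT ln K. A small sketch (temperature assumed to be 298 K; not a value taken from the paper):

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def binding_free_energy(k, t=298.0):
    """Gibbs free energy of binding, dG = -R*T*ln(K), returned in kJ/mol."""
    return -R * t * math.log(k) / 1000.0

dg = binding_free_energy(1e6)
print(round(dg, 1))  # -34.2 (kJ/mol) for K = 10^6 at 298 K
```

This back-of-the-envelope value simply contextualizes the magnitude of the reported binding constants; the ITC measurements additionally resolve the enthalpic and entropic contributions discussed in the abstract.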
Procedia PDF Downloads 92

1463 The Dressing Field Method of Gauge Symmetries Reduction: Presentation and Examples
Authors: Jeremy Attard, Jordan François, Serge Lazzarini, Thierry Masson
Abstract:
Gauge theories are the natural framework for describing fundamental interactions geometrically, using principal and associated fiber bundles as dynamical entities. The central notion of these theories is their local gauge symmetry, implemented by the local action of a Lie group H. There exist several methods for reducing the symmetry of a gauge theory, such as gauge fixing, the bundle reduction theorem, or the spontaneous symmetry breaking mechanism (SSBM). This paper presents another method of gauge symmetry reduction, distinct from those three. Given a symmetry group H acting on a fiber bundle and its naturally associated fields (Ehresmann or Cartan connection, curvature, matter fields, etc.), there sometimes exists a way to erase, in whole or in part, the H-action by reconfiguring these fields, i.e. by making a mere change of field variables so as to obtain new ('composite') fields on which H (in whole or in part) no longer acts. Two examples will be discussed: the re-interpretation of the BEHGHK (Higgs) mechanism, on the one hand, and the top-down construction of Tractor and Penrose's Twistor spaces and connections in the framework of conformal Cartan geometry, on the other. They have, of course, nothing to do with each other, but the dressing field method can be applied to both to gain new insight. In the first example, it turns out that the generation of masses in the Standard Model can be separated from the symmetry breaking, the latter being a mere change of field variables, i.e. a dressing. This offers an interpretation at odds with the one usually found in textbooks. In the second case, the dressing field method applied to conformal Cartan geometry offers a way of understanding the deep geometric nature of the so-called Tractors and Twistors.
The dressing field method, distinct from a gauge transformation (even if it can apparently take the same form), is a systematic way of finding and erasing artificial symmetries of a theory by a mere change of field variables that redistributes the degrees of freedom of the theory.

Keywords: BEHGHK (Higgs) mechanism, conformal gravity, gauge theory, spontaneous symmetry breaking, symmetry reduction, twistors and tractors
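Schematically, in notation standard in the gauge-theory literature (assumed here for illustration rather than drawn from the paper itself), a dressing field u maps the connection A and curvature F to composite fields of gauge-transformation-like form:

```latex
A^{u} = u^{-1} A\, u + u^{-1}\,\mathrm{d}u , \qquad F^{u} = u^{-1} F\, u
```

The crucial point, as the abstract stresses, is that u is not an element of the gauge group H, so despite appearances this is a change of field variables rather than a gauge transformation: the composite fields A^u and F^u are invariant under the (erased part of the) H-action.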
Procedia PDF Downloads 237

1462 Performance of AquaCrop Model for Simulating Maize Growth and Yield Under Varying Sowing Dates in Shire Area, North Ethiopia
Authors: Teklay Tesfay, Gebreyesus Brhane Tesfahunegn, Abadi Berhane, Selemawit Girmay
Abstract:
Adjusting the sowing date of a crop at a particular location under a changing climate is an essential management option for maximizing crop yield. However, determining the optimum sowing date for rainfed maize production through field experimentation requires trials repeated over many years under different weather conditions and crop management. To avoid such long-term experimentation, crop models such as AquaCrop are useful. The overall objective of this study was therefore to evaluate the performance of the AquaCrop model in simulating maize productivity under varying sowing dates. A field experiment was conducted for two consecutive cropping seasons, deploying four maize sowing dates in a randomized complete block design with three replications. The input data required to run the model are stored as climate, crop, soil, and management files in the AquaCrop database and adjusted through the user interface. Observed data from separate field experiments were used to calibrate and validate the model. Based on the calibrated parameters, the AquaCrop model was validated for its performance in simulating the green canopy and aboveground biomass of maize for the varying sowing dates. The results of the present study showed good agreement (an overall R2 =, Ef =, d =, RMSE =) between measured and simulated values of canopy cover and biomass yield. Considering the overall values of the statistical test indicators, the model predicted maize growth and biomass yield successfully, making it a valuable tool for decision-making. Hence, this calibrated and validated model is suggested for determining the optimum maize sowing date under climate and soil conditions similar to those of the study area, instead of conducting long-term experimentation.

Keywords: AquaCrop model, calibration, validation, simulation
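The goodness-of-fit indicators named in the abstract (R², Ef, d, RMSE) follow standard definitions; two of them, the root mean square error and the Nash–Sutcliffe efficiency Ef, can be sketched as follows, using hypothetical observed/simulated values rather than data from the study:

```python
import math

def rmse(obs, sim):
    """Root mean square error between observed and simulated values."""
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe model efficiency Ef: 1 is a perfect fit,
    0 means the model predicts no better than the observed mean."""
    mean_o = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    sst = sum((o - mean_o) ** 2 for o in obs)
    return 1.0 - sse / sst

obs = [1.0, 2.0, 3.0, 4.0]   # hypothetical measured canopy cover
sim = [1.0, 2.0, 3.0, 6.0]   # hypothetical simulated canopy cover
print(rmse(obs, sim))                      # 1.0
print(round(nash_sutcliffe(obs, sim), 6))  # 0.2
```

Willmott's index of agreement d is computed analogously from the same residuals, with a different normalizing denominator.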
Procedia PDF Downloads 71

1461 Field Management Solutions Supporting Foreman Executive Tasks
Authors: Maroua Sbiti, Karim Beddiar, Djaoued Beladjine, Romuald Perrault
Abstract:
Productivity in construction is decreasing compared to the manufacturing industry. The sector appears to suffer from organizational problems and has low maturity regarding technological advances. High international competition due to the growing context of globalization, complex projects, and shorter deadlines increases these challenges. Field employees are more exposed to coordination problems than design officers, so collaboration during execution is a major issue that can threaten the cost, time, and quality completion of a project. This paper first identifies field professionals' requirements in order to address weaknesses in the building management process, such as unreliable scheduling, fickle monitoring and inspection processes, inaccurate project indicators, inconsistent building documents, and haphazard logistics management. Subsequently, we focus on providing solutions to improve the scheduling, inspection, and hours-tracking processes using emerging lean tools and field mobility applications that bring new perspectives on cooperation. These have shown a great ability to connect the various field teams and to make information visual and accessible, enabling accurate planning and eliminating potential defects at the source. In addition to software-as-a-service use, adopting the human resource module of an Enterprise Resource Planning system can allow meticulous time accounting and thus faster decision-making. The next step is to integrate external data sources received from, or destined for, design engineers, logisticians, and suppliers into a holistic system. Creating a monolithic system that consolidates the planning, quality, procurement, and resource management modules should be our ultimate target for building the construction industry supply chain.

Keywords: lean, last planner system, field mobility applications, construction productivity
Procedia PDF Downloads 115

1460 Teaching Method for a Classroom of Students at Different Language Proficiency Levels: Content and Language Integrated Learning in a Japanese Culture Classroom
Authors: Yukiko Fujiwara
Abstract:
As a language learning methodology, Content and Language Integrated Learning (CLIL) has become increasingly prevalent in Japan. Most CLIL classroom practice and research are conducted in EFL settings; much less research has been done in Japanese language learning settings. There are therefore still many issues to work out in using CLIL in the Japanese language teaching (JLT) setting, and more research, both authentic and academic, is to be expected. Under these circumstances, this study is one of the few classroom-based CLIL research experiments in JLT, and it aims to find an effective course design for a class with students at different proficiency levels. The class, called 'Japanese culture A', was offered as an elective for international exchange students at a Japanese university. The Japanese proficiency level of the class was above Japanese Language Proficiency Test level N3. Since the CLIL approach places importance on 'authenticity', the class was designed around authentic materials and activities, such as books, magazines, a film, a TV show, and a field trip to Kyoto. On the field trip, students experienced making traditional Japanese desserts under the direct guidance of a Japanese artisan. Throughout the course, designated task sheets were used so the teacher could get feedback from each student and grasp the proficiency gap in the class. After reading an article on Japanese culture, students were asked to write down the words they did not understand and what they thought they needed to learn. This helped both students and teachers to set learning goals and work towards them together. Using questionnaires and interviews with students, this research examined whether the attempt was effective. Essays written in class were also analyzed. The results from the students were positive: they were motivated by learning authentic, natural Japanese, and they thrived on setting their own personal goals.
Some students were motivated to learn Japanese by studying the language, and others by studying the cultural context. Most of them said they learned better this way, by setting their own Japanese language and culture goals. These results will provide teachers with new insight for designing class materials and activities that support students in a multilevel CLIL class.

Keywords: authenticity, CLIL, Japanese language and culture, multilevel class
Procedia PDF Downloads 252

1459 Fault Tolerant and Testable Designs of Reversible Sequential Building Blocks
Authors: Vishal Pareek, Shubham Gupta, Sushil Chandra Jain
Abstract:
With increasing demand for high-speed computation, power consumption, heat dissipation, and chip size pose challenges for logic design with conventional technologies. Recovery from bit loss and bit errors is another issue, requiring reversibility and fault tolerance in computation. Reversible computing is emerging as an alternative to conventional technologies to overcome these problems and is helpful in diverse areas such as low-power design, nanotechnology, and quantum computing. The bit-loss issue can be solved through a unique input-output mapping, which requires reversibility, while the bit-error issue requires fault tolerance in the design. To provide reversibility, a number of combinational reversible logic circuits have been developed; however, very few sequential reversible circuits have been reported in the literature. To make circuits fault tolerant, a number of fault models and test approaches have been proposed for reversible logic. In this paper, we incorporate fault tolerance into sequential reversible building blocks such as the D flip-flop, T flip-flop, JK flip-flop, R-S flip-flop, master-slave D flip-flop, and double edge-triggered D flip-flop by making them parity preserving. The importance of this work lies in the fact that it provides designs of reversible sequential circuits that are completely testable for any stuck-at fault and single-bit fault. In our opinion, our designs of reversible building blocks are superior to existing designs in terms of quantum cost, hardware complexity, constant inputs, garbage outputs, and number of gates, and a design of an online-testable D flip-flop is proposed for the first time. We hope our work can be extended to building complex reversible sequential circuits.

Keywords: parity preserving gate, quantum computing, fault tolerance, flip-flop, sequential reversible logic
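The parity-preserving property underlying these designs can be illustrated, independently of the specific flip-flops proposed in the paper, with the classic Fredkin gate: a controlled swap that is reversible (its own inverse) and conservative, so the number of 1s at the outputs always equals the number at the inputs:

```python
def fredkin(c, a, b):
    """Fredkin (controlled-swap) gate: if control c == 1, swap a and b.
    Reversible (self-inverse) and conservative, so input parity
    (the count of 1s) is always preserved at the outputs."""
    return (c, b, a) if c else (c, a, b)

for bits in [(0, 0, 1), (1, 0, 1), (1, 1, 0)]:
    out = fredkin(*bits)
    assert fredkin(*out) == bits   # applying the gate twice restores the input
    assert sum(out) == sum(bits)   # the 1-count (parity) is preserved
print(fredkin(1, 0, 1))  # (1, 1, 0)
```

Because parity is conserved, any single bit fault at an internal line shows up as a parity mismatch between inputs and outputs, which is the testability property the abstract exploits.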
Procedia PDF Downloads 545

1458 The Red Persian Carpet: Iran as Semi-Periphery in China's Belt and Road Initiative-Bound World-System
Authors: Toufic Sarieddine
Abstract:
As the Belt and Road Initiative (henceforth, BRI) enters its ninth year, Iran and China are forging stronger ties on the economic and military fronts, a development that has not only caused alarm in Washington but also risks straining China's relationships with the oil-rich Gulf monarchies. World-systems theory has been used to examine the impact of the BRI on the current world order, with scholarship split on China's capacity to emerge as a hegemon contending with the US or even usurping it. This paper argues for the emergence of a new China-centered world-system comprising the states, areas, and processes participating in the BRI and overlapping with the global world-system under (shaky) US hegemony. This world-system centers on China as core and hegemon via economic domination, capable new institutions (the Shanghai Cooperation Organisation), legal modi operandi, the common goal of infrastructure development to rally support among developing states, and other indicators of hegemony outlined in world-systems theory. In this regard, while states like Pakistan could become peripheries to China in the BRI-bound world-system via large-scale projects such as the China-Pakistan Economic Corridor, Iran has greater capacities and influence in the Middle East, making it superior to a periphery. This paper thus argues that the increasing proximity between Iran and China sees the former becoming a semi-periphery with respect to China within the BRI-bound world-system, economically dependent on its new core and hegemon while simultaneously wielding political and military influence over weaker states such as Iraq, Lebanon, Yemen, and Syria.
The indicators of peripheralization, as well as the characteristics of a semi-periphery outlined in world-systems theory, are used to examine the current economic, political, and military dimensions of Iran and China's growing relationship, as well as the trajectory of these dimensions as part of the BRI-bound world-system.

Keywords: belt and road initiative, China, China-Middle East relations, Iran, world-systems analysis
Procedia PDF Downloads 155

1457 Post-Harvest Losses and Food Security in Northeast Nigeria: What Are the Key Challenges and Concrete Solutions?
Authors: Adebola Adedugbe
Abstract:
The challenge of post-harvest losses poses a serious threat to food security in Nigeria, and especially its north-eastern part, with the country losing about $9 billion annually to postharvest losses in the sector. Post-harvest loss (PHL) is the quantitative and qualitative loss of food in various post-harvest operations. In Nigeria, post-harvest losses have been a major challenge to food security and improved farmers' incomes. In 2022, the Nigerian government said over 30 percent of food produced by Nigerian farmers perishes post-harvest. For many in northeast Nigeria, agriculture is the predominant source of livelihood and income. Persistent communal conflicts, floods, and the decade-old attacks by Boko Haram and insurgency in the region have drastically disrupted farming activities, with farmlands becoming insecure and inaccessible as communities are forced to abandon ancestral homes. The impact of climate change is also affecting agricultural and fishing activities, leading to shortages of food supplies, acute hunger, and loss of livelihoods. This continues to weigh negatively on the region's and country's food production and availability, costing billions of US dollars in income annually in this sector. The root causes of postharvest losses in crops, livestock, and fisheries include, among others, a lack of modern post-harvest equipment, chemicals, and technologies for combating losses. The 2019 Global Hunger Index showed Nigeria's situation progressing from a 'serious to alarming level'. As part of measures to address post-harvest losses experienced by farmers, the federal government of Nigeria concessioned 17 silos with 6000 metric tonnes of storage space to the private sector to give farmers access to storage facilities. This paper discusses the causes and effects of post-harvest losses, and solutions for handling them to optimize returns on food security in northeast Nigeria.

Keywords: farmers, food security, northeast Nigeria, postharvest loss
Procedia PDF Downloads 72

1456 Statecraft: Building a Hindu Nationalist Intellectual Ecosystem in India
Authors: Anuradha Sajjanhar
Abstract:
The rise of authoritarian populist regimes has been accompanied by hardened nationalism and heightened divisions between 'us' and 'them'. Political actors reinforce these sentiments through coercion, but also by inciting fear about imagined threats and by transforming public discourse about policy concerns. Extremist ideas can penetrate national policy as newly appointed intellectuals and 'experts' in knowledge-producing institutions, such as government committees, universities, and think tanks, succeed in transforming public discourse. While attacking left and liberal academics, universities, and the press, the current Indian government is building new institutions to lend authority to its particularly rigid nationalist discourse. This paper examines the building of a Hindu-nationalist intellectual ecosystem in India, interrogating the key role of hyper-nationalist think tanks. While some are explicit about their political and ideological leanings, others claim neutrality and pursue their agenda through coded technocratic language and resonant historical narratives. Their key function is to change thinking by normalizing it. Six years before winning the election in 2014, India's Hindu-nationalist party, the BJP, put together its own network of elite policy experts. In a national newspaper, the vice-president of the BJP described this as an intentional shift: from 'being action-oriented to solidifying its ideological underpinnings in a policy framework'. When the BJP came to power in 2014, 'experts' from these think tanks filled key positions in the central government. The BJP has since been circulating dominant ideas of Hindu supremacy through regional parties, grassroots political organisations, and civil society organisations. These think tanks have the authority to articulate and legitimate Hindu nationalism within a credible technocratic policy framework.
This paper is based on ethnography and over 50 interviews in New Delhi, before and after the BJP's staggering election victory in 2019. It outlines the party's attempt to take over existing institutions while developing its own cadre of nationalist policy-making professionals.

Keywords: ideology, politics, South Asia, technocracy
Procedia PDF Downloads 120

1455 Design and Synthesis of Fully Benzoxazine-Based Porous Organic Polymer Through Sonogashira Coupling Reaction for CO₂ Capture and Energy Storage Application
Authors: Mohsin Ejaz, Shiao-Wei Kuo
Abstract:
The growing production and exploitation of fossil fuels have placed human society in serious environmental difficulty. As a result, it is critical to design efficient and eco-friendly energy production and storage techniques. Porous organic polymers (POPs) are multi-dimensional porous network materials developed through the formation of covalent bonds between different organic building blocks that possess distinct geometries and topologies. POPs have tunable porosities and high surface areas, making them good candidates for effective electrode materials in energy storage applications. Herein, we prepared a fully benzoxazine-based porous organic polymer (TPA–DHTP–BZ POP) through Sonogashira coupling of dihydroxyterephthalaldehyde (DHTP)- and triphenylamine (TPA)-containing benzoxazine (BZ) monomers. First, both BZ monomers (TPA-BZ-Br and DHTP-BZ-Ea) were synthesized in three steps: Schiff base formation, reduction, and Mannich condensation. Finally, the TPA–DHTP–BZ POP was prepared through the Sonogashira coupling reaction of the brominated monomer (TPA-BZ-Br) and the ethynyl monomer (DHTP-BZ-Ea). Fourier transform infrared (FTIR) and solid-state nuclear magnetic resonance (NMR) spectroscopy confirmed the successful synthesis of the monomers as well as the POP. The porosity of TPA–DHTP–BZ POP was investigated by the N₂ absorption technique, which showed a Brunauer–Emmett–Teller (BET) surface area of 196 m² g−¹, a pore size of 2.13 nm, and a pore volume of 0.54 cm³ g−¹. The TPA–DHTP–BZ POP underwent thermal ring-opening polymerization, resulting in poly(TPA–DHTP–BZ) POP having strong inter- and intramolecular hydrogen bonds formed by phenolic groups and Mannich bridges, thereby enhancing CO₂ capture and supercapacitive performance. The poly(TPA–DHTP–BZ) POP demonstrated a remarkable CO₂ uptake of 3.28 mmol g−¹ and a specific capacitance of 67 F g−¹ at 0.5 A g−¹.
Thus, poly(TPA–DHTP–BZ) POP could potentially be used for energy storage and CO₂ capture applications.
Keywords: porous organic polymer, benzoxazine, Sonogashira coupling, CO₂, supercapacitor
Procedia PDF Downloads 73
1454 Translanguaging as a Decolonial Move in South African Bilingual Classrooms
Authors: Malephole Philomena Sefotho
Abstract:
Nowadays, the majority of people worldwide are bilingual rather than monolingual due to the surge of globalisation and mobility. Consequently, bilingual education is a topical issue among researchers. Several studies have highlighted the importance of incorporating learners’ linguistic repertoires in multilingual classrooms and of moving away from the colonial approach of a monolingual bias: one language at a time. Researchers have pointed out that a systematic approach involving the concurrent use of languages, rather than a separation of languages, must be implemented in bilingual classroom settings. Translanguaging emerged as such a systematic approach; it assists learners in making meaning of their world by allowing them to utilize all their linguistic resources in the classroom. The South African language policy also makes room for the use of diverse languages in bi/multilingual classrooms. This study therefore sought to explore how teachers apply translanguaging in bilingual classrooms to incorporate learners’ linguistic repertoires. It further establishes teachers’ perspectives on the use of more than one language in teaching and learning. The participants were language teachers at bilingual primary schools in Johannesburg, South Africa. Semi-structured interviews were conducted to establish their perceptions of the concurrent use of languages, and a qualitative research design was followed in analysing the data. The findings showed that teachers were reluctant to allow translanguaging in their classrooms even though they realise its importance. Not allowing bilingual learners to use their linguistic repertoires has resulted in learners’ negative attitudes towards their languages and has contributed to the loss of their identity.
This article thus recommends a drastic change to decolonised approaches to teaching and learning in multilingual settings, with translanguaging as a decolonial move in which learners are allowed to translanguage freely in their classrooms for better comprehension and meaning-making of concepts and related ideas. It further proposes that continuous conversations be encouraged to bring imminent cultural and linguistic genocide to a halt.
Keywords: bilingualism, decolonisation, linguistic repertoires, translanguaging
Procedia PDF Downloads 179
1453 Risk Assessment on Construction Management with “Fuzzy Logic”
Authors: Mehrdad Abkenari, Orod Zarrinkafsh, Mohsen Ramezan Shirazi
Abstract:
Construction projects begin in complicated, dynamic environments and, due to the close relationships between project parameters and an unknown outer environment, they face numerous uncertainties and risks. Success in time, cost, and quality in large-scale construction projects is uncertain as a consequence of technological constraints, the large number of stakeholders, long durations, great capital requirements, and poor definition of the extent and scope of the project. Projects facing such environments and uncertainties can be well managed by applying the concept of risk management throughout the project’s life cycle. Although the concept of risk depends on the opinion and judgment of management, it also covers the risks of not achieving the project objectives. Furthermore, project risk analysis considers the risk of developing inappropriate responses. Since the evaluation and prioritization of construction projects is a difficult task, the network structure is considered an appropriate approach for analyzing complex systems; therefore, we have used this structure for analyzing and modeling the issue. On the other hand, we face inadequate data in deterministic circumstances, and expert opinions are usually mathematically vague, expressed as linguistic variables rather than numerical values. Because fuzzy logic is used to express vagueness and uncertainty, formulating expert opinion in the form of fuzzy numbers is an appropriate approach. In other words, the evaluation and prioritization of construction projects on the basis of risk factors in the real world is a complicated issue with many ambiguous qualitative characteristics.
In this study, the risk parameters and factors in construction management were evaluated and prioritized with a fuzzy logic method combining three techniques: DEMATEL (Decision-Making Trial and Evaluation Laboratory), ANP (Analytic Network Process), and TOPSIS (Technique for Order Preference by Similarity to Ideal Solution).
Keywords: fuzzy logic, risk, prioritization, assessment
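The TOPSIS step of the hybrid method above can be sketched in code. This is a minimal illustration of the standard TOPSIS procedure, not the authors' implementation; the decision matrix, weights, and benefit/cost labels in any call are hypothetical placeholders:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.

    matrix:  alternatives x criteria score matrix
    weights: criterion weights (should sum to 1)
    benefit: True for benefit criteria, False for cost criteria
    Returns closeness-to-ideal scores in [0, 1]; higher is better.
    """
    M = np.asarray(matrix, dtype=float)
    # Vector-normalize each criterion column, then apply the weights
    V = (M / np.linalg.norm(M, axis=0)) * np.asarray(weights)
    # Ideal best/worst depend on whether a criterion is benefit or cost
    best = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_best = np.linalg.norm(V - best, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    # Relative closeness to the ideal solution
    return d_worst / (d_best + d_worst)
```

In the fuzzy variant the study describes, the crisp scores would be replaced by defuzzified fuzzy numbers and the weights would come from the DEMATEL/ANP stages.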
Procedia PDF Downloads 594
1452 CICAP: Promising Wound Healing Gel from Bee Products and Medicinal Plants
Authors: Laïd Boukraâ
Abstract:
Complementary and alternative medicine is an inclusive term for treatments, therapies, and modalities that are not accepted as components of mainstream education or practice but that are performed on patients by some practitioners. While these treatments and therapies often form part of post-graduate education, study, and writing, they are generally viewed as alternatives or complements to more universally accepted treatments. Ancient civilizations used bee products and medicinal plants, but modern civilization and 'education' have seriously lessened our natural instinctive ability and capability. Although the modern Western establishment tends to relegate apitherapy and aromatherapy to the status of 'folklore' or 'old wives' tales', these remedies contain a vast range of pharmacologically active ingredients, each with its own unique combination and properties, and they are classified in modern herbal medicine according to their spheres of action. Bee products and medicinal plants are natural products well known for their healing properties and have recently become increasingly popular in wound healing. Honey not only has antibacterial properties but also chemical properties that may further help the wound healing process. A formulation with honey as its main component was produced as a honey gel. This new formulation has an enhanced texture and is more user-friendly, and it is superior to other formulations because it consists entirely of natural products. In vitro assays, an animal model study, and clinical trials have shown the effectiveness of LEADERMAX for the treatment of diabetic foot, burns, leg ulcers, and bed sores. This one hundred percent natural product could be the best alternative to conventional products for wound and burn management.
The advantages of the formulation are that it is 100% natural, affordable, and easy to use, with strong absorptive power; it forms a film that keeps the wound surface dry and will not stick to the wound bed; and it helps relieve wound pain, inflammation, edema, and bruising while improving comfort.
Keywords: bed sores, bee products, burns, diabetic foot, medicinal plants, leg ulcer, wounds
Procedia PDF Downloads 337
1451 Brain-Computer Interfaces That Use Electroencephalography
Authors: Arda Ozkurt, Ozlem Bozkurt
Abstract:
Brain-computer interfaces (BCIs) are devices that output commands by interpreting data collected from the brain. Electroencephalography (EEG) is a non-invasive method of measuring the brain's electrical activity. Since its invention by Hans Berger in 1929, it has led to many neurological discoveries and has become one of the essential non-invasive measuring methods. Despite its low spatial resolution (it can only detect when a group of neurons fires at the same time), it is non-invasive, making it easy to use without posing any risk. In EEG, electrodes are placed on the scalp, and the voltage difference between a minimum of two electrodes is recorded and then used to accomplish the intended task. EEG recordings include, but are not limited to, the currents along dendrites from synapses to the soma, the action potentials along the axons connecting neurons, and the currents through the synaptic clefts connecting axons with dendrites. However, because EEG is non-invasive, some sources of noise may affect the reliability of the signals. For instance, noise from the EEG equipment and leads, and signals originating from the subject (such as heart activity or muscle movements), affect the signals detected by the EEG electrodes. New techniques have been developed to differentiate between this noise and the intended signals. Furthermore, an EEG device alone is not enough to analyze brain data for use in a BCI. Because the EEG signal is very complex, artificial intelligence algorithms are required to analyze it. These algorithms convert complex data into meaningful and useful information that neuroscientists can use to design BCI devices.
Even though invasive BCIs are needed for neurological conditions requiring highly precise data, non-invasive BCIs, such as EEG-based systems, are used in many cases to help disabled people or to make everyday life easier by assisting with basic tasks. For example, EEG can detect an impending seizure in epilepsy patients, allowing a BCI device to help prevent it. Overall, EEG is a commonly used non-invasive BCI technique that has helped develop BCIs and will continue to be used to collect data that eases people's lives as more BCI techniques are developed in the future.
Keywords: BCI, EEG, non-invasive, spatial resolution
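The abstract notes that EEG analysis must separate intended brain signals from noise such as mains interference. Below is a minimal sketch of one common preprocessing step: a frequency-domain band-pass that isolates the 8–13 Hz alpha band from a simulated noisy trace. The signal parameters are illustrative, not from any real recording:

```python
import numpy as np

def bandpass_fft(signal, fs, low, high):
    """Naive FFT band-pass: zero out spectral bins outside [low, high] Hz."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[(freqs < low) | (freqs > high)] = 0
    return np.fft.irfft(spectrum, n=len(signal))

# Simulated EEG: a 10 Hz alpha rhythm buried under 50 Hz mains noise
fs = 250                       # sampling rate, Hz
t = np.arange(0, 2, 1 / fs)    # 2 seconds of signal
eeg = np.sin(2 * np.pi * 10 * t) + 2 * np.sin(2 * np.pi * 50 * t)
alpha = bandpass_fft(eeg, fs, 8, 13)  # recovers the 10 Hz component
```

Real pipelines typically use proper filter designs (e.g., Butterworth filters) and artifact-rejection methods rather than hard spectral truncation, but the principle of separating frequency bands is the same.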
Procedia PDF Downloads 71
1450 Building a Parametric Link between Mapping and Planning: A Sunlight-Adaptive Urban Green System Plan Formation Process
Authors: Chenhao Zhu
Abstract:
Quantitative mapping is playing a growing role in guiding urban planning, for example using a heat map created by CFX, CFD2000, or Envi-met to adjust the master plan. However, there is no effective quantitative link between such mappings and plan formation, so in many cases decision-making still rests on the planner's subjective interpretation and understanding of the mappings, which limits the gains in rigor and accuracy that quantitative mapping could bring. Therefore, this paper presents a methodology for building a parametric link between mapping and plan formation. A parametric planning process based on radiant mapping is proposed for creating an urban green system. In the first step, a script written in Grasshopper builds a road network and forms the blocks, while the Ladybug plug-in conducts a radiant analysis in the form of a mapping. The research then transforms the radiant mapping from a polygon into a data-point matrix, because a polygon is hard to engage in design formation. Next, another script selects the main green spaces from the road network based on the criterion of radiant intensity and connects the green spaces' central points to generate a green corridor. After that, a control parameter is introduced to adjust the corridor's form based on radiant intensity. Finally, a green system containing green space and a green corridor is generated under the quantitative control of the data matrix. The designer only needs to modify the control parameter according to the relevant research results and actual conditions to optimize the green system. This method can also be applied to many other mapping-based analyses, such as wind environment analysis, thermal environment analysis, and even environmental sensitivity analysis.
The parametric link between mapping and planning will make planning more accurate, objective, and scientific.
Keywords: parametric link, mapping, urban green system, radiant intensity, planning strategy, Grasshopper
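The selection step described above, thresholding the radiant data-point matrix and chaining the selected centres into a corridor, can be sketched as follows. This is an illustrative simplification, not the Grasshopper/Ladybug scripts themselves; the grid values and threshold are hypothetical:

```python
import numpy as np

def select_green_nodes(radiant, threshold):
    """Turn a radiant-intensity raster into candidate green-space points.

    Returns (row, col) coordinates of cells whose intensity meets or
    exceeds the threshold, i.e., the data-point matrix filtered by the
    radiant-intensity criterion."""
    rows, cols = np.where(radiant >= threshold)
    return list(zip(rows.tolist(), cols.tolist()))

def corridor_length(nodes):
    """Chain the selected points in order and sum Euclidean segment
    lengths, a stand-in for generating the green corridor's centerline."""
    total = 0.0
    for (r1, c1), (r2, c2) in zip(nodes, nodes[1:]):
        total += ((r1 - r2) ** 2 + (c1 - c2) ** 2) ** 0.5
    return total
```

In the actual workflow, the threshold plays the role of the control parameter: raising or lowering it changes which cells join the green system and therefore reshapes the corridor.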
Procedia PDF Downloads 142
1449 A Posterior Predictive Model-Based Control Chart for Monitoring Healthcare
Authors: Yi-Fan Lin, Peter P. Howley, Frank A. Tuyl
Abstract:
Quality measurement and reporting systems are used in healthcare internationally. In Australia, the Australian Council on Healthcare Standards records and reports hundreds of clinical indicators (CIs) nationally across the healthcare system. These CIs are measures of performance in the clinical setting, and are used as a screening tool to help assess whether a standard of care is being met. Existing analysis and reporting of these CIs incorporate Bayesian methods to address sampling variation; however, such assessments are retrospective in nature, reporting upon the previous six or twelve months of data. The use of Bayesian methods within statistical process control for monitoring systems is an important pursuit to support more timely decision-making. Our research has developed and assessed a new graphical monitoring tool, similar to a control chart, based on the beta-binomial posterior predictive (BBPP) distribution to facilitate the real-time assessment of health care organizational performance via CIs. The BBPP charts have been compared with the traditional Bernoulli CUSUM (BC) chart by simulation. The more traditional “central” and “highest posterior density” (HPD) interval approaches were each considered to define the limits, and the multiple charts were compared via in-control and out-of-control average run lengths (ARLs), assuming that the parameter representing the underlying CI rate (proportion of cases with an event of interest) required estimation. Preliminary results have identified that the BBPP chart with HPD-based control limits provides better out-of-control run length performance than the central interval-based and BC charts. 
Further, the BC chart’s performance may be improved by using Bayesian parameter estimation of the underlying CI rate.
Keywords: average run length (ARL), Bernoulli CUSUM (BC) chart, beta-binomial posterior predictive (BBPP) distribution, clinical indicator (CI), healthcare organization (HCO), highest posterior density (HPD) interval
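A minimal sketch of how a beta-binomial posterior predictive distribution and HPD-style control limits might be computed, assuming a Beta(a, b) posterior on the CI rate. The parameter values in any call are illustrative, not taken from the study:

```python
from math import comb, lgamma, exp

def log_beta(a, b):
    """log of the Beta function via log-gamma, for numerical stability."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def bbpp_pmf(y, n, a, b):
    """Beta-binomial posterior predictive probability of y events among
    n future cases, given a Beta(a, b) posterior on the CI rate."""
    return comb(n, y) * exp(log_beta(y + a, n - y + b) - log_beta(a, b))

def hpd_limits(n, a, b, coverage=0.95):
    """HPD-style control limits: keep the most probable outcomes until
    the requested coverage is reached, then report the min/max kept."""
    probs = sorted(((bbpp_pmf(y, n, a, b), y) for y in range(n + 1)),
                   reverse=True)
    kept, total = [], 0.0
    for p, y in probs:
        kept.append(y)
        total += p
        if total >= coverage:
            break
    return min(kept), max(kept)
```

A monitored count falling outside the returned limits would signal a possible shift in the underlying CI rate, which is the role the BBPP chart plays in the study.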
Procedia PDF Downloads 201
1448 AI for Efficient Geothermal Exploration and Utilization
Authors: Velimir Monty Vesselinov, Trais Kliplhuis, Hope Jasperson
Abstract:
Artificial intelligence (AI) is a powerful tool in the geothermal energy sector, aiding in both exploration and utilization. Identifying promising geothermal sites can be challenging due to limited surface indicators and the need for expensive drilling to confirm subsurface resources. Geothermal reservoirs can lie deep underground and exhibit complex geological structures, making traditional exploration methods time-consuming and imprecise. AI algorithms can analyze vast datasets of geological, geophysical, and remote sensing data, including satellite imagery, seismic surveys, geochemistry, and geology. Machine learning algorithms can identify subtle patterns and relationships within these data, potentially revealing hidden geothermal potential in areas previously overlooked. To address these challenges, a SIML (science-informed machine learning) technology has been developed. SIML methods differ from traditional ML techniques. In both cases, the ML models are trained to predict the spatial distribution of an output (e.g., pressure, temperature, heat flux) based on a series of inputs (e.g., permeability, porosity, etc.). Traditional ML relies on deep and wide neural networks (NNs) based on simple algebraic mappings to represent complex processes. In contrast, the SIML neurons incorporate complex mappings (including constitutive relationships and physics/chemistry models). This results in ML models that have a physical meaning and satisfy physics laws and constraints. The prototype of the developed software, called GeoTGO, is accessible through the cloud. Our software prototype demonstrates how different data sources can be made available for processing, executes demonstrative SIML analyses, and presents the results in table and graphic form.
Keywords: science-informed machine learning, artificial intelligence, exploration, utilization, hidden geothermal
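The contrast between purely data-driven ML and science-informed ML can be illustrated with a toy loss function that penalizes both data misfit and the residual of a physics law. This is a generic physics-informed sketch, not the GeoTGO implementation; the 1-D steady heat-conduction constraint and all values are illustrative assumptions:

```python
import numpy as np

def physics_informed_loss(predict, x_data, y_data, x_colloc, dx=1e-3):
    """Combined loss for a science-informed model.

    data term:    mean squared misfit against observations
    physics term: residual of an assumed law (here, steady 1-D heat
                  conduction, d2T/dx2 = 0, chosen purely for illustration),
                  evaluated by finite differences at collocation points.
    """
    data_loss = np.mean((predict(x_data) - y_data) ** 2)
    # Central finite-difference second derivative at collocation points
    d2 = (predict(x_colloc + dx) - 2 * predict(x_colloc)
          + predict(x_colloc - dx)) / dx ** 2
    physics_loss = np.mean(d2 ** 2)
    return data_loss + physics_loss
```

A model that fits the data but violates the assumed physics is penalized through the second term; this is the general mechanism by which SIML-style training keeps predictions consistent with physical laws.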
Procedia PDF Downloads 53
1447 Hybrid Reusable Launch Vehicle for Space Application: A Naval Approach
Authors: Rajasekar Elangopandian, Anand Shanmugam
Abstract:
To reduce the cost of launching satellites and payloads to orbit, this project envisages a combination of technologies. The first element of this innovation is the flight mission profile, which describes how the mission is conducted. The conventional technique of magnetic levitation provides the initial thrust, and as the name 'reusable launch vehicle' states, the vehicle is designed for reuse. The flight system consists of a miniature rocket, which produces the required thrust, and two JATO (jet-assisted takeoff) boosters, which give the vehicle its initial boost. The vehicle resembles an airplane in design and sits on a superconducting rail track. When a high-power electric current is applied to the rail track, the vehicle starts floating, following the principle of magnetic levitation. When the vehicle reaches the specified takeoff distance, the two boosters ignite, delivering 48 kN of thrust each. The vehicle then follows a vertical path to the edge of the atmosphere and into space, and as soon as it reaches the required speed, the two boosters cut off. Once in space, the inbuilt spacecraft places the satellite in the desired orbit. When its work is complete, the apogee motors give the vehicle an initial kick of 22 N of thrust to re-enter the earth's atmosphere, and it returns to the ground in free fall under gravitational force. After the flying phase, it enters a spiral flight mode and lands where the superconducting levitated rail track is located; the track catches the vehicle and holds it by switching the magnet poles and varying the current. The initial cost of building this vehicle might be high, but with frequent use it would cut the launch cost to half that of present-day technology.
The incorporation of such a mechanism makes the vehicle 'hybrid', and its reusability makes it a 'reusable launch vehicle'; together they yield the hybrid reusable launch vehicle.
Keywords: JATO (jet-assisted takeoff) boosters, magnetic levitation, 48 kN thrust, 22 N thrust, automatic ground return
Procedia PDF Downloads 427
1446 The Functional Roles of Right Dorsolateral Prefrontal Cortex and Ventromedial Prefrontal Cortex in Risk-Taking Behavior
Authors: Aline M. Dantas, Alexander T. Sack, Elisabeth Bruggen, Peiran Jiao, Teresa Schuhmann
Abstract:
Risk-taking behavior has been associated with the activity of specific prefrontal regions of the brain, namely the right dorsolateral prefrontal cortex (rDLPFC) and the ventromedial prefrontal cortex (VMPFC). While deactivation of the rDLPFC has been shown to lead to increased risk-taking behavior, the functional relationship between VMPFC activity and risk-taking behavior has yet to be clarified. Correlational evidence suggests that the VMPFC is involved in valuation processes underlying risky choices, but evidence on the functional relationship is lacking. Therefore, this study uses brain stimulation to investigate the role of the VMPFC during risk-taking behavior and to replicate current findings regarding the role of the rDLPFC in the same phenomenon. We used continuous theta-burst stimulation (cTBS) to inhibit either the VMPFC or the rDLPFC during the execution of the computerized Maastricht Gambling Task (MGT) in a within-subject design with 30 participants. We analyzed the effects of such stimulation on risk-taking behavior, participants’ choices of probabilities and average values, and response time. We hypothesized that, compared to sham stimulation, VMPFC inhibition leads to a reduction in risk-taking behavior by reducing the appeal of higher-value options and, consequently, the attractiveness of riskier options. rDLPFC inhibition, on the other hand, should lead to an increase in risk-taking due to a reduction in cognitive control, confirming existing findings. Stimulation of both the rDLPFC and the VMPFC led to an increase in risk-taking behavior and an increase in the average value chosen compared to sham. No significant effect on chosen probabilities was found. A significant increase in response time was observed exclusively after rDLPFC stimulation.
Our results indicate that inhibiting the rDLPFC and VMPFC separately leads to similar effects, increasing both risk-taking behavior and average value choices, likely due to the strong anatomical and functional interconnection of the VMPFC and rDLPFC.
Keywords: decision-making, risk-taking behavior, brain stimulation, TMS
Procedia PDF Downloads 106
1445 Rooftop Rainwater Harvesting for Sustainable Organic Farming: Insights from Smart Cities in India
Authors: Rajkumar Ghosh
Abstract:
India faces a critical challenge of water shortage, specifically during dry seasons, which adversely impacts agricultural productivity and food security. Organic farming, centred on sustainable practices, demands efficient green water management in India's smart cities. This paper examines how rooftop rainwater harvesting (RRWH) can alleviate water scarcity and support sustainable organic farming practices in India. RRWH emerges as a promising way to increase water availability during dry intervals and to reduce reliance on conventional water sources in smart cities. The study explores the potential of RRWH to enhance water-use efficiency, support crop growth, improve soil health, and promote ecological balance within the farming ecosystem. The paper delves into the benefits, challenges, and implementation strategies of RRWH in organic farming. It addresses challenges such as the seasonal variability of rainfall, limited rooftop area, and financial considerations. Moreover, it analyses the broader environmental and socio-economic implications of RRWH for sustainable agriculture, emphasizing water conservation, biodiversity protection, and the social well-being of farming communities. The conclusion underscores the importance of RRWH as a sustainable solution for achieving sustainable agriculture through organic farming in India. It emphasizes the need for further research, policy advocacy, and capacity-building initiatives to promote RRWH adoption and support the transformation towards sustainable organic farming systems. The paper proposes adaptive strategies to overcome these challenges and optimize the benefits of RRWH in organic farming.
By doing so, India can make significant progress in addressing water scarcity and ensuring a more resilient and sustainable agricultural future in its smart cities.
Keywords: rooftop rainwater harvesting, organic farming, green water management, food security, ecological stability
Procedia PDF Downloads 102
1444 Rethinking the Smartness for Sustainable Development Through the Relationship between Public and Private Actors
Authors: Selin Tosun
Abstract:
The improvements in technology have started to transform the way we live, work, play, and commute in our cities. The emerging smart city understanding has been paving the way for more efficient, more useful, and more profitable cities. Smart sensors, smart lighting, smart waste, water and electricity management, and smart transportation and communication systems are being introduced to cities at a rapid pace. In today's world, innovation is often associated with start-up companies and technological pioneers pursuing broader economic objectives such as production and competitiveness. The government's position is primarily that of an enabler, with creativity mostly coming from the private sector. The paper argues that to achieve sustainable development, the ways in which smart and sustainable city approaches are applied to cities need to be redefined. The research aims to address common discussions in the discourse of smart and sustainable cities, criticizing the priority given to lifestyle sterilization over human-centered sustainable interventions and social innovation strategies. The main challenge we face today is the tension between smart cities being driven largely by the competitive global market and delocalization being their biggest obstacle to becoming authentic, sustainable cities. In other words, the key actors in smart cities have different and somewhat conflicting interests and demands. By reviewing the roles of public and private actors in smart city making, the paper aspires to reconceptualize the understanding of “smartness” in achieving sustainable development, in which “smartness” is understood as a multi-layered, complex phenomenon that can be channeled through different dynamics.
Case cities around the world are explored and compared in terms of their technological innovations, governance and policy innovations, public-private stakeholder relationships, and understanding of the public realm. The study aims to identify current trends and general dynamics in the field, the key issues being addressed, the preferred scale of reflection, and the projects designed for particular issues.
Keywords: smart city, sustainable development, technological innovation, social innovation
Procedia PDF Downloads 196
1443 Exploring Emerging Viruses From a Protected Reserve
Authors: Nemat Sokhandan Bashir
Abstract:
Threats from viruses to agricultural crops can be even greater than the losses caused by other pathogens because, in many cases, viral infection is latent yet crucial from an epidemic point of view. Wild vegetation can be a source of many viruses that eventually find their way into crop plants. Although often asymptomatic in wild plants due to adaptation, they can potentially cause serious losses in crops. Therefore, exploring viruses in wild vegetation is very important. Recently, omics approaches have been quite useful for discovering plant viruses in various plant sources, especially wild vegetation. For instance, we have discovered viruses such as Ambrosia asymptomatic virus 1 (AAV-1) through the application of metagenomics in the Oklahoma Prairie Reserve. Accordingly, extracts from randomly sampled plants are subjected to high-speed centrifugation and ultracentrifugation to separate virus-like particles (VLPs); nucleic acids in the form of DNA or RNA are then extracted from such VLPs by treatment with phenol-chloroform and subsequent precipitation with ethanol. The nucleic acid preparations are separately treated with RNase or DNase to determine the genome composition of the VLPs. In the case of RNA, complementary DNAs (cDNAs) are synthesized before submission to DNA sequencing; for VLPs with DNA contents, the procedure is relatively straightforward, without making cDNA. Because the length of the nucleic acid content of VLPs can differ, various strategies are employed to achieve sequencing; techniques similar to so-called "chromosome walking" may be used to obtain sequences of long segments. Once nucleotide sequence data were obtained, they were subjected to BLAST analysis to determine the most closely related previously reported virus sequences. In one case, we determined that the novel virus was AAV-1 because sequence comparison and analysis revealed that the reads were closest to Indian citrus ringspot virus (ICRSV).
AAV-1 has an RNA genome 7,408 nucleotides in length containing six open reading frames (ORFs). Based on phylogenies inferred from the replicase and coat protein ORFs, the virus was placed in the genus Mandarivirus.
Keywords: wild, plant, novel, metagenomics
Procedia PDF Downloads 80
1442 BFDD-S: Big Data Framework to Detect and Mitigate DDoS Attack in SDN Network
Authors: Amirreza Fazely Hamedani, Muzzamil Aziz, Philipp Wieder, Ramin Yahyapour
Abstract:
In recent years, software-defined networking has caught the attention of many network designers as a successor to traditional networking. Unlike traditional networks, where the control and data planes reside together within a single device in the network infrastructure, such as switches and routers, the two planes are kept separate in software-defined networks (SDNs). All critical decisions about packet routing are made on the network controller, and the data-plane devices forward packets based on these decisions. This type of network is vulnerable to DDoS attacks, which degrade the overall functioning and performance of the network by continuously injecting fake flows into it. This places a substantial burden on the controller side, ultimately leading to the inaccessibility of the controller and a lack of network service for legitimate users. Thus, protecting this novel network architecture against denial-of-service attacks is essential. In the world of cybersecurity, attacks and new threats emerge every day. It is essential to have tools capable of managing and analyzing all this new information to detect possible attacks in real time. These tools should provide a comprehensive solution to automatically detect, predict, and prevent abnormalities in the network. Big data encompasses a wide range of studies, but it mainly refers to the massive amounts of structured and unstructured data that organizations deal with on a regular basis. It concerns not only the volume of data but also how data-driven information can be used to enhance decision-making processes, security, and the overall efficiency of a business. This paper presents an intelligent big data framework as a solution to handle the illegitimate traffic burden placed on the SDN network by numerous DDoS attacks.
The framework entails an efficient defence and monitoring mechanism against DDoS attacks, employing state-of-the-art machine learning techniques.
Keywords: Apache Spark, Apache Kafka, big data, DDoS attack, machine learning, SDN network
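One simple flow-level signal such a framework might monitor is the entropy of destination addresses in a traffic window, which collapses when many flows converge on a few victims. The following is a minimal sketch; the threshold and addresses are illustrative, and a production system like the one described would combine many such features in a trained model running over Spark/Kafka streams:

```python
from collections import Counter
from math import log2

def normalized_entropy(values):
    """Shannon entropy of a categorical feature, scaled to [0, 1]."""
    counts = Counter(values)
    n = len(values)
    h = -sum(c / n * log2(c / n) for c in counts.values())
    max_h = log2(len(counts)) if len(counts) > 1 else 1.0
    return h / max_h

def looks_like_ddos(dst_ips, threshold=0.5):
    """Flag a window of flows whose destination-IP entropy collapses:
    many flows converging on few targets is a classic DDoS signature."""
    return normalized_entropy(dst_ips) < threshold
```

In an SDN setting, these windows would be built from flow statistics collected at the controller, and a flagged window would trigger mitigation rules pushed down to the switches.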
Procedia PDF Downloads 169
1441 Use of Transportation Networks to Optimize The Profit Dynamics of the Product Distribution
Authors: S. Jayasinghe, R. B. N. Dissanayake
Abstract:
Optimization modelling, together with network models and linear programming techniques, is a powerful tool for problem solving and decision-making in real-world applications. This study developed a mathematical model to optimize net profit by minimizing transportation cost. The model covers transportation from decentralized production plants to a centralized distribution centre and then distribution to island-wide agencies, with customer satisfaction as a requirement. The company studied produces 9 types of food items in 82 varieties and 4 types of non-food items in 34 varieties. Among its 6 production plants, 4 are located near the city of Mawanella and the other 2 in Galewala and Anuradhapura, 80 km and 150 km from Mawanella respectively. The warehouse located in Mawanella was the main production plant and also the only distribution plant, distributing manufactured products to 39 agencies island-wide. Average values and quantities of goods for 6 consecutive months, from May 2013 to October 2013, were collected, and average demand values were calculated. The following constraints were used as the necessary requirements for the optimality conditions of the model: there is one source and 39 destinations, and supply and demand are balanced across all agencies. Using the transport cost per kilometre, the total transport cost was calculated. The model was then formulated using distance and distribution flow. Network optimization and linear programming techniques were used to build the model, and Excel Solver was used to solve it. Results showed that the company requires a total transport cost of Rs. 146,943,034.50 to fulfil its customers' requirements for a month, which is far less than the cost incurred without the model. The model also showed that the company can reduce its transportation cost by 6% when distributing to island-wide customers.
The company generally satisfies its customers' requirements by 85%; using this model, satisfaction can be increased to 97%. Therefore, other similar companies can use this model to reduce their transportation costs.
Keywords: mathematical model, network optimization, linear programming
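The single-source cost calculation the abstract describes (one warehouse, 39 destinations, a per-kilometre transport rate) can be sketched in a few lines. The figures below (distances, monthly trip counts, the per-km rate) are hypothetical placeholders, not the company's actual data, and the round-trip assumption is ours:

```python
# Minimal sketch of the single-source distribution cost model described
# in the abstract. All numbers are illustrative assumptions.

def total_transport_cost(distances_km, trips_per_month, rate_per_km):
    """Total monthly cost of serving each agency from the single
    Mawanella warehouse: round-trip distance x trips x per-km rate,
    summed over all agencies."""
    return sum(2 * d * n * rate_per_km
               for d, n in zip(distances_km, trips_per_month))

# Hypothetical data for three of the 39 agencies.
distances = [80.0, 150.0, 45.0]   # km from the Mawanella warehouse
trips     = [12, 8, 20]           # delivery trips per month
rate      = 55.0                  # Rs. per km (assumed)

cost = total_transport_cost(distances, trips, rate)
```

In the study itself this objective is minimized over the distribution flows subject to the supply-equals-demand constraints, which is what the network optimization and linear programming step (solved here with Excel Solver) provides.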
Procedia PDF Downloads 346
1440 Effective Coaching for Teachers of English Language Learners: A Gap Analysis Framework
Authors: Armando T. Zúñiga
Abstract:
As the number of English Language Learners (ELLs) in public schools continues to grow, so does the achievement gap between ELLs and other student populations. In an effort to support classroom teachers with effective instructional strategies for this student population, many districts have created instructional coaching positions specifically to support classroom teachers of ELLs: ELL Teachers on Special Assignment (ELL TOSAs). This study applied a gap analysis framework to the ELL TOSA professional support program in one California school district to examine knowledge, motivation, and organizational (KMO) influences on the ELL TOSAs' goal of supporting classroom teachers of ELLs. Three themes emerged from the data analysis. First, there was evidence of interaction between knowledge and the organization. Data from the ELL TOSAs indicated an understanding of the role that collaboration plays in coaching and of how to operationalize it in their support of teachers. Further, all of the ELL TOSAs indicated they had received professional development on effective strategies for instructional coaching. A large percentage of the ELL TOSAs also indicated knowledge of modeling as an effective coaching practice, and all of them indicated knowledge of feedback as an effective coaching strategy; however, there was not sufficient evidence that they learned the latter two strategies through professional development. A second theme illustrated the interaction between motivation and the organization. Some ELL TOSAs indicated that their sense of self-efficacy was affected by conflicting roles and expectations for the job, and most indicated that it was affected by an increased workload brought about by fiscal decision making. Finally, there was evidence illustrating the interaction between the organization and motivation.
The majority of the ELL TOSAs indicated that there is confusion about how their roles are perceived, leaving them feeling that their actions did not contribute to instructional change. In conclusion, five research-based recommendations to support ELL TOSA goal attainment are provided, along with considerations for future research on instructional coaches for classroom teachers of ELLs.
Keywords: English language development, English language acquisition, language and leadership, language coaching, English language learners, second language acquisition
Procedia PDF Downloads 101
1439 Local and Global Sustainability: The Case Study of Beja Municipality Local Agenda 21 Operationalization Challenges
Authors: Maria Inês Faria, João Miguel Simão
Abstract:
Frequently, the sustainable development paradigm is considered the flag of contemporary societies, and it has been assuming different nuances in local and global dialogues. This reveals the ambivalent character of its implementation, owing in particular to the kinds of synergies that political institutions, social organizations, and the citizenry can actually create. The concept of sustainable development needs further discussion so that it can be useful in decision-making processes. In fact, the polysemic nature of this concept has consistently undermined its credibility, leading, among other factors, to a gap between talk and action, as well as to misappropriations of the notion. The present study focuses on the importance of questioning the operationalization of sustainable development ("to walk the talk"). In a broad sense, it seeks to identify the prospects and elements of sustainability included in strategic plans (global, national, and local); in a strict sense, it confronts discourse and practice in the context of local public policies for sustainable development, in particular the implementation of Local Agenda 21 in the municipality of Beja (Portugal), in order to analyze to what extent the strategies adopted and implemented are aligned with the paradigm of sustainable development. The method is based on a critical analysis of the literature and official documentation, using three complementary approaches: a) an exploratory review of the literature to identify publications on sustainability and sustainable development; b) a complementary approach focused on the official documentation for the adoption and implementation of sustainable development produced at the global, regional, national, and local levels; c) an approach focused on the official documentation expressing the policy options, strategic lines, and actions for the implementation of sustainable development in the Beja Municipality.
The main results of this study highlight how the Beja Municipality's sustainability policies align with what is officially stipulated for the promotion of sustainable development on the international agenda, stressing the potentialities, constraints, and challenges of Local Agenda 21 implementation.
Keywords: sustainable development, Local Agenda 21, sustainable local public policies, Beja
Procedia PDF Downloads 279