Search results for: alternative space exhibiting
339 Socio-Economic Transformation of Barpak Post-Earthquake Reconstruction
Authors: Sudikshya Bhandari, Jonathan K. London
Abstract:
The earthquake of April 2015 was one of the biggest disasters in the history of Nepal. The epicenter was located near Barpak, north of the Gorkha district. Before the disaster, Barpak was a compact and homogeneous settlement that manifested its uniqueness through social and cultural activities and a distinct vernacular architecture. Narrow alleys with stone-paved streets, buildings with slate roofs, and common spaces between the houses made the settlement socially, culturally, and environmentally cohesive. With the presence of micro hydropower plants, local economic activities enabled the community to exist and thrive. Agriculture and animal rearing are the sources of livelihood for the majority of families, along with booming homestays (where local people welcome guests into their homes, as a business) and local shops. Most of these activities are now difficult to find, as the houses were destroyed in the earthquake and the process of reconstruction has been transforming the outlook of the settlement. This study characterized the drastic post-earthquake transformation of Barpak and analyzed the consequences of the reconstruction process. In addition, it contributes to a broader understanding of the unsustainability created by a lack of contextual post-disaster development. Since the research is based in a specific area, a case study approach was used. Sample houses were selected on the basis of ethnicity and house typology. Mixed methods, including key informant and semi-structured interviews, focus groups, observations, and photographs, were used for data collection. The research focuses predominantly on the physical change of house typology from vernacular to externally adopted designs. This transformation of the house entails socio-cultural changes such as social fragmentation, with widening differences between rich and poor, and decreased social connectivity within families and neighborhoods.
Families have found that the new houses require more maintenance and resources, which has increased their expenses. The study also found that the reconstructed houses are not thermally comfortable in the cold climate of Barpak, leading to increased use of heating sources such as electric heaters and firewood. A lack of storage space for crops and livestock has discouraged families from pursuing traditional means of livelihood and made them depend more on buying food from stores, ultimately making life less economical for most families. The transformation of space, leading to economic, social, and cultural changes, demonstrates the unsustainability of Barpak. The study concludes by suggesting place-based and inclusive planning and policy formation that includes locals as partners, identifying possible ways to minimize the impact, and implementing these recommendations in future policy and planning scenarios.
Keywords: earthquake, Nepal, reconstruction, settlement, transformation
Procedia PDF Downloads 118
338 Considering Aerosol Processes in Nuclear Transport Package Containment Safety Cases
Authors: Andrew Cummings, Rhianne Boag, Sarah Bryson, Gordon Turner
Abstract:
Packages designed for the transport of radioactive material must satisfy rigorous safety regulations specified by the International Atomic Energy Agency (IAEA). Higher Activity Waste (HAW) transport packages have to maintain containment of their contents during normal and accident conditions of transport (NCT and ACT). To ensure the containment criteria are satisfied, these packages are required to be leak-tight in all transport conditions so as to meet allowable activity release rates. Package design safety reports are the safety cases that provide the claims, evidence, and arguments demonstrating that packages meet the regulations; once approved by the competent authority (in the UK, the Office for Nuclear Regulation), a licence to transport radioactive material is issued for the package(s). The standard approach to demonstrating containment in the RWM transport safety case is set out in BS EN ISO 12807. In this document, a method for measuring a leak rate from the package is explained by way of a small interspace test volume situated between two O-ring seals on the underside of the package lid. The interspace volume is pressurised and a pressure drop measured; a small interspace test volume makes the method more sensitive, enabling the measurement of smaller leak rates. By ascertaining the activity of the contents, identifying a releasable fraction of material, and treating that fraction as a gas, allowable leak rates for NCT and ACT are calculated. This adherence to basic safety principles in ISO 12807 is very pessimistic, and it is current practice in the demonstration of transport safety, accepted by the UK regulator. It is UK government policy that management of HAW will be through geological disposal. It is proposed that intermediate level waste be transported to the geological disposal facility (GDF) in large cuboid packages.
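As an illustration of the pressure-drop method (not the full ISO 12807 procedure; the function names and all numerical values below are assumed for the sketch), the leak rate follows from the interspace volume and the measured pressure change, and the same relation shows why a smaller interspace volume makes the test more sensitive:

```python
def pressure_drop_leak_rate(volume_m3, p_start_pa, p_end_pa, elapsed_s):
    """Leak rate (Pa.m^3/s) inferred from a pressure-drop test on a sealed
    interspace volume: Q = V * (p_start - p_end) / t.  Illustrative only."""
    return volume_m3 * (p_start_pa - p_end_pa) / elapsed_s


def expected_pressure_drop(leak_rate, volume_m3, elapsed_s):
    """Pressure drop a given leak rate would produce over a test of duration t:
    for the same leak, a smaller interspace volume yields a larger drop."""
    return leak_rate * elapsed_s / volume_m3
```

For example, a 1000 Pa drop in a 1e-4 m3 interspace over one hour corresponds to roughly 2.8e-5 Pa.m3/s; halving the volume would double the measured drop for the same leak, which is the sensitivity argument behind keeping the test volume small.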
This poses a challenge for containment demonstration because such packages will have long seals and therefore large interspace test volumes. There is also uncertainty about the releasable fraction of material within the package ullage space, because the waste may take many different forms, making it difficult to define the fraction of material released by the waste package. Additionally, because of the large interspace test volume, measuring the calculated leak rates may not be achievable. For this reason, a justification for a lower releasable fraction of material is sought. This paper considers the use of aerosol processes to reduce the releasable fraction for both NCT and ACT. It reviews the basic coagulation and removal processes and applies the dynamic aerosol balance equation. The proposed solution includes only the most well-understood physical processes, namely Brownian coagulation and gravitational settling. Other processes have been eliminated either on the basis that they would further reduce the release to the environment (pessimistically, in keeping with the essence of nuclear transport safety cases) or that they are not credible in the conditions of transport considered.
Keywords: aerosol processes, Brownian coagulation, gravitational settling, transport regulations
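A minimal numerical sketch of a dynamic aerosol balance restricted to these two removal processes (a monodisperse simplification; the coagulation coefficient, particle properties, and geometry below are illustrative assumptions, not values from the safety case):

```python
def stokes_settling_velocity(d_p, rho_p, mu=1.81e-5, rho_g=1.2, g=9.81):
    """Stokes terminal settling velocity [m/s] of a small sphere of
    diameter d_p [m] and density rho_p [kg/m^3] in air."""
    return (rho_p - rho_g) * g * d_p ** 2 / (18.0 * mu)


def aerosol_number_decay(n0, k_coag, v_s, height, t_end, dt=1.0):
    """Forward-Euler integration of the monodisperse aerosol balance
    dN/dt = -k_coag*N^2 - (v_s/H)*N, i.e. a Brownian coagulation sink
    plus a gravitational settling sink over a fall height H."""
    n, t = n0, 0.0
    while t < t_end:
        n += dt * (-k_coag * n * n - (v_s / height) * n)
        t += dt
    return n
```

Both terms only remove suspended material, which is why including just these two processes is conservative: any additional mechanism would depress the airborne (releasable) concentration further.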
Procedia PDF Downloads 117
337 Biodegradable Cross-Linked Composite Hydrogels Enriched with Small Molecule for Osteochondral Regeneration
Authors: Elena I. Oprita, Oana Craciunescu, Rodica Tatia, Teodora Ciucan, Reka Barabas, Orsolya Raduly, Anca Oancea
Abstract:
Healing of osteochondral defects requires repair of the damaged articular cartilage, the underlying subchondral bone, and the interface between these tissues (the functional calcified layer). For this purpose, developing a single monophasic scaffold that can regenerate two specific lineages (cartilage and bone) is a challenge. The aim of this work was to develop variants of a biodegradable cross-linked composite hydrogel based on a natural polypeptide (gelatin) and polysaccharide components (chondroitin-4-sulphate and hyaluronic acid) in a ratio of 2:0.08:0.02 (w/w/w), mixed with Si-hydroxyapatite (Si-Hap) in two ratios of 1:1 and 2:1 (w/w). Si-Hap was synthesized and characterized as a better alternative to conventional Hap. Subsequently, both composite hydrogel variants were cross-linked with N-(3-dimethylaminopropyl)-N'-ethylcarbodiimide (EDC) and enriched with a small bioactive molecule, icariin. Icariin (Ica) (C33H40O15) is the main active constituent (a flavonoid) of Herba epimedium, used in traditional Chinese medicine to treat bone- and cartilage-related disorders. Ica enhances osteogenic and chondrogenic differentiation of bone marrow mesenchymal stem cells (BMSCs), facilitates matrix calcification, and increases the synthesis of specific extracellular matrix (ECM) components by chondrocytes. Afterward, the composite hydrogels were characterized for their physicochemical properties in terms of enzymatic biodegradation in the presence of type I collagenase and trypsin, swelling capacity, and degree of cross-linking (TNBS assay). The cumulative release of Ica and its real-time concentration were quantified at predetermined time points, against a standard curve of Ica, after incubation of the hydrogels in saline buffer under physiological conditions. The obtained cross-linked composite hydrogels enriched with the small molecule Ica were also characterized morphologically by scanning electron microscopy (SEM).
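Quantifying release against a standard curve amounts to a linear calibration fit and its inversion; a generic sketch (the abstract does not specify the assay response, so the function names and calibration values here are assumed for illustration):

```python
def fit_standard_curve(concs, responses):
    """Least-squares line response = slope*conc + intercept fitted to
    calibration standards of known concentration; returns (slope, intercept)."""
    n = len(concs)
    mx = sum(concs) / n
    my = sum(responses) / n
    sxx = sum((x - mx) ** 2 for x in concs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(concs, responses))
    slope = sxy / sxx
    return slope, my - slope * mx


def concentration_from_response(response, slope, intercept):
    """Invert the calibration line to read an unknown concentration
    (e.g. released icariin) off a measured assay response."""
    return (response - intercept) / slope
```

Cumulative release at each sampling time is then the sum of the concentrations read off the curve, scaled by the sampled volumes.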
Their cytocompatibility was evaluated according to the EN ISO 10993-5:2009 standard for medical device testing. Analyses of cell viability (Live/Dead assay), cell proliferation (Neutral Red assay), and cell adhesion to the composite hydrogels (SEM) were performed using the NCTC clone L929 cell line. The final results showed that both cross-linked composite hydrogel variants enriched with Ica presented optimal physicochemical, structural, and biological properties for use as natural scaffolds able to repair osteochondral defects. The data did not reveal any toxicity of the composite hydrogels towards NCTC stabilized cell lines within the tested range of concentrations. Moreover, cells were capable of spreading and proliferating on both composite hydrogel surfaces. In conclusion, the designed biodegradable cross-linked composites enriched with Si and Ica are recommended for further testing as natural temporary scaffolds that can allow cell migration and synthesis of new extracellular matrix within osteochondral defects.
Keywords: composites, gelatin, osteochondral defect, small molecule
Procedia PDF Downloads 175
336 Saudi Arabia's Struggle for a Post-Rentier Regional Order
Authors: Omair Anas
Abstract:
The Persian Gulf has been in turmoil for a long time, ever since the colonial administration handed over its role to small and weak kings and emirs who were assured of protection in return for many economic and security promises. The regional order Saudi Arabia evolved was a rentier one, secured by an expansion of the rentier economy and by assuming much of the expense of the regional order on behalf of relatively poor countries. The two oil booms helped the Saudi state expand this 'rentier order' stability and bring countries like Egypt, Jordan, Syria, and Palestine under its tutelage. The disruptive misadventure, however, came with Iran's proclamation of the Islamic Revolution in 1979, which it sought to export to its 'un-Islamic and American puppet' Arab neighbours. For Saudi Arabia, even the challenge presented by socialist-nationalist Arab dictators like Gamal Abdul Nasser and Hafez Al-Assad had not been as threatening to its then-defensive realism. In the Arab uprisings, the Gulf monarchies saw a wave of insecurity, and Iran found an opportune moment to complete the revolutionary process it could not finish after 1979. An alliance of convenience and ideology between Iran and Islamist groups had real potential to challenge both Saudi Arabia's own security and its leadership in the region. This disruptive threat appeared at a time when the Saudi state had already sensed an impending crisis originating from shifts in the energy markets. Low energy prices, declining global demand, and huge investments in alternative energy resources required Saudi Arabia to rationalize its economy in line with the changing global political economy. Domestic Saudi reforms remained gradual until the death of King Abdullah in 2015.
What is happening now in the region (the Qatar crisis, the Lebanon crisis, and the Saudi-Iranian proxy wars in Iraq, Syria, and Yemen) combines three immediate objectives: countering Iran's influence, rationalising the Saudi economy and, most importantly, resetting Saudi royal power for Saudi Arabia's longest-serving future king, Mohammad bin Salman. The Saudi king perhaps has no time to wait and watch the power vacuum appearing because of Iran's expansionist foreign policy. The Saudis appear to be employing offensive realism, advancing a proactive regional policy to counter Iran's threatening influence amid disappearing Western security guarantees in the region. As the Syrian civil war comes to a compromised end, ceding much ground to Iran-controlled militias, Hezbollah and Al-Hashad, the Saudi state has lost considerable ground in recent years, and the threat from Iranian proxies is more than a reality, most clearly in Bahrain, Iraq, Syria, and Yemen. This paper attempts to analyse the changing Saudi behaviour in the region, which, the author argues, is shaped by an offensive-realist approach to finding a favourable security environment for a Saudi-led regional order, perhaps a post-rentier one.
Keywords: terrorism, Saudi Arabia, rentier state, Gulf crisis
Procedia PDF Downloads 136
335 Non-Invasive Evaluation of Patients after Percutaneous Coronary Revascularization: The Role of Cardiac Imaging
Authors: Abdou Elhendy
Abstract:
Numerous studies have shown the efficacy of percutaneous coronary intervention (PCI) and coronary stenting in improving left ventricular function and relieving exertional angina. Furthermore, PCI remains the main line of therapy in acute myocardial infarction. Improved procedural techniques and new devices have resulted in an increased number of PCIs in patients with difficult and extensive lesions, multivessel disease, and total occlusions. Immediate and late outcomes may be compromised by acute thrombosis or the development of fibro-intimal hyperplasia. In addition, progression of coronary artery disease proximal or distal to the stent, as well as in non-stented arteries, is not uncommon. As a result, complications can occur, such as acute myocardial infarction, worsened heart failure, or recurrence of angina. In-stent restenosis can occur without symptoms or with atypical complaints, rendering clinical diagnosis difficult. Routine invasive angiography is not appropriate as a follow-up tool due to the associated risk and cost and the limited functional assessment it provides. Exercise and pharmacologic stress testing are increasingly used to evaluate myocardial function, perfusion, and the adequacy of revascularization. Information obtained by these techniques provides important clues regarding the presence and severity of compromise in myocardial blood flow. Stress echocardiography can be performed in conjunction with exercise or dobutamine infusion. Its diagnostic accuracy has been moderate, but the results provide excellent prognostic stratification. Adding myocardial contrast agents can improve image quality and allows assessment of both function and perfusion. Stress radionuclide myocardial perfusion imaging is an alternative for evaluating these patients. The extent and severity of wall motion and perfusion abnormalities observed during exercise or pharmacologic stress are predictors of survival and of the risk of cardiac events.
According to current guidelines, stress echocardiography and radionuclide imaging are considered appropriately indicated in patients after PCI who have cardiac symptoms and in those who underwent incomplete revascularization. Stress testing is not recommended in asymptomatic patients, particularly early after revascularization. Coronary CT angiography is increasingly used and provides high sensitivity for the diagnosis of coronary artery stenosis. Average sensitivity and specificity for the diagnosis of in-stent restenosis in pooled data are 79% and 81%, respectively. Limitations include blooming artifacts and low feasibility in patients with small stents or thick struts. Anatomical and functional cardiac imaging modalities are cornerstones of the assessment of patients after PCI and provide salient diagnostic and prognostic information. Current imaging techniques can serve as gatekeepers for coronary angiography, thus limiting the risk of invasive procedures to those who are likely to benefit from subsequent revascularization. Determining which modality to apply requires careful consideration of the merits and limitations of each technique as well as the unique characteristics of each individual patient.
Keywords: coronary artery disease, stress testing, cardiac imaging, restenosis
Procedia PDF Downloads 168
334 Improving a Stagnant River Reach Water Quality by Combining Jet Water Flow and Ultrasonic Irradiation
Authors: A. K. Tekile, I. L. Kim, J. Y. Lee
Abstract:
Human activities put freshwater quality at risk, mainly through expansion of agriculture and industry, damming, diversion, and the discharge of inadequately treated wastewater. Rapid human population growth and climate change have escalated the problem. External controls on point and non-point pollution sources are the long-term solution for managing water quality, but for a holistic approach these mechanisms should be coupled with in-water control strategies. The available in-lake or in-river methods are either costly or have adverse effects on the ecological system, so the search for an effective alternative with a reasonable balance continues. This study aimed at physical and chemical water quality improvement in a stagnant reach of the Yeo-cheon River (Korea), which has recently shown signs of water quality problems such as scum formation and fish death. The river water quality was monitored for three months, operating only the water flow generator in the first two weeks and then coupling an ultrasonic irradiation device to the flow unit for the remainder of the experiment. In addition to assessing the water quality improvement, the correlation among the parameters was analyzed to isolate the contribution of the ultrasonication. Generally, the combined strategy showed localized improvement of water quality in terms of dissolved oxygen, chlorophyll-a, and dissolved reactive phosphate. At locations under limited influence of the system, chlorophyll-a increased sharply, but within 25 m of the operation the low initial value was maintained. The inverse correlation coefficient between dissolved oxygen and chlorophyll-a decreased from 0.51 to 0.37 when the ultrasonic irradiation unit was used with the flow, indicating that ultrasonic treatment reduced chlorophyll-a concentration and inhibited photosynthesis.
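The reported coefficients are presumably Pearson correlations over paired samples of the two parameters; a minimal sketch of that computation (the variable names and sample values are assumed for illustration):

```python
import math


def pearson_r(x, y):
    """Pearson correlation coefficient between paired measurements,
    e.g. dissolved oxygen vs. chlorophyll-a samples at the same stations."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

An inverse relationship such as the one between dissolved oxygen and chlorophyll-a shows up as a negative r; a weakening of that relationship under treatment corresponds to the magnitude of r moving toward zero.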
The relationship between dissolved oxygen and reactive phosphate also indicated that the influence of ultrasonication on the reactive phosphate concentration was greater than that of flow. Even though flow increased turbidity by suspending sediments, ultrasonic waves canceled out the effect through agglomeration of suspended particles and subsequent settling. There was also variation in water-column interactions, as the decrease of pH and dissolved oxygen from surface to bottom played a role in phosphorus release into the water column. The nitrogen and dissolved organic carbon concentrations showed mixed trends, probably due to the complex chemical reactions that follow the operation. The intensive rainfall and strong wind around the end of the field trial also had an apparent impact on the results. The combined effect of water flow and ultrasonic irradiation was a cumulative water quality improvement, and it maintained the dissolved oxygen and chlorophyll-a levels the river requires for healthy ecological interaction. However, overall improvement of water quality is not guaranteed, as the effectiveness of ultrasonic technology requires long-term monitoring of water quality before, during, and after treatment. Although the short duration of the study limits what can be said about nutrient patterns, the use of ultrasound at field scale to improve water quality is promising.
Keywords: stagnant, ultrasonic irradiation, water flow, water quality
Procedia PDF Downloads 193
333 Remote Radiation Mapping Based on UAV Formation
Authors: Martin Arguelles Perez, Woosoon Yim, Alexander Barzilov
Abstract:
High-fidelity radiation monitoring is an essential component in enhancing the situational awareness capabilities of the Department of Energy's Office of Environmental Management (DOE-EM) personnel. In this paper, multiple unmanned aerial vehicles (UAVs), each equipped with a cadmium zinc telluride (CZT) gamma-ray sensor, are used for radiation source localization, which can provide vital real-time data for EM tasks. To achieve this goal, a fully autonomous swarm of multicopter UAVs in a 3D tetrahedron formation surveys the area of interest and performs radiation source localization. The CZT sensor used in this study is suitable for small multicopter UAVs due to its compact size and ease of interfacing with the UAV's onboard electronics, enabling high-resolution gamma spectroscopy and the characterization of radiation hazards. The multicopter platform, with fully autonomous flight, is suitable for low-altitude applications such as radiation contamination sites. The conventional approach uses a single UAV mapping along a predefined waypoint path to estimate the relative location and strength of the source, which can be time-consuming for radiation localization tasks. The proposed swarm-based approach can significantly improve the ability to search for and track radiation sources. In this paper, two approaches are developed, using (a) a 2D planar circular formation (3 UAVs) and (b) a 3D tetrahedron formation (4 UAVs). In both approaches, accurate estimation of the gradient vector is crucial for heading angle calculation. Each UAV carries a CZT sensor; the real-time radiation data are used to calculate a bulk heading vector that gives the swarm its source-seeking behavior. A spinning formation is also studied in both cases to improve gradient estimation near a radiation source.
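With four non-coplanar readings from the tetrahedron vertices, a local gradient can be recovered from three finite differences by solving a small linear system; a sketch of that estimation step (the geometry, field, and function names are illustrative assumptions, not the paper's implementation):

```python
def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)


def estimate_gradient(positions, readings):
    """Estimate the local radiation-field gradient from four non-coplanar
    sensor positions (tetrahedron vertices) and their scalar readings by
    solving (p_i - p_0) . grad = c_i - c_0 for i = 1..3 via Cramer's rule."""
    p0, c0 = positions[0], readings[0]
    a = [[positions[i][k] - p0[k] for k in range(3)] for i in range(1, 4)]
    b = [readings[i] - c0 for i in range(1, 4)]
    d = det3(a)
    grad = []
    for col in range(3):
        m = [row[:] for row in a]
        for r in range(3):
            m[r][col] = b[r]
        grad.append(det3(m) / d)
    return grad
```

Normalizing the estimated gradient then yields a heading vector toward increasing intensity, which is the basis of the swarm's source-seeking behavior; spinning the formation samples more directions and improves the estimate near the source, where the field changes rapidly.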
In the 3D tetrahedron formation, the UAV located closest to the source is designated as the lead unit to maintain the tetrahedron formation in space. This formation demonstrated collective, coordinated movement for estimating the gradient vector of the radiation source and determining an optimal heading direction for the swarm. The proposed radiation localization technique is studied by computer simulation and validated experimentally in an indoor flight testbed using gamma sources. The technology presented in this paper provides the capability to readily add or replace radiation sensors on the UAV platforms under field conditions, enabling extensive condition measurement and greatly improving situational awareness and event management. Furthermore, the proposed approach allows long-term measurements to be performed efficiently over wide areas of interest to prevent disasters and reduce dose risks to people and infrastructure.
Keywords: radiation, unmanned aerial vehicle (UAV), source localization, UAV swarm, tetrahedron formation
Procedia PDF Downloads 99
332 Modeling the Impact of Time Pressure on Activity-Travel Rescheduling Heuristics
Authors: Jingsi Li, Neil S. Ferguson
Abstract:
Time pressure can influence productivity, the quality of decision making, and the efficiency of problem solving. This insight stems mostly from cognitive research and the psychological literature; discussion in transport-adjacent fields has been conspicuously scarce. Yet in many activity-travel contexts time pressure is a potentially important factor, since an excessive amount of decision time may incur the risk of late arrival at the next activity. Activity-travel rescheduling behavior is commonly explained by the costs and benefits of factors such as activity engagements, personal intentions, and social requirements. This paper hypothesizes that an additional factor, perceived time pressure, could affect travelers' rescheduling behavior and thus have implications for travel demand management. Time pressure may arise in different ways and is assumed here to be incurred essentially because travelers plan their schedules without anticipating unforeseen elements, e.g., transport disruption. In addition to a linear-additive utility-maximization model, less computationally demanding, non-compensatory heuristic models are considered as an alternative way to simulate travelers' responses. The paper contributes to travel behavior modeling research by investigating the following questions: how can time pressure be measured properly in an activity-travel day plan context? How do travelers reschedule their plans to cope with time pressure? How does the importance of the activity affect travelers' rescheduling behavior? What behavioral model best describes the process of making activity-travel rescheduling decisions? How do the identified coping strategies affect the transport network? In this paper, a Mixed Heuristic Model (MHM) is employed to identify the presence of different choice heuristics through a latent class approach.
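One of the non-compensatory heuristics considered here, satisficing, can be sketched as keeping activities in descending order of importance until a time budget shrunk by the disruption is exhausted (the data structure, attribute weights, and budget below are assumed for illustration, not taken from the survey):

```python
def satisficing_reschedule(activities, available_time):
    """Satisficing heuristic sketch: walk activities in descending importance
    and keep each one that still fits in the remaining time budget; less
    important activities are simply dropped rather than traded off."""
    kept, used = [], 0.0
    for name, importance, duration in sorted(activities, key=lambda a: -a[1]):
        if used + duration <= available_time:
            kept.append(name)
            used += duration
    return kept
```

Unlike a compensatory utility-maximizing model, which would weigh every attribute of every feasible schedule, this rule considers one cue (importance) and stops as soon as the constraint binds, which is what makes it cheap under pressure.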
Data on travelers' activity-travel rescheduling behavior are collected via a web-based interactive survey in which a fictitious scenario comprises multiple uncertain events affecting the activities or travel. The experiments are conducted to gain a realistic picture of activity-travel rescheduling under time pressure. The identified behavioral models are then integrated into a multi-agent transport simulation model to investigate the effect of the rescheduling strategies on the transport network. The results show that under time pressure an increased proportion of travelers use simpler, non-compensatory choice strategies instead of compensatory methods. Specifically, satisficing, one of the heuristic decision-making strategies, is commonly adopted: travelers tend to abandon the less important activities and keep the important ones. Furthermore, the importance of the activity is found to increase the weight of negative information when making trip-related decisions, especially route choices. When the identified non-compensatory heuristic models are incorporated into the agent-based transport model, the simulation results imply that neglecting the effect of perceived time pressure may lead to inaccurate forecasts of choice probabilities and overestimate responsiveness to policy changes.
Keywords: activity-travel rescheduling, decision making under uncertainty, mixed heuristic model, perceived time pressure, travel demand management
Procedia PDF Downloads 112
331 Geometric Optimisation of Piezoelectric Fan Arrays for Low Energy Cooling
Authors: Alastair Hales, Xi Jiang
Abstract:
Numerical methods are used to evaluate the operation of confined face-to-face piezoelectric fan arrays as pitch, P, between the blades is varied. Both in-phase and counter-phase oscillation are considered. A piezoelectric fan consists of a fan blade, which is clamped at one end, and an extremely low powered actuator. This drives the blade tip’s oscillation at its first natural frequency. Sufficient blade tip speed, created by the high oscillation frequency and amplitude, is required to induce vortices and downstream volume flow in the surrounding air. A single piezoelectric fan may provide the ideal solution for low powered hot spot cooling in an electronic device, but is unable to induce sufficient downstream airflow to replace a conventional air mover, such as a convection fan, in power electronics. Piezoelectric fan arrays, which are assemblies including multiple fan blades usually in face-to-face orientation, must be developed to widen the field of feasible applications for the technology. The potential energy saving is significant, with a 50% power demand reduction compared to convection fans even in an unoptimised state. A numerical model of a typical piezoelectric fan blade is derived and validated against experimental data. Numerical error is found to be 5.4% and 9.8% using two data comparison methods. The model is used to explore the variation of pitch as a function of amplitude, A, for a confined two-blade piezoelectric fan array in face-to-face orientation, with the blades oscillating both in-phase and counter-phase. It has been reported that in-phase oscillation is optimal for generating maximum downstream velocity and flow rate in unconfined conditions, due at least in part to the beneficial coupling between the adjacent blades that leads to an increased oscillation amplitude. The present model demonstrates that confinement has a significant detrimental effect on in-phase oscillation. 
Even at low pitch, counter-phase oscillation produces enhanced downstream air velocities and flow rates. Downstream air velocity from counter-phase oscillation is maximally enhanced, relative to that generated by a single blade, by 17.7% at P = 8A; flow rate enhancement at the same pitch is 18.6%. By comparison, in-phase oscillation at the same pitch yields 23.9% and 24.8% reductions in peak downstream air velocity and flow rate, relative to a single blade. This optimal pitch, consistent with values reported in the literature, suggests that counter-phase oscillation is less affected by confinement. The optimal pitch for generating bulk airflow from counter-phase oscillation is large, P > 16A, due to the small but significant downstream velocity across the span between adjacent blades. However, when designing for a confined space, counter-phase pitch should be minimised to maximise the bulk airflow generated from a given cross-sectional area within a channel-flow application. Quantitative values deviate to a small degree as other geometric and operational parameters are varied, but the established relationships are maintained.
Keywords: piezoelectric fans, low energy cooling, power electronics, computational fluid dynamics
Procedia PDF Downloads 221
330 Hiveopolis - Honey Harvester System
Authors: Erol Bayraktarov, Asya Ilgun, Thomas Schickl, Alexandre Campo, Nicolis Stamatios
Abstract:
Traditional means of harvesting honey are often stressful for honeybees: each time honey is collected, a portion of the colony can die. In consequence, the colonies' resilience to environmental stressors decreases, which ultimately contributes to the global problem of honeybee colony losses. As part of the project HIVEOPOLIS, we design and build a different kind of beehive, incorporating technology to reduce the negative impacts of beekeeping procedures, including honey harvesting. A first step towards more sustainable honey harvesting is to design honey storage frames that automate the honey collection procedure. This way, beekeepers save time, money, and labor by not having to open the hive and remove frames, and the honeybees' nest stays undisturbed. The system shows promising features, e.g., high reliability, which could be a key advantage over current honey harvesting technologies. Our original concept of fractional honey harvesting has been to encourage the removal of honey only from 'safe' locations and at levels that would leave the bees enough high-nutritional-value honey. In this abstract, we describe the current state of our honey harvester, its technology, and areas for improvement. The honey harvester works by separating the honeycomb cells away from the comb foundation; the movement and the elastic nature of honey support this functionality. The honey sticks to the foundation because of surface tension forces amplified by the geometry. In the future, by monitoring the weight, and therefore the capped honey cells, on our honey harvester frames, we will be able to remove honey as soon as the weight-measuring system reports that the comb is ready for harvesting. Higher-viscosity or crystallized honey causes challenges in temperate locations when a smooth flow of honey is required. We use resistive heaters to soften the propolis and wax and unglue the moving parts during extraction.
These heaters can also slightly melt the honey to the needed flow state. Precise control of these heaters allows us to operate the device for several purposes. We use 'Nitinol' springs that are activated by heat as an actuation method. Unlike conventional stepper or servo motors, which we also evaluated throughout development, the springs and heaters take up less space and reduce the overall system complexity. Honeybee acceptance was unknown until we actually inserted a device inside a hive. We not only observed bees walking on the artificial comb but also building wax, filling gaps with propolis, and storing honey. This also shows that bees do not mind living in spaces and hives built from 3D-printed materials. We do not yet have data to prove that the plastic materials do not affect the chemical composition of the honey. We succeeded in automatically extracting stored honey from the device, demonstrating a useful extraction flow and overall effective operation. Keywords: honey harvesting, honeybee, hiveopolis, nitinol
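The weight-triggered, heat-assisted harvesting cycle described above can be sketched as a simple control loop. The sketch is illustrative only: the interfaces `read_weight_g`, `set_heater`, and `trigger_nitinol_spring`, as well as the threshold and soak values, are hypothetical placeholders, not the project's actual firmware.

```python
# Hypothetical constants: real values would come from calibrating the frames.
HARVEST_THRESHOLD_G = 1800.0   # assumed weight of a comb full of capped honey
HEATER_SOAK_S = 120            # assumed time to soften propolis/wax and honey

def harvest_cycle(read_weight_g, set_heater, trigger_nitinol_spring, sleep):
    """One pass of the fractional-harvest logic: if the comb is heavy enough,
    heat to unglue the moving parts and improve honey flow, then let the
    heat-activated Nitinol springs separate the cells from the foundation."""
    if read_weight_g() < HARVEST_THRESHOLD_G:
        return False                      # comb not yet ready for harvesting
    set_heater(True)                      # soften propolis/wax, soften honey
    sleep(HEATER_SOAK_S)
    trigger_nitinol_spring()              # actuation separates cells from foundation
    set_heater(False)
    return True
```

Passing the sensor and actuator callbacks in as parameters keeps the loop trivially testable with stubs before it ever touches hardware.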
Procedia PDF Downloads 108
329 Beneath the Leisurely Surface: An Analysis of the Piano Lesson Frenzy among Chinese Middle-Class Parents
Authors: Yijie Wang, Tianyue Wang
Abstract:
In the past two decades, there has been a great 'piano lesson frenzy' among Chinese middle-class families, with a large number of parents adding piano training to children's extra-curricular lists. Superficially, the frenzy reflects a rather 'leisurely' attitude: parents typically claim that piano lessons are 'just for fun' and will hopefully render children's lives more exciting. However, closer scrutiny reveals that there is great social-status anxiety hidden beneath this 'leisurely' surface. Based on pre-interviews with six Chinese middle-class parents who have enthusiastically signed their children up for piano lessons, several tentative analyses are made: 1. Owing to a series of historical and social factors, the Chinese middle class have yet to establish their cultural norms, resulting in great confusion concerning how to cultivate cultural tastes in their offspring. And partly because the middle-class status of the present Chinese generation is mostly self-acquired rather than inherited, parents are much less confident about their cultural resources, which require long-term accumulation, than about material ones. Both factors combine to produce a blind, overcompensating enthusiasm in culture-related education, and the piano frenzy is but one demonstration. 2. The piano has been chosen as the object of the frenzy partly because of its inherent characteristics as well as socially constructed ones. Costly, large in size, and imported from another culture, the piano has acquired the meaning of being exclusive, high-end, and exotic, which renders it a token of top-tier status among Chinese people, and piano lessons for offspring have therefore become parents' paths towards a kind of 'symbolic elevation'. A child playing the piano is an exhibition as well as a psychological assurance of the family's middle-class status. 3.
A closer look at children's piano training process reveals that there are many more anxious than leisurely elements involved. Despite parents' claim that 'piano is mainly for kids to have fun,' the whole process is evidently of a rather 'ascetic' nature, with demands of diligence and a sense of time urgency throughout, and techniques rather than flair or style are emphasized. This either means that the apparent 'piano-for-fun' stance is inauthentic and merely disguises other motives, or that Chinese middle-class parents are not yet capable of shaking off this sense of anxiety even if they sincerely intend to. 4. When viewed in relation to the Chinese formal school system as well as the job market at large, it can be said that by signing children up for piano lessons, parents are consciously or unconsciously seeking to prepare for, or reduce the risks of, their children's future social mobility. In the face of possible failure in the highly crucial, highly competitive formal school system, piano playing as an extra-curricular activity may be conveniently converted into an alternative career path. Besides, in contemporary China, as the occupational structure changes and school-related certificates decline in value, aspects such as a person's overall deportment, which can be gained or proved through piano learning, have been gaining in significance. Keywords: extra-curriculum activities, middle class, piano lesson frenzy, status anxiety
Procedia PDF Downloads 245
328 Searching Knowledge for Engagement in a Worker Cooperative Society: A Proposal for Rethinking Premises
Authors: Soumya Rajan
Abstract:
While delving into the heart of any organization, the structural prerequisites which form the framework of its system allure and sometimes invoke great interest. In an attempt to understand the ecosystem of knowledge that exists in organizations with diverse ownership and legal blueprints, Cooperative Societies, which form a crucial part of the neo-liberal movement in India, were studied. The exploration surprisingly led to the re-designing of at least a set of the researcher's premises on the drivers of engagement in an otherwise structured trade environment. The liberal organizational structure of Cooperative Societies is anchored in certain terminologies: voluntary, democratic, equality, and distributive justice. To condense it in Hubert Calvert's words, 'Co-operation is a form of organization wherein persons voluntarily associated together as human beings on the basis of equality for the promotion of the economic interest of themselves.' In India, the institutions which work under this principle are largely registered under the Cooperative Societies Acts of the central or state laws. A Worker Cooperative Society which originated as a movement in the state of Kerala and spread its wings across the country, Indian Coffee House, was chosen as the enterprise for further inquiry, it being a living example and a highly successful working model in this space. The exploratory study reached out to employees and key stakeholders of Indian Coffee House to understand the nuances of the structure and the scope it provides for engagement. The key questions which took shape in the researcher's mind while engaging in the inquiry were: How has the organization sustained itself despite its principle of accepting employees with no skills into employment and later training and empowering them?
How can a system which has pre-independence and post-independence (independence here meaning colonial independence from Great Britain) existence seek to engage employees within the premise of equality? How was the value of socialism ingrained in a commercial enterprise which has a turnover of several hundred crores each year? How did the vision of a flat structure, way back in the 1940s, find its way into the organizational structure, and how has it continued to remain the way of life? These questions were addressed by the case study research that ensued; placing knowledge as the key premise, the possibilities of engagement of the organization man were pictured. Although the macro or holistic unit of analysis is the organization, it is pivotal to understand the structures and processes which best reflect on the actors. The embedded design adopted in this study delivered insights from the different stakeholder actors from diverse departments. While moving through variables which define and sometimes defy the bounds of rationality, the study brought to light the inherent features of the organization structure and how it influences the actors who form a crucial part of the scheme of things. The research brought forth the key enablers for engagement and specifically explored the standpoint of knowledge in the larger structure of the Cooperative Society. Keywords: knowledge, organizational structure, engagement, worker cooperative
Procedia PDF Downloads 236
327 Religious Discourses and Their Impact on Regional and Global Geopolitics: A Study of Deobandi in India, Pakistan and Afghanistan
Authors: Soumya Awasthi
Abstract:
The spread of radical ideology is possible not merely through public meetings, protests, and mosques but even in schools, seminaries, and madrasas. The rhetoric created around the relationship between religion and conflict has been a primary factor for instigating global conflicts when religion is used to achieve broader objectives. There have been numerous cases of religion-driven conflict around the world, be it the Jewish revolts between 66 AD and 628 AD, the Crusades from 1119 AD, the Cold War period, or the rise of right-wing politics in India. Some of the major developments which reiterate the significance of religion in contemporary times include: (1) the emergence of theocracy in Iran in 1979; (2) the resurgence of worldwide religious beliefs in the post-Soviet space; (3) the emergence of transnational terrorism shaped by a twisted depiction of Islam by the self-proclaimed protectors of the religion. This paper is therefore premised on the argument that religion has always found itself on the periphery of the discipline of International Relations (IR) and has received less attention than it deserves. The focus of the topic is on the discourses of 'Deobandi' and their impact both on the geopolitics of the region, particularly in India, Pakistan, and Afghanistan, and at the global level. Discourse is a mechanism in use since time immemorial and has been a key tool to mobilise the masses against a ruling authority. With the help of field surveys and qualitative and analytical methods of research in religion and international relations, it has been found that there are numerous madrasas that run illegally and are unregistered. These seminaries operate in Khyber-Pakhtunkhwa and the Federally Administered Tribal Areas (FATA).
During the Soviet invasion of Afghanistan in 1979, the relation between religion and geopolitics was highlighted by a sudden spread of radical ideas, finding support from countries like Saudi Arabia (which funded the campaign) and Pakistan (which organised the Saudi funds and set up training camps, both educational and military). During this period there was a huge influence of Wahabi theology on the madrasas, which started with the Deoband philosophy and later became a mix of Wahabi (influenced by Ahmad Ibn Hanbal and Ibn Taymiyyah) and Deobandi philosophy, tending towards fundamentalism. Later, regional geopolitics came to influence global geopolitics through incidents like the attack on the US in 2001 and the bomb blasts in the UK, Indonesia, Turkey, and Israel in the 2000s. In the midst of all this, several scholars pointed towards the Deobandi philosophy as one of the drivers in the creation of armed Islamic groups in Pakistan and Afghanistan. Hence, this paper attempts to understand how Deobandi religious discourses originating from India have changed over the decades, and who the agents of such changes are. It throws light on Deoband from pre-independence till date to create a narrative around religious discourses and the Deobandi philosophy and its spillover impact on the map of global and regional security. Keywords: Deobandi School of Thought, radicalization, regional and global geopolitics, religious discourses, Wahabi movement
Procedia PDF Downloads 217
326 Overcoming the Challenges of Subjective Truths in the Post-Truth Age Through a Critical-Ethical English Pedagogy
Authors: Farah Vierra
Abstract:
Following the 2016 US presidential election and the advancement of the Brexit referendum, the concept of "post-truth", defined by the Oxford Dictionary as "relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief", came into prominent use in public, political, and educational circles. What this essentially entails is that in this age, individuals are increasingly confronted with subjective perpetuations of truth in their discourse spheres that are informed by beliefs and opinions as opposed to any form of coherence with the reality of those whom these truth claims concern. In principle, a subjective delineation of truth is progressive and liberating, especially considering its potential to provide marginalised groups in the diverse communities of our globalised world with the voice to articulate truths that are representative of themselves and their experiences. However, any form of human flourishing that seems to be promised here collapses, as the tenets of subjective truth initially in place to liberate have been distorted through post-truth to allow individuals to purport selective and individualistic truth claims that further oppress and silence certain groups within society without due accountability. Evidence of this is prevalent in the conception of terms such as "alternative facts" and "fake news", which individuals declare when their problematic truth claims are questioned.
Considering the pervasiveness of post-truth and the ethical issues that accompany it, educators and scholars alike have increasingly noted the need to adapt educational practices and pedagogies to account for the diminishing objectivity of truth in the twenty-first century, especially because students, as digital natives, find themselves in the firing line of post-truth, engulfed in digital societies that proliferate post-truth through the surge of truth claims allowed on various media sites. In an attempt to equip students with the vital skills to navigate the post-truth age and oppose its proliferation of social injustices, English educators find themselves having to devise instructional strategies that not only teach students how they can critically and ethically scrutinise truth claims but also teach them to mediate the subjectivity of truth in a manner that does not undermine the voices of diverse communities. In hopes of providing educators with a roadmap to do so, this paper will first examine the challenges that confront students as a result of post-truth. Following this, the paper will elucidate the role English education can play in helping students overcome the complex ramifications of post-truth. Scholars have consistently touted the affordances of literary texts in providing students with imagined spaces to explore societal issues through a critical discernment of language and an ethical engagement with its narrative developments. Therefore, this paper will explain and demonstrate how literary texts, when used alongside a critical-ethical post-truth pedagogy that equips students with interpretive strategies informed by literary traditions such as literary and ethical criticism, can be effective in helping students develop the pertinent skills to comprehensively examine truth claims and overcome the challenges of the post-truth age. Keywords: post-truth, pedagogy, ethics, English, education
Procedia PDF Downloads 72
325 Experimental and Modelling Performances of a Sustainable Integrated System of Conditioning for Bee-Pollen
Authors: Andrés Durán, Brian Castellanos, Marta Quicazán, Carlos Zuluaga-Domínguez
Abstract:
Bee-pollen is an apiculture-derived food product with a growing appreciation among consumers, given its remarkable nutritional and functional composition, in particular protein (24%), dietary fiber (15%), phenols (15–20 GAE/g), and carotenoids (600–900 µg/g). These properties are determined by the geographical and climatic characteristics of the region where it is collected. Several countries are recognized for their pollen production, e.g., China, the United States, Japan, and Spain, among others. Beekeepers use traps at the entrance of the hive, where bee-pollen is collected. After the removal of foreign particles and drying, the product is ready to be marketed. However, in countries located along the equator, the absence of seasons and a constant tropical climate throughout the year favor more rapid spoilage of foods with elevated water activity. The climatic conditions also trigger the proliferation of microorganisms and insects. This, added to the fact that beekeepers usually do not have adequate processing systems for bee-pollen, leads to deficiencies in the quality and safety of the product. In contrast, the Andean region of South America, lying on the equator, typically has a high production of bee-pollen of up to 36 kg/year/hive, four times higher than in countries with marked seasons. This region also lies at altitudes above 2,500 meters above sea level, with extreme solar ultraviolet radiation all year long. As a defense mechanism against radiation, plants produce more secondary metabolites acting as antioxidant agents; hence, plant products such as bee-pollen contain remarkably more phenolics and carotenoids than those collected in other places. Considering this, the improvement of bee-pollen processing facilities through technical modifications and the implementation of an integrated cleaning and drying system for the product in an apiary in the area was proposed.
The beehives were modified through the installation of alternative bee-pollen traps to avoid sources of contamination. The processing facility was modified according to Good Manufacturing Practices, implementing the combined use of a cabin dryer with temperature control and forced airflow and a greenhouse-type solar drying system. Additionally, for the separation of impurities, a cyclone-type system was implemented, complementary to screening equipment. With these modifications, a decrease in the content of impurities and in the microbiological load of bee-pollen was seen from the first stages, principally a reduction in the presence of molds and yeasts and in the number of impurities of animal origin. The use of the greenhouse solar dryer integrated with the cabin dryer allowed the processing of larger quantities of product with shorter waiting times in storage, reaching a moisture content of about 6% and a water activity lower than 0.6, appropriate for the conservation of bee-pollen. Additionally, the contents of functional and nutritional compounds were not affected; an increase of up to 25% in phenol content was even observed, along with a non-significant decrease in carotenoid content and antioxidant activity. Keywords: beekeeping, drying, food processing, food safety
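The drying endpoint reported above (about 6% moisture content and water activity below 0.6) can be expressed as a small helper; a minimal sketch using the abstract's stated thresholds:

```python
def ready_for_storage(moisture_pct, water_activity):
    """Endpoint criterion from the study: roughly 6% moisture content and a
    water activity below 0.6 are considered appropriate for conserving
    bee-pollen after the combined greenhouse/cabin drying process."""
    return moisture_pct <= 6.0 and water_activity < 0.6
```

In practice, such a check would gate the transfer of each batch from the dryer to storage.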
Procedia PDF Downloads 104
324 An Efficient Process Analysis and Control Method for Tire Mixing Operation
Authors: Hwang Ho Kim, Do Gyun Kim, Jin Young Choi, Sang Chul Park
Abstract:
Since the tire production process is very complicated, company-wide management of it is very difficult, necessitating considerable amounts of capital and labor. Thus, productivity should be enhanced and kept competitive by developing and applying effective production plans. Among the major processes of tire manufacturing, consisting of mixing, component preparation, building, and curing, the mixing process is an essential and important step because the main component of a tire, called the compound, is formed at this step. The compound, a rubber mixture with various characteristics, plays its own role required for the tire as a finished product. Meanwhile, scheduling the tire mixing process is similar to the flexible job shop scheduling problem (FJSSP) because various kinds of compounds have their unique orders of operations, and a set of alternative machines can be used to process each operation. In addition, the setup time required for different operations may differ due to the alteration of additives. In other words, each operation of the mixing process requires a different setup time depending on the previous one, and this feature, called sequence-dependent setup time (SDST), is a very important issue in traditional scheduling problems such as flexible job shop scheduling problems. However, despite its importance, there exist few research works dealing with the tire mixing process. Thus, in this paper, we consider the scheduling problem for the tire mixing process and suggest an efficient particle swarm optimization (PSO) algorithm to minimize the makespan for completing all the required jobs belonging to the process. Specifically, we design a particle encoding scheme for the considered scheduling problem, including a processing sequence for compounds and machine allocation information for each job operation, and a method for generating a tire mixing schedule from a given particle.
At each iteration, the coordinates and velocities of the particles are updated, and the current best solution is compared with the new solution. This procedure is repeated until a stopping condition is satisfied. The performance of the proposed algorithm is validated through a numerical experiment using small-sized problem instances representing the tire mixing process. Furthermore, we compare the solution of the proposed algorithm with that obtained by solving a mixed integer linear programming (MILP) model developed in previous research work. As a performance measure, we define an error rate which evaluates the difference between the two solutions. As a result, we show that the PSO algorithm proposed in this paper outperforms the MILP model with respect to effectiveness and efficiency. As a direction for future work, we plan to consider scheduling problems in other processes, such as building and curing. We can also extend our current work by considering other performance measures, such as weighted makespan or processing times affected by aging or learning effects. Keywords: compound, error rate, flexible job shop scheduling problem, makespan, particle encoding scheme, particle swarm optimization, sequence dependent setup time, tire mixing process
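To illustrate the particle-encoding idea, the sketch below decodes a continuous priority vector (random keys) into a compound processing order, assigns each operation greedily to an allowed machine under sequence-dependent setup times, and runs a standard PSO update to minimize makespan. The four-compound, two-machine instance and all parameter values are invented for illustration; they are not the paper's data or exact encoding.

```python
import random

# Toy instance (hypothetical): PROC[j][m] = processing time of compound j on
# machine m (None = machine not allowed); SETUP[m][a][b] = sequence-dependent
# setup time on machine m when compound b follows compound a.
PROC = [[4, 5], [3, None], [None, 2], [6, 4]]
SETUP = [[[0, 2, 1, 3], [2, 0, 2, 1], [1, 2, 0, 2], [3, 1, 2, 0]] for _ in range(2)]

def decode(keys):
    """Decode a particle (priority vector) into a schedule; return its makespan."""
    order = sorted(range(len(keys)), key=lambda j: keys[j])
    ready = [0, 0]        # machine-ready times
    last = [None, None]   # last compound processed on each machine (for SDST)
    for j in order:
        best = None
        for m, p in enumerate(PROC[j]):
            if p is None:
                continue
            setup = 0 if last[m] is None else SETUP[m][last[m]][j]
            finish = ready[m] + setup + p
            if best is None or finish < best[0]:
                best = (finish, m)
        finish, m = best
        ready[m] = finish
        last[m] = j
    return max(ready)

def pso(n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Standard PSO over the random-key encoding; returns the best makespan."""
    rng = random.Random(seed)
    dim = len(PROC)
    xs = [[rng.random() for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in xs]
    pbest_f = [decode(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = list(pbest[g]), pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            f = decode(xs[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = list(xs[i]), f
                if f < gbest_f:
                    gbest, gbest_f = list(xs[i]), f
    return gbest_f
```

Here the machine allocation is decoded greedily from the priority vector for brevity; the paper's encoding additionally carries explicit machine-allocation information per operation.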
Procedia PDF Downloads 265
323 Linking Enhanced Resting-State Brain Connectivity with the Benefit of Desirable Difficulty to Motor Learning: A Functional Magnetic Resonance Imaging Study
Authors: Chien-Ho Lin, Ho-Ching Yang, Barbara Knowlton, Shin-Leh Huang, Ming-Chang Chiang
Abstract:
Practicing motor tasks arranged in an interleaved order (interleaved practice, or IP) generally leads to better learning than practicing tasks in a repetitive order (repetitive practice, or RP), an example of how desirable difficulty during practice benefits learning. Greater difficulty during practice, e.g., IP, is associated with greater brain activity, measured by a higher blood-oxygen-level-dependent (BOLD) signal in functional magnetic resonance imaging (fMRI), in the sensorimotor areas of the brain. In this study, resting-state fMRI was applied to investigate whether an increase in resting-state brain connectivity immediately after practice predicts the benefit of desirable difficulty to motor learning. 26 healthy adults (11M/15F, age = 23.3±1.3 years) practiced two sets of three sequences arranged in a repetitive or an interleaved order over 2 days, followed by a retention test on Day 5 to evaluate learning. On each practice day, fMRI data were acquired in a resting state after practice. The resting-state fMRI data were decomposed using a group-level spatial independent component analysis (ICA), yielding 9 independent components (ICs) matched to the precuneus network, primary visual networks (two ICs, denoted by I and II respectively), sensorimotor networks (two ICs, denoted by I and II respectively), the right and the left frontoparietal networks, the occipito-temporal network, and the frontal network. A weighted resting-state functional connectivity (wRSFC) was then defined to incorporate information from within- and between-network brain connectivity. The within-network functional connectivity between a voxel and an IC was gauged by a z-score derived from the Fisher transformation of the IC map. The between-network connectivity was derived from the cross-correlation of time courses across all possible pairs of ICs, leading to a symmetric nc x nc matrix of cross-correlation coefficients, denoted by C = (pᵢⱼ).
Here pᵢⱼ is the extremum of the cross-correlation between ICs i and j, and nc = 9 is the number of ICs. This component-wise cross-correlation matrix C was then projected to the voxel space, with the weights for each voxel set to the z-score that represents the above within-network functional connectivity. The wRSFC map thus incorporates the global characteristics of brain networks measured by the between-network connectivity, and the spatial information contained in the IC maps measured by the within-network connectivity. Pearson correlation analysis revealed that a greater IP-minus-RP difference in wRSFC was positively correlated with the RP-minus-IP difference in response time on Day 5, particularly in brain regions crucial for motor learning, such as the right dorsolateral prefrontal cortex (DLPFC) and the right premotor and supplementary motor cortices. This indicates that enhanced resting brain connectivity during the early phase of memory consolidation is associated with enhanced learning following interleaved practice, and as such wRSFC could be applied as a biomarker that measures the beneficial effects of desirable difficulty on motor sequence learning. Keywords: desirable difficulty, functional magnetic resonance imaging, independent component analysis, resting-state networks
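A compact sketch of the wRSFC construction: the between-network term takes the extremum of the lagged cross-correlation between IC time courses, and the component-wise matrix C is projected to voxel space using the IC-map z-scores as weights. The per-voxel quadratic form z(v)ᵀCz(v) used below is one plausible reading of that projection, not necessarily the authors' exact formula, and the array shapes and lag range are assumptions.

```python
import numpy as np

def cross_corr_extremum(x, y, max_lag=10):
    """Largest-magnitude value of the normalized cross-correlation between two
    time courses over lags -max_lag..max_lag (the pᵢⱼ of the abstract)."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    n = len(x)
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            r = np.dot(x[lag:], y[:n - lag]) / (n - lag)
        else:
            r = np.dot(x[:n + lag], y[-lag:]) / (n + lag)
        if abs(r) > abs(best):
            best = r
    return best

def wrsfc(ic_maps, ic_ts, max_lag=10):
    """ic_maps: (nc, nvox) z-scored IC spatial maps (within-network term).
    ic_ts: (nt, nc) IC time courses (between-network term).
    Returns a (nvox,) weighted resting-state functional connectivity map."""
    nc = ic_maps.shape[0]
    C = np.eye(nc)
    for i in range(nc):
        for j in range(i + 1, nc):
            C[i, j] = C[j, i] = cross_corr_extremum(ic_ts[:, i], ic_ts[:, j], max_lag)
    # Project the component-wise matrix to voxel space, weighting by the
    # within-network z-scores: wRSFC(v) = z(v)^T C z(v).
    return np.einsum('iv,ij,jv->v', ic_maps, C, ic_maps)
```

The einsum contracts C against the z-score maps on both sides, so a voxel scores highly when it loads strongly on components that are themselves strongly cross-correlated.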
Procedia PDF Downloads 203
322 Evaluation: Developing an Appropriate Survey Instrument for E-Learning
Authors: Brenda Ravenscroft, Ulemu Luhanga, Bev King
Abstract:
A comprehensive evaluation of online learning needs to include a blend of educational design, technology use, and online instructional practices that integrate technology appropriately for developing and delivering quality online courses. Research shows that classroom-based evaluation tools do not adequately capture the dynamic relationships between content, pedagogy, and technology in online courses. Furthermore, studies suggest that using classroom evaluations for online courses yields lower than normal scores for instructors and may affect faculty negatively in terms of administrative decisions. In 2014, the Faculty of Arts and Science at Queen's University responded to this evidence by seeking an alternative to the university-mandated evaluation tool, which is designed for classroom learning. The Faculty is deeply engaged in e-learning, offering a large variety of online courses and programs in the sciences, social sciences, humanities, and arts. This paper describes the process by which a new student survey instrument for online courses was developed and piloted, the methods used to analyze the data, and the ways in which the instrument was subsequently adapted based on the results. It concludes with a critical reflection on the challenges of evaluating e-learning. The Student Evaluation of Online Teaching Effectiveness (SEOTE), developed by Arthur W. Bangert in 2004 to assess constructivist-compatible online teaching practices, provided the starting point. Modifications were made in order to allow the instrument to serve the two functions required by the university: student survey results provide the instructor with feedback to enhance their teaching, and also provide the institution with evidence of teaching quality in personnel processes.
Changes were therefore made to the SEOTE to distinguish more clearly between evaluation of the instructor's teaching and evaluation of the course design, since, in the online environment, the instructor is not necessarily the course designer. After the first pilot phase, involving 35 courses, the results were analyzed using Stobart's validity framework as a guide. This process included statistical analyses of the data to test for reliability and validity, student and instructor focus groups to ascertain the tool's usefulness in terms of the feedback it provided, and an assessment of the utility of the results by the Faculty's e-learning unit responsible for supporting online course design. A set of recommendations led to further modifications to the survey instrument prior to a second pilot phase involving 19 courses. Following the second pilot, statistical analyses were repeated, and more focus groups were held, this time involving deans and other decision makers, to determine the usefulness of the survey results in personnel processes. As a result of this inclusive process and robust analysis, the modified SEOTE instrument is currently being considered for adoption as the standard evaluation tool for all online courses at the university. Audience members at this presentation will be stimulated to consider factors that differentiate effective evaluation of online courses from that of classroom-based teaching. They will gain insight into strategies for introducing a new evaluation tool in a unionized institutional environment, and into methodologies for evaluating the tool itself. Keywords: evaluation, online courses, student survey, teaching effectiveness
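Reliability testing of a survey instrument typically involves an internal-consistency estimate such as Cronbach's alpha. The abstract does not specify which statistics were computed, so the following is a generic, plain-Python illustration rather than the study's actual analysis.

```python
def cronbach_alpha(scores):
    """scores: list of respondents, each a list of item ratings.
    Returns Cronbach's alpha, a standard internal-consistency estimate:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)."""
    n_items = len(scores[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [var([resp[i] for resp in scores]) for i in range(n_items)]
    total_var = var([sum(resp) for resp in scores])
    return n_items / (n_items - 1) * (1 - sum(item_vars) / total_var)
```

Values near 1 indicate that the survey items measure a common construct consistently.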
Procedia PDF Downloads 266
321 Toward the Decarbonisation of EU Transport Sector: Impacts and Challenges of the Diffusion of Electric Vehicles
Authors: Francesca Fermi, Paola Astegiano, Angelo Martino, Stephanie Heitel, Michael Krail
Abstract:
In order to achieve the targeted emission reductions for the decarbonisation of the European economy by 2050, fundamental contributions are required from both the energy and transport sectors. The objective of this paper is to analyse the impacts of a large-scale diffusion of e-vehicles, whether battery-based or fuel-cell, together with the implementation of transport policies aimed at decreasing the use of motorised private modes, in order to achieve greenhouse gas emission reduction goals in the context of a future high share of renewable energy. The analysis of the impacts and challenges of future scenarios on the transport sector is performed with the ASTRA (ASsessment of TRAnsport Strategies) model. ASTRA is a strategic system-dynamics model at the European scale (EU28 countries, Switzerland, and Norway), consisting of different sub-modules related to specific aspects: the transport system (e.g., passenger trips, tonnes moved), the vehicle fleet (composition and evolution of technologies), the demographic system, the economic system, and the environmental system (energy consumption, emissions). A key feature of ASTRA is that the modules are linked together: changes in one system are transmitted to other systems and can feed back to the original source of variation. Thanks to its multidimensional structure, ASTRA is capable of simulating a wide range of impacts stemming from the application of transport policy measures: the model addresses direct impacts as well as second-level and third-level impacts. The simulation of the different scenarios is performed within the REFLEX project, where the ASTRA model is employed in combination with several energy models in a comprehensive modelling system. From the transport sector perspective, some of the impacts are driven by the trend of electricity prices estimated by the energy modelling system.
Nevertheless, the major drivers towards a low-carbon transport sector are policies related to the increased fuel efficiency of conventional drivetrain technologies, improvement of demand management (e.g., increased public transport and car-sharing services/usage), and diffusion of environmentally friendly vehicles (e.g., electric vehicles). The final modelling results of the REFLEX project will be available from October 2018. The analysis of the impacts and challenges of future scenarios is performed in terms of transport, environmental, and social indicators. The diffusion of e-vehicles produces a considerable reduction of future greenhouse gas emissions, although the decarbonisation target can be achieved only with the contribution of complementary transport policies on demand management and support for the deployment of low-emission alternative energy for non-road transport modes. The paper explores the implications through time of transport policy measures on mobility and the environment, underlining to what extent they can contribute to a decarbonisation of the transport sector. Acknowledgements: The results refer to the REFLEX project, which has received grants from the European Union's Horizon 2020 research and innovation program under Grant Agreement No. 691685. Keywords: decarbonisation, greenhouse gas emissions, e-mobility, transport policies, energy
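The module coupling and feedback that characterize ASTRA can be illustrated with a deliberately tiny two-module system-dynamics loop. The equations, parameter names, and values below are invented for illustration only and bear no relation to ASTRA's actual formulation.

```python
# Toy sketch: the economy module drives transport demand, and transport costs
# feed back into economic output, iterated year by year (illustrative only).

def simulate(years=10, gdp0=100.0, demand_elasticity=0.8, cost_per_trip=0.02):
    """Iterate two coupled modules; returns a list of (gdp, demand) per year."""
    gdp = gdp0
    path = []
    for _ in range(years):
        demand = demand_elasticity * gdp          # economy module -> transport demand
        transport_cost = cost_per_trip * demand   # transport module -> cost burden
        gdp = gdp * 1.02 - transport_cost         # growth minus the feedback term
        path.append((gdp, demand))
    return path
```

Even this toy loop exhibits the defining property of such models: a change introduced in one module (e.g., a lower cost per trip) propagates to the other and then feeds back to its own source.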
Procedia PDF Downloads 153
320 Comparative Research on Culture-Led Regeneration across Cities in China
Authors: Fang Bin Guo, Emma Roberts, Haibin Du, Yonggang Wang, Yu Chen, Xiuli Ge
Abstract:
This paper explores the findings so far from a major externally funded project which operates internationally in China, Germany and the UK. The research team is working in the context of the redevelopment of post-industrial sites in China and how these might serve as platforms for creative enterprises, allowing the economy and social welfare to flourish. Results from the project are anticipated to inform urban design policies in China and possibly farther afield. The research has utilised ethnographic studies and participatory design methods to investigate alternative strategies for the sustainable urban renewal of China’s post-industrial areas. Additionally, it has undertaken comparative studies of successful examples of European and Chinese urban regeneration cases. The international cross-disciplinary team has been seeking different opportunities for developing relevant creative industries whilst retaining cultural and industrial heritage. This paper will explore the research conducted so far by the team and offer initial findings. The findings point to the development challenges cities face in protecting local culture and heritage, the history of their industries, and the transformation of their local economies. The preliminary results and pilot analysis of the current research have demonstrated that local government policymakers, business investors/developers and creative industry practitioners are the three major stakeholders that will impact city revitalisations. These groups are expected to work together with a shared vision in order for redevelopments to be successful. Meanwhile, local geography, history, culture, politics, economy and ethnography have been identified as important factors that impact project design and development during urban transformations. Data is being processed from the team’s research conducted across the focal Western and Chinese cities.
This has provided theoretical guidance and practical support to the development of significant experimental projects. Many of these were re-examined from a more international perspective, and adjustments have been made based on the conclusions of the research. The observations and research are already generating design solutions in terms of ascertaining essential site components, layouts, visual design and practical facilities for regenerated sites. Two significant projects undertaken by this project team have been nominated by the central Chinese government as the most successful exemplars. They have been listed as outstanding national industrial heritage projects; in particular, one of them was nominated by ArchDaily as Building of the Year 2019, so this project outcome has made a substantial contribution to research and innovation. In summary, this paper will outline the funded project, discuss the work conducted so far, and pinpoint the initial discoveries. It will detail the future steps and indicate how these will impact national and local governments in China, designers, local citizens and building users.
Keywords: cultural & industrial heritages, ethnographic research, participatory design, regeneration of post-industrial sites, sustainable
Procedia PDF Downloads 147
319 Optimal Framework of Policy Systems with Innovation: Use of Strategic Design for Evolution of Decisions
Authors: Yuna Lee
Abstract:
In the current policy process, there has been a growing interest in more open approaches that incorporate creativity and innovation, based on forecasting groups composed of the public and experts together, into scientific data-driven foresight methods to implement more effective policymaking. In particular, citizen participation as collective intelligence in policymaking with design, and deeper scales of innovation at the global level, have been developed, and human-centred design thinking is considered one of the most promising methods for strategic foresight. Yet, there is a lack of a common theoretical foundation for a comprehensive approach to the current situation and the post-COVID-19 era, and substantial changes in policymaking practice remain insignificant and proceed by trial and error. This project hypothesized that rigorously developed policy systems and tools that support strategic foresight by considering public understanding could maximize ways to create new possibilities for a preferable future; however, this must involve a better understanding of behavioural insights, including individual and cultural values, profit motives and needs, and psychological motivations, in order to implement holistic and multilateral foresight and create more positive possibilities. To what extent is a policymaking system theoretically possible that incorporates holistic and comprehensive foresight into policy process implementation, assuming that theory and practice, in reality, are different and not connected? What components and environmental conditions should be included in a strategic foresight system to enhance policymakers’ capacity to predict alternative futures, or to detect uncertainties of the future more accurately? And, compared to the required environmental conditions, what are the environmental vulnerabilities of the current policymaking system?
In this light, this research contemplates the question of how effectively policymaking practices have been implemented through the synthesis of scientific, technology-oriented innovation with strategic design, for tackling complex societal challenges and devising more significant insights to make society greener and more liveable. Here, this study conceptualizes a new collaborative form of strategic foresight that aims to maximize mutual benefits between policy actors and citizens through cooperation, drawing on evolutionary game theory. This study applies a mixed methodology, including interviews with policy experts, to cases in which digital transformation and strategic design provided future-oriented solutions or directions for cities’ sustainable development goals and society-wide urgent challenges such as COVID-19. As a result, artistic and sensory interpretive capabilities fostered through strategic design promote a concrete form of ideas toward a stable connection from the present to the future and enhance understanding and active cooperation among decision-makers, stakeholders, and citizens. Ultimately, the improved theoretical foundation proposed in this study is expected to help strategic responses to the highly interconnected future changes of the post-COVID-19 world.
Keywords: policymaking, strategic design, sustainable innovation, evolution of cooperation
Procedia PDF Downloads 194
318 Seawater Desalination for Production of Highly Pure Water Using a Hydrophobic PTFE Membrane and Direct Contact Membrane Distillation (DCMD)
Authors: Ahmad Kayvani Fard, Yehia Manawi
Abstract:
Qatar’s primary source of fresh water is seawater desalination. Amongst the major processes that are commercially available, the most common large-scale techniques are multi-stage flash distillation (MSF), multi-effect distillation (MED), and reverse osmosis (RO). Although commonly used, these three processes are highly expensive owing to high energy input requirements and high operating costs, together with the maintenance demands and stress induced on the systems in harsh alkaline media. Beyond cost, the environmental footprint of these desalination techniques is significant: damage to the marine ecosystem, extensive land use, and the discharge of large quantities of greenhouse gases, amounting to a substantial carbon footprint. One less energy-intensive technique based on membrane separation, pursued to reduce both the carbon footprint and operating costs, is membrane distillation (MD). Having emerged in the 1960s, MD is an alternative technology for water desalination that has attracted increasing attention since the 1980s. The MD process involves the evaporation of a hot feed, typically below the boiling point of brine at standard conditions, by creating a water vapor pressure difference across a porous, hydrophobic membrane. The main advantages of MD compared to other commercially available technologies (MSF and MED), and especially RO, are the reduction of membrane and module stress due to the absence of trans-membrane pressure, a lower impact of contaminant fouling on the distillate since only water vapor is transferred, the ability to utilize low-grade or waste heat from the oil and gas industries to heat the feed to the required temperature difference across the membrane, superior water quality, and relatively lower capital and operating costs. To achieve the objective of this study, a state-of-the-art flat-sheet cross-flow DCMD bench-scale unit was designed, commissioned, and tested.
The objective of this study is to analyze the characteristics and morphology of a membrane suitable for DCMD through SEM imaging and contact angle measurement, and to study the water quality of the distillate produced by the DCMD bench-scale unit. Comparison with available literature data is undertaken where appropriate, and laboratory data is used to compare DCMD distillate quality with that of other desalination techniques and standards. SEM analysis showed that the PTFE membrane used for the study has a contact angle of 127° and a highly porous surface, supported by a less porous, larger-pore-size PP membrane. ICP and IC analysis of the effect of feed salinity and temperature on distillate water quality showed that, for any feed salinity and across different feed temperatures (up to 70°C), the electrical conductivity of the distillate is less than 5 μS/cm, with 99.99% salt rejection. DCMD thus proved to be a feasible and effective process capable of consistently producing high-quality distillate from very high-salinity feed solutions (i.e. 100,000 mg/L TDS), with a substantial quality advantage over other desalination methods such as RO and MSF.
Keywords: membrane distillation, waste heat, seawater desalination, membrane, freshwater, direct contact membrane distillation
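As an aside, the reported salt rejection can be reproduced from the feed and distillate figures with a one-line calculation; the conversion factor from conductivity to TDS used below (about 0.5 mg/L per μS/cm, typical for dilute NaCl solutions) is an assumption, not a value taken from the study.

```python
# Back-of-envelope check of the reported salt rejection. The conversion
# from conductivity to TDS (~0.5 mg/L per uS/cm, typical for dilute NaCl)
# is an assumed factor, not a value taken from the study.

def salt_rejection(feed_tds_mg_l, permeate_tds_mg_l):
    """Percent salt rejection, R = (1 - Cp / Cf) * 100."""
    return (1.0 - permeate_tds_mg_l / feed_tds_mg_l) * 100.0

feed_tds = 100_000                  # mg/L, the highest feed salinity tested
permeate_tds = 5 * 0.5              # 5 uS/cm distillate -> ~2.5 mg/L TDS
rejection = salt_rejection(feed_tds, permeate_tds)
# rejection evaluates to 99.9975, consistent with the reported 99.99%
```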
Procedia PDF Downloads 227
317 Identification and Characterization of Small Peptides Encoded by Small Open Reading Frames using Mass Spectrometry and Bioinformatics
Authors: Su Mon Saw, Joe Rothnagel
Abstract:
Short open reading frames (sORFs) located in the 5′UTRs of mRNAs are known as uORFs. Characterization of uORF-encoded peptides (uPEPs), i.e., a subset of short open reading frame encoded peptides (sPEPs), and their translational regulation leads to an understanding of the causes of genetic disease, proteome complexity and the development of treatments. The existence of uORF products within the cellular proteome can be detected by LC-MS/MS. Demonstrating that a uORF is translated into a uPEP, and identifying that uPEP, will allow its characterization: structure, function, subcellular localization, evolutionary maintenance (conservation in human and other species) and abundance in cells. It is hypothesized that a subset of sORFs are translatable and that their encoded sPEPs are functional and endogenously expressed, contributing to the complexity of the eukaryotic cellular proteome. This project aimed to investigate whether sORFs encode functional peptides. Liquid chromatography-mass spectrometry (LC-MS) and bioinformatics were thus employed. Because sPEPs are likely to be of low abundance and small size, efficient peptide enrichment strategies that enrich small proteins and deplete the sub-proteome of large and abundant proteins are crucial for identifying sPEPs. Low molecular weight proteins were extracted using SDS-PAGE from Human Embryonic Kidney (HEK293) cells and strong cation exchange chromatography (SCX) from the secreted fraction of HEK293 cells. Extracted proteins were digested by trypsin into peptides, which were detected by LC-MS/MS. The MS/MS data obtained were searched against Swiss-Prot using MASCOT version 2.4 to filter out known proteins, and all unmatched spectra were re-searched against the human RefSeq database. ProteinPilot v5.0.1 was used to identify sPEPs by searching against the human RefSeq, Vanderperre and Human Alternative Open Reading Frame (HaltORF) databases. Potential sPEPs were analyzed by bioinformatics.
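For readers unfamiliar with uORF scanning, a minimal sketch of how candidate uORFs can be located in a 5′UTR sequence follows; the example sequence and the length cutoff are hypothetical, and real pipelines (e.g. ORF Finder) additionally handle both strands, alternative start codons and database matching.

```python
# Minimal uORF scanner (hypothetical example, forward strand only): find
# each ATG in a 5'UTR and read codons to the first in-frame stop. Real
# pipelines also handle alternative start codons and database matching.
STOP_CODONS = {"TAA", "TAG", "TGA"}

def find_uorfs(utr5, max_peptide_aa=100):
    """Return (start, end, peptide_length_aa) for each ATG...stop uORF."""
    uorfs = []
    for start in range(len(utr5) - 2):
        if utr5[start:start + 3] != "ATG":
            continue
        for pos in range(start + 3, len(utr5) - 2, 3):
            if utr5[pos:pos + 3] in STOP_CODONS:
                aa_len = (pos - start) // 3   # codons before the stop
                if aa_len <= max_peptide_aa:
                    uorfs.append((start, pos + 3, aa_len))
                break
    return uorfs

# hypothetical 5'UTR containing one short uORF: ATG GCC TAA (Met-Ala)
print(find_uorfs("GGCATGGCCTAACCGG"))   # -> [(3, 12, 2)]
```

The 100-amino-acid cutoff mirrors the conventional upper bound for sPEPs mentioned in the abstract.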
Since SDS-PAGE could not resolve proteins <20 kDa, this approach could not identify sPEPs. All MASCOT-identified peptide fragments were parts of main open reading frames (mORFs), according to ORF Finder and blastp searches. No sPEP was detected, and the existence of sPEPs could not be confirmed in this study. Thirteen sORFs shown to be translated in HEK293 cells by mass spectrometry in previous studies were characterized by bioinformatics. The sPEPs identified in those studies were <100 amino acids and <15 kDa. The bioinformatic results showed that sORFs are translated into sPEPs and contribute to proteome complexity. The uPEP translated from the uORF of SLC35A4 was strongly conserved between human and mouse, while the uPEP translated from the uORF of MKKS was strongly conserved between human and Rhesus monkey. Cross-species conservation of uORFs in association with protein translation strongly suggests evolutionary maintenance of the coding sequence and indicates probable functional expression of the peptides encoded within these uORFs. The translation of sORFs was confirmed by mass spectrometry, and the resulting sPEPs were characterized by bioinformatics.
Keywords: bioinformatics, HEK293 cells, liquid chromatography-mass spectrometry, ProteinPilot, strong cation exchange chromatography, SDS-PAGE, sPEPs
Procedia PDF Downloads 188
316 A Variational Reformulation for the Thermomechanically Coupled Behavior of Shape Memory Alloys
Authors: Elisa Boatti, Ulisse Stefanelli, Alessandro Reali, Ferdinando Auricchio
Abstract:
Thanks to their unusual properties, shape memory alloys (SMAs) are good candidates for advanced applications in a wide range of engineering fields, such as automotive, robotics, civil, biomedical, and aerospace. In the last decades, the ever-growing interest in such materials has boosted several research studies aimed at modeling their complex nonlinear behavior in an effective and robust way. Since the constitutive response of SMAs is strongly thermomechanically coupled, the non-isothermal evolution of the material must be taken into consideration. The present study considers an existing three-dimensional phenomenological model for SMAs, able to reproduce the main SMA properties while maintaining a simple, user-friendly structure, and proposes a variational reformulation of the full non-isothermal version of the model. While the considered model has been thoroughly assessed in an isothermal setting, the proposed formulation takes into account the full non-isothermal problem. In particular, the reformulation is inspired by the GENERIC (General Equations for Non-Equilibrium Reversible-Irreversible Coupling) formalism, and is based on a generalized gradient flow of the total entropy, related to thermal and mechanical variables. This phrasing of the model is new and allows for a discussion of the model from both theoretical and numerical points of view. Moreover, it directly implies the dissipativity of the flow. A semi-implicit time-discrete scheme is also presented for the fully coupled thermomechanical system, and is proven unconditionally stable and convergent. The corresponding algorithm is then implemented, under a space-homogeneous temperature field assumption, and tested under different conditions. The core of the algorithm is composed of a mechanical subproblem and a thermal subproblem. The iterative scheme is solved by a generalized Newton method.
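The full entropy-based SMA formulation is beyond the scope of an abstract, but the key numerical ingredients named here (a gradient-flow structure, an implicit time-discrete scheme solved by Newton iteration, and energy dissipativity along the discrete flow) can be illustrated on a deliberately simple scalar surrogate; the double-well energy below is a hypothetical stand-in, not the authors' model.

```python
# Scalar surrogate (NOT the authors' SMA model) for the numerical ideas
# named above: a gradient flow u' = -E'(u) for a double-well energy
# E(u) = (u^2 - 1)^2 / 4, discretised by implicit Euler with each step
# solved by Newton's method; the discrete energy decreases monotonically.

def E(u):
    return (u * u - 1.0) ** 2 / 4.0

def dE(u):
    return u * (u * u - 1.0)

def d2E(u):
    return 3.0 * u * u - 1.0

def implicit_euler_step(u_old, dt, iters=30, tol=1e-12):
    # Newton's method on F(u) = u - u_old + dt * dE(u) = 0; the incremental
    # problem is strictly convex here since 1 + dt * d2E(u) >= 0.9 for dt = 0.1
    u = u_old
    for _ in range(iters):
        step = (u - u_old + dt * dE(u)) / (1.0 + dt * d2E(u))
        u -= step
        if abs(step) < tol:
            break
    return u

u, dt = 0.4, 0.1
energies = [E(u)]
for _ in range(100):
    u = implicit_euler_step(u, dt)
    energies.append(E(u))
# u relaxes to the nearest well at u = 1 and E never increases along the flow
```

The monotone decay of the discrete energy is the scalar analogue of the dissipativity that the paper's variational structure guarantees for the total entropy.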
Numerous uniaxial and biaxial tests are reported to assess the performance of the model and algorithm, with variable imposed strain, strain rate, heat exchange properties, and external temperature. In particular, heat exchange with the environment is the only source of rate-dependency in the model. The reported curves clearly display the interdependence between phase transformation strain and material temperature. The full thermomechanical coupling makes it possible to reproduce the exothermic and endothermic effects during forward and backward phase transformation, respectively. The numerical tests have thus demonstrated that the model can appropriately reproduce the coupled SMA behavior under different loading conditions and rates. Moreover, the algorithm has proved effective and robust. Further developments are being considered, such as the extension of the formulation to the finite-strain setting and the study of the boundary value problem.
Keywords: generalized gradient flow, GENERIC formalism, shape memory alloys, thermomechanical coupling
Procedia PDF Downloads 221
315 From the Classroom to Digital Learning Environments: An Action Research on Pedagogical Practices in Higher Education
Authors: Marie Alexandre, Jean Bernatchez
Abstract:
This paper focuses on the complexity of the classroom-to-distance-learning transition process. Our action research aims to support the transition from classroom to distance learning for teachers in higher education with regard to pedagogical practices that can meet the various needs of students using digital learning environments. In Quebec and elsewhere in the world, the advent of digital education is helping to transform teaching, which is significantly changing the role of teachers. While distance education implies a dissociation of teaching and learning, to a variable degree, in space and time, distance education (DE) is increasingly becoming a preferred option for maintaining the delivery of certain programs and providing access to quality activities throughout Quebec. Given the impact of teaching practices on educational success, this paper reports on the results of three research objectives: 1) To document teachers' knowledge of teaching in distance education through the design, experimentation, and production of a repertoire of the determinants of pedagogical practices in response to students' needs. 2) To explain, according to a gendered logic, the fit between the pedagogical practices implemented in distance learning and the response to the profiles and needs expressed by students using digital learning environments. 3) To produce a model of a support approach for the process of transition from classroom to distance learning at the college level. A mixed methodology, i.e., a quantitative component (questionnaire survey) and a qualitative component (explanatory interviews and a living lab), was used in cycles that were part of an ongoing validation process.
The intervention includes the establishment of a professional collaboration group, training webinars for the participating teachers on the didactic issue of knowledge-teaching in distance education, the didactic use of technologies, and the differentiated socialization models of educational success in college education. All of the tools developed will be used by partners in the target environment as well as by teacher educators, students in initial teacher training, practicing teachers, and the general public. The results show that access to training leading to qualifications and commitment to educational success reflects the existing links between the people in the educational community. The relational stakes of presence in distance education take on multiple configurations, and different dimensions of learning testify to needs and realities that are sometimes distinct depending on the life cycle. This project will be of interest to partners in the targeted field as well as to teacher trainers, students in initial teacher training, practicing college teachers, and university professors. The entire educational community will benefit from digital resources in education. The scientific knowledge resulting from this action research will benefit researchers in the fields of pedagogy, didactics, teacher training and pedagogy in higher education in a digital context.
Keywords: action research, didactics, digital learning environment, distance learning, higher education, pedagogy, technological pedagogical content knowledge
Procedia PDF Downloads 87
314 The Biosphere as a Supercomputer Directing and Controlling Evolutionary Processes
Authors: Igor A. Krichtafovitch
Abstract:
Evolutionary processes are not linear. Long periods of quiet and slow development turn into rather rapid emergences of new species and even phyla. During the Cambrian explosion, 22 new phyla were added to the 3 previously existing phyla. Contrary to common belief, natural selection, or survival of the fittest, cannot account for the dominant evolutionary vector, which is the steady and accelerating advent of more complex and more intelligent living organisms. Neither Darwinism nor alternative concepts, including panspermia and intelligent design, propose a satisfactory explanation for these phenomena. The proposed hypothesis offers a logical and plausible explanation of evolutionary processes in general. It is based on two postulates: a) the biosphere is a single living organism, all parts of which are interconnected, and b) the biosphere acts as a giant biological supercomputer, storing and processing information in digital and analog forms. Such a supercomputer surpasses all human-made computers by many orders of magnitude. Living organisms are the product of the intelligent creative action of the biosphere supercomputer. Biological evolution is driven by the growing amount of information stored in living organisms and the increasing complexity of the biosphere as a single organism. The main evolutionary vector is not survival of the fittest but an accelerated growth of the computational complexity of living organisms. The following postulates summarize the proposed hypothesis: biological evolution, as natural life origin and development, is a reality. Evolution is a coordinated and controlled process. One of evolution’s main development vectors is the growing computational complexity of living organisms and the biosphere’s intelligence. The intelligent matter which conducts and controls global evolution is a gigantic bio-computer combining all living organisms on Earth. Information acts like software stored in and controlled by the biosphere.
Random mutations trigger this software, as stipulated by Darwinian evolutionary theory, and it is further stimulated by the growing demand for the biosphere’s global memory storage and computational complexity. A greater memory volume requires a greater number of more intellectually advanced organisms for storing and handling it. More intricate organisms require greater computational complexity of the biosphere in order to keep control over the living world. This is an endless recursive endeavor with an accelerated evolutionary dynamic. New species emerge when two conditions are met: a) crucial environmental changes occur and/or the global memory storage volume reaches its limit, and b) the biosphere’s computational complexity reaches the critical mass capable of producing more advanced creatures. The hypothesis presented here is a naturalistic concept of life creation and evolution. It logically resolves many puzzling problems of current evolutionary theory: speciation, as a result of GM purposeful design; the evolutionary development vector, as a need for growing global intelligence; punctuated equilibrium, happening when the two conditions a) and b) above are met; the Cambrian explosion; and mass extinctions, happening when more intelligent species replace outdated creatures.
Keywords: supercomputer, biological evolution, Darwinism, speciation
Procedia PDF Downloads 165
313 Performance and Limitations of Likelihood Based Information Criteria and Leave-One-Out Cross-Validation Approximation Methods
Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer
Abstract:
Model assessment, in the Bayesian context, involves evaluating goodness-of-fit and comparing several alternative candidate models for predictive accuracy and improvements. In posterior predictive checks, data simulated under the fitted model are compared with the actual data. Predictive model accuracy is estimated using information criteria such as the Akaike information criterion (AIC), the Bayesian information criterion (BIC), the deviance information criterion (DIC), and the Watanabe-Akaike information criterion (WAIC). The goal of an information criterion is to obtain an unbiased measure of out-of-sample prediction error. Since posterior checks use the data twice, once for model estimation and once for testing, a bias correction which penalises model complexity is incorporated in these criteria. Cross-validation (CV) is another method for examining out-of-sample prediction accuracy. Leave-one-out cross-validation (LOO-CV) is the most computationally expensive CV variant, as it fits as many models as there are observations. Importance sampling (IS), truncated importance sampling (TIS) and Pareto-smoothed importance sampling (PSIS) are generally used as approximations to exact LOO-CV; they utilise the existing MCMC results and thus avoid expensive re-computation. The reciprocals of the predictive densities calculated over posterior draws for each observation are treated as the raw importance weights. These are in turn used to calculate the approximate LOO-CV of the observation as a weighted average of posterior predictive densities. In IS-LOO, the raw weights are used directly. In contrast, the larger weights are replaced by their modified, truncated counterparts in calculating TIS-LOO and PSIS-LOO. Although information criteria and LOO-CV are unable to reflect goodness-of-fit in an absolute sense, their differences can be used to measure the relative performance of the models of interest.
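The IS- and TIS-based LOO approximations and WAIC described here can be sketched directly from an S × n matrix of pointwise log-likelihoods over posterior draws; the toy normal model below is a hypothetical stand-in for actual MCMC output.

```python
import numpy as np

# Sketch of the LOO approximations and WAIC described above, computed from
# an S x n matrix of pointwise log-likelihoods log p(y_i | theta_s) over S
# posterior draws. The toy normal model is a hypothetical stand-in.

def is_loo(loglik):
    # raw importance weights w = 1 / p(y_i | theta_s); the IS-LOO predictive
    # density is then the harmonic mean of p(y_i | theta_s) over the draws
    return -np.log(np.exp(-loglik).mean(axis=0)).sum()

def tis_loo(loglik):
    # truncated IS: cap the raw weights at sqrt(S) times their mean, then
    # take the weighted average of the predictive densities
    S = loglik.shape[0]
    w = np.exp(-loglik)
    w = np.minimum(w, np.sqrt(S) * w.mean(axis=0))
    p = np.exp(loglik)
    return np.log((w * p).sum(axis=0) / w.sum(axis=0)).sum()

def waic(loglik):
    # lppd minus the effective-parameter penalty p_waic (pointwise posterior
    # variance of the log-likelihood)
    lppd_i = np.log(np.exp(loglik).mean(axis=0))
    p_waic_i = loglik.var(axis=0, ddof=1)
    return (lppd_i - p_waic_i).sum()

# toy data: y ~ N(mu, 1) with draws from the posterior of mu (flat prior)
rng = np.random.default_rng(0)
y = rng.normal(0.0, 1.0, size=20)
mu = rng.normal(y.mean(), 1.0 / np.sqrt(len(y)), size=2000)
loglik = -0.5 * np.log(2 * np.pi) - 0.5 * (y[None, :] - mu[:, None]) ** 2

elpd_is, elpd_tis, elpd_waic = is_loo(loglik), tis_loo(loglik), waic(loglik)
```

Because truncation caps the largest weights, which belong to the draws with the smallest predictive densities, the TIS estimate is never below the plain IS estimate; PSIS replaces the hard cap with a fitted Pareto tail and is omitted here for brevity.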
However, the use of these measures is only valid under specific circumstances. This study developed 11 models using normal, log-normal, gamma, and Student’s t distributions to improve PCR stutter prediction with forensic data. These models comprise four with profile-wide variances, four with locus-specific variances, and three that are two-component mixture models. The mean stutter ratio in each model is modeled as a locus-specific simple linear regression against a feature of the alleles under study known as the longest uninterrupted sequence (LUS). The use of AIC, BIC, DIC, and WAIC in model comparison has some practical limitations. Even though IS-LOO, TIS-LOO, and PSIS-LOO are considered approximations to exact LOO-CV, the study observed some drastic deviations in the results. However, there are some interesting relationships among the logarithms of pointwise predictive densities (lppd) calculated under WAIC and the LOO approximation methods. The estimated overall lppd is a relative measure that reflects the overall goodness-of-fit of the model. Parallel log-likelihood profiles were observed for the models, conditional on equal posterior variances in lppds. This study illustrates the limitations of the information criteria in practical model comparison problems. In addition, the relationships among the LOO-CV approximation methods and WAIC, with their limitations, are discussed. Finally, useful recommendations that may help in practical model comparisons with these methods are provided.
Keywords: cross-validation, importance sampling, information criteria, predictive accuracy
Procedia PDF Downloads 392
312 The Gender Criteria of Film Criticism: Creating the ‘Big’, Avoiding the Important
Authors: Eleni Karasavvidou
Abstract:
Social and anthropological research, in parallel with gender studies, has highlighted the relationship between social structures and symbolic forms as an important field of interaction and a record of 'social trends,' since the study of representations can contribute to the understanding of the social functions and power relations they encompass. This 'mirage,' however, has not only to do with the representations themselves but also with the ways they are received and with the filmic or critical narratives that are established as dominant or alternative. Cinema and the criticism of its cultural products are no exception. Even in the rapidly changing media landscape of the 21st century, movies remain an integral and widespread part of popular culture, making films an extremely powerful means of 'legitimizing' or 'delegitimizing' visions of domination and commonsensical gender stereotypes throughout society. And yet it is film criticism, the 'language per se,' that legitimizes, reinforces, rewards and reproduces (or at least ignores) the stereotypical depictions of female roles that remain common in the realm of film images. Hence the need for this issue to be raised in academic research as well, questioning the gender criteria of film reviews as part of the effort toward an inclusive art and society. Qualitative content analysis is used to examine female roles in selected Oscar-nominated films against their reviews on leading websites and in newspapers. This method was chosen because of the complex nature of the depictions in the films and the narratives they evoke. The films were divided into basic scenes depicting social functions, such as love and work relationships and positions of power and their function, which were analyzed by content analysis, with borrowings from structuralism (Genette) and the local/universal images of intercultural philology (Wierlacher).
In addition to measuring overall 'representation time' by gender, other qualitative characteristics were analyzed, such as speaking time, key sayings or actions, and the overall quality of each character's action in relation to the development of the scenario and social representations in general, as well as quantitative ones (the insufficient number of female lead roles, the fewer key supporting roles, and the relatively few female directors and women in the production chain) and how these might affect screen representations. The quantitative analysis in this study was used to complement the qualitative content analysis. The focus then shifted to the criteria of film criticism and to the rhetorical narratives they exclude or highlight in relation to gender identities and functions. In the criteria and language of film criticism, stereotypes are often reproduced or supposedly overturned within the framework of an apolitical 'identity politics' that mainly addresses the surface of a self-referential cultural-consumer product without connecting it more deeply with material and cultural life. One prime example of this failure is the Bechdel Test, which tracks only whether female characters speak in a film, regardless of whether women's stories are actually represented in the films analyzed. If supposedly unbiased male filmmakers still fail to tell truly feminist stories, the same is the case with the criteria of criticism and the related interventions.
Keywords: representations, content analysis, reviews, sexist stereotypes
Procedia PDF Downloads 84
311 Overcoming the Challenges of Subjective Truths in the Post-Truth Age Through a Critical-Ethical English Pedagogy
Authors: Farah Vierra
Abstract:
Following the 2016 US presidential election and the advancement of the Brexit referendum, the concept of “post-truth,” defined by the Oxford Dictionary as “relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief,” came into prominent use in public, political and educational circles. What this essentially entails is that individuals in this age are increasingly confronted with subjective perpetuations of truth in their discourse spheres, informed by beliefs and opinions as opposed to any coherence with the reality of those whom these truth claims concern. In principle, a subjective delineation of truth is progressive and liberating – especially considering its potential to provide marginalised groups in the diverse communities of our globalised world with the voice to articulate truths that are representative of themselves and their experiences. However, any form of human flourishing that seems to be promised here collapses, as the tenets of subjective truth initially in place to liberate have been distorted through post-truth, allowing individuals to purport selective and individualistic truth claims that further oppress and silence certain groups within society without due accountability. Evidence of this is prevalent in the coinage of terms such as "alternative facts" and "fake news," which individuals declare when their problematic truth claims are questioned.
Considering the pervasiveness of post-truth and the ethical issues that accompany it, educators and scholars alike have increasingly noted the need to adapt educational practices and pedagogies to account for the diminishing objectivity of truth in the twenty-first century, especially because students, as digital natives, find themselves in the firing line of post-truth: engulfed in digital societies that proliferate post-truth through the surge of truth claims allowed on various media sites. In an attempt to equip students with the vital skills to navigate the post-truth age and oppose its proliferation of social injustices, English educators find themselves having to contend with a complex question: how can the teaching of English equip students with the ability to critically and ethically scrutinise truth claims whilst also mediating the subjectivity of truth in a manner that does not undermine the voices of diverse communities? In order to address this question, this paper will first examine the challenges that confront students as a result of post-truth. Following this, the paper will elucidate the role English education can play in helping students overcome the complex demands of the post-truth age. Scholars have consistently touted the affordances of literary texts in providing students with imagined spaces to explore societal issues through a critical discernment of language and an ethical engagement with its narrative developments. Therefore, this paper will explain and demonstrate how literary texts, when used alongside a critical-ethical post-truth pedagogy that equips students with interpretive strategies informed by literary traditions such as literary and ethical criticism, can be effective in helping students develop the pertinent skills to comprehensively examine truth claims and overcome the challenges of the post-truth age.
Keywords: post-truth, pedagogy, ethics, English, education
Procedia PDF Downloads 66
310 Investigations at the Settlement of Oglankala
Authors: Ayten Tahirli
Abstract:
Settlements and grave monuments discovered by archeological excavations conducted in the Nakhchivan Autonomous Republic have a special place in the study of the ancient history of Azerbaijan between the 4th century B.C. and the 3rd century A.D. From this point of view, the archeological excavations and investigations conducted at Oglankala, Goshatapa, Babatapa, Pusyan, Agvantapa, Meydantapa and other monuments in Nakhchivan occupy a specific place, and the conclusions of the archeological research conducted at the Oglankala settlement enable a broad study of Nakhchivan's history, economic life and trade relationships. Oglankala, which is located on Garatapa Mountain over an area of 50 ha, was the largest fortress in Nakhchivan and one of the largest fortresses in the South Caucasus during the Middle Iron Age. The territory where the monument is located is very important in terms of controlling the Sharur Lowland, the most productive territory in Nakhchivan and of great importance for agriculture, where the Arpachay flows down from the Lesser Caucasus. During the excavations of 1988–1989 at Oglankala, covering the fortress's Early and Middle Iron Age layers, indisputable evidence was discovered showing that the territory was an important political center. Oglankala was the capital city of an independent state during the Middle Iron Age. It maintained economic and cultural relationships with neighboring Urartu and, in the centuries after the collapse of the Achaemenid Empire, was the capital of a city-state enclosed by a strong defensive system. It should be noted that broader archeological excavations at Oglankala were first started by Vali Bakhshaliyev, a department head at the Institute of History, Ethnography and Archeology of the ANAS Nakhchivan Branch. Between 1988 and 1989, V. Bakhshaliyev conducted an excavation within an area of 320 square meters at Oglankala.
Since 2006, Oglankala has been a research object for the international Azerbaijan-USA archeological expedition. In 2006, Lauren Ristvet from Pennsylvania State University, Vali Bakhshaliyev from the Nakhchivan Branch of the Azerbaijan National Academy of Sciences and Safar Ashurov from the Baku office of the Azerbaijan National Academy of Sciences, together with their colleagues and students, began to study the ancient history of this remarkable area. During the archeological research conducted by the international expedition between 2008 and 2011 under the supervision of Vali Bakhshaliyev, the remnants of a palace and the protective walls of a citadel constructed between the late 9th and early 8th centuries B.C. were discovered in the residential area. It was found that Oglankala was the capital of a small state established in the Sharur Lowland during the Middle Iron Age, which resisted Urartu by forming a union with the local tribes. This state had its own cuneiform script. Between the 4th and 2nd centuries B.C., Oglankala and the territory it controlled was one of the major political centers of the Atropatena state.
Keywords: Nakhchivan, Oglankala, settlement, ceramic, archaeological excavation
Procedia PDF Downloads 78