Search results for: linear and body measurements
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9324

234 Visco-Hyperelastic Finite Element Analysis for Diagnosis of Knee Joint Injury Caused by Meniscal Tearing

Authors: Eiji Nakamachi, Tsuyoshi Eguchi, Sayo Yamamoto, Yusuke Morita, H. Sakamoto

Abstract:

In this study, we aim to reveal the relationship between meniscal tearing and articular cartilage injury of the knee joint by using the dynamic explicit finite element (FE) method. Meniscal injuries reduce the functional ability of the meniscus and consequently increase the load on the articular cartilage of the knee joint. In order to prevent the induction of osteoarthritis (OA) caused by meniscal injuries, many medical treatment techniques, such as artificial meniscus replacement and meniscal regeneration, have been developed. However, it is reported that these treatments are not comprehensive solutions. In order to reveal the fundamental mechanism of OA induction, the mechanical characterization of the meniscus in the normal and injured states was carried out by using FE analyses. First, an FE model of the human knee joint in the normal ('intact') state was constructed by using magnetic resonance (MR) images and the image construction code, Materialize Mimics. Next, two meniscal injury models with radial tears of the medial and lateral menisci were constructed. In the FE analyses, a linear elastic constitutive law was adopted for the femur and tibia bones, a visco-hyperelastic constitutive law for the articular cartilage, and a visco-anisotropic hyperelastic constitutive law for the meniscus. Material properties of the articular cartilage and meniscus were identified using the stress-strain curves obtained from our compressive and tensile tests. The numerical results under a normal walking condition revealed how and where the maximum compressive stress occurred on the articular cartilage. The maximum compressive stress and its location varied among the intact model and the two meniscal tear models. These compressive stress values can be used to establish a threshold value for pathological change, for diagnostic purposes. In this study, FE analyses of the knee joint were carried out to reveal the influence of meniscal injuries on cartilage injury. The following conclusions are obtained. 1. A 3D FE model consisting of the femur, tibia, articular cartilage and menisci was constructed based on MR images of the human knee joint, processed with the image processing code, Materialize Mimics, and meshed with tetrahedral finite elements. 2. A visco-anisotropic hyperelastic constitutive equation was formulated by adopting the generalized Kelvin model. The material properties of the meniscus and articular cartilage were determined by curve fitting against the experimental results. 3. Stresses on the articular cartilage and menisci were obtained for the intact case and for the two radial tears of the medial and lateral menisci. Compared with the intact knee joint, the two tear models showed almost the same stress values as each other and higher values than the intact one. It was shown that both meniscal tears induce stress localization in both the medial and lateral regions. It is confirmed that our newly developed FE analysis code has the potential to become a new diagnostic system for evaluating the effect of meniscal damage on the articular cartilage through mechanical functional assessment.
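The parameter-identification step described above (fitting material constants to measured stress-strain curves) can be illustrated with a minimal, hedged sketch. The following Python snippet fits a single-parameter incompressible neo-Hookean model to hypothetical uniaxial data with nonlinear least squares; the model form, function names and data values are illustrative assumptions, not the paper's visco-anisotropic hyperelastic formulation or its generalized Kelvin viscous part.

```python
# Minimal sketch (not the authors' code): identify a hyperelastic material
# parameter by least-squares fitting of a uniaxial stress-stretch curve.
import numpy as np
from scipy.optimize import curve_fit

def neo_hookean_uniaxial(stretch, c1):
    """Nominal (1st Piola-Kirchhoff) stress of an incompressible
    neo-Hookean solid under uniaxial loading."""
    return 2.0 * c1 * (stretch - stretch ** -2)

# Hypothetical experimental data: stretch ratio and nominal stress [MPa]
stretch = np.array([1.00, 1.02, 1.05, 1.08, 1.10, 1.15])
stress = np.array([0.00, 0.06, 0.15, 0.25, 0.31, 0.48])

(c1_fit,), _ = curve_fit(neo_hookean_uniaxial, stretch, stress, p0=[1.0])
print(f"fitted neo-Hookean parameter C1 = {c1_fit:.3f} MPa")
```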

Keywords: finite element analysis, hyperelastic constitutive law, knee joint injury, meniscal tear, stress concentration

Procedia PDF Downloads 212
233 A Strategic Approach in Utilising Limited Resources to Achieve High Organisational Performance

Authors: Collen Tebogo Masilo, Erik Schmikl

Abstract:

The demand for the DataMiner product has presented a great challenge for its vendor, Skyline Communications, in deploying its limited resources (human resources, financial resources, and office space) to achieve high organisational performance across all its international operations. Owing to its rapid growth, the organisation has been unable to efficiently support its existing customers across the globe and provide services to new customers with the limited workforce of approximately one hundred employees. A combined descriptive and explanatory case study design was selected, making use of a survey questionnaire distributed to a sample of 100 respondents; 89 responses were returned. Non-probability sampling, specifically convenience sampling, was employed. Frequency analysis and correlations between the subscales (the four themes) were used for the statistical analysis and interpretation of the data. The investigation examined mechanisms that can be deployed to balance the high demand for products against the limited production capacity of the company's Belgian operations across four aspects: demand management strategies, capacity management strategies, communication methods that can be used to align the sales management department, and reward systems used to improve employee performance. The conclusions derived from the theme 'demand management strategies' are that the company is fully aware of the future market demand for its products; however, there is no evidence that proper demand forecasting is conducted within the organisation. The conclusions derived from the theme 'capacity management strategies' are that employees always have a lot of work to complete during office hours and often need help from colleagues with urgent tasks, indicating that employees frequently work on unplanned tasks and multiple projects. The conclusions derived from the theme 'communication methods used to align the sales management department with operations' are that communication is poor throughout the organisation: information often stays with management and does not reach non-management employees, and there is a lack of the expected synergy and of good communication between the sales department and the projects office. This has a direct impact on the delivery of projects to customers by the operations department. The conclusions derived from the theme 'employee reward systems' are that employees are motivated and feel that they add value in their current functions, but there are currently no measures in place to identify unhappy employees and no proper reward systems linked to a performance management system. The research contributes to the body of knowledge by exploring the impact of the four sub-variables, and their interaction, on the challenges of organisational productivity, in particular where an organisation experiences a capacity problem during its growth stage under tough economic conditions. Recommendations were made which, if implemented by management, could further enhance the organisation's sustained competitive operations.
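The subscale analysis mentioned above (frequency analysis and correlations between the four themes) can be sketched briefly as follows; the column names and scores are hypothetical and this is not the authors' SPSS-style workflow.

```python
# Minimal sketch (assumed data layout): summary statistics and Pearson
# correlations between four survey subscales.
import pandas as pd

df = pd.DataFrame({                      # hypothetical per-respondent subscale means
    "demand_mgmt":   [3.2, 4.1, 2.8, 3.9, 4.4],
    "capacity_mgmt": [2.9, 3.8, 2.5, 3.6, 4.0],
    "communication": [2.1, 3.0, 2.4, 2.8, 3.5],
    "reward_system": [3.5, 4.2, 3.1, 3.8, 4.5],
})

print(df.describe())                     # frequency-style summary per subscale
print(df.corr(method="pearson"))         # correlations between the four themes
```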

Keywords: high demand for products, high organisational performance, limited production capacity, limited resources

Procedia PDF Downloads 119
232 Convolutional Neural Network Based on Random Kernels for Analyzing Visual Imagery

Authors: Ja-Keoung Koo, Kensuke Nakamura, Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Byung-Woo Hong

Abstract:

Machine learning techniques based on convolutional neural networks (CNNs) have been actively developed and successfully applied to a variety of image analysis tasks, including reconstruction, noise reduction, resolution enhancement, segmentation, motion estimation, and object recognition. Classical visual information processing, ranging from low-level to high-level tasks, has been widely re-developed within the deep learning framework. Deriving visual interpretation from high-dimensional imagery data is generally considered a challenging problem. A CNN is a class of feed-forward artificial neural network that usually consists of deep layers whose connections are established by a series of non-linear operations. The CNN architecture is known to be shift invariant due to its shared weights and translation invariance characteristics. However, it is often computationally intractable to optimize a network with a large number of convolution layers, because of the large number of unknowns that must be optimized with respect to a training set that generally has to be large enough for the model to generalize effectively. It is also necessary to limit the size of the convolution kernels due to the computational expense, despite the recent development of effective parallel processing machinery, which leads to the use of uniformly small convolution kernels throughout the deep CNN architecture. However, it is often desirable to consider different scales in the analysis of visual features at different layers of the network. Thus, we propose a CNN model in which convolution kernels of different sizes are applied at each layer based on random projection. We apply random filters of varying sizes and associate the filter responses with scalar weights that correspond to the standard deviation of the random filters. This allows us to use a large number of random filters at the cost of one scalar unknown per filter. The computational cost of the back-propagation procedure does not increase with larger filter sizes, although additional computation is required for the convolutions in the feed-forward procedure. The use of random kernels of varying sizes makes it possible to analyze image features effectively at multiple scales, leading to better generalization. The robustness and effectiveness of the proposed CNN based on random kernels are demonstrated by numerical experiments in which well-known CNN architectures are quantitatively compared with our models, which simply replace the convolution kernels with random filters. The experimental results indicate that our model achieves better performance with fewer unknown weights. The proposed algorithm has high potential for application to a variety of visual tasks based on the CNN framework. Acknowledgement: This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by IITP, and by NRF-2014R1A2A1A11051941 and NRF-2017R1A2B4006023.
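To make the core idea concrete, here is a minimal PyTorch sketch in which convolution kernels are drawn at random and kept fixed while only one scalar weight per filter is trained; the class name, layer sizes and kernel sizes are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch: fixed random convolution filters with one trainable scalar
# weight per filter, allowing different kernel sizes at different layers.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RandomKernelConv(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size):
        super().__init__()
        # Fixed random filters (registered as a buffer, never updated)
        self.register_buffer(
            "filters", torch.randn(out_ch, in_ch, kernel_size, kernel_size))
        # One trainable scalar per filter
        self.scales = nn.Parameter(torch.ones(out_ch))
        self.padding = kernel_size // 2

    def forward(self, x):
        weight = self.filters * self.scales.view(-1, 1, 1, 1)
        return F.conv2d(x, weight, padding=self.padding)

# Different (hypothetical) kernel sizes at different layers, e.g. 3, 7 and 11
net = nn.Sequential(
    RandomKernelConv(3, 16, 3), nn.ReLU(),
    RandomKernelConv(16, 32, 7), nn.ReLU(),
    RandomKernelConv(32, 64, 11), nn.ReLU(),
)
y = net(torch.randn(1, 3, 64, 64))  # forward pass on a dummy image batch
print(y.shape)
```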

Keywords: deep learning, convolutional neural network, random kernel, random projection, dimensionality reduction, object recognition

Procedia PDF Downloads 261
231 Modeling the Impact of Time Pressure on Activity-Travel Rescheduling Heuristics

Authors: Jingsi Li, Neil S. Ferguson

Abstract:

Time pressure can influence productivity, the quality of decision making, and the efficiency of problem solving. This evidence stems mostly from cognitive and psychological research, whereas discussion in transport-related fields has been scarce. It is conceivable that in many activity-travel contexts time pressure is a potentially important factor, since an excessive amount of decision time may incur the risk of late arrival at the next activity. Activity-travel rescheduling behaviour is commonly explained by the costs and benefits of factors such as activity engagements, personal intentions, and social requirements. This paper hypothesizes that an additional factor, perceived time pressure, can affect travellers' rescheduling behaviour and thereby have an impact on travel demand management. Time pressure may arise in different ways and is assumed here to be essentially incurred when travellers plan their schedules without anticipating unforeseen elements, e.g., transport disruption. In addition to a linear-additive utility-maximization model, computationally simpler non-compensatory heuristic models are considered as an alternative for simulating travellers' responses. The paper contributes to travel behaviour modelling research by investigating the following questions: How can time pressure be measured properly in an activity-travel day plan context? How do travellers reschedule their plans to cope with time pressure? How does the importance of an activity affect travellers' rescheduling behaviour? Which behavioural model best describes the process of making activity-travel rescheduling decisions? How do the identified coping strategies affect the transport network? In this paper, a Mixed Heuristic Model (MHM) is employed to identify the presence of different choice heuristics through a latent class approach. Data on travellers' activity-travel rescheduling behaviour are collected via a web-based interactive survey in which a fictitious scenario comprising multiple uncertain events affecting activities or travel is created. The experiments are conducted in order to gain a realistic picture of activity-travel rescheduling, considering the factor of time pressure. The identified behavioural models are then integrated into a multi-agent transport simulation model to investigate the effect of the rescheduling strategies on the transport network. The results show that an increased proportion of travellers use simpler, non-compensatory choice strategies instead of compensatory methods to cope with time pressure. Specifically, satisficing, one of the heuristic decision-making strategies, is commonly adopted, since travellers tend to abandon the less important activities and keep the important ones. Furthermore, the importance of the activity is found to increase the weight of negative information when making trip-related decisions, especially route choices. When the identified non-compensatory decision-making heuristic models are incorporated into the agent-based transport model, the simulation results imply that neglecting the effect of perceived time pressure may result in inaccurate forecasts of choice probabilities and overestimate responsiveness to policy changes.
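A minimal sketch of a satisficing-style rescheduling rule of the kind discussed above: when a disruption shrinks the available time budget, the less important activities are dropped and the important ones are kept. The activity names, durations and importance weights are assumptions, and this is not the paper's Mixed Heuristic Model.

```python
# Minimal sketch of a non-compensatory, satisficing-type rescheduling rule.

def satisficing_reschedule(activities, available_minutes):
    """activities: list of (name, duration_min, importance); keep the most
    important activities that still fit the remaining time budget."""
    plan = sorted(activities, key=lambda a: a[2], reverse=True)  # importance first
    kept, used = [], 0
    for name, duration, importance in plan:
        if used + duration <= available_minutes:
            kept.append(name)
            used += duration
    return kept

day_plan = [("work", 480, 1.0), ("shopping", 60, 0.3),
            ("gym", 90, 0.5), ("pick up child", 30, 0.9)]
# A transport disruption shrinks the usable time budget to 615 minutes
print(satisficing_reschedule(day_plan, available_minutes=615))
```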

Keywords: activity-travel rescheduling, decision making under uncertainty, mixed heuristic model, perceived time pressure, travel demand management

Procedia PDF Downloads 86
230 Positive Incentives to Reduce Private Car Use: A Theory-Based Critical Analysis

Authors: Rafael Alexandre Dos Reis

Abstract:

Research has shown a substantial increase in the share of conventionally fuelled vehicles (CFVs) in the urban transport modal split. The reasons for this unsustainable reality are multiple, ranging from economic interventions to individual behaviour. The development and delivery of positive incentives for the adoption of more environmentally friendly modes of transport is an emerging strategy to help tackle the problem of excessive use of conventionally fuelled vehicles. The efficiency of this approach, like that of other information-based schemes, can benefit from knowledge of its potential impacts on the theoretical constructs of multiple behaviour change theories. The goal of this research is to critically analyse theories of behaviour that are relevant to transport research and the impacts of positive incentives on the theoretical determinants of behaviour, strengthening the current body of evidence about the benefits of this approach. The main method involves a literature review on two topics: the current theories of behaviour that have empirical support in transport research, and past or ongoing positive incentive programs that have had an impact on car use reduction. The reviewed positive incentive programs were: TravelSmart®; Spitsmijden®; Incentives for Singapore Commuters® (INSINC); COMMUTEGREENER®; MOVESMARTER®; STREETLIFE®; SUPERHUB®; SUNSET® and the EMPOWER® project. The theories analysed were the Theory of Planned Behaviour (TPB), the Norm Activation Model (NAM), Social Learning Theory (SLT), the Theory of Interpersonal Behaviour (TIB), Goal-Setting Theory (GST) and the Value-Belief-Norm Theory (VBN). After reviewing the theoretical constructs of each of the theories and their influence on car use, it can be concluded that positive incentive schemes affect behaviour change in the following ways: changing individuals' attitudes through informational incentives; increasing feelings of moral obligation to reduce the use of CFVs; increasing the perceived social pressure to engage in more sustainable mobility behaviours, for example through comparison mechanisms in social media; increasing perceived behavioural control through informational and training incentives; increasing personal norms with reinforcing information; providing tools for self-monitoring and self-evaluation; providing real experiences of alternative modes to the car; making the observation of others' car use reduction possible; informing about the consequences of behaviour and emphasizing the individual's responsibility to society and the environment; increasing the perception of the consequences of car use for an individual's valued objects; increasing the perceived ability to reduce threats to the environment; helping to establish goals to reduce car use; giving personalized feedback on the goal; increasing feelings of commitment to the goal; and reducing the perceived complexity of using alternatives to the car. It is notable that the emerging technique of delivering positive incentives is systematically connected to causal determinants of travel behaviour. The preliminary results of the reviewed programs show how positive incentives might strengthen these determinants and help in the process of behaviour change.

Keywords: positive incentives, private car use reduction, sustainable behaviour, voluntary travel behaviour change

Procedia PDF Downloads 307
229 Climate Change Impact on Mortality from Cardiovascular Diseases: Case Study of Bucharest, Romania

Authors: Zenaida Chitu, Roxana Bojariu, Liliana Velea, Roxana Burcea

Abstract:

A number of studies show that extreme air temperatures affect mortality related to cardiovascular diseases, particularly among elderly people. In Romania, summer thermal discomfort expressed by the Universal Thermal Climate Index (UTCI) is highest in the southern part of the country, where Bucharest, the largest Romanian urban agglomeration, is also located. Urban characteristics such as high building density and reduced green areas enhance the increase of air temperature during summer. In Bucharest, as in many other large cities, the urban heat island effect is present and leads to an increase of air temperature compared to the surrounding areas. This increase is particularly important during summer heat wave periods. In this context, we performed a temperature-mortality analysis based on daily deaths related to cardiovascular diseases recorded between 2010 and 2019 in Bucharest. The temperature-mortality relationship was modelled by applying a distributed lag non-linear model (DLNM) that includes a bi-dimensional cross-basis function and flexible natural cubic spline functions with three internal knots at the 10th, 75th and 90th percentiles of the temperature distribution, for modelling both the exposure-response and lagged-response dimensions. First, this analysis was applied to the present climate. Extrapolation of the exposure-response associations beyond the observed data allowed us to estimate future effects on mortality due to temperature changes under climate change scenarios and specific assumptions. We used future projections of air temperature from five numerical experiments with regional climate models included in the EURO-CORDEX initiative under the relatively moderate (RCP 4.5) and pessimistic (RCP 8.5) concentration scenarios. The results of this analysis show, for RCP 8.5, an ensemble-averaged increase of 6.1% in the heat-attributable mortality fraction in the future in comparison with the present climate (2090-2100 vs. 2010-2019), corresponding to an increase of 640 deaths/year, while the mortality fraction due to cold conditions will be reduced by 2.76%, corresponding to a decrease of 288 deaths/year. When the mortality data are stratified by age, the ensemble-averaged increase of the heat-attributable mortality fraction for elderly people (>75 years) in the future is even higher (6.5%). These findings reveal the necessity to carefully plan urban development in Bucharest to face the public health challenges raised by climate change. Paper Details: This work is financed by the project URCLIM, which is part of ERA4CS, an ERA-NET initiated by JPI Climate, and funded by the Ministry of Environment, Romania, with co-funding from the European Union (Grant 690462). Part of this work, performed by one of the authors, has received funding from the European Union's Horizon 2020 research and innovation programme through the project EXHAUSTION under grant agreement No 820655.
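Only the exposure-response part of the modelling described above can be sketched briefly as follows: a Poisson GLM of daily deaths on a natural cubic spline of mean temperature with interior knots at the 10th, 75th and 90th percentiles. The lag dimension of the full DLNM is omitted, and the data frame columns and synthetic values are assumptions, not the Bucharest dataset.

```python
# Minimal sketch of the exposure-response dimension only (no lag dimension).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
tmean = rng.normal(12, 9, 3650)                                   # daily mean temperature
deaths = rng.poisson(np.exp(3.0 + 0.002 * (tmean - 18.0) ** 2))   # synthetic U-shaped risk
df = pd.DataFrame({"tmean": tmean, "deaths": deaths})

k10, k75, k90 = np.percentile(df["tmean"], [10, 75, 90])
formula = f"deaths ~ cr(tmean, knots=({k10:.2f}, {k75:.2f}, {k90:.2f}))"
model = smf.glm(formula, data=df, family=sm.families.Poisson()).fit()
print(model.summary().tables[1])
```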

Keywords: cardiovascular diseases, climate change, extreme air temperature, mortality

Procedia PDF Downloads 97
228 Piezotronic Effect on Electrical Characteristics of Zinc Oxide Varistors

Authors: Nadine Raidl, Benjamin Kaufmann, Michael Hofstätter, Peter Supancic

Abstract:

If polycrystalline ZnO is properly doped and sintered under very specific conditions, it shows unique electrical properties which are indispensable for today's electronics industry, where it is used as the number one overvoltage protection material. Below a critical voltage, the polycrystalline bulk exhibits high electrical resistance, but it suddenly becomes up to twelve orders of magnitude more conductive if this voltage limit is exceeded (the varistor effect). It is known that these peerless properties have their origin in the grain boundaries of the material. Electric charge is accumulated at the boundaries, causing a depletion layer in their vicinity and forming potential barriers (so-called Double Schottky Barriers, or DSBs) which are responsible for the highly non-linear conductivity. Since ZnO is a piezoelectric material, mechanical stresses induce polarisation charges that modify the DSB heights and, as a result, the global electrical characteristics (the piezotronic effect). In this work, a finite element method was used to simulate the stresses emerging on individual grains in the bulk. In addition, experimental efforts were made to test a coherent model that could explain this influence. Electron backscatter diffraction was used to identify grain orientations. With the help of wet chemical etching, grain polarization was determined. Micro lock-in infrared thermography (MLIRT) was applied to detect current paths through the material, and a micro four-point-probe system (M4PPS) was employed to investigate current-voltage characteristics between single grains. Bulk samples were tested under uniaxial pressure. It was found that the conductivity can increase by up to three orders of magnitude with increasing stress. Through in-situ MLIRT, it could be shown that this effect is caused by the activation of additional current paths in the material. Further, compression tests were performed on miniaturized samples with grain paths containing only one or two grain boundaries. These tests showed both an increase of the conductivity, as observed for the bulk, and a decreased conductivity. This phenomenon has been predicted theoretically and can be explained by piezotronically induced surface charges that affect the DSBs at the grain boundaries. Depending on grain orientation and stress direction, a DSB can be raised or lowered. The experiments also revealed that the conductivity within a single specimen can increase and decrease, depending on the current direction. This novel finding indicates the existence of asymmetric Double Schottky Barriers, which was furthermore proved by complementary methods. MLIRT studies showed that the intensity of heat generation within individual current paths depends on the direction of the stimulating current. M4PPS was used to study the relationship between the I-V characteristics of single grain boundaries and grain orientation and revealed asymmetric behaviour for very specific orientation configurations. A new model for the Double Schottky Barrier, taking into account this natural asymmetry and explaining the experimental results, will be given.
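As a small worked illustration of the varistor non-linearity referred to above, the nonlinearity coefficient is commonly estimated as the local slope of log(I) versus log(V) between two measured points of the I-V curve; the voltage and current values below are assumptions, not the authors' measurements.

```python
# Minimal sketch: estimate the varistor nonlinearity coefficient alpha.
import numpy as np

def nonlinearity_coefficient(v1, i1, v2, i2):
    """alpha = d(log I) / d(log V), estimated between two measured points."""
    return np.log(i2 / i1) / np.log(v2 / v1)

# Hypothetical readings around the breakdown voltage of a ZnO varistor
alpha = nonlinearity_coefficient(v1=200.0, i1=1e-6, v2=220.0, i2=1e-3)
print(f"estimated nonlinearity coefficient alpha = {alpha:.1f}")
```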

Keywords: Asymmetric Double Schottky Barrier, piezotronic, varistor, zinc oxide

Procedia PDF Downloads 244
227 Effectiveness of Dry Needling with and without Ultrasound Guidance in Patients with Knee Osteoarthritis and Patellofemoral Pain Syndrome: A Systematic Review and Meta-Analysis

Authors: Johnson C. Y. Pang, Amy S. N. Fu, Ryan K. L. Lee, Allan C. L. Fu

Abstract:

Dry needling (DN) is a puncturing method that involves the insertion of needles into tender spots of the body without the injection of any substance. DN has long been used to treat patients with knee pain caused by knee osteoarthritis (KOA) and patellofemoral pain syndrome (PFPS), but the evidence for its effectiveness is still inconsistent. This study aimed to conduct a systematic review and meta-analysis to assess the intervention methods and effects of DN, with and without ultrasound guidance, for treating pain and dysfunction in people with KOA and PFPS. Design: This systematic review adhered to the PRISMA reporting guidelines. The registration number of the study protocol published in the PROSPERO database was CRD42021221419. Six electronic databases were searched manually in November 2020: CINAHL Complete (1976-2020), Cochrane Library (1996-2020), EMBASE (1947-2020), Medline (1946-2020), PubMed (1966-2020), and PsycINFO (1806-2020). Randomized controlled trials (RCTs) and controlled clinical trials were included to examine the effects of DN on knee pain, including KOA and PFPS. The key concepts were: DN, acupuncture, ultrasound guidance, KOA, and PFPS. Risk of bias assessment and qualitative analysis were conducted by two independent reviewers using the PEDro score. Results: Fourteen articles met the inclusion criteria, and eight of them were high-quality papers according to the PEDro score. There were variations in the techniques of DN, including the direction and depth of insertion, the number of needles, the duration of needle retention, needle manipulation, and the number of treatment sessions. Meta-analysis was conducted on eight articles. The DN group showed positive short-term effects (from immediately after DN to less than 3 months) on pain reduction for both KOA and PFPS, with an overall standardized mean difference (SMD) of -1.549 (95% CI = -0.588 to -2.511) and considerable heterogeneity (P=0.002, I²=96.3%). In subgroup analysis, DN demonstrated significant effects on pain reduction in PFPS (p < 0.001) that were not found in subjects with KOA (P=0.302). At 3 months post-intervention, DN also induced significant pain reduction in both subjects with KOA and PFPS, with an overall SMD of -0.916 (95% CI = -0.133 to -1.699) and considerable heterogeneity (P=0.022, I²=95.63%). In addition, DN induced significant short-term improvement in function, with an overall SMD of 6.069 (95% CI = 8.595 to 3.544) and considerable heterogeneity (P<0.001, I²=98.56%), when the analysis was conducted on both the KOA and PFPS groups. In subgroup analysis, only PFPS showed a positive result (SMD=6.089, P<0.001), while the result for KOA was statistically insignificant (P=0.198) in the short term. Similarly, at 3 months post-intervention, significant improvement in function after DN was found when the analysis was conducted on both groups, with an overall SMD of 5.840 (95% CI = 9.252 to 2.428) and considerable heterogeneity (P<0.001, I²=99.1%), but only PFPS showed significant improvement in subgroup analysis (P=0.002, I²=99.1%). Conclusions: The application of DN in KOA and PFPS patients varies among practitioners. DN is effective in reducing pain and dysfunction in the short term and at 3 months post-intervention in individuals with PFPS. To the best of our knowledge, no study has reported the effects of DN with ultrasound guidance on KOA and PFPS. The longer-term effects of DN on KOA and PFPS await further study.
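The pooling behind the reported SMDs can be illustrated with a minimal DerSimonian-Laird random-effects sketch; the per-study effect sizes and variances below are assumptions, not values extracted from the reviewed trials.

```python
# Minimal sketch: DerSimonian-Laird random-effects pooling of study SMDs.
import numpy as np

def dersimonian_laird(smd, var):
    w = 1.0 / var                                   # fixed-effect weights
    mu_fixed = np.sum(w * smd) / np.sum(w)
    q = np.sum(w * (smd - mu_fixed) ** 2)           # Cochran's Q
    k = len(smd)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)              # between-study variance
    w_star = 1.0 / (var + tau2)                     # random-effects weights
    mu = np.sum(w_star * smd) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    return mu, (mu - 1.96 * se, mu + 1.96 * se), i2

smd = np.array([-1.8, -0.9, -2.4, -0.6, -1.2])      # hypothetical study SMDs
var = np.array([0.20, 0.10, 0.30, 0.08, 0.15])      # hypothetical SMD variances
pooled, ci, i2 = dersimonian_laird(smd, var)
print(f"pooled SMD = {pooled:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f}), I^2 = {i2:.1f}%")
```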

Keywords: dry needling, knee osteoarthritis, patellofemoral pain syndrome, ultrasound guidance

Procedia PDF Downloads 110
226 An Efficient Process Analysis and Control Method for Tire Mixing Operation

Authors: Hwang Ho Kim, Do Gyun Kim, Jin Young Choi, Sang Chul Park

Abstract:

Since the tire production process is very complicated, company-wide management of it is very difficult, necessitating considerable amounts of capital and labour. Thus, productivity should be enhanced and kept competitive by developing and applying effective production plans. Among the major processes for tire manufacturing, consisting of mixing component preparation, building and curing, the mixing process is an essential and important step because the main component of the tire, called the compound, is formed at this step. The compound, a rubber blend with various characteristics, plays its own role in the finished tire. Scheduling the tire mixing process is similar to the flexible job shop scheduling problem (FJSSP), because various kinds of compounds have their own orders of operations and a set of alternative machines can be used to process each operation. In addition, the setup time required for different operations may differ due to the alteration of additives. In other words, each operation of the mixing process requires a different setup time depending on the previous one, and this feature, called sequence-dependent setup time (SDST), is a very important issue in traditional scheduling problems such as flexible job shop scheduling problems. However, despite its importance, few research works deal with the tire mixing process. Thus, in this paper, we consider the scheduling problem for the tire mixing process and suggest an efficient particle swarm optimization (PSO) algorithm to minimize the makespan for completing all the required jobs belonging to the process. Specifically, we design a particle encoding scheme for the considered scheduling problem, including a processing sequence for compounds and machine allocation information for each job operation, and a method for generating a tire mixing schedule from a given particle. At each iteration, the coordinates and velocities of the particles are updated, and the current solution is compared with the new solution. This procedure is repeated until a stopping condition is satisfied. The performance of the proposed algorithm is validated through a numerical experiment using small-sized problem instances representing the tire mixing process. Furthermore, we compare the solution of the proposed algorithm with that obtained by solving a mixed integer linear programming (MILP) model developed in previous research. As a performance measure, we define an error rate which evaluates the difference between the two solutions. As a result, we show that the PSO algorithm proposed in this paper outperforms the MILP model with respect to effectiveness and efficiency. As future work, we plan to consider scheduling problems in other processes, such as building and curing. We can also extend the current work by considering other performance measures, such as weighted makespan or processing times affected by aging or learning effects.
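A heavily simplified sketch of the optimization idea described above: continuous particle positions are decoded into a job sequence by sorting (a random-keys style decoding) and evaluated by a makespan that includes sequence-dependent setup times. It is reduced to a single machine, so it omits the machine-allocation part of the paper's particle encoding, and all data are assumptions.

```python
# Minimal PSO sketch for sequencing jobs with sequence-dependent setup times.
import numpy as np

rng = np.random.default_rng(1)
n_jobs, n_particles, n_iters = 6, 20, 200
proc = rng.integers(5, 20, n_jobs)                  # processing times
setup = rng.integers(1, 8, (n_jobs, n_jobs))        # sequence-dependent setups

def makespan(keys):
    order = np.argsort(keys)                        # decode particle -> job sequence
    total, prev = 0, None
    for j in order:
        total += (setup[prev, j] if prev is not None else 0) + proc[j]
        prev = j
    return total

x = rng.random((n_particles, n_jobs))               # particle positions (random keys)
v = np.zeros_like(x)                                # velocities
pbest, pbest_val = x.copy(), np.array([makespan(p) for p in x])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(n_iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = x + v
    vals = np.array([makespan(p) for p in x])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best makespan found:", pbest_val.min())
```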

Keywords: compound, error rate, flexible job shop scheduling problem, makespan, particle encoding scheme, particle swarm optimization, sequence dependent setup time, tire mixing process

Procedia PDF Downloads 236
225 Green Synthesis (Using Environment Friendly Bacteria) of Silver-Nanoparticles and Their Application as Drug Delivery Agents

Authors: Sutapa Mondal Roy, Suban K. Sahoo

Abstract:

The primary aim of this work is to synthesize silver nanoparticles (AgNPs) through environmentally benign routes to avoid any undesired side effects related to chemical toxicity. The nanoparticles were stabilized with the drug ciprofloxacin (Cp) and were studied for their effectiveness as a drug delivery agent. Targeted drug delivery improves the therapeutic potential of drugs at the diseased site and lowers the overall dose and undesired side effects. The small size of nanoparticles greatly facilitates the transport of active agents (drugs) across biological membranes, allows them to pass through the smallest capillaries in the body, which are 5-6 μm in diameter, and can minimize possible undesired side effects. AgNPs are non-toxic, inert, stable, and have a high binding capacity, and thus can be considered biomaterials. AgNPs were synthesized from the nutrient broth supernatant after culture of the environmentally friendly bacterium Bacillus subtilis. The AgNPs showed a surface plasmon resonance (SPR) band at 425 nm. Formation of the Cp-capped Ag nanoparticles was complete within 30 minutes, as confirmed by absorbance spectroscopy. The physico-chemical nature of the AgNPs-Cp system was characterized by dynamic light scattering (DLS), transmission electron microscopy (TEM), etc. The size of the AgNPs-Cp system was found to be in the range of 30-40 nm. To monitor the kinetics of drug release from the surface of the nanoparticles, the release of Cp was followed by careful dialysis, keeping the AgNPs-Cp system inside the dialysis bag at pH 7.4 over time. The drug release was almost complete after 30 hours. To better understand the AgNPs-Cp system during the drug delivery process, a thorough theoretical investigation has been performed employing density functional theory. Electronic charge transfer, electron density and binding energy, as well as thermodynamic properties like enthalpy, entropy and Gibbs free energy, have been predicted. The electronic and thermodynamic properties, governed by the AgNPs-Cp interactions, indicate that the formation of the AgNPs-Cp system is exothermic, i.e., a thermodynamically favourable process. The binding energy and charge transfer analysis imply good stability of the AgNPs-Cp system. Thus, the synthesized Cp-Ag nanoparticles can be effectively used for biological purposes due to their environmentally benign synthesis route, which is clean, biocompatible, non-toxic, safe, cost-effective, sustainable and eco-friendly. The Cp-AgNPs as biomaterials can be successfully used for drug delivery because of the slow release of the drug from the nanoparticles over a considerable period of time. The kinetics of the drug release show that this drug-nanoparticle assembly can be effectively used as a potential tool for therapeutic applications. The ease of the synthetic procedure, the lack of possible chemical toxicity, and their biological activity, along with their excellent performance as a drug delivery agent, open up the prospect of using such nanoparticles as effective and successful drug delivery agents in the modern day.

Keywords: silver nanoparticles, ciprofloxacin, density functional theory, drug delivery

Procedia PDF Downloads 357
224 Relationship of Entrepreneurial Ecosystem Factors and Entrepreneurial Cognition: An Exploratory Study Applied to Regional and Metropolitan Ecosystems in New South Wales, Australia

Authors: Sumedha Weerasekara, Morgan Miles, Mark Morrison, Branka Krivokapic-Skoko

Abstract:

This paper explores the interrelationships among entrepreneurial ecosystem factors and entrepreneurial cognition in regional and metropolitan ecosystems. The entrepreneurial ecosystem factors examined include culture, infrastructure, access to finance, informal networks, support services, access to universities, and the depth and breadth of the talent pool. Using a multivariate approach, we explore the impact of these ecosystem factors, or elements, on entrepreneurial cognition. In doing so, the existing bodies of knowledge from the literature on entrepreneurial ecosystems and cognition have been blended to explore the relationship between entrepreneurial ecosystem factors and cognition in a way not hitherto investigated. The concept of the entrepreneurial ecosystem has received increased attention as governments, universities and communities have started to recognize the potential of integrated policies, structures, programs and processes that foster entrepreneurial activity by supporting innovation, productivity and employment growth. The notion of entrepreneurial ecosystems has evolved and grown with the advancement of theoretical research and empirical studies. Incorporating external factors like culture, the political environment, and the economic environment within a single framework enhances the capacity to examine the functioning of the whole system and to better understand the interaction of the entrepreneurial actors and factors within it. The literature on clusters underplays the role of entrepreneurs and entrepreneurial management in creating and co-creating organizations, markets, and supporting ecosystems. Entrepreneurs are only one type of actor, performing a limited set of roles and dependent upon many other factors to thrive. As a consequence, entrepreneurs and relevant authorities should be aware of the other actors and factors with which they engage and on which they rely, and make strategic choices to achieve both their own and collective objectives. The study uses a stratified random sampling method to collect survey data from 12 different regions in the regional and metropolitan areas of NSW, Australia. A questionnaire was administered online to 512 small and medium enterprise owners operating their businesses in the 12 selected regions. Data were analyzed using descriptive techniques and partial least squares structural equation modelling. The findings show that, even though there are significant relationships among the entrepreneurial ecosystem factors themselves, the relationships between most entrepreneurial ecosystem factors and entrepreneurial cognition are weak. In the metropolitan context, the availability of finance and informal networks have the largest impact on entrepreneurial cognition, while culture, infrastructure, and support services have the smallest impact, and the talent pool and universities have a moderate impact. Interestingly, in the regional context, culture, the availability of finance, and the talent pool have the largest impact on entrepreneurial cognition, while informal networks have the smallest impact and the remaining factors (infrastructure, universities, and support services) have a moderate impact. These findings suggest the need for location-specific strategies to support the development of entrepreneurial cognition.

Keywords: entrepreneurial cognition, entrepreneurial ecosystem, regional and metropolitan ecosystems, small and medium enterprises

Procedia PDF Downloads 112
223 Numerical Investigation on Design Method of Timber Structures Exposed to Parametric Fire

Authors: Robert Pečenko, Karin Tomažič, Igor Planinc, Sabina Huč, Tomaž Hozjan

Abstract:

Timber is a favourable structural material due to its high strength-to-weight ratio, recycling possibilities, and green credentials. Despite being a flammable material, it has relatively high fire resistance. Everyday engineering practice around the world is based on an outdated approach of designing timber structures for standard fire exposure, while modern principles of performance-based design enable the use of advanced non-standard fire curves. In Europe, the standard for fire design of timber structures, EN 1995-1-2 (Eurocode 5), gives two methods: the reduced material properties method and the reduced cross-section method. In the latter, the fire resistance of structural elements depends on the effective cross-section, that is, the residual cross-section of uncharred timber reduced additionally by the so-called zero strength layer. For standard fire exposure, Eurocode 5 gives a fixed value of the zero strength layer, i.e. 7 mm, while for non-standard parametric fires no additional comments or recommendations for the zero strength layer are given. Designers therefore often apply the 7 mm rule to parametric fire exposure as well. Since the latest scientific evidence suggests that the proposed value of the zero strength layer can be on the unsafe side even for standard fire exposure, its use in the case of a parametric fire is also highly questionable, and more numerical and experimental research in this field is needed. The purpose of the presented study is therefore to use advanced calculation methods to investigate the thickness of the zero strength layer and the charring rates used in the effective cross-section method in the case of parametric fire. Parametric studies are carried out on a simple solid timber beam that is exposed to a large number of parametric fire curves. The zero strength layer and charring rates are determined based on numerical simulations performed with the recently developed advanced two-step computational model. The first step comprises a hygro-thermal model which predicts the temperature, moisture and char depth development and takes into account different initial moisture states of the timber. In the second step, the response of the timber beam simultaneously exposed to mechanical and fire loads is determined. The mechanical model is based on Reissner's kinematically exact beam model and accounts for the membrane, shear and flexural deformations of the beam. Furthermore, materially non-linear and temperature-dependent behaviour is considered. In the two-step model, the char front is, in accordance with Eurocode 5, assumed to occur at a fixed temperature of around 300°C. Based on the performed study and observations, improved charring rates and a new thickness of the zero strength layer for parametric fires are determined. Thus, the reduced cross-section method is substantially improved to offer practical recommendations for designing the fire resistance of timber structures. Furthermore, correlations between the zero strength layer thickness and key input parameters of the parametric fire curve (for instance, opening factor, fire load, etc.) are given, representing a guideline for more detailed numerical and experimental research in the future.
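For reference, the baseline reduced cross-section calculation discussed above can be sketched with the commonly quoted EN 1995-1-2 values for standard fire exposure (notional charring rate of about 0.8 mm/min for solid softwood, zero strength layer d0 = 7 mm, k0 ramping up over the first 20 minutes). The paper's point is precisely that these fixed values are questionable for parametric fires, so this is the conventional calculation rather than the proposed improvement, and the beam dimensions are assumptions.

```python
# Minimal sketch of the reduced cross-section method for STANDARD fire exposure.

def effective_depth(d_mm, t_min, beta_n=0.8, d0=7.0, sides_exposed=1):
    """Residual effective depth (mm) of a timber section after t_min minutes
    of standard fire, per the reduced cross-section method."""
    k0 = min(t_min / 20.0, 1.0)      # zero strength layer builds up over 20 min
    d_char = beta_n * t_min          # notional charring depth per exposed side
    d_ef = d_char + k0 * d0          # ineffective depth per exposed side
    return d_mm - sides_exposed * d_ef

# A 200 mm deep solid softwood beam charred on one side for 30 and 60 minutes
for t in (30, 60):
    print(f"t = {t:3d} min -> effective depth = {effective_depth(200, t):.1f} mm")
```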

Keywords: advanced numerical modelling, parametric fire exposure, timber structures, zero strength layer

Procedia PDF Downloads 134
222 The Biosphere as a Supercomputer Directing and Controlling Evolutionary Processes

Authors: Igor A. Krichtafovitch

Abstract:

Evolutionary processes are not linear. Long periods of quiet and slow development turn into rather rapid emergences of new species and even phyla. During the Cambrian explosion, 22 new phyla were added to the 3 previously existing phyla. Contrary to common belief, natural selection, or survival of the fittest, cannot account for the dominant evolutionary vector, which is the steady and accelerating advent of more complex and more intelligent living organisms. Neither Darwinism nor alternative concepts, including panspermia and intelligent design, propose a satisfactory explanation for these phenomena. The proposed hypothesis offers a logical and plausible explanation of evolutionary processes in general. It is based on two postulates: a) the Biosphere is a single living organism, all parts of which are interconnected, and b) the Biosphere acts as a giant biological supercomputer, storing and processing information in digital and analog forms. Such a supercomputer surpasses all human-made computers by many orders of magnitude. Living organisms are the product of the intelligent creative action of the biosphere supercomputer. Biological evolution is driven by the growing amount of information stored in living organisms and the increasing complexity of the biosphere as a single organism. The main evolutionary vector is not survival of the fittest but an accelerated growth of the computational complexity of living organisms. The following postulates summarize the proposed hypothesis: biological evolution, as a natural origin and development of life, is a reality. Evolution is a coordinated and controlled process. One of evolution's main development vectors is the growing computational complexity of living organisms and of the biosphere's intelligence. The intelligent matter which conducts and controls global evolution is a gigantic bio-computer combining all living organisms on Earth. The information acts like software stored in and controlled by the biosphere. Random mutations trigger this software, as stipulated by Darwinian evolution theories, and it is further stimulated by the growing demand for the Biosphere's global memory storage and computational complexity. A greater memory volume requires a greater number of, and more intellectually advanced, organisms for storing and handling it. More intricate organisms require greater computational complexity of the biosphere in order to keep control over the living world. This is an endless recursive endeavour with accelerating evolutionary dynamics. New species emerge when two conditions are met: a) crucial environmental changes occur and/or the global memory storage volume reaches its limit, and b) the biosphere's computational complexity reaches a critical mass capable of producing more advanced creatures. The hypothesis presented here is a naturalistic concept of life creation and evolution. The hypothesis logically resolves many puzzling problems of current evolution theory, such as speciation, as a result of GM purposeful design; the evolutionary development vector, as a need for growing global intelligence; punctuated equilibrium, happening when the two conditions a) and b) above are met; the Cambrian explosion; and mass extinctions, happening when more intelligent species should replace outdated creatures.

Keywords: supercomputer, biological evolution, Darwinism, speciation

Procedia PDF Downloads 128
221 Negative Perceptions of Ageing Predicts Greater Dysfunctional Sleep Related Cognition Among Adults Aged 60+

Authors: Serena Salvi

Abstract:

Ageist stereotypes and practices have become a normal and therefore pervasive phenomenon in various aspects of everyday life. Over the past years, renewed awareness of self-directed age stereotyping in older adults has given rise to a line of research focused on the potential role of attitudes towards ageing in seniors' health and functioning. This set of studies has shown how a negative internalisation of ageist stereotypes discourages older adults from seeking medical advice, in addition to being associated with negative subjective health evaluations. An important dimension of mental health that is often affected in older adults is sleep quality. Self-reported sleep quality among older adults has often been shown to be unreliable when compared with objective sleep measures. Investigations of self-reported sleep quality among older adults suggest that this portion of the population tends to accept disrupted sleep if they believe it to be normal for their age. On the other hand, unrealistic expectations and dysfunctional beliefs about sleep in ageing might prompt older adults to report sleep disruption even in the absence of objectively disrupted sleep. The objective of this study is to examine the association between personal attitudes towards ageing in adults aged 60+ and dysfunctional sleep-related cognition. More specifically, this study investigates a potential association between personal attitudes towards ageing, sleep locus of control, and dysfunctional beliefs about sleep among this portion of the population. Data were statistically analysed in SPSS. Participants were recruited through the online participant recruitment system Prolific. Attention check questions were included throughout the questionnaire, and the consistency of responses was examined. Prior to the commencement of this study, ethical approval was granted (ref. 39396). Descriptive statistics were used to determine the frequencies, means, and SDs of the variables. The Pearson coefficient was used for interval variables, independent t-tests for comparing means between two independent groups, analysis of variance (ANOVA) for comparing means across several independent groups, and hierarchical linear regression models for predicting criterion variables from predictor variables. In this study, self-perceptions of ageing were assessed using the APQ-B subscales, while dysfunctional sleep-related cognition was operationalised using the SLOC and DBAS-16 scales. Of the subscales included in the brief version of the APQ questionnaire, Emotional Representations (ER), Control Positive (PC), and Control and Consequences Negative (NC) proved to be of particular relevance for this study. Regression analyses show that an increase in the APQ-B subscale Emotional Representations (ER) predicts an increase in dysfunctional beliefs and attitudes about sleep in this sample, after controlling for subjective sleep quality, level of depression, and chronological age. A second regression analysis showed that the APQ-B subscales Control Positive (PC) and Control and Consequences Negative (NC) were significant predictors of the variance in SLOC, after controlling for subjective sleep quality, level of depression, and dysfunctional beliefs about sleep.
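The hierarchical regression step reported above can be sketched as two nested OLS models, where the contribution of the Emotional Representations (ER) subscale beyond the control variables is tested with an F-test on the R² change; the column names and simulated data are assumptions, not the study's dataset.

```python
# Minimal sketch of a hierarchical (nested) regression with an R-squared change test.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 120
df = pd.DataFrame({
    "sleep_quality": rng.normal(0, 1, n),
    "depression": rng.normal(0, 1, n),
    "age": rng.integers(60, 90, n),
    "er": rng.normal(0, 1, n),            # Emotional Representations subscale
})
df["dbas16"] = 0.4 * df["er"] + 0.3 * df["depression"] + rng.normal(0, 1, n)

step1 = smf.ols("dbas16 ~ sleep_quality + depression + age", data=df).fit()
step2 = smf.ols("dbas16 ~ sleep_quality + depression + age + er", data=df).fit()
print(f"R2 step 1 = {step1.rsquared:.3f}, R2 step 2 = {step2.rsquared:.3f}")
print(step2.compare_f_test(step1))        # F-test for the added predictor
```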

Keywords: sleep-related cognition, perceptions of aging, older adults, sleep quality

Procedia PDF Downloads 76
220 Performance and Limitations of Likelihood Based Information Criteria and Leave-One-Out Cross-Validation Approximation Methods

Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer

Abstract:

Model assessment, in the Bayesian context, involves evaluation of the goodness-of-fit and the comparison of several alternative candidate models for predictive accuracy and improvements. In posterior predictive checks, the data simulated under the fitted model are compared with the actual data. Predictive model accuracy is estimated using information criteria such as the Akaike information criterion (AIC), the Bayesian information criterion (BIC), the deviance information criterion (DIC), and the Watanabe-Akaike information criterion (WAIC). The goal of an information criterion is to obtain an unbiased measure of out-of-sample prediction error. Since posterior checks use the data twice, once for model estimation and once for testing, a bias correction which penalises model complexity is incorporated in these criteria. Cross-validation (CV) is another method used for examining out-of-sample prediction accuracy. Leave-one-out cross-validation (LOO-CV) is the most computationally expensive variant among the CV methods, as it fits as many models as there are observations. Importance sampling (IS), truncated importance sampling (TIS) and Pareto-smoothed importance sampling (PSIS) are generally used as approximations to the exact LOO-CV and utilise the existing MCMC results, avoiding expensive recomputation. The reciprocals of the predictive densities calculated over posterior draws for each observation are treated as the raw importance weights. These are in turn used to calculate the approximate LOO-CV of the observation as a weighted average of posterior densities. In IS-LOO, the raw weights are used directly. In contrast, the larger weights are replaced by modified, truncated weights in calculating TIS-LOO and PSIS-LOO. Although information criteria and LOO-CV are unable to reflect goodness-of-fit in an absolute sense, their differences can be used to measure the relative performance of the models of interest. However, the use of these measures is only valid under specific circumstances. This study has developed 11 models using normal, log-normal, gamma, and Student's t distributions to improve PCR stutter prediction with forensic data. These models comprise four with profile-wide variances, four with locus-specific variances, and three two-component mixture models. The mean stutter ratio in each model is modelled as a locus-specific simple linear regression against a feature of the alleles under study known as the longest uninterrupted sequence (LUS). The use of AIC, BIC, DIC, and WAIC in model comparison has some practical limitations. Even though IS-LOO, TIS-LOO, and PSIS-LOO are considered to be approximations of the exact LOO-CV, the study observed some drastic deviations in the results. However, there are some interesting relationships among the logarithms of pointwise predictive densities (lppd) calculated under WAIC and the LOO approximation methods. The estimated overall lppd is a relative measure that reflects the overall goodness-of-fit of the model. Parallel log-likelihood profiles, conditional on equal posterior variances in the lppds, were observed for the models. This study illustrates the limitations of the information criteria in practical model comparison problems. In addition, the relationships among the LOO-CV approximation methods and WAIC, with their limitations, are discussed. Finally, useful recommendations that may help in practical model comparisons with these methods are provided.
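For concreteness, the quantities discussed above (lppd, WAIC and the plain importance-sampling LOO approximation) can be computed from an S x N matrix of pointwise log-likelihoods as in the following sketch; the matrix here is simulated rather than taken from the stutter models, and truncation and Pareto smoothing (TIS/PSIS) are omitted.

```python
# Minimal sketch: WAIC and IS-LOO from posterior pointwise log-likelihoods.
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)
S, N = 2000, 50
loglik = rng.normal(-1.0, 0.3, (S, N))     # stand-in for MCMC log p(y_i | theta_s)

# WAIC: lppd minus the effective number of parameters p_waic
lppd = np.sum(logsumexp(loglik, axis=0) - np.log(S))
p_waic = np.sum(np.var(loglik, axis=0, ddof=1))
waic = -2.0 * (lppd - p_waic)

# IS-LOO: pointwise predictive density as the harmonic mean of the densities,
# i.e. raw importance weights r_s = 1 / p(y_i | theta_s)
elpd_loo = np.sum(np.log(S) - logsumexp(-loglik, axis=0))

print(f"lppd = {lppd:.1f}, p_waic = {p_waic:.1f}, WAIC = {waic:.1f}")
print(f"elpd_IS-LOO = {elpd_loo:.1f}")
```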

Keywords: cross-validation, importance sampling, information criteria, predictive accuracy

Procedia PDF Downloads 365
219 Evaluation of Herbal Extracts for Their Potential Application as Skin Prebiotics

Authors: Anja I. Petrov, Milica B. Veljković, Marija M. Ćorović, Ana D. Milivojević, Milica B. Simović, Katarina M. Banjanac, Dejan I. Bezbradica

Abstract:

One of the fundamental requirements for overall human well-being is a stable and balanced microbiome. Aside from the microorganisms that reside within the body, a large number of microorganisms, especially bacteria, swarm the human skin; they are in homeostasis with the host and constitute the skin microbiota. Even though the immune system of the skin is capable of distinguishing between commensal and potentially harmful transient bacteria, the cutaneous microbial balance can be disrupted under certain circumstances. In that case, a reduction in skin microbiota diversity, as well as changes in metabolic activity, results in dermal infections and inflammation. Probiotics and prebiotics have the potential to play a significant role in the treatment of these skin disorders. The most common resident bacterium found on the skin, Staphylococcus epidermidis, can act as a potential skin probiotic, contributing to the protection of healthy skin from colonization by pathogens such as Staphylococcus aureus, which is related to the exacerbation of atopic dermatitis. However, as it is difficult to meet the regulations for cosmetic products, another therapeutic approach could be topical prebiotic supplementation of the skin microbiota. In recent research, polyphenols have attracted scientists' interest as biomolecules with possible prebiotic effects on the skin microbiota. This research aimed to determine how herbal extracts rich in different polyphenolic compounds (lemon balm, St. John's wort, coltsfoot, pine needle, and yarrow) affect the growth of S. epidermidis and S. aureus. The first part of the study involved screening the plants to determine whether they could be regarded as probable skin prebiotic candidates. The effect of each plant on bacterial growth was examined by supplementing the nutrient medium with its extract and comparing growth with control samples (without extract). The results obtained after 24 h of incubation showed that all tested extracts influenced the growth of the examined bacteria to some extent. Since lemon balm and St. John's wort extracts displayed bactericidal activity against S. epidermidis, whereas coltsfoot inhibited both bacteria equally, they were not explored further. On the other hand, pine needle and yarrow extracts led to an increase in the S. epidermidis/S. aureus ratio, making them prospective candidates for use as skin prebiotics. Examination of the prebiotic effect of the two extracts at different concentrations revealed that, in the case of yarrow, 0.1% of extract dry matter in the fermentation medium was optimal, while for the pine needle extract a concentration of 0.05% was preferred, since it selectively stimulated S. epidermidis growth and inhibited S. aureus proliferation. Additionally, the total polyphenol and flavonoid contents of the two extracts were determined, revealing different concentrations and polyphenol profiles. Since the yarrow and pine extracts affected the growth of skin bacteria in a dose-dependent manner, by carefully selecting the quantities of these extracts, and thus the polyphenol content, it is possible to achieve desirable alterations of the skin microbiota composition, which may be suitable for the treatment of atopic dermatitis.

Keywords: herbal extracts, polyphenols, skin microbiota, skin prebiotics

Procedia PDF Downloads 143
218 Wind Tunnel Tests on Ground-Mounted and Roof-Mounted Photovoltaic Array Systems

Authors: Chao-Yang Huang, Rwey-Hua Cherng, Chung-Lin Fu, Yuan-Lung Lo

Abstract:

Solar energy is one of the alternative options for reducing the CO2 emissions produced by conventional power plants in modern society. As an island frequently struck by strong typhoons and earthquakes, Taiwan urgently needs to revise its local regulations to strengthen the safety design of photovoltaic systems. Currently, the Taiwanese code for wind-resistant design of structures gives no clear guidance on photovoltaic systems, especially when the systems are arranged in arrayed format. Furthermore, when an arrayed photovoltaic system is mounted on a rooftop, the approaching flow is significantly altered by the building, leading to different pressure patterns in different areas of the photovoltaic system. In this study, an L-shaped arrayed photovoltaic system is first mounted on the ground of the wind tunnel and then on the building rooftop. The system consists of 60 PV panel models. Each panel model is equivalent to a full size of 3.0 m in depth and 10.0 m in length. Six pressure taps are installed on the upper surface of each panel model and another six on the bottom surface to measure the net pressures. The wind attack angle is varied from 0° to 360° in 10° intervals to capture the worst case with respect to wind direction. The sampling rate of the pressure scanning system is set high enough to precisely estimate the peak pressures, and at least 20 samples are recorded for good ensemble-average stability. Each sample is equivalent to a 10-minute record in full scale. All the scale factors, including the time scale, length scale, and velocity scale, are properly verified by similarity rules in the low-wind-speed wind tunnel environment. The purpose of the L-shaped arrayed system is to understand the pressure characteristics in the corner area. Extreme value analysis is applied to obtain the design pressure coefficient for each net pressure. The commonly used Cook-and-Mayne value, 78%, is set as the target non-exceedance probability for design pressure coefficients under the Gumbel distribution. The best linear unbiased estimator method is used for Gumbel parameter identification, and a careful moving-average procedure is applied in data processing. Results show that when the arrayed photovoltaic system is mounted on the ground, the first row of panels experiences stronger positive pressures than when mounted on the rooftop. Due to the flow separation occurring at the building edge, the first row of panels on the rooftop is mostly under negative pressure; the last row, on the other hand, shows positive pressures because of flow reattachment. Different areas also show different pressure patterns, which correspond well to the provisions in ASCE 7-16 describing the area division for design values. Several minor observations arise from the parametric studies, such as the rooftop edge effect, parapet effect, building aspect effect, row interval effect, and so on. General comments are then made for the proposed revision of the Taiwanese code.
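As an illustration of the extreme value step described above, the following minimal sketch fits a Gumbel (Type I) distribution to episode-peak suction coefficients and evaluates the 78% non-exceedance fractile in the Cook-and-Mayne manner; the peak values are invented, and maximum-likelihood fitting is substituted for the best linear unbiased estimator named in the abstract.

```python
import numpy as np
from scipy import stats

# Hypothetical episode-peak suction coefficients (one per 10-minute full-scale sample);
# magnitudes are used so that the Gumbel model of maxima applies.
peak_suction = np.array([1.6, 1.8, 2.1, 1.7, 2.4, 1.9, 2.2, 2.0, 1.8, 2.3,
                         1.9, 2.5, 1.8, 2.0, 2.1, 1.7, 2.2, 1.9, 2.4, 2.0])

# Fit a Gumbel distribution to the sample of extremes (MLE used here as a simplification).
loc, scale = stats.gumbel_r.fit(peak_suction)

# Cook-and-Mayne design value: the 78% non-exceedance fractile of the fitted
# extreme-value distribution (sign restored for a net negative pressure).
design_cp = -stats.gumbel_r.ppf(0.78, loc=loc, scale=scale)
print(f"Design net pressure coefficient: {design_cp:.2f}")
```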

Keywords: aerodynamic force coefficient, ground-mounted, roof-mounted, wind tunnel test, photovoltaic

Procedia PDF Downloads 101
217 Geochemical Evolution of Microgranular Enclaves Hosted in Cambro-Ordovician Kyrdem Granitoids, Meghalaya Plateau, Northeast India

Authors: K. Mohon Singh

Abstract:

Cambro-Ordovician (512.5 ± 8.7 Ma) felsic magmatism in the Kyrdem region of the Meghalaya plateau, herein referred to as the Kyrdem granitoids (KG), intrudes the low-grade Shillong Group metasediments and the Precambrian basement gneissic complex, forming an oval-shaped plutonic body whose longer axis trends almost N-S. The thermal aureole is poorly developed or covered by alluvium. The KG exhibit a very coarse-grained porphyritic texture with abundant K-feldspar megacrysts (up to 9 cm long) and subordinate amounts of amphibole, biotite, plagioclase, and quartz. The size of the K-feldspar megacrysts increases from the margin (Dwarksuid) to the interior (Kyrdem) of the KG pluton. Late felsic pulses in the form of fine-grained granite, leucocratic (aplite), and pegmatite veins intrude the KG at several places. Grey and pink varieties of KG can be recognized, but the pink colour of KG is the result of post-magmatic fluids, which have not affected the magnetic properties of KG. The modal composition of KG corresponds to quartz monzonite, monzogranite, and granodiorite. The KG have been geochemically characterized as metaluminous (I-type) to peraluminous (S-type) granitoids. The KG are characterized by primary foliations of variable attitude, mostly marked along the margin of the pluton, and are located in the proximity of the Tyrsad-Barapani lineament. The KG contain country-rock xenoliths (amphibolite, gneiss, schist, etc.), which are mostly confined to the margin of the pluton, and microgranular enclaves (ME) hosted in the porphyritic variety of KG. The ME in the Kyrdem granitoids are fine- to medium-grained, mesocratic to melanocratic, phenocryst-bearing or phenocryst-free, and rounded to ellipsoidal, showing typical magmatic textures. Mafic and felsic phenocrysts in the ME are partially corroded and dissolved because of their involvement in the magma-mixing event and thus represent xenocrysts. Sharp to diffuse contacts of the ME with the host Kyrdem granitoids, their fine-grained nature, and the presence of acicular apatite in the ME suggest commingling and undercooling of coeval, semi-solidified ME magma within a partly crystalline felsic host magma. Geochemical features indicate that the nature of the ME (molar A/CNK = 0.76-1.42) and KG (molar A/CNK = 0.41-1.75) is similar to hybrid types formed by mixing of mantle-derived mafic and crust-derived felsic magmas. Major and trace element (including rare earth element) variations of the ME suggest the involvement of combined processes such as magma mixing, mingling, and crystallization differentiation in the evolution of the ME, whereas the KG variations appear to be primarily controlled by fractionation of plagioclase, hornblende, biotite, and accessory phases. Most ME are partially to nearly re-equilibrated chemically with the felsic host KG during the magma mixing and mingling processes.
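For readers unfamiliar with the aluminium saturation index quoted above, the short sketch below computes molar A/CNK from whole-rock oxide weight percentages; the oxide values are hypothetical, and this simple form omits the apatite correction sometimes applied to CaO.

```python
# Molar A/CNK = Al2O3 / (CaO + Na2O + K2O), with each oxide converted from
# weight percent to moles using its molar mass (g/mol).
MOLAR_MASS = {"Al2O3": 101.96, "CaO": 56.08, "Na2O": 61.98, "K2O": 94.20}

def a_cnk(al2o3_wt: float, cao_wt: float, na2o_wt: float, k2o_wt: float) -> float:
    """Aluminium saturation index (no apatite correction on CaO)."""
    al = al2o3_wt / MOLAR_MASS["Al2O3"]
    ca = cao_wt / MOLAR_MASS["CaO"]
    na = na2o_wt / MOLAR_MASS["Na2O"]
    k = k2o_wt / MOLAR_MASS["K2O"]
    return al / (ca + na + k)

# Hypothetical granitoid analysis (wt%): values above 1 indicate peraluminous,
# values below 1 metaluminous compositions.
print(round(a_cnk(14.5, 2.1, 3.4, 4.8), 2))
```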

Keywords: geochemistry, Kyrdem Granitoids, microgranular enclaves, Northeast India

Procedia PDF Downloads 91
207 Strategies for Urban-Architectural Design for the Sustainable Recovery of the Huayla Estuary in Puerto Bolivar, Machala-Ecuador

Authors: Soledad Coronel Poma, Lorena Alvarado Rodriguez

Abstract:

The purpose of this project is to design public space through urban-architectural strategies that contribute to the sustainable recovery of the Huayla estuary and the revival of tourism in this area. This design considers sustainable and architectural ideas used in similar cases, along with national and international regulations for protecting endangered shorelines. To put this location in context, Puerto Bolivar is the main port of the Province of El Oro and of the south of the country, through which 90,000 national and foreign tourists pass all year round. For that reason, a physical-urban, social, and environmental analysis of the area was carried out through surveys and conversations with the community. This analysis showed that around 70% of people feel unsatisfied with and concerned about the estuary and its surroundings. Crime, absence of green areas, poor conservation of shorelines, lack of tourists, poor commercial infrastructure, and the spread of informal commerce are the main issues to be solved. As an intervention project whose main goal is that residents and tourists have contact with native nature and enjoy local activities, three main strategies are proposed to recover the estuary and its surroundings: mobility, ecology, and urban-architectural design. First of all, the design of this public space is based on turning the estuary location into a linear promenade conceived as a tourist corridor, which would help to reduce pollution, increase green spaces, and improve tourism. Another strategy aims to improve the economy of the community through local activities such as fishing and sailing and the commerce of fresh seafood, both as raw products and in restaurants. Furthermore, in support of the environmental approach, some houses are rebuilt as sustainable houses using local materials and rearranged into blocks closer to the commercial area. Finally, the planning incorporates many plants, such as palms, sameness trees, and mangroves, around the area to encourage people to get in touch with nature. The results of designing this space showed an increase in the green area per inhabitant index, which went from 1.69 m² to 10.48 m² per inhabitant, with 12,096 m² of green corridors and the incorporation of 5,000 m² of mangroves at the shoreline. Additionally, living zones also increased with the creation of green areas that take advantage of the existing nature and with the implementation of restaurants and recreational spaces. Moreover, the relocation of houses and buildings helped to free the estuary's shoreline, so people are now in more comfortable places closer to their workplaces. Finally, dock spaces are increased to match the number of boats and canoes, helping to organize activity on the estuary. To sum up, this project seeks the improvement of the estuary environment, with its shoreline and surroundings, including the vegetation, the infrastructure, and the people with their local activities, achieving a better quality of life, attraction of tourism, reduction of pollution, and, finally, a fully recovered estuary as a natural ecosystem.

Keywords: recovery, public space, estuary, sustainable

Procedia PDF Downloads 109
215 The Product Innovation Using Nutraceutical Delivery System on Improving Growth Performance of Broiler

Authors: Kitti Supchukun, Kris Angkanaporn, Teerapong Yata

Abstract:

The product innovation using a nutraceutical delivery system to improve the growth performance of broilers is a product planning and development effort responding to the antibiotic ban policies introduced in local and global livestock production systems. Restricting the use of antibiotics can reduce the quality of chicken meat and increase pathogenic bacterial contamination. Although other alternatives have been used to replace antibiotics, their efficacy has been inconsistent, reflected in low chicken growth performance and contaminated products. The product innovation aims to deliver the selected active ingredients into the body effectively. The product is tested at the pharmaceutical laboratory scale and at the farm scale for market feasibility in order to create a product innovation based on the nutraceutical delivery system model. The model establishes product standardization and a traceable quality control process for farmers. The study uses mixed methods: a qualitative method is first applied to identify the farmers' (consumers') demands and the product standard, and a quantitative method is then used to develop and conclude the findings regarding technology acceptance and product performance. The survey was sent to different organizations by random sampling of the entrepreneur population, including integrated broiler farms, broiler farms, and other related organizations. The mixed-method results, both qualitative and quantitative, verify the demands of users and lead users, since they provide information about the industry standard, technology preferences, development of the right product for the market, and solutions to industry problems. The product innovation selected nutraceutical ingredients that can address the following problems in livestock: bactericidal action, anti-inflammation, gut health, and antioxidant activity. The combination of the selected nutraceuticals and nanostructured lipid carrier (NLC) technology aims to improve the chemical and pharmaceutical properties by converting the active ingredients into nanoparticles, which are released at the targeted location at an accurate concentration. The active ingredients in nanoparticle form are more stable, elicit antibacterial activity against pathogenic Salmonella spp. and E. coli, balance gut health, and have antioxidant and anti-inflammatory activity. The experimental results have shown that the nutraceuticals possess antioxidant and antibacterial activity, increase the average daily gain (ADG), and reduce the feed conversion ratio (FCR). The results also show a significant improvement in the European Performance Index, which can increase farmers' profit when exporting. The product innovation will be tested with technology acceptance management methods among farmers and industry. Analyses of broiler production and commercialization are useful for reducing the importation of animal supplements. Most importantly, the product innovation is protected by intellectual property.
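The growth metrics mentioned above can be illustrated with a small sketch; the flock figures are hypothetical, and the European Performance Index is computed with a commonly cited efficiency-factor formula, which may differ from the exact definition used by the authors.

```python
def broiler_performance(start_wt_kg, final_wt_kg, days, feed_kg, placed, survived):
    """Compute ADG, FCR and a European Performance Index style efficiency factor."""
    gain = final_wt_kg - start_wt_kg              # weight gain per bird (kg)
    adg = gain / days                             # average daily gain (kg/day)
    fcr = feed_kg / gain                          # feed conversion ratio (feed per unit gain)
    livability = 100.0 * survived / placed        # % of placed birds surviving
    # Common form of the European efficiency factor; assumed here, not taken from the paper.
    epi = (livability * final_wt_kg) / (days * fcr) * 100.0
    return adg, fcr, epi

# Hypothetical flock: 42-day cycle, 2.8 kg final weight, 4.4 kg feed per bird.
adg, fcr, epi = broiler_performance(0.045, 2.8, 42, 4.4, placed=10000, survived=9650)
print(f"ADG={adg:.3f} kg/day, FCR={fcr:.2f}, EPI={epi:.0f}")
```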

Keywords: nutraceutical, nano structure lipid carrier, anti-microbial drug resistance, broiler, Salmonella

Procedia PDF Downloads 144
214 Assessing the Risk of Socio-economic Drought: A Case Study of Chuxiong Yi Autonomous Prefecture, China

Authors: Mengdan Guo, Zongmin Wang, Haibo Yang

Abstract:

Drought is one of the most complex and destructive natural disasters, with a huge impact on both nature and society. In recent years, adverse climate conditions and uncontrolled human activities have exacerbated the occurrence of global droughts, among which socio-economic droughts are closely related to human survival. The study of socio-economic drought risk assessment is crucial for sustainable social development. Therefore, this study comprehensively considered the hazard posed by disaster-causing factors, the exposure level of the disaster-prone environment, and the vulnerability of the disaster-bearing body to construct a socio-economic drought risk assessment model for Chuxiong Prefecture in Yunnan Province. Firstly, a three-dimensional frequency analysis of drought intensity, area, and duration was conducted, followed by a statistical analysis of the drought risk to the socio-economic system. Secondly, a grid analysis model was constructed to assess the exposure levels of different agents and to study the effects of drought on regional crop growth, industrial economic growth, and human consumption thresholds. Thirdly, an agricultural vulnerability model for different irrigation levels was established using the DSSAT crop model, while industrial economic vulnerability and domestic water vulnerability under the impact of drought were investigated by constructing a standardized socio-economic drought index coupled with water loss. Finally, the socio-economic drought risk was assessed by combining hazard, exposure, and vulnerability. The results show that the frequency of drought occurrence in Chuxiong Prefecture, Yunnan Province is relatively high, with high population and economic exposure concentrated in the urban areas of the various counties and districts, and high agricultural exposure concentrated in mountainous and rural areas. Irrigation can effectively reduce agricultural vulnerability in Chuxiong, and the yield loss rate under the 20 mm winter irrigation scenario decreased by 10.7% compared to the rain-fed scenario. From the perspective of comprehensive risk, the distribution of long-term socio-economic drought risk in Chuxiong Prefecture is relatively consistent, with the more severe areas mainly concentrated in Chuxiong City and Lufeng County, followed by counties such as Yao'an, Mouding, and Yuanmou. Shuangbai County has the lowest socio-economic drought risk, which is basically consistent with the economic distribution trend of Chuxiong Prefecture. In June, July, and August, the drought risk in Chuxiong Prefecture is generally high. These results can provide constructive suggestions for the allocation of water resources and the construction of water conservancy facilities in Chuxiong Prefecture, and provide a scientific basis for more effective drought prevention and control. Future research will address data quality and availability, climate change impacts, human activity impacts, and countermeasures for a more comprehensive understanding of and effective response to drought risk in Chuxiong Prefecture.
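The final combination step can be sketched as follows; the abstract does not state the exact aggregation rule, so a simple product of normalized hazard, exposure, and vulnerability layers is assumed here purely for illustration, with invented grid values.

```python
import numpy as np

def normalize(layer: np.ndarray) -> np.ndarray:
    """Min-max scale a gridded indicator to the range [0, 1]."""
    return (layer - layer.min()) / (layer.max() - layer.min())

def drought_risk(hazard, exposure, vulnerability):
    """Combine the three normalized components; a multiplicative rule is assumed."""
    return normalize(hazard) * normalize(exposure) * normalize(vulnerability)

# Hypothetical 3x3 gridded indicators for a small study area.
hazard = np.array([[0.2, 0.5, 0.7], [0.4, 0.6, 0.8], [0.3, 0.5, 0.9]])
exposure = np.array([[10, 40, 80], [20, 55, 90], [15, 35, 70]], dtype=float)
vulnerability = np.array([[0.1, 0.3, 0.6], [0.2, 0.4, 0.7], [0.15, 0.35, 0.8]])

print(np.round(drought_risk(hazard, exposure, vulnerability), 3))
```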

Keywords: DSSAT model, risk assessment, socio-economic drought, standardized socio-economic drought index

Procedia PDF Downloads 18
213 Project Management and International Development: Competencies for International Assignment

Authors: M. P. Leroux, C. Coulombe

Abstract:

Projects are popular vehicles through which international aid is delivered in developing countries. To achieve their objectives, many northern organizations develop projects with local partner organizations in developing countries through technical assistance projects. International aid and international development projects have long been criticized for poor results, although billions are spent every year. Little empirical research in the field of project management has focused on knowledge transfer in the international development context. This paper focuses particularly on the personal dimensions of international assignees working in projects alongside local team members in the host country. We propose to explore the possible links from a human resource management perspective in order to shed light on the under-researched problem of knowledge transfer in development cooperation projects. Since the process leading to capacity building is highly complex, involves multiple dimensions, and is far from linear, we propose to assess whether traditional research on expatriates in multinational corporations applies to the field of project management in developing countries. The following question is addressed: in the context of international development project cooperation, which personal determinants should the selection process focus on when looking to fill a technical assistance position in a developing country? To answer that question, we first reviewed the literature on expatriates in the context of inter-organizational knowledge transfer. Second, we proposed a theoretical framework combining perspectives from development studies and management to explore whether parallels can be drawn between traditional international assignments and technical assistance project assignments in developing countries. We conducted an exploratory study using case studies from technical assistance initiatives led in Haiti, a Caribbean country. Data were collected from multiple sources following qualitative research methods. Direct observations in the field were permitted by local leaders of six organizations; individual interviews with present and past international assignees, individual interviews with local team members, and focus groups were organized in order to triangulate the information collected. Contrary to empirical research on knowledge transfer in multinational corporations, the results tend to show that technical expertise ranks well behind many other characteristics. The results point to the importance of soft skills as a prerequisite for success in projects where local teams have to collaborate. More importantly, international assignees who spoke of knowledge sharing instead of knowledge transfer seemed to feel more satisfied at the end of their mandate than the others. Reciprocally, local team members who perceived that they had participated in a project with an expatriate seeking to share rather than to transfer knowledge tended to describe the project results in more positive terms than the others. The results obtained from this exploratory study open the way for a promising research agenda in the field of project management. They emphasize the urgent need for a better understanding of the complex set of soft skills that project managers or project chiefs would benefit from developing, in particular the ability to absorb knowledge and the willingness to share one's knowledge.

Keywords: international assignee, international project cooperation, knowledge transfer, soft skills

Procedia PDF Downloads 116
212 The Incidence of Concussion across Popular American Youth Sports: A Retrospective Review

Authors: Rami Hashish, Manon Limousis-Gayda, Caitlin H. McCleery

Abstract:

Introduction: A leading cause of emergency room visits among youth in the United States is sports-related traumatic brain injury. Mild traumatic brain injuries (mTBIs), also called concussions, are caused by linear and/or angular acceleration experienced at the head and represent an increasing societal burden. Due to the developing nature of the brain in youth, there is a great risk of long-term neuropsychological deficits following a concussion. Accordingly, the purpose of this paper is to investigate incidence rates of concussion across gender for the five most common youth sports in the United States. These include basketball, track and field, soccer, baseball (boys), softball (girls), football (boys), and volleyball (girls). Methods: A PubMed search was performed combining four search themes. The first theme identified the outcomes (concussion, brain injuries, mild traumatic brain injury, etc.). The second theme identified the sport (American football, soccer, basketball, softball, volleyball, track and field, etc.). The third theme identified the population (adolescence, children, youth, boys, girls). The last theme identified the study design (prevalence, frequency, incidence, prospective). Ultimately, 473 studies were surveyed, with 15 fulfilling the criteria: prospective studies presenting original data and the incidence of concussion in the relevant youth sport. The following data were extracted from the selected studies: population age, total study population, total athletic exposures (AE), and incidence rate per 1000 athletic exposures (IR/1000). Two one-way ANOVAs and a Tukey post hoc test were conducted using SPSS. Results: Across the 15 selected studies, the incidence of concussion per 1000 AEs in the considered sports ranged from 0.014 (girls' track and field) to 0.780 (boys' football). The average IR/1000 across all sports was 0.483 for boys and 0.268 for girls; this difference in IR was statistically significant (p=0.013). The Tukey post hoc test showed that football had a significantly higher IR/1000 than boys' basketball (p=0.022), soccer (p=0.033), and track and field (p=0.026). No statistical difference was found in concussion incidence between girls' sports. Removal of football lowered the IR/1000 for boys to a level not statistically different from that of girls (p=0.101). Discussion: Football was the only sport showing a statistically significant difference in concussion incidence rate relative to other sports (within gender). Males were overall 1.8 times more likely to be concussed than females when football was included, whereas concussion was more likely for females when football was excluded. While the significantly higher rate of concussion in football is not surprising given the nature and rules of the sport, it is concerning that research has shown a higher incidence of concussion in practices than in games. Interestingly, the findings indicate that girls' sports are more concussive overall when football is removed. This appears to counter the common notion that boys' sports are more physically taxing and dangerous. Future research should focus on understanding the concussive mechanisms of injury in each sport to enable effective rule changes.
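The incidence-rate metric used throughout this review is straightforward to compute; the sketch below does so for two sports and reports a rate ratio, using invented counts chosen only to reproduce the order of magnitude of the figures quoted above.

```python
def incidence_rate_per_1000(concussions: int, athlete_exposures: int) -> float:
    """Incidence rate per 1000 athletic exposures (AE)."""
    return 1000.0 * concussions / athlete_exposures

# Hypothetical counts for two sports, for illustration only.
football_ir = incidence_rate_per_1000(39, 50000)    # about 0.78 per 1000 AE
track_ir = incidence_rate_per_1000(7, 500000)       # about 0.014 per 1000 AE

# The rate ratio expresses how many times more concussive one sport is than another.
print(f"Football IR/1000: {football_ir:.3f}")
print(f"Track and field IR/1000: {track_ir:.3f}")
print(f"Rate ratio: {football_ir / track_ir:.1f}x")
```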

Keywords: gender, football, soccer, traumatic brain injury

Procedia PDF Downloads 119
211 Effect of Toxic Metals Exposure on Rat Behavior and Brain Morphology: Arsenic, Manganese

Authors: Tamar Bikashvili, Tamar Lordkipanidze, Ilia Lazrishvili

Abstract:

Heavy metals remain one of the most serious environmental problems due to their toxic effects. The effects of arsenic and manganese compounds on rat behavior and neuromorphology were studied. Wistar rats were assigned to four groups: rats in the control group were given regular water, while rats in the other groups drank water with a final manganese concentration of 10 mg/L (group A) or 20 mg/L (group B), or a final arsenic concentration of 68 mg/L (group C), respectively, for a month. To study exploratory and anxiety behavior, rats were tested in the open field; aggressive performance was evaluated in the home cage; and a multi-branched maze was used to assess learning and memory. A statistically significant increase in motor and orienting-searching activity in the experimental groups was revealed by the open field test, expressed as an increase in the number of lines crossed, rearings, and hole reflexes. The obtained results indicated a suppression of fear in rats exposed to manganese; specifically, this was estimated by the frequency of entries into the central part of the open field. The experiments revealed that 30-day exposure to 10 mg/L manganese did not stimulate aggressive behavior in rats, whereas with exposure to the higher dose (20 mg/L), 37% of initially non-aggressive animals manifested aggressive behavior. Furthermore, 25% of rats were extremely aggressive. The obtained data support the hypothesis that excess manganese in the body is one of the immediate causes of enhanced interspecific predatory aggression and violent behavior in rats. It was also discovered that manganese intoxication produces non-reversible, severe learning disability and insignificant, reversible memory disturbances. Studies of rodents exposed to arsenic also revealed changes in the learning process. As is known, the distribution of metal ions differs across brain regions. The principal manganese accumulation was observed in the hippocampus and the neocortex, while arsenic was predominantly accumulated in the nucleus accumbens, striatum, and cortex. These brain regions play an important role in the regulation of emotional state and motor activity. Histopathological analyses of brain sections illustrated two morphologically distinct altered phenotypes of neurons: (1) shrunken cells with indications of apoptosis, in which the nucleus and cytoplasm were very difficult to distinguish but the integrity of the neuronal cytoplasm was not disturbed; and (2) swollen cells with indications of necrosis. Pyknotic nuclei, plasma membrane disruption, and cytoplasmic vacuoles were observed in swollen neurons, which were surrounded by activated gliocytes. It is worth mentioning that in the cortex the majority of damaged neurons were apoptotic, while in the subcortical nuclei the neurons were mainly necrotic. Ultrastructural analyses demonstrated that all cell types in the cortex and the nucleus caudatus showed destroyed mitochondria, widened profiles of the neuronal vacuolar system, an increased number of lysosomes, and degeneration of axonal endings.

Keywords: arsenic, manganese, behavior, learning, neuron

Procedia PDF Downloads 329
210 A Study of the Atlantoaxial Fracture or Dislocation in Motorcyclists with Helmet Accidents

Authors: Shao-Huang Wu, Ai-Yun Wu, Meng-Chen Wu, Chun-Liang Wu, Kai-Ping Shaw, Hsiao-Ting Chen

Abstract:

Objective: To analyze the forensic autopsy data of known passengers, compare them with the national autopsy report database for 2017, and obtain the specific patterned injuries, which can be used as a reference for the reconstruction of hit-and-run motor vehicle accidents. Methods: The items of the Motor Vehicle Accident Report were analyzed, including Date of accident, Time occurred, Day, Accident severity, Accident location, Accident class, Collision with vehicle, Motorcyclist codes, Safety equipment use, etc. The items of the Autopsy Report were also analyzed, including General Description, Clothing and Valuables, External Examination, Head and Neck Trauma, Trunk Trauma, Other Injuries, Internal Examination, Associated Items, Autopsy Determinations, etc. Materials: Case 1. The process of injury formation: the car struck the scooter from behind. The passenger wearing the helmet fell to the ground; the helmet was crushed under the bottom of the sedan, and the bottom of the sedan was raised. Additionally, the sedan was hit on the left by another sedan from behind, causing the front sedan to turn 180 degrees on the spot. The passenger's head was rotated, and the cervical spine was fractured. Injuries: 1. Fracture of the atlantoaxial joint 2. Fracture of the left clavicle, scapula, and proximal humerus 3. Fractures of the 1st-10th left ribs and 2nd-7th right ribs with lung contusion and hemothorax 4. Fractures of the transverse processes of the 2nd-5th lumbar vertebrae 5. Comminuted fracture of the right femur 6. Suspected subarachnoid and subdural hemorrhage 7. Laceration of the spleen. Case 2. The process of injury formation: The motorcyclist wearing the helmet fell to the left by himself, and his chest was crushed by a car going straight. Only his upper body was under the car, and the helmet finally came off. Injuries: 1. Dislocation of the atlantoaxial joint 2. Laceration of the left posterior occipital region 3. Laceration of the left frontal region 4. Laceration on the left side of the chin 5. Strip-shaped bruising on the anterior neck 6. Open rib fracture of the right chest wall 7. Comminuted fractures of the 1st-12th ribs bilaterally 8. Fracture of the sternum 9. Rupture of the left lung 10. Rupture of the left and right atria, the heart apex, and several large vessels 11. Near-transection of the aortic root 12. Severe rupture of the liver. Results: The common features of the two cases were fracture or dislocation of the atlantoaxial joint and crushed helmets. There were no atlantoaxial fractures or dislocations among the 27 pedestrians (not wearing helmets) involved in motor vehicle accidents in the 2017 national autopsy report database, but there were two atlantoaxial fracture or dislocation cases in the database, both of which were falls from height. Conclusion: The cervical spine fracture injury of a motorcyclist who was wearing a helmet is very likely to be a patterned injury caused by his or her fall and rollover under the sedan. It could provide a reference for forensic peers.

Keywords: patterned injuries, atlantoaxial fracture or dislocation, accident reconstruction, motorcycle accident with helmet, forensic autopsy data

Procedia PDF Downloads 58
209 The Effect of Students’ Social and Scholastic Background and Environmental Impact on Shaping Their Pattern of Digital Learning in Academia: A Pre- and Post-COVID Comparative View

Authors: Nitza Davidovitch, Yael Yossel-Eisenbach

Abstract:

The purpose of the study was to inquire whether there was a change in the shaping of undergraduate students' digitally-oriented study pattern between the pre-COVID period (2016-2017) and the post-COVID period (2022-2023), as affected by three factors: social background characteristics, high school background characteristics, and academic background characteristics. These two time points were characterized by dramatic changes in teaching and learning at institutions of higher education. The data were collected via cross-sectional surveys at two time points, in the 2016-2017 academic year (N=443) and in the 2022-2023 academic year (N=326). The questionnaire was distributed on social media and included questions on demographic background characteristics, previous studies in high school, present academic studies, and learning and reading habits. Method of analysis: A. Descriptive statistical analysis. B. Mean comparison tests were conducted to analyze variations in the mean score of the digitally-oriented learning pattern variable at the two time points (pre- and post-COVID) in relation to each of the independent variables. C. Analysis of variance was performed to test the main effects and the interactions. D. Applying linear regression, the research examined the combined effect of the independent variables on shaping students' digitally-oriented learning habits. The analysis includes four models; in all four, the dependent variable is students' perception of digitally-oriented learning. The first model included social background variables; the second model added scholastic background; the third model added the academic background variables; and the fourth model includes all the independent variables together with the period variable (pre- and post-COVID). E. Confirmatory factor analysis using the principal component method with varimax rotation was applied; the variables were constructed as a weighted mean of all the relevant statements, merged to form a single variable denoting a shared content world. The research findings indicate a significant rise in students' perceptions of digitally-oriented learning in the post-COVID period. From a gender perspective, the impact of COVID on shaping a digital learning pattern was much more significant for female students. The effect of socioeconomic status is eliminated when controlling for the period, while the student's employment has an effect stronger than that of all the other variables. It may be assumed that the student's work pattern mediates effects related to the convenience offered by digital learning in terms of distance and time. The significant effect of scholastic background on shaping students' digital learning patterns remained stable, even when controlling for all explanatory variables. The advantage that universities had over colleges in shaping a digital learning pattern in the pre-COVID period dissipated. Therefore, it can be said that after COVID there was a change in how colleges shape students' digital learning patterns, such that no institutional differences are evident. The study shows that the period has a significant independent effect on shaping students' digital learning patterns when controlling for the explanatory variables.
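The four-step regression described above can be sketched as a set of nested OLS models; the file name and all variable names below are placeholders, not the authors' actual questionnaire items, and the model specification is assumed only for illustration.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey data frame; column names are placeholders.
df = pd.read_csv("survey_waves.csv")

models = {
    "M1 social": "digital_learning ~ gender + ses + employment",
    "M2 + scholastic": "digital_learning ~ gender + ses + employment + hs_track + hs_grades",
    "M3 + academic": "digital_learning ~ gender + ses + employment + hs_track + hs_grades"
                     " + institution_type + year_of_study",
    "M4 + period": "digital_learning ~ gender + ses + employment + hs_track + hs_grades"
                   " + institution_type + year_of_study + post_covid",
}

# Fit the four nested OLS models and report adjusted R-squared at each step.
for name, formula in models.items():
    fit = smf.ols(formula, data=df).fit()
    print(f"{name}: adj. R2 = {fit.rsquared_adj:.3f}")
```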

Keywords: learning pattern, COVID, socioeconomic status, digital learning

Procedia PDF Downloads 22
208 Performance of a Lytic Bacteriophage Cocktail against Pseudomonas aeruginosa in Conditions That Simulate the Cystic Fibrosis Lung Environment

Authors: Isaac Martin, Abigail Lark, Sandra Morales, Eric W. Alton, Jane C. Davies

Abstract:

Objectives: The cystic fibrosis (CF) lung is a unique microbiological niche, wherein harmful bacteria persist for many years despite antibiotic therapy. Pseudomonas aeruginosa (Pa), the major culprit leading to lung decline and increased mortality, thrives in the lungs of patients with CF due to several factors that have been linked with poor antibiotic performance. Our group is investigating alternative therapies including bacteriophage cocktails with which we have previously demonstrated efficacy against planktonic organisms. In this study, we explored the effects of a 4-phage cocktail on Pa grown in two different conditions, intended to mirror the CF lung: a) alongside standard antibiotic treatment in pre-formed biofilms (structures formed by Pa-secreted exopolysaccharides which provide both physical and cell division barriers to antimicrobials and host defenses and b) in an acidic environment postulated to be present in the CF airway due both to the primary defect in bicarbonate secretion and secondary effects of inflammation. Methods: 16 Pa strains from CF patients at the Royal Brompton Hospital were selected based on sensitivity to a) ceftazidime/ tobramycin and b) the phage cocktail in a conventional plaque assay. To assess efficacy of phage in biofilms, 96 well plates with Pa (5x10⁷ CFU/ ml) were incubated in static conditions, allowing adherent bacterial colonies to form for 24 hr. Ceftazidime and tobramycin (both at 2 × MIC) were added, +/- bacteriophage (4x10⁸ PFU/mL) for a further 24 hr. Cell viability and biomass were estimated using fluorescent resazurin and crystal violet assays, respectively. To evaluate the effect of pH, strains were grown planktonically in shaking 96 well plates at pH 6.0, 6.6, 7.0 and 7.5 with tobramycin or phage, at varying concentrations. Cell viability was quantified by fluorescent resazurin assay. Results: For the biofilm assay, treatment groups were compared with untreated controls and expressed as percent reduction in cell viability and biomass. Addition of the 4-phage cocktail resulted in a 1.3-fold reduction in cell viability and 1.7-fold reduction in biomass (p < 0.001) when compared to standard antibiotic treatment alone. Notably, there was a 50 ± 15% reduction in cell viability and 60 ± 12% reduction in biomass (95% CI) for the 4 biofilms demonstrating the most resistance to antibiotic treatment. 83% of strains tested (n=6) showed decreased bacterial killing by tobramycin at acidic pHs (p < 0.01). However, 25% of strains (n=12) showed improved phage killing at acidic pHs (p < 0.05), with none showing the pattern of reduced efficacy at acidic pH demonstrated by tobramycin. Conclusion: The 4-phage anti-Pa cocktail tested against Pa performs well in pre-formed biofilms and in acidic environments; two conditions intended to mimic the CF lung. To our knowledge, these are the first data looking at the effects of subtle pH changes on phage-mediated bacterial killing in the context of Pa infection. These findings contribute to a growing body of evidence supporting the use of nebulised lytic bacteriophage as a treatment in the context of lung infection.
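As a small illustration of how the viability and biomass comparisons above can be expressed, the sketch below converts hypothetical assay readings into percent and fold reductions relative to an untreated control; the numbers are invented and do not reproduce the study's data.

```python
def reduction_vs_control(control_signal: float, treated_signal: float):
    """Percent and fold reduction of a viability or biomass signal versus an untreated control."""
    percent = 100.0 * (control_signal - treated_signal) / control_signal
    fold = control_signal / treated_signal
    return percent, fold

# Hypothetical resazurin fluorescence readings (proportional to viable cells), arbitrary units.
untreated = 98000.0
antibiotics_only = 62000.0
antibiotics_plus_phage = 36000.0

for label, signal in [("antibiotics alone", antibiotics_only),
                      ("antibiotics + phage", antibiotics_plus_phage)]:
    pct, fold = reduction_vs_control(untreated, signal)
    print(f"{label}: {pct:.0f}% reduction ({fold:.1f}-fold) vs. untreated control")
```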

Keywords: biofilm, cystic fibrosis, pH, Pseudomonas aeruginosa, lytic bacteriophage

Procedia PDF Downloads 153
207 Development of a Context Specific Planning Model for Achieving a Sustainable Urban City

Authors: Jothilakshmy Nagammal

Abstract:

This research paper deals with different case studies where Form-Based Codes have been adopted, and the different implementation methods in particular are discussed in order to develop a method for formulating a new planning model. The organizing principle of Form-Based Codes, the transect, is used to zone the city into various context-specific transects. An approach is adopted to develop the new planning model, the Context Specific Planning Model (CSPM), as a tool to achieve sustainability for any city in general. A case study comparison in terms of the planning tools used, the code process adopted, and the various control regulations implemented in thirty-two different cities is carried out. The analysis shows that there are a variety of ways to implement form-based zoning concepts: specific plans, a parallel or optional form-based code, a transect-based code/smart code, required form-based standards, or design guidelines. The case studies describe the positive and negative results from form-based zoning where it is implemented. From the different case studies on the method of the FBC, it is understood that the scale at which the Form-Based Code is formulated varies from parts of the city to the whole city. The regulating plan is prepared with the transect as the organizing principle in most of the cases. The various implementation methods adopted in these case studies for the formulation of Form-Based Codes are special districts like Transit Oriented Development (TOD), Traditional Neighbourhood Development (TND), specific plans, and street-based codes. The implementation methods vary among mandatory, integrated, and floating. To attain sustainability, the research takes the approach of developing a regulating plan using the transect as the organizing principle for the entire area of the city in general, and formulating street-based Form-Based Codes for the selected special districts in the study area in particular. Planning is most powerful when it is embedded in the broader context of systemic change and improvement. Systemic is best thought of as holistic, contextualized, and stakeholder-owned, while systematic can be thought of as more linear, generalizable, and typically top-down or expert-driven. The systemic approach is a process based on system theory and system design principles, which are too often ill understood by the general population and policy makers. System theory embraces the importance of a global perspective, multiple components, interdependencies, and interconnections in any system. In addition, the recognition that a change in one part of a system necessarily alters the rest of the system is a cornerstone of system theory. The proposed regulating plan, taking the transect as the organizing principle and using Form-Based Codes to achieve the sustainability of the city, has to be a hybrid code that is integrated within the existing system - a systemic approach with a systematic process. This approach of introducing a few form-based zones into a conventional code could be effective in the phased replacement of an existing code. It could also be an effective way of responding to the near-term pressure of physical change in 'sensitive' areas of the community. With this approach and method, the creation of the new Context Specific Planning Model towards achieving sustainability is explained in detail in this research paper.

Keywords: context-based planning model, form-based code, transect, systemic approach

Procedia PDF Downloads 312
206 Burial Findings in Prehistory Qatar: Archaeological Perspective

Authors: Sherine El-Menshawy

Abstract:

Death, funerary beliefs, and customs form an essential feature of belief systems and practices in many cultures. It is evident that during the prehistoric periods, various techniques of corpse burial and funerary rituals were practised. Occasionally, corpses were merely buried in the sand, or placed in a grave in a contracted position - with knees drawn up under the chin and hands normally lying before the face - with mounds of sand marking the grave; in other cases the bodies were burnt. However, the common practice demonstrable in the archaeological record was burial. The earliest graves were very simple, consisting of shallow circular or oval pits in the ground. The current study focuses on the material culture of Qatar during the prehistoric period, specifically funerary architecture and burial practices. Since information about burial customs and funerary practices in Qatar prehistory is both scarce and fragmentary, the importance of such a study lies in answering research questions related to funerary beliefs and burial habits during the early stages of civilizational transformation in prehistoric Qatar compared with Mesopotamia, since, chronologically, the earliest pottery discovered in Qatar, collected from the excavations, belongs to the prehistoric Ubaid culture of Mesopotamia. This will lead to a deeper understanding of life and social status in the prehistoric period in Qatar. The research also explores the relationship between prehistoric Qatari funerary traditions and those of neighboring cultures in Mesopotamia and Ancient Egypt, with the aim of ascertaining the distinctive aspects of prehistoric Qatari culture, the reception of classical culture, and the role it played in the creation of local cultural identities in the Near East. The methodology of this study is based on published books and articles, in addition to unpublished reports of the Danish team that excavated archaeological sites in and around Doha, Qatar, from the 1950s. The study is also built on comparative material related to burial customs found in Mesopotamia. Therefore, this current research: (i) advances knowledge of the burial customs of the ancient people who inhabited Qatar, a subject that has remained little known to scholars; the study will thereby deepen understanding of the history of ancient Qatar and its culture and values, with the aim of sharing this invaluable human heritage. (ii) The study is of special significance for the field, since the evidence derived from it has great value for the study of living conditions, social structure, religious beliefs, and ritual practices. (iii) Excavations have brought to light burials of different categories. The graves date to the Bronze and Iron Ages. Their structure varies between mounds above the ground and burials below ground level. Evidence comes from sites such as Al-Da'asa, Ras Abruk, and Al-Khor. Painted Ubaid sherds of Mesopotamian culture have been discovered in Qatar at sites such as Al-Da'asa, Ras Abruk, and Bir Zekrit. In conclusion, no comprehensive study has been done, and the lack of a general synthesis of information about funerary practices is problematic; therefore, the study will fill the gaps in this area.

Keywords: archaeological, burial, findings, prehistory, Qatar

Procedia PDF Downloads 112
205 Human Behavioral Assessment to Derive Land-Use for Sustenance of River in India

Authors: Juhi Sah

Abstract:

Habitat is characterized by the inter-dependency of environmental elements. The anthropocentric development approach is increasing our vulnerability to natural hazards. Hence, manmade interventions should have a higher level of sensitivity towards natural settings. Sensitivity towards the environment can be assessed through the behavior of the stakeholders involved. This led to the establishment of a hypothesis: there exists a legitimate relationship between the behavioral sciences, land use evolution, and environmental conservation in the planning process. An attempt has been made to establish this relationship by reviewing the existing body of knowledge and case examples pertaining to the three disciplines under inquiry. Recognizing the scarce and deteriorating nature of the earth's freshwater reserves, and to test the above concept, a case study of a growing urban center's river floodplain in a developing economy, India, was selected. Cases of urban flooding in Chennai, Delhi, and other megacities of India impose a high risk on the unauthorized settlements on the floodplains of the rivers. The issue addressed here is the encroachment of floodplains, approached through psychological enlightenment and behavior modification through knowledge building. The reaction of an individual or society can be compared to a cognitive process. This study documents all stakeholders' behavior and perceptions of their immediate natural environment (the water body) and produces various land uses suitable along a river in an urban settlement according to the different stakeholders' perceptions. To assess and induce morally responsible behavior in a community (small scale or large scale), tools of psychological inquiry are used for qualitative analysis. The analysis deals with varied data sets from two sectors, namely the river and its geology, and land use planning and regulation. A distinctive pattern in built-up growth, river ecology degradation, and human behavior has been identified by handling a large quantum of data from diverse sectors, with comments on the availability of relevant data and its implications. Along the whole river stretch, the condition and usage of the banks vary; hence, stakeholder-specific survey questionnaires have been prepared to accurately map the responses and habits of the inhabitants. A conceptual framework has been designed to move forward with the empirical analysis. The classical principle of virtue holds that the virtue of a human depends on character, but another concept holds that behavior or response is a derivative of the situation, and that to bring about a behavioral change one needs to introduce a disruption in the situation or environment. Given present trends, blindly following the results of data analytics and using them to construct policy is not proving favorable to planned development and natural resource conservation. Thus, a behavioral assessment of the planet's inhabitants is also required, as their activities and interests have a large impact on the earth's pre-set systems and their sustenance.

Keywords: behavioral assessment, flood plain encroachment, land use planning, river sustenance

Procedia PDF Downloads 91