Search results for: precast structures
116 Snake Locomotion: From Sinusoidal Curves and Periodic Spiral Formations to the Design of a Polymorphic Surface
Authors: Ennios Eros Giogos, Nefeli Katsarou, Giota Mantziorou, Elena Panou, Nikolaos Kourniatis, Socratis Giannoudis
Abstract:
In the context of the postgraduate course Productive Design at the Department of Interior Architecture of the University of West Attica in Athens, under the guidance of Professors Nikolaos Kourniatis and Socratis Giannoudis, kinetic mechanisms were examined through parametric models with a view to their application in the design of objects. In the first phase, the students selected a motion mechanism from daily experience and analyzed its geometric structure in relation to the geometric transformations involved. In the second phase, they modelled it parametrically in the Grasshopper3d algorithmic editor for Rhino and planned its application to an everyday object. For the project presented here, our team began by studying the movement of living beings, specifically the snake. By studying the snake and the role the environment plays in its movement, four basic typologies were recognized: serpentine, concertina, sidewinding and rectilinear locomotion, as well as the snake's ability to perform spiral formations. Most typologies are characterized by ripples, a series of sinusoidal curves. For the application of the snake movement to a polymorphic space divider, the use of a coil-type joint was studied. In Grasshopper, the desired motion of the polymorphic surface was simulated by applying a coil to a sinusoidal curve and to a spiral curve. It was important throughout the process that the points corresponding to the nodes of the real object remain constant in number, as well as the distances between them; the elasticity of the construction had to be achieved through the modular movement of the coil rather than through an elastic element (material) at the nodes. Using a mesh (repeating coil), the whole construction is transformed into a supporting body and combines functionality with aesthetics. The set of elements functions as a vertical spatial network, where each element contributes to its coherence and stability. Depending on the positions of the elements relative to the level of support, different perspectives are created in the visual perception of the adjacent space. For the implementation of the model at 1:3 scale (0.50 m x 2.00 m), the load-bearing structure uses aluminum rods of Φ6 mm for the primary pillars and Φ2.50 mm for the secondary columns. The filling elements and nodes are of the same material and were made of MDF surfaces. During the design process, four trapezoidal patterns were developed, which function as filling elements, and to support their assembly a different engraving facet was applied to each. The nodes have holes through which the rods pass, while their connection point with the patterns has a half-carved recess; the patterns have a corresponding recess. The nodes are of two different types depending on the column that passes through them. The patterns and nodes were designed to be cut and engraved using a laser cutter and attached to the nodes using glue. The parameters participate in the design as mechanisms that generate complex forms and structures through the repetition of constantly changing versions of the parts that compose the object.
Keywords: polymorphic, locomotion, sinusoidal curves, parametric
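As a rough illustration of the constraint described above, the following Python sketch (separate from the authors' Grasshopper3d definition; the amplitude, wavelength and node count are illustrative values) distributes a fixed number of nodes at equal arc-length spacing along a sinusoidal guide curve, so that the node count and the inter-node distances stay constant while the curve parameters change:

import numpy as np

def sinusoid_nodes(amplitude, wavelength, length, n_nodes=20, resolution=2000):
    # densely sample the guide curve y = A * sin(2*pi*x / wavelength)
    x = np.linspace(0.0, length, resolution)
    y = amplitude * np.sin(2.0 * np.pi * x / wavelength)
    # cumulative arc length along the sampled curve
    seg = np.hypot(np.diff(x), np.diff(y))
    s = np.concatenate([[0.0], np.cumsum(seg)])
    # constant node count, equal arc-length spacing between consecutive nodes
    targets = np.linspace(0.0, s[-1], n_nodes)
    nodes = np.column_stack([np.interp(targets, s, x), np.interp(targets, s, y)])
    return nodes, s[-1] / (n_nodes - 1)

nodes, spacing = sinusoid_nodes(amplitude=0.35, wavelength=1.0, length=2.0)
print(f"{len(nodes)} nodes, inter-node spacing {spacing:.3f} m")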
Procedia PDF Downloads 105
115 A Corpus-Based Study on the Lexical, Syntactic and Sequential Features across Interpreting Types
Authors: Qianxi Lv, Junying Liang
Abstract:
Among the various modes of interpreting, simultaneous interpreting (SI) is regarded as a ‘complex’ and ‘extreme condition’ among cognitive tasks, whereas in consecutive interpreting (CI) the interpreter does not have to share processing capacity between tasks. Given that SI exerts great cognitive demand, it makes sense to posit that the output of SI may be more compromised than that of CI in its linguistic features. The bulk of the research has stressed the varying cognitive demand and processes involved in different modes of interpreting; however, related empirical research is sparse. In keeping with our interest in investigating the quantitative linguistic factors discriminating between SI and CI, the current study examines potential lexical simplification, syntactic complexity and sequential organization mechanisms with a self-made inter-modal corpus of transcribed simultaneous and consecutive interpretations, translated speech and original speech texts, totalling 321,960 running words. The lexical features are extracted in terms of lexical density, list head coverage, hapax legomena, and type-token ratio, as well as core vocabulary percentage. Dependency distance, an index of syntactic complexity reflective of processing demand, is employed. The frequency motif, a sequential unit not bound to grammar, is also used to visualize the local function distribution of the interpreting output. While SI is generally regarded as multitasking under high cognitive load, our findings clearly show that CI may tax cognitive resources differently, and perhaps more heavily, and hence yields more lexically and syntactically simplified output. In addition, the sequential features show that SI and CI organize sequences from the source text into the output in different ways, so as to minimize the cognitive load in each mode. We interpret the results within the framework that cognitive demand is exerted on both the maintenance and the coordination components of working memory. On the one hand, the information maintained in CI is inherently larger in volume than in SI. On the other hand, time constraints directly influence the sentence reformulation process. The temporal pressure from the input in SI lets the interpreter keep only a small chunk of information in the focus of attention. Thus, SI interpreters usually produce the output by largely retaining the source structure, so as to release the information from working memory immediately after it is formulated in the target language. Conversely, CI interpreters receive at least a few sentences before reformulation, when they are more self-paced. CI interpreters may thus tend to retain and generate the information in a way that lessens the demand. In other words, interpreters cope with the high demand in the reformulation phase of CI by generating output with densely distributed function words, more content words of higher frequency values and fewer variations, simpler structures and more frequently used language sequences. We consequently propose a revised effort model based on these results for a better illustration of cognitive demand during both interpreting types.
Keywords: cognitive demand, corpus-based, dependency distance, frequency motif, interpreting types, lexical simplification, sequential units distribution, syntactic complexity
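The lexical and syntactic indices listed above reduce to simple counts once the transcripts are tokenized and dependency-parsed; a minimal Python sketch (not the authors' scripts; the content-word tagset and the toy sentence are assumptions) of type-token ratio, hapax count, lexical density and mean dependency distance:

from collections import Counter

CONTENT_TAGS = {"NOUN", "VERB", "ADJ", "ADV"}   # assumed definition of a content word

def type_token_ratio(tokens):
    return len({t.lower() for t in tokens}) / len(tokens)

def hapax_count(tokens):
    counts = Counter(t.lower() for t in tokens)
    return sum(1 for c in counts.values() if c == 1)

def lexical_density(tagged):
    # share of content words among all running words
    return sum(1 for _, tag in tagged if tag in CONTENT_TAGS) / len(tagged)

def mean_dependency_distance(heads):
    # heads[i] is the 1-based index of the governor of token i+1, 0 for the root
    dists = [abs((i + 1) - h) for i, h in enumerate(heads) if h != 0]
    return sum(dists) / len(dists)

# toy sentence: "the interpreter renders the speech faithfully"
tokens = ["the", "interpreter", "renders", "the", "speech", "faithfully"]
tagged = list(zip(tokens, ["DET", "NOUN", "VERB", "DET", "NOUN", "ADV"]))
heads = [2, 3, 0, 5, 3, 3]   # one plausible dependency analysis
print(type_token_ratio(tokens), hapax_count(tokens),
      lexical_density(tagged), mean_dependency_distance(heads))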
Procedia PDF Downloads 178
114 DSF Elements in High-Rise Timber Buildings
Authors: Miroslav Premrov, Andrej Štrukelj, Erika Kozem Šilih
Abstract:
The utilization of prefabricated timber wall elements with double glazing, known as double-skin façade (DSF) elements, represents an innovative structural approach in new high-rise timber construction, combining sustainable solutions with improved energy efficiency and living quality. In addition to the minimum energy needs of buildings, the design of modern buildings increasingly focuses on optimal indoor comfort, in particular on sufficient natural light indoors. An energy-optimised building with an optimal layout of glazed areas around the building envelope represents a great potential in modern timber construction. Usually, all these transparent façade elements are, for energy reasons, oriented asymmetrically, and if they are considered as non-resisting against horizontal load impact, strong torsion effects can appear in the building. The problem of structural stability against strong horizontal loads in such modern timber buildings increases especially in the case of high-rise structures, where additional bracing elements have to be used. In such a case, special diagonal bracing systems or other bracing solutions with common timber wall elements have to be incorporated into the structure of the building to satisfy all resistance requirements prescribed by the standards. However, such structural solutions are usually neither environmentally friendly nor do they contribute to improved living comfort, or they are not accepted by the architects at all. Consequently, there is a particular need to develop innovative load-bearing timber-glass wall elements which are at the same time environmentally friendly, can increase internal comfort in the building, and are also load-bearing. The newly developed load-bearing DSF elements can be a good answer to all these requirements. The timber-glass DSF wall elements consist of two transparent layers: a thermally insulated three-layered glass pane on the internal side and an additional single-layered glass pane on the external side of the wall. The two panes are separated by an air channel, which can be of any dimensions and can have a significant influence on the thermal insulation or acoustic response of such a wall element. Most published studies on DSF elements primarily deal only with energy and LCA aspects and do not address structural problems. Previous studies, based on experimental analysis and mathematical modeling, have already demonstrated a possible benefit of such load-bearing DSF elements, especially in comparison with previously developed load-bearing single-skin timber wall elements, but they have not yet been applied in any high-rise timber structure. Therefore, in the presented study, a specially selected 10-storey prefabricated timber building constructed in a cross-laminated timber (CLT) structural wall system is analyzed using the developed DSF elements, with the aim of increasing the lateral structural stability of the whole building. The results clearly highlight the importance of the load-bearing DSF elements, as their incorporation can have a significant impact on the overall behavior of the structure through their influence on the stiffness properties. Taking these considerations into account is crucial to ensure compliance with seismic design codes and to improve the structural resilience of high-rise timber buildings.
Keywords: glass, high-rise buildings, numerical analysis, timber
Procedia PDF Downloads 46
113 Autophagy Promotes Vascular Smooth Muscle Cell Migration in vitro and in vivo
Authors: Changhan Ouyang, Zhonglin Xie
Abstract:
In response to proatherosclerotic factors such as oxidized lipids, or to therapeutic interventions such as angioplasty, stents, or bypass surgery, vascular smooth muscle cells (VSMCs) migrate from the media to the intima, resulting in intimal hyperplasia, restenosis, graft failure, or atherosclerosis. These proatherosclerotic factors also activate autophagy in VSMCs. However, the functional role of autophagy in vascular health and disease remains poorly understood. In the present study, we determined the role of autophagy in the regulation of VSMC migration. Autophagy activity in cultured human aortic smooth muscle cells (HASMCs) and mouse carotid arteries was measured by Western blot analysis of microtubule-associated protein 1 light chain 3 B (LC3B) and P62. The VSMC migration was determined by scratch wound assay and transwell migration assay. Ex vivo smooth muscle cell migration was determined using aortic ring assay. The in vivo SMC migration was examined by staining the carotid artery sections with smooth muscle alpha actin (alpha SMA) after carotid artery ligation. To examine the relationship between autophagy and neointimal hyperplasia, C57BL/6J mice were subjected to carotid artery ligation. Seven days after injury, protein levels of Atg5, Atg7, Beclin1, and LC3B drastically increased and remained higher in the injured arteries three weeks after the injury. In parallel with the activation of autophagy, vascular injury-induced neointimal hyperplasia as estimated by increased intima/media ratio. The en face staining of carotid artery showed that vascular injury enhanced alpha SMA staining in the intimal cells as compared with the sham operation. Treatment of HASMCs with platelet-derived growth factor (PDGF), one of the major factors for vascular remodeling in response to vascular injury, increased Atg7 and LC3 II protein levels and enhanced autophagosome formation. In addition, aortic ring assay demonstrated that PDGF treated aortic rings displayed an increase in neovessel formation compared with control rings. Whole mount staining for CD31 and alpha SMA in PDGF treated neovessels revealed that the neovessel structures were stained by alpha SMA but not CD31. In contrast, pharmacological and genetic suppression of autophagy inhibits VSMC migration. Especially, gene silencing of Atg7 inhibited VSMC migration induced by PDGF. Furthermore, three weeks after ligation, markedly decreased neointimal formation was found in mice treated with chloroquine, an inhibitor of autophagy. Quantitative morphometric analysis of the injured vessels revealed a marked reduction in the intima/media ratio in the mice treated with chloroquine. Conclusion: Autophagy activation increases VSMC migration while autophagy suppression inhibits VSMC migration. These findings suggest that autophagy suppression may be an important therapeutic strategy for atherosclerosis and intimal hyperplasia.Keywords: autophagy, vascular smooth muscle cell, migration, neointimal formation
Procedia PDF Downloads 314
112 Growth Patterns of Pyrite Crystals Studied by Electron Back Scatter Diffraction (EBSD)
Authors: Kirsten Techmer, Jan-Erik Rybak, Simon Rudolph
Abstract:
Naturally formed pyrite (FeS2) is a frequent sulfide in sedimentary and metamorphic rocks. Growth textures of idiomorphic pyrite assemblages reflect the conditions during their formation in the geologic sequence; furthermore, local texture analyses of the growth patterns of pyrite assemblages by EBSD make it possible to resolve the growth conditions during the formation of pyrite at the micron scale. The spatial resolution of local texture measurements in the Scanning Electron Microscope used can be in the nanometer range. Orientation contrasts resulting from domains of small misorientations within larger pyrite crystals can be resolved as well. The electron optical studies have been carried out in a Field-Emission Scanning Electron Microscope (FEI Quanta 200) equipped with a CCD camera to study the orientation contrasts along the surfaces of pyrite. Idiomorphic cubic single crystals of pyrite, polycrystalline assemblages of pyrite, spherically grown pyrite aggregates as well as pyrite-bearing ammonites have been studied by EBSD in the Scanning Electron Microscope. Samples were chosen to show no or only minor secondary deformation and an idiomorphic 3D crystal habit, so that the local textures of pyrite result mainly from growth and only to a minor extent from deformation. The samples studied derive from Navajun (Spain), Chalchidiki (Greece), Thüringen (Germany) and Unterkliem (Austria). Chemical analyses by EDAX show pyrite with minor inhomogeneities, e.g., single crystals of galena and chalcopyrite along the grain boundaries of larger pyrite crystals. Intergrowth between marcasite and pyrite can be detected in one sample. Pyrite may form intense growth twinning lamellae on {011}. Twinning, e.g., contact twinning, is abundant within the crystals studied, and the individual twinning lamellae can be resolved by EBSD. The ammonites studied show a replacement of the shell by newly formed pyrite, resulting in an intense intergrowth of calcite and pyrite. EBSD measurements indicate a polycrystalline microfabric of both minerals, still reflecting primary surface structures of the ammonites, e.g., the septa. Discs of pyrite (“pyrite dollars”) as well as pyrite framboids show growth patterns comprising a typical microfabric. EBSD studies reveal an equigranular matrix in the inner part of the pyrite discs and fiber growth with larger misorientations in the outer regions between the individual segments. This typical microfabric derives from a formation of pyrite crystals starting at a higher nucleation rate and followed by directional crystal growth. The EBSD studies show that the growth texture of pyrite in the samples studied reveals a correlation between the nucleation rate and the subsequent growth rate of the pyrites, thus leading to the characteristic crystal habits. Preferential directional growth at lower nucleation rates may lead to the formation of 3D pyrite framboids. Crystallographic misorientations between the individual fibers are similar. In the ammonites studied, primary anisotropies of the substrates, e.g., ammonitic sutures, influence the nucleation, crystal growth and habit of the newly formed pyrites along the surfaces.
Keywords: Electron Back Scatter Diffraction (EBSD), growth pattern, Fe-sulfides (pyrite), texture analyses
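For readers unfamiliar with how misorientations between neighbouring pyrite domains are quantified from EBSD orientation data, a minimal Python sketch follows (the two orientation matrices are invented examples; pyrite is treated here simply as a cubic-lattice case), computing the misorientation angle as the minimum rotation angle over the 24 proper rotations of the cubic point group:

import numpy as np
from itertools import permutations, product

def cubic_proper_rotations():
    # the 24 signed permutation matrices with determinant +1
    ops = []
    for perm in permutations(range(3)):
        for signs in product((1.0, -1.0), repeat=3):
            R = np.zeros((3, 3))
            for row, (col, s) in enumerate(zip(perm, signs)):
                R[row, col] = s
            if np.isclose(np.linalg.det(R), 1.0):
                ops.append(R)
    return ops

def misorientation_deg(g_a, g_b, sym_ops):
    # g_a, g_b: rotation matrices (crystal -> sample) of two neighbouring domains
    dg = g_a @ g_b.T
    cosines = [(np.trace(S @ dg) - 1.0) / 2.0 for S in sym_ops]
    return np.degrees(np.arccos(np.clip(max(cosines), -1.0, 1.0)))

def rot_z(deg):
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

ops = cubic_proper_rotations()
# small intra-crystal misorientation between two nearly parallel domains
print(f"{misorientation_deg(rot_z(0.0), rot_z(5.0), ops):.2f} deg")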
Procedia PDF Downloads 292
111 Seismic History and Liquefaction Resistance: A Comparative Study of Sites in California
Authors: Tarek Abdoun, Waleed Elsekelly
Abstract:
Introduction: Liquefaction of soils during earthquakes can have significant consequences on the stability of structures and infrastructure. This study focuses on comparing two liquefaction case histories in California, namely the response of the Wildlife site in the Imperial Valley to the 2010 El-Mayor Cucapah earthquake (Mw = 7.2, amax = 0.15g) and the response of the Treasure Island Fire Station (F.S.) site in the San Francisco Bay area to the 1989 Loma Prieta Earthquake (Mw = 6.9, amax = 0.16g). Both case histories involve liquefiable layers of silty sand with non-plastic fines, similar shear wave velocities, low CPT cone penetration resistances, and groundwater tables at similar depths. The liquefaction charts based on shear wave velocity field predict liquefaction at both sites. However, a significant difference arises in their pore pressure responses during the earthquakes. The Wildlife site did not experience liquefaction, as evidenced by piezometer data, while the Treasure Island F.S. site did liquefy during the shaking. Objective: The primary objective of this study is to investigate and understand the reason for the contrasting pore pressure responses observed at the Wildlife site and the Treasure Island F.S. site despite their similar geological characteristics and predicted liquefaction potential. By conducting a detailed analysis of similarities and differences between the two case histories, the objective is to identify the factors that contributed to the higher liquefaction resistance exhibited by the Wildlife site. Methodology: To achieve this objective, the geological and seismic data available for both sites were gathered and analyzed. Then their soil profiles, seismic characteristics, and liquefaction potential as predicted by shear wave velocity-based liquefaction charts were analyzed. Furthermore, the seismic histories of both regions were examined. The number of previous earthquakes capable of generating significant excess pore pressures for each critical layer was assessed. This analysis involved estimating the total seismic activity that the Wildlife and Treasure Island F.S. critical layers experienced over time. In addition to historical data, centrifuge and large-scale experiments were conducted to explore the impact of prior seismic activity on liquefaction resistance. These findings served as supporting evidence for the investigation. Conclusions: The higher liquefaction resistance observed at the Wildlife site and other sites in the Imperial Valley can be attributed to preshaking by previous earthquakes. The Wildlife critical layer was subjected to a substantially greater number of seismic events capable of generating significant excess pore pressures over time compared to the Treasure Island F.S. layer. This crucial disparity arises from the difference in seismic activity between the two regions in the past century. In conclusion, this research sheds light on the complex interplay between geological characteristics, seismic history, and liquefaction behavior. It emphasizes the significant impact of past seismic activity on liquefaction resistance and can provide valuable insights for evaluating the stability of sandy sites in other seismic regions.Keywords: liquefaction, case histories, centrifuge, preshaking
Procedia PDF Downloads 75
110 Operational Characteristics of the Road Surface Improvement
Authors: Iuri Salukvadze
Abstract:
Construction has played an important role in the history of mankind; there is not a single product in our lives in which the builder’s work is not materialized, because creating all of it requires setting up factories, roads, bridges, etc. The function of the Republic of Georgia as part of the Europe-Asia connecting transport corridor has increased significantly. In the context of this transit function, a large share of the cargo traffic belongs to motor transport; hence the improvement of motor road transport infrastructure is rather important and raises new, increased operational demands for existing as well as new motor roads. Construction of a durable road surface involves rather large costs, but because of high transport-operational properties, such as higher speed, lower fuel consumption, less tire wear, etc., if the traffic intensity is high the investment is recovered rapidly and income increases accordingly. If the traffic intensity is relatively small, it is recommended to use lightened road carpet structures so that the capital investment does not exceed the normative value. The road carpet is divided into the following basic types: asphaltic concrete and cement concrete. Asphaltic concrete is the most advanced type of road carpet. It is laid in two or three layers on a rigid foundation and is compacted. Asphaltic concrete is an artificial building material whose layers are composed of a selected and measured stone skeleton and sand, bound by bitumen and a mixture of mineral powder. A less strictly selected similar material is called a bitumen-mineral mixture. Asphaltic concrete is a non-rigid building material, durable under vertical loading but less resistant to the impact of horizontal forces. Cement concrete is a monolithic and durable material; it withstands horizontal loads well but is less resistant to vertical loads. Cement concrete consists of strictly selected and measured stone material and sand, with cement as the binder. A cement concrete road carpet consists of separate slabs with sizes from 3-5 m up to 6-8 m. The slabs are reinforced by a rather complex system. Between the slabs, joints are arranged, designed to avoid additional stresses due to temperature fluctuations along the length of the slabs. For the joint behavior of the separate slabs, they are connected by metal rods. The rods accommodate the changes in slab length and distribute vertical forces and bending moments to the slabs. The foundation layers must be extremely durable, which requires high-quality stone material, cement, and metal. The qualification work aims to improve traffic conditions on motor roads, to prolong their service life and to improve their operational characteristics. The work consists of three chapters, 80 pages, 5 tables and 5 figures. The work presents general concepts as well as tests carried out by various companies using modern methods, together with their results. Chapter III presents the tests we carried out on this issue and specific examples of improving the operational characteristics.
Keywords: asphalt, cement, cylindrical sample of asphalt, building
Procedia PDF Downloads 223
109 Improved Morphology in Sequential Deposition of the Inverted Type Planar Heterojunction Solar Cells Using Cheap Additive (DI-H₂O)
Authors: Asmat Nawaz, Ceylan Zafer, Ali K. Erdinc, Kaiying Wang, M. Nadeem Akram
Abstract:
Hybrid halide perovskites with the general formula ABX₃, where X = Cl, Br or I, are considered ideal candidates for the preparation of photovoltaic devices. The most commonly and successfully used hybrid halide perovskite for photovoltaic applications is CH₃NH₃PbI₃ and its analogue prepared from lead chloride, commonly written as CH₃NH₃PbI₃_ₓClₓ. Some research groups use lead-free (Sn replaces Pb) and mixed halide perovskites for the fabrication of devices. Both mesoporous and planar structures have been developed. Compared with the mesoporous structure, in which the perovskite material infiltrates a mesoporous metal oxide scaffold, the planar architecture is much simpler and easier for device fabrication. In a typical perovskite solar cell, a perovskite absorber layer is sandwiched between the hole and electron transport layers. Upon irradiation, carriers are created in the absorber layer that can travel through the hole and electron transport layers and the interfaces in between. We fabricated an inverted planar heterojunction solar cell with the structure ITO/PEDOT/Perovskite/PCBM/Al via a two-step spin coating method, also called the sequential deposition method. A small amount of a cheap additive, H₂O, was added into the PbI₂/DMF solution to make it homogeneous. We prepared four different solutions (without H₂O, 1% H₂O, 2% H₂O, 3% H₂O). After preparation, stirring overnight at 60 ℃ is essential to obtain homogeneous precursor solutions. We observed that the solution with 1% H₂O was much more homogeneous at room temperature than the others, while the solution with 3% H₂O precipitated at once at room temperature. Four different PbI₂ films were formed on PEDOT substrates by spin coating, and immediately afterwards (before the PbI₂ dried) the substrates were immersed in a methylammonium iodide solution (prepared in isopropanol) to complete the desired perovskite film. After obtaining the desired films, the substrates were rinsed with isopropanol to remove the excess methylammonium iodide and finally dried on a hot plate for only 1-2 minutes. In this study, we added H₂O to the PbI₂/DMF precursor solution. The concept of additives is widely used in bulk-heterojunction solar cells to manipulate the surface morphology, leading to enhancement of the photovoltaic performance. There are two most important parameters for the selection of additives: (a) a higher boiling point with respect to the host material and (b) good interaction with the precursor materials. We observed that the morphology of the films was improved, and we achieved denser, more uniform films with fewer cavities and almost full surface coverage, but only for the precursor solution containing 1% H₂O. Therefore, we fabricated the complete perovskite solar cell by the sequential deposition technique with the precursor solution containing 1% H₂O. We conclude that by adding additives to the precursor solutions, one can easily manipulate the morphology of the perovskite film. In the sequential deposition method, the thickness of the perovskite film is on the order of micrometers, whereas the charge diffusion length in PbI₂ is on the order of nanometers. Therefore, by controlling the thickness using other deposition methods for the fabrication of solar cells, we can achieve better efficiency.
Keywords: methylammonium lead iodide, perovskite solar cell, precursor composition, sequential deposition
Procedia PDF Downloads 246
108 Squaring the Triangle: A Stumpian Solution to the Major Frictions that Exist between Pragmatism, Religion, and Moral Progress; Richard Bernstein, Cornel West, and Hans-Georg Gadamer Re-Examined
Authors: Martin Bloomfield
Abstract:
This paper examines frictions that lie at the heart of any pragmatist conception of religion and moral progress. I take moral progress to require the ability to correctly analyse social problems, provide workable solutions to these problems, and then rationally justify the analyses and solutions used. I take religion here to involve, as a minimal requirement, belief in the existence of God, a god, or gods, such that they are recognisable to most informed observers within the Western tradition. I take pragmatism to belong to, and borrow from, the philosophical traditions of non-absolutism, anti-realism, historicism, and voluntarism. For clarity, the relevant brands of each of these traditions will be examined during the paper. The friction identified in the title may be summed up as follows: those who, like Cornel West (and, when he was alive, Hilary Putnam), are theistic pragmatists with an interest in realising moral progress, have all been aware of a problem inherent in their positions. Assuming it can be argued that religion and moral progress are compatible, a non-absolutist, anti-realist, historicist position nevertheless raises problems that, as Leon Wieseltier pointed out, the pragmatist still believes in a God who isn’t real, and that the truth of any religious statement (including “God exists”) is relative not to any objective reality but to communities of engaged interlocutors; and that, where there are no absolute standards of right and wrong, any analysis of (and solution to) social problems can only be rationally justified relative to one or another community or moral and epistemic framework. Attempts made to universalise these frameworks, notably by Dewey, Gadamer, and Bernstein, through democracy and hermeneutics, fall into either a vicious and infinite regress, or (taking inspiration from Habermas) the problem of moral truths being decided through structures of power. The paper removes this friction by highlighting the work of Christian pragmatist Cornel West through the lens of the philosopher of religion Eleanore Stump. While West recognises that for the pragmatist, the correctness of any propositions about God or moral progress is impossible to rationally justify to any outside the religious, moral or epistemic framework of the speakers themselves without, as he calls it, a ‘locus of truth’ (which is itself free from the difficulties Dewey, Gadamer and Bernstein fall victim to), Stump identifies routes to knowledge which provide such a locus while avoiding the problems of relativism, power dynamics, and regress. She describes “Dominican” and “Franciscan” knowledge (roughly characterised as “propositional” and “non-propositional”), and uses this distinction to identify something Bernstein saw as missing from Gadamer: culture-independent norms, upon which universal agreement can be built. The “Franciscan knowledge” Stump identifies as key is second-personal knowledge of Christ. For West, this allows the knower to access vital culture-independent norms. If correct, instead of the classical view (religion is incompatible with pragmatism), Christianity becomes key to pragmatist knowledge and moral-knowledge claims. Rather than being undermined by pragmatism, Christianity enables pragmatists to make moral and epistemic claims, free from troubling power dynamics and cultural relativism.Keywords: Cornel West, Cultural Relativism, Gadamer, Philosophy of Religion, Pragmatism
Procedia PDF Downloads 197
107 Functional Outcome of Speech, Voice and Swallowing Following Excision of Glomus Jugulare Tumor
Authors: B. S. Premalatha, Kausalya Sahani
Abstract:
Background: Glomus jugulare tumors arise within the jugular foramen and are commonly seen in females particularly on the left side. Surgical excision of the tumor may cause lower cranial nerve deficits. Cranial nerve involvement produces hoarseness of voice, slurred speech, and dysphagia along with other physical symptoms, thereby affecting the quality of life of individuals. Though oncological clearance is mainly emphasized on while treating these individuals, little importance is given to their communication, voice and swallowing problems, which play a crucial part in daily functioning. Objective: To examine the functions of voice, speech and swallowing outcomes of the subjects, following excision of glomus jugulare tumor. Methods: Two female subjects aged 56 and 62 years had come with a complaint of change in voice, inability to swallow and reduced clarity of speech following surgery for left glomus jugulare tumor were participants of the study. Their surgical information revealed multiple cranial nerve palsies involving the left facial, left superior and recurrent branches of the vagus nerve, left pharyngeal, left soft palate, left hypoglossal and vestibular nerves. Functional outcomes of voice, speech and swallowing were evaluated by perceptual and objective assessment procedures. Assessment included the examination of oral structures and functions, dysarthria by Frenchey dysarthria assessment, cranial nerve functions and swallowing functions. MDVP and Dr. Speech software were used to evaluate acoustic parameters of voice and quality of voice respectively. Results: The study revealed that both the subjects, subsequent to excision of glomus jugulare tumor, showed a varied picture of affected oral structure and functions, articulation, voice and swallowing functions. The cranial nerve assessment showed impairment of the vagus, hypoglossal, facial and glossopharyngeal nerves. Voice examination indicated vocal cord paralysis associated with breathy quality of voice, weak voluntary cough, reduced pitch and loudness range, and poor respiratory support. Perturbation parameters as jitter, shimmer were affected along with s/z ratio indicative of voice fold pathology. Reduced MPD(Maximum Phonation Duration) of vowels indicated that disturbed coordination between respiratory and laryngeal systems. Hypernasality was found to be a prominent feature which reduced speech intelligibility. Imprecise articulation was seen in both the subjects as the hypoglossal nerve was affected following surgery. Injury to vagus, hypoglossal, gloss pharyngeal and facial nerves disturbed the function of swallowing. All the phases of swallow were affected. Aspiration was observed before and during the swallow, confirming the oropharyngeal dysphagia. All the subsystems were affected as per Frenchey Dysarthria Assessment signifying the diagnosis of flaccid dysarthria. Conclusion: There is an observable communication and swallowing difficulty seen following excision of glomus jugulare tumor. Even with complete resection, extensive rehabilitation may be necessary due to significant lower cranial nerve dysfunction. The finding of the present study stresses the need for involvement of as speech and swallowing therapist for pre-operative counseling and assessment of functional outcomes.Keywords: functional outcome, glomus jugulare tumor excision, multiple cranial nerve impairment, speech and swallowing
Procedia PDF Downloads 252
106 International Collaboration: Developing the Practice of Social Work Curriculum through Study Abroad and Participatory Research
Authors: Megan Lindsey
Abstract:
Background: Globalization presents international social work with both opportunities and challenges. Thus, the design of this international experience aligns with the three charges of the Commission on Global Social Work Education. An international collaborative effort between an American and Scottish University Social Work Program was based on an established University agreement. The presentation provides an overview of an international study abroad among American and Scottish Social Work students. Further, presenters will discuss the opportunities of international collaboration and the challenges of the project. First, we will discuss the process of a successful international collaboration. This discussion will include the planning, collaboration, execution of the experience, along with its application to the international field of social work. Second, we will discuss the development and implementation of participatory action research in which the student engage to enhance their learning experience. A collaborative qualitative research project was undertaken with three goals. First, students gained experience in Scottish social services, including agency visits and presentations. Second, a collaboration between American and Scottish MSW Students allowed the exchange of ideas and knowledge about services and social work education. Third, students collaborated on a qualitative research method to reflect on their social work education and the formation of their professional identity. Methods/Methodology: American and Scottish students engaged in participatory action research by using Photovoice methods while studying together in Scotland. The collaboration between faculty researchers framed a series of research questions. Both universities obtained IRB approval and trained students in Photovoice methods. The student teams used the research question and Photovoice method to discover images that represented their professional identity formation. Two Photovoice goals grounded the study's research question. First, the methods enabled the individual students to record and reflect on their professional strengths and concerns. Second, student teams promoted critical dialogue and knowledge about personal and professional issues through large and small group discussions of photographs. Results: The international participatory approach generated the ability for students to contextualize their common social work education and practice experiences. Team discussions between representatives of each country resulted in understanding professional identity formation and the processes of social work education that contribute to that identity. Students presented the photograph narration of their knowledge and understanding of international social work education and practice. Researchers then collaborated on finding common themes. The results found commonalities in the quality and depth of social work education. The themes found differences regarding how professional identity is formed. Students found great differences between their and American accreditation and certification. Conclusions: Faculty researchers’ collaboration themes sought to categorize the students’ experiences of their professional identity. While the social work education systems are similar, there are vast differences. The Scottish themes noted structures within American social work not found in the United Kingdom. 
The American researchers noted that Scotland, as does the United Kingdom, relies on programs, agencies, and the individual social worker to provide structure to identity formation. Other themes will be presented.Keywords: higher education curriculum, international collaboration, social sciences, action research
Procedia PDF Downloads 126
105 Generative Syntaxes: Macro-Heterophony and the Form of ‘Synchrony’
Authors: Luminiţa Duţică, Gheorghe Duţică
Abstract:
One of the most powerful language innovations in twentieth-century music was heterophony, a hypostasis of vertical syntax that entered the sphere of interest of many composers, such as George Enescu, Pierre Boulez, Mauricio Kagel, György Ligeti and others. The heterophonic syntax has a history of its own growth, that is, a succession of different concepts and writing techniques. The trajectory along which this phenomenon settled does not necessarily follow chronology: there are highly complex primary stages and advanced stages that return to simple forms of writing. In folklore, plurimelodic simultaneities are free or random and originate from the (unintentional) differences/‘deviations’ from the state of unison, through a variety of ornaments, melismas, imitations, elongations and abbreviations, all within a flexible, non-periodic/immeasurable rhythmic framework proper to parlando-rubato rhythmics. Within the general framework of multivocal organization, heterophonic syntax in its elaborate (academic) version imposed itself relatively late compared with polyphony and homophony. Of course, the explanation is simple if we consider the causal relationship between the elements of the sound vocabulary – in this case, modalism – and the typologies of vertical organization appropriate to it. Therefore, adding to the ‘classic’ pathway of writing typologies (monody→polyphony→homophony), heterophony – applied equally to structures of modal, serial or synthetic vocabulary – necessarily claims a macrotemporal form of its own, in the sense of the analogies enshrined by the evolution of musical styles and languages: polyphony→fugue, homophony→sonata. Concerned with the prospect of building a new musical ontology, the composer Ştefan Niculescu experimented – along with the mathematical organization of heterophony according to his own original methods – with the possibility of extrapolating this phenomenon to the macrostructural plane, thus arriving at the unique form of ‘synchrony’. Founded on the coincidentia oppositorum principle (involving the ‘one-multiple’ binomial), the sound architecture imagined by Ştefan Niculescu consists of one (temporal) model/algorithm articulating two sound states: 1. the monovocality state (principle of identity) and 2. the multivocality state (principle of difference). In this context, heterophony becomes an (auto)generative mechanism of macrotemporal amplitude, a strategy the composer cultivated throughout practically his entire creation (see the works: Ison I, Ison II, Unisonos I, Unisonos II, Duplum, Triplum, Psalmus, Héterophonies pour Montreux (Homages to Enescu and Bartók), etc.). For the present demonstration, we selected one of the most edifying works of Ştefan Niculescu – Symphony II, Opus dacicum – in which the form of (heterophony-)synchrony acquires monumental-symphonic features, representing an emblematic case for the level of complexity achieved by this type of vertical syntax in twentieth-century music.
Keywords: heterophony, modalism, serialism, synchrony, syntax
Procedia PDF Downloads 345
104 Speech and Swallowing Function after Tonsillo-Lingual Sulcus Resection with PMMC Flap Reconstruction: A Case Study
Authors: K. Rhea Devaiah, B. S. Premalatha
Abstract:
Background: Tonsillar Lingual sulcus is the area between the tonsils and the base of the tongue. The surgical resection of the lesions in the head and neck results in changes in speech and swallowing functions. The severity of the speech and swallowing problem depends upon the site and extent of the lesion, types and extent of surgery and also the flexibility of the remaining structures. Need of the study: This paper focuses on the importance of speech and swallowing rehabilitation in an individual with the lesion in the Tonsillar Lingual Sulcus and post-operative functions. Aim: Evaluating the speech and swallow functions post-intensive speech and swallowing rehabilitation. The objectives are to evaluate the speech intelligibility and swallowing functions after intensive therapy and assess the quality of life. Method: The present study describes a report of an individual aged 47years male, with the diagnosis of basaloid squamous cell carcinoma, left tonsillar lingual sulcus (pT2n2M0) and underwent wide local excision with left radical neck dissection with PMMC flap reconstruction. Post-surgery the patient came with a complaint of reduced speech intelligibility, and difficulty in opening the mouth and swallowing. Detailed evaluation of the speech and swallowing functions were carried out such as OPME, articulation test, speech intelligibility, different phases of swallowing and trismus evaluation. Self-reported questionnaires such as SHI-E(Speech handicap Index- Indian English), DHI (Dysphagia handicap Index) and SESEQ -K (Self Evaluation of Swallowing Efficiency in Kannada) were also administered to know what the patient feels about his problem. Based on the evaluation, the patient was diagnosed with pharyngeal phase dysphagia associated with trismus and reduced speech intelligibility. Intensive speech and swallowing therapy was advised weekly twice for the duration of 1 hour. Results: Totally the patient attended 10 intensive speech and swallowing therapy sessions. Results indicated misarticulation of speech sounds such as lingua-palatal sounds. Mouth opening was restricted to one finger width with difficulty chewing, masticating, and swallowing the bolus. Intervention strategies included Oro motor exercise, Indirect swallowing therapy, usage of a trismus device to facilitate mouth opening, and change in the food consistency to help to swallow. A practice session was held with articulation drills to improve the production of speech sounds and also improve speech intelligibility. Significant changes in articulatory production and speech intelligibility and swallowing abilities were observed. The self-rated quality of life measures such as DHI, SHI and SESE Q-K revealed no speech handicap and near-normal swallowing ability indicating the improved QOL after the intensive speech and swallowing therapy. Conclusion: Speech and swallowing therapy post carcinoma in the tonsillar lingual sulcus is crucial as the tongue plays an important role in both speech and swallowing. The role of Speech-language and swallowing therapists in oral cancer should be highlighted in treating these patients and improving the overall quality of life. With intensive speech-language and swallowing therapy post-surgery for oral cancer, there can be a significant change in the speech outcome and swallowing functions depending on the site and extent of lesions which will thereby improve the individual’s QOL.Keywords: oral cancer, speech and swallowing therapy, speech intelligibility, trismus, quality of life
Procedia PDF Downloads 112
103 Understanding New Zealand’s 19th Century Timber Churches: Techniques in Extracting and Applying Underlying Procedural Rules
Authors: Samuel McLennan, Tane Moleta, Andre Brown, Marc Aurel Schnabel
Abstract:
The development of Ecclesiastical buildings within New Zealand has produced some unique design characteristics that take influence from both international styles and local building methods. What this research looks at is how procedural modelling can be used to define such common characteristics and understand how they are shared and developed within different examples of a similar architectural style. This will be achieved through the creation of procedural digital reconstructions of the various timber Gothic Churches built during the 19th century in the city of Wellington, New Zealand. ‘Procedural modelling’ is a digital modelling technique that has been growing in popularity, particularly within the game and film industry, as well as other fields such as industrial design and architecture. Such a design method entails the creation of a parametric ‘ruleset’ that can be easily adjusted to produce many variations of geometry, rather than a single geometry as is typically found in traditional CAD software. Key precedents within this area of digital heritage includes work by Haegler, Müller, and Gool, Nicholas Webb and Andre Brown, and most notably Mark Burry. What these precedents all share is how the forms of the reconstructed architecture have been generated using computational rules and an understanding of the architects’ geometric reasoning. This is also true within this research as Gothic architecture makes use of only a select range of forms (such as the pointed arch) that can be accurately replicated using the same standard geometric techniques originally used by the architect. The methodology of this research involves firstly establishing a sample group of similar buildings, documenting the existing samples, researching any lost samples to find evidence such as architectural plans, photos, and written descriptions, and then culminating all the findings into a single 3D procedural asset within the software ‘Houdini’. The end result will be an adjustable digital model that contains all the architectural components of the sample group, such as the various naves, buttresses, and windows. These components can then be selected and arranged to create visualisations of the sample group. Because timber gothic churches in New Zealand share many details between designs, the created collection of architectural components can also be used to approximate similar designs not included in the sample group, such as designs found beyond the Wellington Region. This creates an initial library of architectural components that can be further expanded on to encapsulate as wide of a sample size as desired. Such a methodology greatly improves upon the efficiency and adjustability of digital modelling compared to current practices found in digital heritage reconstruction. It also gives greater accuracy to speculative design, as a lack of evidence for lost structures can be approximated using components from still existing or better-documented examples. This research will also bring attention to the cultural significance these types of buildings have within the local area, addressing the public’s general unawareness of architectural history that is identified in the Wellington based research ‘Moving Images in Digital Heritage’ by Serdar Aydin et al.Keywords: digital forensics, digital heritage, gothic architecture, Houdini, procedural modelling
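A procedural ruleset of the kind described can be thought of as a parametric function returning geometry from a few adjustable inputs; the Python sketch below (written outside Houdini purely for illustration, with made-up parameter names) generates the 2D profile of a two-centred pointed arch from its span, springing height and arc-centre offset, the kind of component that would be instanced into nave, buttress and window assets:

import numpy as np

def pointed_arch_profile(span, springing_height, centre_offset=0.0, n=40):
    # two-centred pointed arch: each half is a circular arc whose centre sits
    # `centre_offset` inside the opposite springing point (0.0 = equilateral arch)
    radius = span - centre_offset
    cx = span - centre_offset                      # centre of the arc forming the left half
    apex_y = springing_height + np.sqrt(radius**2 - (span / 2.0 - cx)**2)
    start = np.arctan2(0.0, -cx)                   # angle towards the left springer
    end = np.arctan2(apex_y - springing_height, span / 2.0 - cx)   # angle towards the apex
    t = np.linspace(start, end, n)
    left = np.column_stack([cx + radius * np.cos(t),
                            springing_height + radius * np.sin(t)])
    right = np.column_stack([span - left[::-1, 0], left[::-1, 1]]) # mirror for the right half
    return np.vstack([left, right])

# three variations produced by the same ruleset: only the parameters change
for frac in (0.0, 0.15, 0.3):
    pts = pointed_arch_profile(span=1.2, springing_height=2.4, centre_offset=frac * 1.2)
    print(f"centre offset {frac:.2f} x span -> apex height {pts[:, 1].max():.2f} m")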
Procedia PDF Downloads 131
102 Role of Functional Divergence in Specific Inhibitor Design: Using γ-Glutamyltranspeptidase (GGT) as a Model Protein
Authors: Ved Vrat Verma, Rani Gupta, Manisha Goel
Abstract:
γ-glutamyltranspeptidase (GGT: EC 2.3.2.2) is an N-terminal nucleophile hydrolase conserved in all three domains of life. GGT plays a key role in glutathione metabolism where it catalyzes the breakage of the γ-glutamyl bonds and transfer of γ-glutamyl group to water (hydrolytic activity) or amino acids or short peptides (transpeptidase activity). GGTs from bacteria, archaea, and eukaryotes (human, rat and mouse) are homologous proteins sharing >50% sequence similarity and conserved four layered αββα sandwich like three dimensional structural fold. These proteins though similar in their structure to each other, are quite diverse in their enzyme activity: some GGTs are better at hydrolysis reactions but poor in transpeptidase activity, whereas many others may show opposite behaviour. GGT is known to be involved in various diseases like asthma, parkinson, arthritis, and gastric cancer. Its inhibition prior to chemotherapy treatments has been shown to sensitize tumours to the treatment. Microbial GGT is known to be a virulence factor too, important for the colonization of bacteria in host. However, all known inhibitors (mimics of its native substrate, glutamate) are highly toxic because they interfere with other enzyme pathways. However, a few successful efforts have been reported previously in designing species specific inhibitors. We aim to leverage the diversity seen in GGT family (pathogen vs. eukaryotes) for designing specific inhibitors. Thus, in the present study, we have used DIVERGE software to identify sites in GGT proteins, which are crucial for the functional and structural divergence of these proteins. Since, type II divergence sites vary in clade specific manner, so type II divergent sites were our focus of interest throughout the study. Type II divergent sites were identified for pathogen vs. eukaryotes clusters and sites were marked on clade specific representative structures HpGGT (2QM6) and HmGGT (4ZCG) of pathogen and eukaryotes clade respectively. The crucial divergent sites within 15 A radii of the binding cavity were highlighted, and in-silico mutations were performed on these sites to delineate the role of these sites on the mechanism of catalysis and protein folding. Further, the amino acid network (AAN) analysis was also performed by Cytoscape to delineate assortative mixing for cavity divergent sites which could strengthen our hypothesis. Additionally, molecular dynamics simulations were performed for wild complexes and mutant complexes close to physiological conditions (pH 7.0, 0.1 M ionic strength and 1 atm pressure) and the role of putative divergence sites and structural integrities of the homologous proteins have been analysed. The dynamics data were scrutinized in terms of RMSD, RMSF, non-native H-bonds and salt bridges. The RMSD, RMSF fluctuations of proteins complexes are compared, and the changes at protein ligand binding sites were highlighted. The outcomes of our study highlighted some crucial divergent sites which could be used for novel inhibitors designing in a species-specific manner. Since, for drug development, it is challenging to design novel drug by targeting similar protein which exists in eukaryotes, so this study could set up an initial platform to overcome this challenge and help to deduce the more effective targets for novel drug discovery.Keywords: γ-glutamyltranspeptidase, divergence, species-specific, drug design
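Shortlisting the type II divergent sites that fall within a 15 Å radius of the binding cavity amounts to a simple distance filter over the structure; a possible Python/Biopython sketch (the residue numbers, cavity centroid and file name are hypothetical placeholders, not values from the study):

import numpy as np
from Bio.PDB import PDBParser   # requires Biopython

def sites_near_cavity(pdb_file, chain_id, divergent_sites, cavity_centroid, radius=15.0):
    # keep only the divergent residues whose C-alpha lies within `radius` angstrom
    # of the binding-cavity centroid
    structure = PDBParser(QUIET=True).get_structure("ggt", pdb_file)
    chain = structure[0][chain_id]
    centroid = np.asarray(cavity_centroid, dtype=float)
    selected = []
    for resnum in divergent_sites:
        try:
            ca = chain[resnum]["CA"]
        except KeyError:
            continue   # residue not modelled or lacking a C-alpha
        if np.linalg.norm(ca.coord - centroid) <= radius:
            selected.append(resnum)
    return selected

# hypothetical call for the HpGGT structure (PDB 2QM6), chain A
near = sites_near_cavity("2qm6.pdb", "A",
                         divergent_sites=[101, 204, 311, 402, 473],
                         cavity_centroid=(12.0, 8.5, -3.2))
print(near)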
Procedia PDF Downloads 268
101 Finite Element Modelling and Optimization of Post-Machining Distortion for Large Aerospace Monolithic Components
Authors: Bin Shi, Mouhab Meshreki, Grégoire Bazin, Helmi Attia
Abstract:
Large monolithic components are widely used in the aerospace industry in order to reduce airplane weight. Milling is an important operation in the manufacturing of monolithic parts. More than 90% of the material may be removed in the milling operation to obtain the final shape. This results in low rigidity and post-machining distortion. The post-machining distortion is the deviation of the final shape from the original design after releasing the clamps. It is a major challenge in machining of monolithic parts, causing billions in economic losses every year. Three sources are directly related to the part distortion: initial residual stresses (RS) generated by previous manufacturing processes, machining-induced RS, and the thermal load generated during machining. A finite element model was developed to simulate a milling process and predict the post-machining distortion. In this study, a rolled aluminum plate AA7175 with a thickness of 60 mm was used for the raw block. The initial residual stress distribution in the block was measured using a layer-removal method. A stress-mapping technique was developed to implement the initial stress distribution into the part. It is demonstrated that this technique significantly accelerates the simulation. Machining-induced residual stresses on the machined surface were measured using an MTS3000 hole-drilling strain-gauge system. The measured RS was applied on the machined surface of a plate to predict the distortion. The predicted distortion was compared with experimental results. It is found that the effect of the machining-induced residual stress on the distortion of a thick plate is very limited; the distortion can be ignored if the wall thickness is larger than a certain value. The RS generated by the thermal load during machining is another important factor causing part distortion, yet very little research on this topic has been reported in the literature. A coupled thermo-mechanical FE model was developed to evaluate the thermal effect on the plastic deformation of a plate. A moving heat source with a feed rate was used to simulate the dynamic cutting heat in a milling process. When the heat source passed over the part surface, a small layer was removed to simulate the cutting operation. The results show that, for different feed rates and plate thicknesses, plastic deformation/distortion occurs only if the temperature exceeds a critical level. It was found that the initial residual stress makes a major contribution to the part distortion. The machining-induced stress has limited influence on the distortion of thin-wall structures when the wall thickness is larger than a certain value. The thermal load can also generate part distortion when the cutting temperature is above a critical level. The developed numerical model was employed to predict the distortion of a frame part with complex structures. The predictions were compared with the experimental measurements, showing that the two are in good agreement. Through optimization of the position of the part inside the raw plate using the developed numerical models, the part distortion can be significantly reduced by 50%.
Keywords: modelling, monolithic parts, optimization, post-machining distortion, residual stresses
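The stress-mapping step mentioned above is essentially an interpolation of the measured through-thickness residual stress profile onto the element centroids (or integration points) before the first material-removal step; a simplified Python sketch follows (the layer-removal data points are invented for illustration, and a real implementation would write solver-specific initial-condition definitions):

import numpy as np

# layer-removal result: initial residual stress (MPa) versus depth (mm) -- invented values
depth_meas = np.array([0.0, 5.0, 15.0, 30.0, 45.0, 55.0, 60.0])
sigma_meas = np.array([-40.0, -25.0, 10.0, 30.0, 10.0, -25.0, -40.0])

def balance_profile(depth, sigma):
    # shift the profile so the membrane force integrates to ~zero (self-equilibrated field)
    mean = np.trapz(sigma, depth) / (depth[-1] - depth[0])
    return sigma - mean

def map_initial_stress(centroid_depths, depth, sigma):
    # the "stress map": interpolate the measured profile onto element centroid depths
    return np.interp(centroid_depths, depth, sigma)

# 30 element layers through the 60 mm plate thickness
centroid_depths = (np.arange(30) + 0.5) * (60.0 / 30)
sigma_init = map_initial_stress(centroid_depths, depth_meas, balance_profile(depth_meas, sigma_meas))
print(np.round(sigma_init[:5], 1))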
Procedia PDF Downloads 54
100 Use of Zikani’s Ribosome Modulating Agents for Treating Recessive Dystrophic & Junctional Epidermolysis Bullosa with Nonsense Mutations
Authors: Mei Chen, Yingping Hou, Michelle Hao, Soheil Aghamohammadzadeh, Esteban Terzo, Roger Clark, Vijay Modur
Abstract:
Background: Recessive Dystrophic Epidermolysis Bullosa (RDEB) is a genetic skin condition characterized by skin tearing and unremitting blistering upon minimal trauma. Repeated blistering, fibrosis, and scarring lead to aggressive squamous cell carcinoma later in life. RDEB is caused by mutations in the COL7A1 gene encoding collagen type VII (C7), the major component of the anchoring fibrils mediating epidermis-dermis adherence. Nonsense mutations in the COL7A1 gene of a subset of RDEB patients lead to premature termination codons (PTCs). Similarly, most Junctional Epidermolysis Bullosa (JEB) cases are caused by nonsense mutations in the LAMB3 gene encoding the β3 subunit of laminin 332. Currently, there is an unmet need for the treatment of RDEB and JEB. Zikani Therapeutics has discovered an array of macrocyclic compounds, with ring structures similar to macrolide antibiotics, that can facilitate readthrough of nonsense mutations in the COL7A1 and LAMB3 genes by acting as Ribosome Modulating Agents (RMAs). Advances in the medicinal chemistry of these macrocyclic compounds have made it possible to target the human ribosome while preserving the structural elements responsible for the safety and pharmacokinetic profile of clinically used macrolide antibiotics. Methods: C7 expression, measured by immunoblot assays in two primary human fibroblast lines from RDEB patients (R578X/R578X and R163X/R1683X-COL7A1), was used as a measure of readthrough activity. Similarly, immunoblot assays in C325X/c.629-12T > A-LAMB3 keratinocytes were used to measure readthrough activity for JEB. The readthrough activity of each compound was measured relative to Gentamicin. An imaging-based fibroblast migration assay over 16-20 hrs was used to assess C7 functionality in RDEB fibroblasts. The incubation period for the above experiments was 48 hrs for RDEB fibroblasts and 72 hrs for JEB keratinocytes. Results: Nine RMAs demonstrated increased protein expression in both RDEB patient fibroblast lines. At the tested concentrations, the highest readthrough activity without cytotoxicity increased protein expression to up to 179% of Gentamicin (400 µg/ml), with greater readthrough activity in R163X/R1683X-COL7A1 fibroblasts. Concurrent with the increase in protein expression, the fibroblast hypermotility phenotype observed in RDEB was rescued, with motility reduced by ~35% to WT levels (the same level as in cells treated with 690 µM Gentamicin). Laminin β3 expression in keratinocytes was also increased by six RMAs, to 33-83% of the Gentamicin (400 µg/ml) level. Conclusions: To date, nine RMAs have been identified that enhance the expression of functional C7 in a mutation-dependent manner in two different RDEB patient fibroblast backgrounds (R578X/R578X and R163X/R1683X-COL7A1). A further six RMAs have been identified that enhance the readthrough of C325X-LAMB3 in JEB patient keratinocytes. Based on the clinical trial we conducted with topical gentamicin in 2017, Zikani’s RMAs achieve clinically significant levels of readthrough for the treatment of recessive dystrophic and junctional Epidermolysis Bullosa.
Keywords: epidermolysis bullosa, nonsense mutation, readthrough, ribosome modulation
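As a worked reading of the normalization used above ("measured relative to Gentamicin"), the reported percentages can be interpreted as a simple ratio; the expression below is an assumption about the calculation, since the abstract does not state it explicitly:

\[
\text{Relative readthrough activity (\%)} \;=\; 100 \times \frac{E_{\mathrm{RMA}}}{E_{\mathrm{Gentamicin,\ 400\,\mu g/ml}}}
\]

On this reading, a compound reported at 179% restored roughly 1.8 times the C7 signal obtained with 400 µg/ml Gentamicin in the same immunoblot assay.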
Procedia PDF Downloads 98
99 Wideband Performance Analysis of C-FDTD Based Algorithms in the Discretization Impoverishment of a Curved Surface
Authors: Lucas L. L. Fortes, Sandro T. M. Gonçalves
Abstract:
In this work, the wideband performance under mesh discretization impoverishment is analyzed for the Conformal Finite-Difference Time-Domain (C-FDTD) approaches developed by Raj Mittra, Supriyo Dey and Wenhua Yu for the Finite-Difference Time-Domain (FDTD) method. These approaches are a simple and efficient way to optimize the scattering simulation of curved surfaces for dielectric and Perfect Electric Conducting (PEC) structures in the FDTD method, since curved surfaces otherwise require dense meshes to reduce the error introduced by surface staircasing. Referred to in this work as D-FDTD-Diel and D-FDTD-PEC, these approaches are well known in the literature, but the improvement they provide has not been broadly quantified for wide frequency bands and poorly discretized meshes. Both approaches improve simulation accuracy without requiring dense meshes, making it possible to exploit poorly discretized meshes that reduce simulation time and computational expense while retaining the desired accuracy. However, their application presents limitations regarding the degree of mesh impoverishment and the frequency range desired. Therefore, the goal of this work is to examine both the wideband and the mesh-impoverishment performance of these approaches, to provide wider insight into these aspects of FDTD applications. The D-FDTD-Diel approach consists of modifying the electric field update in the cells intersected by the dielectric surface, taking into account the amount of dielectric material within the mesh cell edges. By accounting for these intersections, D-FDTD-Diel provides an accuracy improvement at the cost of computational preprocessing, which is a fair trade-off, since the update modification is quite simple. Likewise, the D-FDTD-PEC approach consists of modifying the magnetic field update, taking into account the intersections of the PEC curved surface with the mesh cells and, for a PEC structure in vacuum, the air portion that fills the intersected cells when updating the magnetic field values. Like D-FDTD-Diel, D-FDTD-PEC provides better accuracy at the cost of computational preprocessing, with the additional drawback of having to meet stability criterion requirements. The algorithms are formulated and applied to PEC and dielectric spherical scattering surfaces with meshes at different levels of discretization, with Polytetrafluoroethylene (PTFE) as the dielectric, a very common material in coaxial cables and connectors for radiofrequency (RF) and wideband applications. The accuracy of the algorithms is quantified, showing how the approaches' wideband performance drops as the mesh is impoverished. The benefits in computational efficiency, simulation time and accuracy are also shown and discussed, according to the frequency range desired, showing that FDTD simulations on poorly discretized meshes can be exploited more efficiently while retaining the desired accuracy. The results provide broader insight into the limitations of applying the C-FDTD approaches in poorly discretized and wide-frequency-band simulations of dielectric and PEC curved surfaces, limitations which are not clearly defined or detailed in the literature and are, therefore, a novelty. These approaches are also expected to be applied to the modeling of curved RF components for wideband and high-speed communication devices in future works.
Keywords: accuracy, computational efficiency, finite difference time-domain, mesh impoverishment
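For readers unfamiliar with the modified update, the D-FDTD-PEC (Dey-Mittra) magnetic-field update for a cell face cut by the PEC takes, up to sign convention, the following standard published form; it is quoted here as background rather than taken from the abstract:

\[
H_z\Big|_{i,j,k}^{n+\frac{1}{2}} = H_z\Big|_{i,j,k}^{n-\frac{1}{2}}
- \frac{\Delta t}{\mu_0\, A_{z}^{\mathrm{free}}(i,j,k)}
\Big[ E_y\big|_{i+1,j,k}^{n}\, l_y(i{+}1,j,k) - E_y\big|_{i,j,k}^{n}\, l_y(i,j,k)
- E_x\big|_{i,j+1,k}^{n}\, l_x(i,j{+}1,k) + E_x\big|_{i,j,k}^{n}\, l_x(i,j,k) \Big]
\]

where \(l_x\) and \(l_y\) are the portions of the cell edges lying outside the PEC and \(A_{z}^{\mathrm{free}}\) is the portion of the face area outside the PEC. As \(A_{z}^{\mathrm{free}}\) becomes small, the allowable time step shrinks, which is the stability-criterion drawback noted above.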
Procedia PDF Downloads 134
98 Multifunctional Epoxy/Carbon Laminates Containing Carbon Nanotubes-Confined Paraffin for Thermal Energy Storage
Authors: Giulia Fredi, Andrea Dorigato, Luca Fambri, Alessandro Pegoretti
Abstract:
Thermal energy storage (TES) is the storage of heat for later use, thus filling the gap between energy demand and supply. The most widely used materials for TES are organic solid-liquid phase change materials (PCMs), such as paraffin. These materials store/release a high amount of latent heat thanks to their high specific melting enthalpy, operate in a narrow temperature range, and have a tunable working temperature. However, they suffer from low thermal conductivity and need to be confined to prevent leakage. These two issues can be tackled by confining PCMs with carbon nanotubes (CNTs). TES applications include the building industry, solar thermal energy collection, and the thermal management of electronics. In most cases, TES systems are an additional component to be added to the main structure, but if weight and volume savings are key issues, it would be advantageous to embed the TES functionality directly in the structure. Such multifunctional materials could be employed in the automotive industry, where the spread of lightweight structures could complicate the thermal management of the cockpit environment or of other temperature-sensitive components. This work aims to produce epoxy/carbon structural laminates containing CNT-stabilized paraffin. CNTs were added to molten paraffin at a fraction of 10 wt%, as this was the minimum amount at which no leakage was detected above the melting temperature (45°C). The paraffin/CNT blend was cryogenically milled to obtain particles with an average size of 50 µm. These were added in various percentages (20, 30 and 40 wt%) to an epoxy/hardener formulation, which was used as a matrix to produce laminates through a wet layup technique, by stacking five plies of a plain carbon fiber fabric. The samples were characterized microstructurally, thermally, and mechanically. Differential scanning calorimetry (DSC) tests showed that the paraffin kept its ability to melt and crystallize also in the laminates, and the melting enthalpy was almost proportional to the paraffin weight fraction. These thermal properties were retained after fifty heating/cooling cycles. Laser flash analysis showed that the through-thickness thermal conductivity increased with increasing PCM content, due to the presence of CNTs. The ability of the developed laminates to contribute to thermal management was also assessed by monitoring their cooling rates with a thermal camera. Three-point bending tests showed that the flexural modulus was only slightly impaired by the presence of the paraffin/CNT particles, while a more appreciable decrease in the stress and strain at break and in the interlaminar shear strength was detected. Optical and scanning electron microscope images revealed that these reductions could be attributed to the preferential location of the PCM in the interlaminar region. These results demonstrate the feasibility of multifunctional structural TES composites and highlight that the PCM size and distribution affect the mechanical properties. In this perspective, this group is working on the encapsulation of paraffin in a sol-gel derived organosilica shell. Submicron spheres have been produced, and the current activity focuses on the optimization of the synthesis parameters to increase the emulsion efficiency.
Keywords: carbon fibers, carbon nanotubes, lightweight materials, multifunctional composites, thermal energy storage
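A back-of-the-envelope check of the "almost proportional" observation can be made with a rule of mixtures; the neat-paraffin enthalpy used below is a typical literature value assumed for illustration, not a figure reported in this study:

\[
\Delta H_{\text{matrix}} \;\approx\; w_{\text{blend}}\,(1 - w_{\text{CNT}})\,\Delta H_{\text{paraffin}}
\]

For the formulation with 30 wt% of the paraffin/CNT blend (10 wt% CNT in the blend) and an assumed \(\Delta H_{\text{paraffin}} \approx 180\ \mathrm{J/g}\), this gives \(0.30 \times 0.90 \times 180 \approx 49\ \mathrm{J/g}\) of matrix, before accounting for the additional mass of the carbon fabric in the laminate.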
Procedia PDF Downloads 160
97 An Exploratory Case Study of Pre-Service Teachers' Learning to Teach Mathematics to Culturally Diverse Students through a Community-Based After-School Field Experience
Authors: Eugenia Vomvoridi-Ivanovic
Abstract:
It is broadly assumed that participation in field experiences will help pre-service teachers (PSTs) bridge theory to practice. However, this is often not the case, since PSTs who are placed in classrooms with large numbers of students from diverse linguistic, cultural, racial, and ethnic backgrounds (culturally diverse students (CDS)) usually observe ineffective mathematics teaching practices that are in contrast to those discussed in their teacher preparation program. Over the past decades, the educational research community has paid increasing attention to investigating out-of-school learning contexts and how participation in such contexts can contribute to the achievement of underrepresented groups in Science, Technology, Engineering, and Mathematics (STEM) education and their expanded participation in STEM fields. In addition, several research studies have shown that students display different kinds of mathematical behaviors and discourse practices in out-of-school contexts than they do in the typical mathematics classroom, since they draw from a variety of linguistic and cultural resources to negotiate meanings and participate in joint problem solving. However, almost no attention has been given to exploring these contexts as field experiences for pre-service mathematics teachers. The purpose of this study was to explore how participation in a community-based after-school field experience promotes understanding of the content pedagogy concepts introduced in elementary mathematics methods courses, particularly as they apply to teaching mathematics to CDS. This study draws upon a situated, socio-cultural theory of teacher learning that centers on the concept of learning as situated social practice, which includes discourse, social interaction, and participation structures. Consistent with exploratory case study methodology, qualitative methods were employed to investigate how a cohort of twelve participating pre-service teachers' approaches to pedagogy and their conversations around teaching and learning mathematics to CDS evolved through their participation in the after-school field experience, and how they connected the content discussed in their mathematics methods course with their interactions with the CDS in the after-school program. Data were collected over a period of one academic year from the following sources: (a) audio recordings of the PSTs' interactions with the students during the after-school sessions, (b) PSTs' after-school field notes, (c) audio recordings of weekly methods course meetings, and (d) other document data (e.g., PST and student generated artifacts, PSTs' written course assignments). The findings of this study reveal that the PSTs benefited greatly from their participation in the after-school field experience. Specifically, after-school participation promoted a deeper understanding of the content pedagogy concepts introduced in the mathematics methods course and helped PSTs gain a greater appreciation for how students learn mathematics with understanding. Further, even though many of the PSTs' assumptions about the mathematical abilities of CDS were challenged and PSTs began to view CDSs' cultural and linguistic backgrounds as resources (rather than obstacles) for learning, some PSTs still held negative stereotypes about CDS and about teaching and learning mathematics to CDS in particular.
Insights gained through this study contribute to a better understanding of how informal mathematics learning contexts may provide a valuable setting for pre-service teachers' learning to teach mathematics to CDS.
Keywords: after-school mathematics program, pre-service mathematical education of teachers, qualitative methods, situated socio-cultural theory, teaching culturally diverse students
Procedia PDF Downloads 130
96 Multi-Dimensional Experience of Processing Textual and Visual Information: Case Study of Allocations to Places in the Mind’s Eye Based on Individual’s Semantic Knowledge Base
Authors: Joanna Wielochowska, Aneta Wielochowska
Abstract:
Whilst the relationship between scientific areas such as cognitive psychology, neurobiology and philosophy of mind has been emphasized in recent decades of scientific research, concepts and discoveries made in these fields overlap and complement each other in their quest for answers to similar questions. The object of the following case study is to describe, analyze and illustrate the nature and characteristics of a certain cognitive experience which appears to display features of synaesthesia, or rather high-level synaesthesia (ideasthesia). The research has been conducted on the two authors, monozygotic twins (both polysynaesthetes) who experience involuntary associations of identical nature. The authors attempted to identify which cognitive and conceptual dependencies may guide this experience. Operating on self-introduced nomenclature, the described phenomenon, multi-dimensional processing of textual and visual information, aims to define a relationship that involuntarily and immediately couples content introduced by means of text or image with a sensation of appearing in a certain place in the mind’s eye. More precisely: (I) defining a concept introduced by means of textual content during the activity of reading or writing, or (II) defining a concept introduced by means of visual content during the activity of looking at image(s), with a simultaneous sensation of being allocated to a given place in the mind’s eye. A place can then be defined as a cognitive representation of a certain concept. During the activity of processing information, a person has an immediate and involuntary feeling of appearing in a certain place themselves, just like a character in a story, ‘observing’ a venue or scenery from one or more perspectives and angles. That forms a unique and unified experience, constituting a background mental landscape of the text or image being looked at. We came to the conclusion that semantic allocations to a given place can be divided and classified into categories and subcategories and are naturally linked with an individual’s semantic knowledge base. A place can be defined as a representation of one’s unique idea of a given concept that has been established in their semantic knowledge base. The multi-level structure of selectivity of places in the mind’s eye, as a reaction to given information (a single stimulus), invites comparisons to structures and patterns found in botany. Double-flowered varieties of flowers and the whorl arrangement characteristic of the components of some flower species were given as illustrative examples. A composition of petals that fan out from one single point and wrap around a stem inspired the idea that, just as in nature, in the philosophy of mind there are patterns driven by the logic specific to a given phenomenon. The study intertwines terms perceived through the philosophical lens, such as the definition of meaning, the subjectivity of meaning, the mental atmosphere of places, and others. Analysis of this rare experience aims to contribute to the constantly developing theoretical framework of the philosophy of mind and to influence the way the human semantic knowledge base, and the distinction between information and meaning in content processing, are researched.
Keywords: information and meaning, information processing, mental atmosphere of places, patterns in nature, philosophy of mind, selectivity, semantic knowledge base, senses, synaesthesia
Procedia PDF Downloads 124
95 Reviving the Past, Enhancing the Future: Preservation of Urban Heritage Connectivity as a Tool for Developing Liveability in Historical Cities in Jordan, Using Salt City as a Case Study
Authors: Sahar Yousef, Chantelle Niblock, Gul Kacmaz
Abstract:
Salt City, in the context of Jordan’s heritage landscape, is a significant case to explore when it comes to the interaction between tangible and intangible qualities of liveable cities. Most of Jordan’s city centers, including Jerash, Salt, Irbid, and Amman, are historical locations, and six of the country’s extraordinary sites have been designated UNESCO World Heritage Sites. Jordan is widely acknowledged as a developing country characterized by swift urbanization and unrestrained expansion, which exacerbate the challenges associated with the preservation of historic urban areas. The aim of this study is to examine and analyze the existing condition of heritage connectivity within heritage city centers, including outdoor staircases, pedestrian pathways, footpaths, and other public spaces. A case-study analysis of the urban core of As-Salt is the focus of this investigation. Salt City is widely acknowledged for its substantial tangible and intangible cultural heritage and has been designated ‘The Place of Tolerance and Urban Hospitality’ by UNESCO since 2021. Liveability in urban heritage, particularly in historic city centers, incorporates several factors that affect our well-being, and its enhancement is a critical issue in contemporary society. The dynamic interaction between humans and historical materials, which serves as a vehicle for the expression of their identity and historical narrative, constitutes preservation that transcends simple conservation. This form of engagement enables people to appreciate the diversity of their heritage, recognising both their past and their planned futures. Heritage preservation is inextricably linked to a larger physical and emotional context; therefore, it is difficult to examine it in isolation. Urban environments, including roads, structures, and other infrastructure, face unprecedented physical design and construction demands. At the same time, heritage reinforces a sense of affiliation with a particular location or space and connects individuals with their ancestry, thereby defining their identity. However, a considerable body of research has focused on the conservation of heritage buildings in a fragmented manner, without considering their integration within a holistic urban context. Insufficient attention is given to the significance of the physical and social roles played by the heritage staircases and paths that serve as connectors between these valued historical buildings. In doing so, the research uses a consensus-based methodology, given that liveability is considered a complex matter with several dimensions. The discussion starts with initial observations on the physical context and societal norms inside the urban center, while simultaneously establishing definitions of liveability and connectivity and examining the key criteria associated with these concepts. It then identifies the key elements that contribute to liveable connectivity within the framework of urban heritage in Jordanian city centers. Some of the outcomes that will be discussed in the presentation are: (1) There is not enough connectivity between heritage buildings, as can be seen, for example, between buildings in Jada and Qala'. (2) Most of the outdoor spaces suffer from physical issues that hinder their use by the public, as in Salalem.
(3) Existing activities in the city center are not well attended because of a lack of communication between the organisers and the citizens.
Keywords: connectivity, Jordan, liveability, salt city, tangible and intangible heritage, urban heritage
Procedia PDF Downloads 70
94 Construction Engineering and Cocoa Agriculture: A Synergistic Approach for Improved Livelihoods of Farmers
Authors: Felix Darko-Amoah, Daniel Acquah
Abstract:
In developing countries like Ghana, the need to explore innovative solutions for the sustainable livelihoods of farmers is more important than ever. With Ghana’s population growing steadily and the demand for food, fiber and shelter increasing, it is imperative that the construction industry and agriculture come together to address the challenges faced by farmers in the country. In order to enhance the livelihoods of cocoa farmers in Ghana, this paper provides an innovative strategy that aims to integrate the areas of civil engineering and cash crop agriculture. This study focuses on cocoa cultivation in poorer nations, where farmers confront a variety of difficulties, including restricted access to financing, subpar infrastructure, and insufficient support services. We seek to improve farmers' access to financing, improve infrastructure, and provide support services that are essential to their success by combining the fields of building engineering and cocoa production. The findings of the study are beneficial to cocoa producers, community extension agents, and construction engineers. In order to accomplish our objectives, we conducted field investigations with 307 participants in selected cocoa-growing communities in the Western Region of Ghana. Several studies have shown that a lack of adequate infrastructure and financing leads to low yields, subpar beans, and low farmer profitability in developing nations like Ghana. Our goal is to give farmers access to better infrastructure, better financing, and the support services that are crucial to their success through the fusion of construction engineering and cocoa production. Based on data gathered from the field investigations, the results show that the employment of appropriate technology and methods for developing structures, roads, and other infrastructure in rural regions is one of the essential components of this strategy. For instance, we find that using affordable, environmentally friendly materials like bamboo, rammed earth, and mud bricks can help to cut expenditures while also protecting the environment. By applying simple relational techniques to the data gathered, the results also show that construction engineers are crucial in planning and building infrastructure that is appropriate for the local environment and circumstances and resilient to natural disasters such as floods. The convergence of construction engineering and cash crop cultivation is thus another crucial component of the agriculture-construction interplay. For instance, farmers can receive financial assistance to buy essential inputs, such as seeds, fertilizer, and tools, as well as training in proper farming methods. Moreover, extension services can be offered to assist farmers in marketing their crops and enhancing their livelihoods and revenue. In conclusion, our analysis of the responses from the 307 participants shows that the combination of construction engineering and cash crop agriculture offers an innovative approach to improving farmers' livelihoods in cocoa farming communities in Ghana. By incorporating the findings of this study into core decision-making, policymakers can help farmers build sustainable and profitable livelihoods by addressing challenges such as limited access to financing, poor infrastructure, and inadequate support services.
Keywords: cocoa agriculture, construction engineering, farm buildings and equipment, improved livelihoods of farmers
Procedia PDF Downloads 90
93 SPARK: An Open-Source Knowledge Discovery Platform That Leverages Non-Relational Databases and Massively Parallel Computational Power for Heterogeneous Genomic Datasets
Authors: Thilina Ranaweera, Enes Makalic, John L. Hopper, Adrian Bickerstaffe
Abstract:
Data are the primary asset of biomedical researchers, and the engine for both discovery and research translation. As the volume and complexity of research datasets increase, especially with new technologies such as large single nucleotide polymorphism (SNP) chips, so too does the requirement for software to manage, process and analyze the data. Researchers often need to execute complicated queries and conduct complex analyses of large-scale datasets. Existing tools to analyze such data, and other types of high-dimensional data, unfortunately suffer from one or more major problems. They typically require a high level of computing expertise, are too simplistic (i.e., do not fit realistic models that allow for complex interactions), are limited by computing power, do not exploit the computing power of large-scale parallel architectures (e.g., supercomputers, GPU clusters), or are limited in the types of analysis available, compounded by the fact that integrating new analysis methods is not straightforward. Solutions to these problems, such as those developed and implemented on parallel architectures, are currently available to only a relatively small portion of medical researchers with the necessary access and know-how. The past decade has seen a rapid expansion of data management systems for the medical domain. Much attention has been given to systems that manage phenotype datasets generated by medical studies. The introduction of heterogeneous genomic data for the research subjects that reside in these systems has highlighted the need for substantial improvements in software architecture. To address this problem, we have developed SPARK, an enabling and translational system for medical research that leverages existing high-performance computing resources and analysis techniques currently available or under development. It builds these into The Ark, an open-source web-based system designed to manage medical data. SPARK provides a next-generation biomedical data management solution based upon a novel Micro-Service architecture and Big Data technologies. The system serves to demonstrate the applicability of Micro-Service architectures for the development of high-performance computing applications. When applied to high-dimensional medical datasets such as genomic data, relational data management approaches with normalized data structures suffer from unfeasibly high execution times for basic operations such as inserts (e.g., importing a GWAS dataset) and for the queries that are typical of the genomics research domain. SPARK resolves these problems by incorporating the non-relational NoSQL databases that have been driven by the emergence of Big Data. SPARK provides researchers across the world with user-friendly access to state-of-the-art data management and analysis tools while eliminating the need for high-level informatics and programming skills. The system will benefit health and medical research by eliminating the burden of large-scale data management, querying, cleaning, and analysis. SPARK represents a major advancement in genome research technologies, vastly reducing the burden of working with genomic datasets and enabling cutting-edge analysis approaches that have previously been out of reach for many medical researchers.
Keywords: biomedical research, genomics, information systems, software
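To illustrate the contrast drawn above between normalized relational layouts and non-relational storage of genomic data, the sketch below packs each subject's genotypes into a single document; the schema, identifiers, and values are illustrative assumptions and do not reflect SPARK's actual data model.

```python
# Illustrative sketch (not SPARK's actual schema): one document per subject,
# holding a GWAS genotype vector, as a non-relational alternative to a fully
# normalized subject x SNP x genotype relational layout.
import json

snp_ids = ["rs123", "rs456", "rs789"]               # assumed SNP identifiers
subjects = {"S001": [0, 1, 2], "S002": [1, 1, 0]}   # assumed allele-dosage calls

def build_documents(snp_ids, subjects):
    """Pack each subject's genotypes into a single self-contained document,
    so importing a chip's worth of SNPs becomes one bulk write instead of
    millions of normalized row inserts."""
    return [
        {"subject_id": sid, "genotypes": dict(zip(snp_ids, calls))}
        for sid, calls in subjects.items()
    ]

docs = build_documents(snp_ids, subjects)
print(json.dumps(docs, indent=2))
# With a document store such as MongoDB, the load step would be a single bulk
# write, e.g. db.genotypes.insert_many(docs) via pymongo.
```

The design choice is the one the abstract motivates: avoiding per-row insert overhead and join-heavy queries for high-dimensional SNP data by keeping each subject's record self-contained.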
Procedia PDF Downloads 270
92 Forming Form, Motivation and Their Biolinguistic Hypothesis: The Case of Consonant Iconicity in Tashelhiyt Amazigh and English
Authors: Noury Bakrim
Abstract:
When dealing with motivation/arbitrariness, the forming form (Forma Formans) and morphodynamics are to be grasped as relevant implications of enunciation/enactment and schematization within the specificity of language as a sound/meaning articulation. Thus, the fact that a language is a form does not contradict stasis/dynamic enunciation (reflexivity vs double articulation). Moreover, some languages exemplify the role of the forming form, uttering, and schematization (roots in Semitic languages, the Chinese case). Beyond the evolutionary biosemiotic process (form/substance bifurcation, the split between realization/representation), non-isomorphism/asymmetry between linguistic form/norm and linguistic realization (phonetics, for instance) opens up a new horizon problematizing the role of the brain and the sensorimotor contribution to the continuous forming form. Therefore, we hypothesize biotization as both the process and the trace co-constructing motivation/forming form. Accordingly, referring to our findings concerning distribution and motivation patterns within Berber written texts (pulse-based obstruents and nasal-lateral levels in poetry) and oral storytelling (consonant intensity clustering in quantitative and semantic/prosodic motivation), we understand consonant clustering, motivation, and schematization as a complex phenomenon partaking in patterns of oral/written iconic prosody and reflexive metalinguistic representation opening the stable form. We focus our inquiry on both Amazigh and English clusters (/spl/, /spr/) and on iconic consonant iteration in [gnunnuy] (to roll/tumble) and [smummuy] (to moan sadly or crankily). For instance, the syllabic structures of /splæʃ/ and /splæt/ imply an anamorphic representation of the state of the world: splash, an impact on an aquatic surface; splat, an impact on the ground. The pair has stridency and distribution as distinctive features which specify its phonetic realization (and a part of its meaning): /ʃ/ is [+strident] and /t/ is [+distributed] on the vocal tract. Schematization is then a process relating physiology and code as an arthron, a vocal/bodily and vocal/practical shaping of the motor-articulatory system, leading to syntactic/semantic thematization (agent/patient roles in /spl/, /sm/ and other clusters, or the tense uvular /qq/ in initial position in Berber). Furthermore, the productivity of serial syllable sequencing in Berber points out different forms of expressivity. We postulate two components of motivated formalization: i) the process of memory paradigmatization, relating to sequence modeling under sensorimotor/verbal specific categories (production/perception), and ii) the process of phonotactic selection, a prosodic unconscious/subconscious distribution by virtue of iconicity. Based on multiple tests, including a questionnaire, phonotactic/visual recognition, and oral/written reproduction, we aim at patterning/conceptualizing consonant schematization and motivation among EFL and Amazigh (Berber) learners and speakers, integrating biolinguistic hypotheses.
Keywords: consonant motivation and prosody, language and order of life, anamorphic representation, represented representation, biotization, sensori-motor and brain representation, form, formalization and schematization
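As a generic illustration of how consonant-cluster distribution in a transliterated text can be quantified before being related to prosodic or semantic motivation, the sketch below counts maximal consonant runs; it is not the authors' instrument, and the vowel inventory and sample tokens are assumptions.

```python
# A generic illustration (not the authors' method) of quantifying the
# distribution of consonant clusters in a Latin-transliterated text.
from collections import Counter
import re

VOWELS = set("aeiouæə")  # assumed vowel inventory for the transliteration

def consonant_clusters(text, min_len=2):
    """Return counts of maximal consonant runs of length >= min_len."""
    tokens = re.findall(r"[a-zæə]+", text.lower())
    clusters = Counter()
    for token in tokens:
        run = ""
        for ch in token + " ":            # trailing sentinel flushes the last run
            if ch != " " and ch not in VOWELS:
                run += ch
            else:
                if len(run) >= min_len:
                    clusters[run] += 1
                run = ""
    return clusters

sample = "splash splat spray gnunnuy smummuy"   # illustrative tokens only
print(consonant_clusters(sample).most_common(5))
```

Counts of this kind, normalized per text or per genre, are one simple way to compare cluster density across, say, poetry and oral storytelling corpora.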
Procedia PDF Downloads 144
91 The Positive Effects of Processing Instruction on the Acquisition of French as a Second Language: An Eye-Tracking Study
Authors: Cecile Laval, Harriet Lowe
Abstract:
Processing Instruction is a psycholinguistic pedagogical approach drawing insights from the Input Processing Model, which establishes the initial innate strategies used by second language learners to connect the form and meaning of linguistic features. With the ever-growing use of technology in Second Language Acquisition research, the present study uses eye-tracking to measure the effectiveness of Processing Instruction in the acquisition of French and its effects on learners' cognitive strategies. The experiment was designed using a TOBII Pro-TX300 eye-tracker to measure participants' default strategies when processing French linguistic input and any cognitive changes after receiving the Processing Instruction treatment. Participants were drawn from lower-intermediate adult learners of French at the University of Greenwich and randomly assigned to two groups. The study used a pre-test/post-test methodology. The pre-tests (one per linguistic item) were administered via the eye-tracker to both groups one week prior to the instructional treatment. One group received the full Processing Instruction treatment (explicit information on the grammatical item and on the processing strategies, and structured input activities) on the primary target linguistic feature (French past tense imperfective aspect). The second group received the Processing Instruction treatment without the explicit information on the processing strategies. Three immediate post-tests on the three grammatical structures under investigation (French past tense imperfective aspect, the French Subjunctive used for the expression of doubt, and the French causative construction with Faire) were administered with the eye-tracker. The eye-tracking data showed a positive change in learners' processing of the French target features after instruction, with improvement in the interpretation of the three linguistic features under investigation. 100% of participants in both groups made a statistically significant improvement (p=0.001) in the interpretation of the primary target feature (French past tense imperfective aspect) after treatment. 62.5% of participants made an improvement in the secondary target item (French Subjunctive used for the expression of doubt), and 37.5% of participants made an improvement in the cumulative target feature (French causative construction with Faire). Statistically, there was no significant difference between the pre-test and post-test scores for the cumulative target feature; however, the variance more than doubled between the pre-test and the post-test (3.9 pre-test and 9.6 post-test). This suggests that the treatment does not affect participants homogeneously and implies a role for individual differences in the transfer-of-training effect of Processing Instruction. The use of eye-tracking provides an opportunity for the study of unconscious processing decisions made during moment-by-moment comprehension. The visual data from the eye-tracking demonstrate changes in participants' processing strategies. Gaze plots from pre- and post-tests display participants' fixation points changing from focusing on content words to focusing on the verb ending. This change in processing strategies can be clearly seen in the interpretation of sentences for both the primary and secondary target features. This paper will present the research methodology, design and results of the experimental study using eye-tracking to investigate the primary effects and transfer-of-training effects of Processing Instruction.
It will then provide evidence of the cognitive benefits of Processing Instruction in Second Language Acquisition and offer suggestions for the teaching of grammar in a second language.
Keywords: eye-tracking, language teaching, processing instruction, second language acquisition
Procedia PDF Downloads 280
90 Predicting and Obtaining New Solvates of Curcumin, Demethoxycurcumin and Bisdemethoxycurcumin Based on the CCDC Statistical Tools and Hansen Solubility Parameters
Authors: J. Ticona Chambi, E. A. De Almeida, C. A. Andrade Raymundo Gaiotto, A. M. Do Espírito Santo, L. Infantes, S. L. Cuffini
Abstract:
The solubility of active pharmaceutical ingredients (APIs) is challenging for the pharmaceutical industry. New multicomponent crystalline forms, such as cocrystals and solvates, present an opportunity to improve the solubility of APIs. Commonly, the procedure to obtain multicomponent crystalline forms of a drug starts by screening the drug molecule against different coformers/solvents. However, it is necessary to develop methods to obtain multicomponent forms efficiently and with the least possible environmental impact. The Hansen Solubility Parameters (HSPs) are considered a tool for obtaining theoretical knowledge of the solubility of the target compound in a chosen solvent. Hydrogen-Bond Propensity (HBP), Molecular Complementarity (MC), and Coordination Values (CV) are tools for the statistical prediction of cocrystals developed by the Cambridge Crystallographic Data Centre (CCDC). The HSPs and the CCDC tools are based on inter- and intra-molecular interactions. Curcumin (Cur), the target molecule, is commonly used as an anti-inflammatory. Demethoxycurcumin (Demcur) and bisdemethoxycurcumin (Biscur) are natural analogues of Cur from turmeric. These target molecules differ in their solubilities. The work therefore aimed to analyze and compare different tools for predicting multicomponent forms (solvates) of Cur, Demcur and Biscur. The HSP values were calculated for Cur, Demcur, and Biscur using chemical group contribution methods and statistical optimization from experimental data; the HSPmol software was used. From the HSPs of the target molecules and fifty solvents (listed in the HSP books), the relative energy difference (RED) was determined. The probability that the target molecules would interact with each solvent molecule was determined using the CCDC tools. A dataset of fifty different organic solvent molecules was ranked by each prediction method and by a consensus ranking of different combinations of the HSP, CV, HBP and MC values. Based on the predictions, 15 solvents were selected, including Dimethyl Sulfoxide (DMSO), Tetrahydrofuran (THF), Acetonitrile (ACN) and 1,4-Dioxane (DOX). In an initial analysis, the slow evaporation technique, starting from 50°C and continuing at room temperature and at 4°C, was used to obtain solvates. The single crystals were collected using a Bruker D8 Venture diffractometer with a Photon100 detector. Data processing and crystal structure determination were performed using the APEX3 and Olex2-1.5 software. According to the results, the HSPs (theoretical and optimized) and the Hansen solubility spheres for Cur, Demcur and Biscur were obtained. With respect to the prediction analyses, the prediction methods were evaluated through the ranking and consensus-ranking positions of solvates already reported in the literature. It was observed that the HSP-CV combination obtained the best results when compared to the other methods. Furthermore, from the selected solvents, six new solvates (Cur-DOX, Cur-DMSO, Biscur-DOX, Biscur-THF, Demcur-DOX and Demcur-ACN) and a new Biscur hydrate were obtained. Crystal structures were determined for Cur-DOX, Biscur-DOX, Demcur-DOX and the Biscur hydrate. Moreover, unit-cell parameter information for Cur-DMSO, Biscur-THF and Demcur-ACN was obtained. The preliminary results show that the prediction approach is a promising strategy for evaluating the possibility of forming multicomponent crystalline forms. Work is currently ongoing to obtain further multicomponent single crystals.
Keywords: curcumin, HSPs, prediction, solvates, solubility
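The RED screening step mentioned above follows the standard Hansen formulation; a minimal sketch is given below. The parameter values and the interaction radius are illustrative placeholders, not the values computed in the study.

```python
# Minimal sketch of Hansen RED-based solvent ranking (illustrative values only;
# the actual HSPs for the curcuminoids and solvents are not taken from the study).
def red(solute, solvent, r0):
    """RED = Ra / R0, where Ra is the Hansen distance between solute and solvent
    (dD, dP, dH in MPa^0.5) and R0 is the solute's interaction radius.
    RED < 1 suggests a good solvent, RED > 1 a poor one."""
    dD1, dP1, dH1 = solute
    dD2, dP2, dH2 = solvent
    ra = (4 * (dD1 - dD2) ** 2 + (dP1 - dP2) ** 2 + (dH1 - dH2) ** 2) ** 0.5
    return ra / r0

solute_hsp = (18.5, 10.0, 12.0)   # assumed (dD, dP, dH) for the target molecule
solvents = {"DMSO": (18.4, 16.4, 10.2), "THF": (16.8, 5.7, 8.0),
            "ACN": (15.3, 18.0, 6.1), "1,4-Dioxane": (17.5, 1.8, 9.0)}
ranking = sorted(solvents, key=lambda s: red(solute_hsp, solvents[s], r0=8.0))
for name in ranking:
    print(f"{name}: RED = {red(solute_hsp, solvents[name], r0=8.0):.2f}")
```

Ranking solvents by ascending RED is one natural input to the consensus ranking described in the abstract, alongside the CCDC HBP, MC and CV scores.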
Procedia PDF Downloads 63
89 A Scoping Review of Technology-Facilitated Gender-Based Violence: Findings from Asia
Authors: Vaiddehi Bansal, Laura Hinson, Mayumi Rezwan, Erin Leasure, Mithila Iyer, Connor Roth, Poulomi Pal, Kareem Kysia
Abstract:
As digital usage becomes increasingly ubiquitous worldwide, technology-facilitated gender-based violence (GBV) has garnered increasing attention in recent years, especially during the COVID-19 pandemic. This form of violence is defined as “action by one or more people that harms others based on their sexual or gender identity or by enforcing harmful gender norms. This action is carried out using the internet and/or mobile technology”. Common forms of technology-facilitated GBV include cyberstalking, cyberbullying, sexual harassment, image-based abuse, doxing, hacking, gendertrolling, hate speech, and impersonation. Most literature on this pervasive yet complex issue has emerged from high-income countries, and few studies comprehensively summarize its prevalence, manifestations, and implications. This rigorous scoping review examines the evidence base on this phenomenon in low- and middle-income countries across Asia, summarizing trends and gaps to inform actionable recommendations. The research team developed search terms to conduct a comprehensive search of the peer-reviewed and grey literature. Query results were eligible for inclusion if they were published in English between 2006 and 2021 with an explicit emphasis on technology-facilitated violence, gender, and the countries of interest in the Asia region. Titles, abstracts, and full texts were independently screened by two reviewers based on the inclusion criteria, and data were extracted through deductive coding. Of 2,042 articles screened, 97 met the inclusion criteria. The review revealed a gap in the evidence base in Central Asia and the Pacific Islands. Findings across South and Southeast Asia indicate that technology-facilitated GBV comprises various forms of abuse, violence, and harassment that are largely shaped by country-specific societal norms and technological landscapes. The literature confirms that women, girls, and sexual minorities, especially those with intersecting marginalized identities, are often more vulnerable to experiencing online violence. Cultural norms and patriarchal structures tend to stigmatize survivors, limiting their ability to seek social and legal support. Survivors are also less likely to report their experience due to barriers such as a lack of awareness of reporting mechanisms, the perception that digital platforms will not address their complaints, and cumbersome reporting systems. The COVID-19 pandemic has further exacerbated perpetration and strained support mechanisms. Prevalence varies by the form of violence but is difficult to estimate accurately due to underreporting and disjointed, outdated, or non-existent legal definitions. Addressing technology-facilitated GBV in Asia requires collective action from multiple actors, including government authorities, technology companies, digital and feminist movements, NGOs, and researchers.
Keywords: gender-based violence, technology, online sexual harassment, image-based abuse
Procedia PDF Downloads 132
88 OpenFOAM Based Simulation of High Reynolds Number Separated Flows Using Bridging Method of Turbulence
Authors: Sagar Saroha, Sawan S. Sinha, Sunil Lakshmipathy
Abstract:
The Reynolds-averaged Navier-Stokes (RANS) model is a popular computational tool for the prediction of turbulent flows. Being computationally less expensive than direct numerical simulation (DNS), RANS has received wide acceptance in industry and in the research community. However, for high Reynolds number flows, the traditional RANS approach based on the Boussinesq hypothesis is unable to capture all the essential flow characteristics, and thus its performance is restricted in high Reynolds number flows of practical interest. RANS performance turns out to be inadequate in regimes such as flow over curved surfaces, flows with rapid changes in the mean strain rate, duct flows involving secondary streamlines, and three-dimensional separated flows. In the recent decade, the partially averaged Navier-Stokes (PANS) methodology has gained acceptability among seamless bridging methods of turbulence, placed between DNS and RANS. The PANS methodology, being a scale-resolving bridging method, is inherently more suitable than RANS for simulating turbulent flows. The superior ability of the PANS method has been demonstrated for cases such as swirling flows, high-speed mixing environments, and high Reynolds number turbulent flows. In our work, we intend to evaluate PANS for separated turbulent flows past bluff bodies, which are of broad interest in aerodynamic research and industrial applications. The PANS equations, being derived from the base RANS, inherit the inadequacies of the parent RANS model based on the linear eddy-viscosity model (LEVM) closure. To enhance PANS' capabilities for simulating separated flows, the shortcomings of the LEVM closure need to be addressed. The inabilities of LEVMs have inspired the development of non-linear eddy-viscosity models (NLEVMs). To explore the potential improvement in PANS performance, in our study we evaluate the PANS behavior in conjunction with an NLEVM. Our work can be categorized into three significant steps: (i) extraction of the PANS version of the NLEVM from the RANS model, (ii) testing the model in a homogeneous turbulence environment, and (iii) application and evaluation of the model in the canonical case of separated non-homogeneous flow fields (flow past prismatic bodies and bodies of revolution at high Reynolds number). The PANS version of the NLEVM will be derived and implemented in OpenFOAM, an open-source solver. The homogeneous-flow evaluation will comprise a study of the influence of the PANS filter-width control parameter on the turbulent stresses, analysis performed over typical velocity fields, and an asymptotic analysis of the Reynolds stress tensor. The non-homogeneous flow case will include the study of mean integrated quantities and various instantaneous flow-field features, including wake structures. The performance of PANS + NLEVM will be compared against LEVM-based PANS and LEVM-based RANS. This assessment will contribute to a significant improvement in the predictive ability of computational fluid dynamics (CFD) tools for massively separated turbulent flows past bluff bodies.
Keywords: bridging methods of turbulence, high Re-CFD, non-linear PANS, separated turbulent flows
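For context, in the standard k-epsilon based PANS formulation (the usual reference form of the bridging method; the abstract does not state the exact closure used), the filter-width control parameters and the resulting modification of the model coefficients are:

\[
f_k = \frac{k_u}{k}, \qquad
f_\varepsilon = \frac{\varepsilon_u}{\varepsilon}, \qquad
\nu_u = C_\mu \frac{k_u^2}{\varepsilon_u}, \qquad
C_{\varepsilon 2}^{*} = C_{\varepsilon 1} + \frac{f_k}{f_\varepsilon}\left(C_{\varepsilon 2} - C_{\varepsilon 1}\right)
\]

with \(f_k \to 1\) recovering the parent RANS model and decreasing \(f_k\) resolving progressively more of the turbulent scales toward the DNS limit, which is the sense in which PANS acts as a seamless bridge.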
Procedia PDF Downloads 145
87 Signature Bridge Design for the Port of Montreal
Authors: Juan Manuel Macia
Abstract:
The Montreal Port Authority (MPA) wanted to build a new road link via Souligny Avenue to increase the fluidity of goods transported by truck in the Viau Street area of Montreal and to mitigate the current traffic problems on Notre-Dame Street. To achieve better integration and acceptance of this project within the neighboring residential surroundings, the project needed to include architectural integration, bringing artistic components to the bridge design along with landscaping components. The MPA primarily required direct truck access to the Port of Montreal, with a direct connection to the future Assomption Boulevard planned by the City of Montreal and, thus, direct access to Souligny Avenue. The MPA also required other key aspects to be considered in the proposal and development of the project, such as the layout of road and rail configurations, the reconstruction of underground structures, the relocation of power lines, the installation of lighting systems, improvements to traffic signage and communication systems, the construction of new access ramps, pavement reconstruction, and a summary assessment of the structural capacity of an existing service tunnel. The identification of the various possible scenarios began with identifying all the constraints related to the numerous infrastructures located in the area of the future link between the port and the future extension of Souligny Avenue, involving interaction among several disciplines and technical specialties. Several viaduct- and tunnel-type geometries were studied to link the port road to the right-of-way north of Notre-Dame Street and to improve traffic flow at the railway corridor. The proposed design took into account the existing access points to the Port of Montreal, the built environment of the MPA site, the provincial and municipal rights-of-way, and the future Notre-Dame Street layout planned by the City of Montreal. These considerations required an engineering structure with a span of over 60 m to free up a corridor for the future urban fabric of Notre-Dame Street. The best option for crossing this span was the design and construction of a curved bridge over Notre-Dame Street, essentially a structure with a deck formed by a reinforced concrete slab on steel box girders with a single span of 63.5 m. The foundation units were defined as pier-cap type abutments on drilled shafts to bedrock with rock sockets, with MSE-type walls at the approaches. The configuration of a single-span curved structure posed significant design and construction challenges, considering the major constraints of the project site, a design-for-durability approach, and the need to guarantee optimum performance over a 75-year service life in accordance with the client's needs and the recommendations and requirements defined by the standards used for the project. These aspects, and the need to include architectural and artistic components, made it possible to design, build, and integrate a signature infrastructure project with a sustainable approach, from which the MPA, commuters, and the city of Montreal and its residents will benefit.
Keywords: curved bridge, steel box girder, medium span, simply supported, industrial and urban environment, architectural integration, design for durability
Procedia PDF Downloads 70